Hello,
Please read the following webpage:
Concurrency and Parallelism: Understanding I/O
https://blog.risingstack.com/concurrency-and-parallelism-understanding-i-o/
You should know that my Parallel Compression Library and my
Parallel Archiver are very efficient at I/O. Here is
what I wrote about my powerful Parallel Compression Library:
Description:
Parallel Compression Library implements Parallel LZ4, Parallel LZMA,
and Parallel Zstd algorithms using my Thread Pool Engine.
- It supports memory streams, file streams and files
- 64-bit support: lets you create archive files over 4 GB, supports
archives up to 2^63 bytes, and compresses and decompresses files up to
2^63 bytes.
- Parallel compression and parallel decompression are extremely fast
- It now supports processor groups on Windows, so it can use more
than 64 logical processors and scale well.
- It is NUMA-aware and NUMA-efficient on Windows (it parallelizes
reads and writes across NUMA nodes)
- It efficiently minimizes contention so that it scales well.
- It provides both compression and decompression rate indicators
- You can test the integrity of your compressed file or stream
- It is thread-safe, meaning its methods can be called from
multiple threads
- Easy programming interface
- Full source code available.
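To illustrate the general idea behind parallel compression (this is a minimal sketch, not the library's actual code or API), a buffer can be split into independent chunks and each chunk compressed on its own worker thread; the chunk size and the use of zlib here are my own assumptions for the example:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 256 * 1024  # 256 KiB per chunk (an arbitrary choice)

def compress_parallel(data: bytes, workers: int = 4) -> list:
    """Split `data` into chunks and compress each chunk on its own thread.

    zlib releases the GIL while compressing, so the threads genuinely
    run in parallel on multiple cores.
    """
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.compress, chunks))

def decompress_parallel(blocks, workers: int = 4) -> bytes:
    """Decompress the chunks in parallel and reassemble the original data."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(zlib.decompress, blocks))
```

Because each chunk is compressed independently, decompression can also proceed chunk by chunk in parallel, which is what makes both directions fast.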
My Parallel Compression Library is now optimized for NUMA (it
parallelizes reads and writes across NUMA nodes) and supports
processor groups on Windows. It uses only two dedicated,
non-contending threads for I/O, which keeps contention to a minimum
and helps it scale well. The CRC calculation has also been
optimized, so computing checksums and testing the integrity of an
archive are both fast.
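As a rough sketch of how an integrity test over a chunked archive can work (the actual file format and checksum scheme of the library are not public here, so this is an assumption-laden illustration using CRC32):

```python
import zlib

def crc_of_chunks(chunks) -> int:
    """Fold CRC32 over a sequence of byte chunks without concatenating them.

    Feeding the previous CRC back in as the starting value makes the
    chunk-wise result identical to the CRC of the whole stream.
    """
    crc = 0
    for chunk in chunks:
        crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

def verify(chunks, expected_crc: int) -> bool:
    """Return True if the stream's CRC matches the stored checksum."""
    return crc_of_chunks(chunks) == expected_crc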
I have done a quick scalability estimate for my
Parallel Compression Library, and I think it is good: it can scale
beyond 100x on NUMA systems.
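One standard way to sanity-check a claim like "scales beyond 100x" is Amdahl's law: speedup(N) = 1 / (s + (1 - s) / N), where s is the serial fraction of the work. The numbers below are assumptions for illustration, not measurements of the library:

```python
def amdahl_speedup(serial_fraction: float, n_processors: int) -> float:
    """Amdahl's law: upper bound on speedup with a given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# To exceed 100x, the serial fraction must be well under 1%.
# With an assumed 0.5% serial fraction on 256 logical processors:
print(round(amdahl_speedup(0.005, 256), 1))  # prints 112.5
# With a 1% serial fraction, the same machine stays under 100x:
print(round(amdahl_speedup(0.01, 256), 1))
```

This is why minimizing contention and serializing as little work as possible (e.g. only two I/O threads) matters so much for the scalability figure.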
The dynamic-link libraries for Windows and the dynamic shared
libraries for Linux implementing the compression and decompression
algorithms of my Parallel Compression Library and my Parallel Archiver
were compiled from C with optimization level 2 enabled, so they are
very fast.
Here are the parameters of the constructor:
First parameter: the number of cores on which to run the
compression algorithm in parallel.
Second parameter: a boolean named processorgroups that enables
processor-group support on Windows; if set to true, it lets you
scale beyond 64 logical processors and makes the library
NUMA-efficient.
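To make the two parameters concrete, here is a hypothetical illustration; the class name and everything else in it are invented for this sketch and are not the library's real API:

```python
class ParallelCompressor:
    """Hypothetical stand-in for the library's compressor class."""

    def __init__(self, cores: int, processorgroups: bool):
        # cores: the number of cores to run the compression on in parallel.
        self.cores = cores
        # processorgroups: when True (on Windows), use processor groups
        # to scale beyond 64 logical processors and be NUMA-efficient.
        self.processorgroups = processorgroups

# e.g. compress on 8 cores without processor-group support:
comp = ParallelCompressor(8, False)
```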
For comparison, look at the Easy Compression Library; as you may
have noticed, it is not a parallel compression library:
http://www.componentace.com/ecl_features.htm
And look at its pricing:
http://www.componentace.com/order/order_product.php?id=4
My parallel compression library costs you $0, and it is a parallel
compression library.
You can read more about my Parallel Compression Library and download it
from my website here:
https://sites.google.com/site/scalable68/parallel-compression-library
Thank you,
Amine Moulay Ramdane.
--- SoupGate-Win32 v1.05
* Origin: fsxNet Usenet Gateway (21:1/5)