• perpetual-compression

    From j.pcalovas@gmail.com@21:1/5 to All on Wed Mar 11 01:52:54 2020
    I can compress a random file by two bytes. I have nearly finished my compression.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From j.pcalovas@gmail.com@21:1/5 to All on Wed Mar 11 01:45:48 2020
    I can compress a random file of 1MB by 2 bytes. I have nearly finished my compression.

  • From danceswithnumbers@gmail.com@21:1/5 to Jules Gilbert on Mon Apr 27 11:19:53 2020
    On Monday, July 8, 2002 at 1:11:38 PM UTC-6, Jules Gilbert wrote:
    Hello folks:

    I have decided to sell my 'random' compressors.

    This is what I mean when I use the word 'random' or
    'random-appearing'.

    1) My compressor technology ONLY compresses random-appearing data.
    It is not able to reduce the filesize of files containing information
    that can be compressed using conventional compressors, say ARJ or
    GZIP.

    (While one can XOR the input of, say, a text file, and that will enable
    the material to be compressed, it is much better to first compress
    such a file with a good-quality compressor, say an ARJ compressor or
    other high-quality CONVENTIONAL compressor.) After all, XOR'ing does
    not reduce filesize. Compression, even conventional compression,
    does!

    2) Only buffers of substantial size can be compressed. The amount
    of computer time required to compress smaller files, say 64k, is so
    high as to make the effort impractical.

    3) As the program exists today, it accepts one or more buffers of 8MB
    of previously compressed data and produces smaller buffers as output.
    It does not seem terribly difficult to me for the method to be
    extended to handle ordinary files. In fact, I do this now (not well, though).


    For various reasons, I am interested in restricting the use of my
    products (now only one program) to the United States. Further, I am
    not enamored of the patenting process -- it is really a mechanism for managing information disclosure, and very little protection is
    provided from really bad people -- people I would not do business with
    in any case. So I am offering to license my program by means of trade
    secret ONLY.

    My program is suited for use with data that is written once and read
    many times, such as movies or other entertainment data. It is
    probably not suitable for data that is written once and read only
    several times, unless space is a critical constraint.

    Our demonstration process involves two machines, not connected by wire
    or other means. Compression occurs on one machine, the results are
    transferred by floppy, and the result is de-compressed on the other
    machine. These machines can be inspected by a technician; they are
    ordinary PCs running FreeBSD.

    If you represent commercial interests, please contact me to learn
    more.

    Sincerely,
    Jules Gilbert

    Using the RANDBETWEEN function in Excel, I produced the target of 1 million "random" integers between 0 and 9. I then ran the stream through a transform, copied the results, and pasted them into Word to remove spaces, carriage returns, and any punctuation.
    I then copied it into WordPad, saved the file, and compressed it using .rar. The file compressed down to 246,040 KB. Thinking there was an error, I unzipped and restored it without errors, then reversed the transform to its original state.

    I am not a programmer, but I worked on this project with a co-worker, Kelly D. Crawford, Ph.D., for eight years, day and night. He passed away weeks before it could be finished. I finished it, and this was the result. I would like to give him and his
    family any posthumous credit, but I am left thinking this could be an error considering the various steps.

  • From Fibonacci Code@21:1/5 to All on Sun Feb 7 06:18:52 2021
    A neural network run over a set of random data will never be able to adapt to all of it.
    Further, you need to keep the neural network smaller than its output plus the errors.

    So I can only say that I admire your persistence, but I hope you can put it to better use rather than staying stuck on perpetual compression.


    Regards,
    Raymond

  • From Evert Pot@21:1/5 to danceswi...@gmail.com on Mon Jan 9 11:55:43 2023
    On Monday, April 27, 2020 at 2:19:56 PM UTC-4, danceswi...@gmail.com wrote:
    On Monday, July 8, 2002 at 1:11:38 PM UTC-6, Jules Gilbert wrote:
    [...]
    Using the RANDBETWEEN function in Excel, I produced the target of 1 million "random" integers between 0 and 9. I then ran the stream through a transform, copied the results, and pasted them into Word to remove spaces, carriage returns, and any
    punctuation. I then copied it into WordPad, saved the file, and compressed it using .rar. The file compressed down to 246,040 KB. Thinking there was an error, I unzipped and restored it without errors, then reversed the transform to its original state.

    I am not a programmer, but I worked on this project with a co-worker, Kelly D. Crawford, Ph.D., for eight years, day and night. He passed away weeks before it could be finished. I finished it, and this was the result. I would like to give him and his
    family any posthumous credit, but I am left thinking this could be an error considering the various steps.

    In case you're still curious: if you store a list of numbers in a text file, there's a TON of repetition. It's not raw binary data; it's ASCII data in which almost every byte is going to be in the range 48-57. Highly compressible, even though the
    numbers themselves might be properly random.
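A minimal sketch of this effect using Python's standard zlib module (the exact ratios vary with the input, so the numbers in the comments are only ballpark):

```python
import random
import zlib

n = 100_000
rng = random.Random(2023)

# Truly random bytes: at 8 bits of entropy per byte, zlib cannot shrink them.
raw = rng.randbytes(n)

# The same number of bytes, but every byte is an ASCII digit (48-57):
# only 10 of the 256 possible values occur, so there is slack to squeeze out.
digits = "".join(rng.choices("0123456789", k=n)).encode("ascii")

raw_ratio = len(zlib.compress(raw, 9)) / n
digit_ratio = len(zlib.compress(digits, 9)) / n

print(f"random bytes: {raw_ratio:.3f}")   # slightly above 1.0
print(f"ASCII digits: {digit_ratio:.3f}") # well under 0.5
```

The digit stream compresses to under half its size even though the digits themselves are random, which is exactly the point: the redundancy is in the ASCII encoding, not in the numbers.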

  • From Keith Thompson@21:1/5 to Evert Pot on Mon Jan 9 12:26:24 2023
    Evert Pot <evert@badgateway.net> writes:
    On Monday, April 27, 2020 at 2:19:56 PM UTC-4, danceswi...@gmail.com wrote:
    [...]
    Using the RANDBETWEEN function in Excel, I produced the target of 1
    million "random" integers between 0 and 9. I then ran the stream
    through a transform, copied the results, and pasted them into Word
    to remove spaces, carriage returns, and any punctuation. I then
    copied it into WordPad, saved the file, and compressed it using .rar.
    The file compressed down to 246,040 KB. Thinking there was an error,
    I unzipped and restored it without errors, then reversed the transform
    to its original state.

    Some of those numbers must be incorrect. 1 million decimal digits
    represented in ASCII would be about 977 kilobytes (assuming a "kilobyte"
    is 1024 bytes, not 1000 bytes). If the decimal digits are random, I'd
    expect it to compress to about 406 kilobytes *at best* (log2(10)/8,
    or roughly 41.5% of the input). The reported size of "246,040 KB"
    either indicates an error of a factor of at least 1000, or uses ","
    as a decimal separator. Even if it's 246 KB, that's better compression
    than is possible for truly random decimal digits.

    I am not a programmer, but I worked on this project with a co-worker,
    Kelly D. Crawford, Ph.D., for eight years, day and night. He passed
    away weeks before it could be finished. I finished it, and this was
    the result. I would like to give him and his family any posthumous
    credit, but I am left thinking this could be an error considering the
    various steps.

    In case you're still curious: if you store a list of numbers in a text
    file, there's a TON of repetition. It's not raw binary data; it's ASCII
    data in which almost every byte is going to be in the range
    48-57. Highly compressible, even though the numbers themselves might
    be properly random.

    To be precise, a byte (using the most common meaning of the term)
    is 8 bits and can hold any of 256 values. There are only 10
    decimal digit values, so in a sequence of decimal digits represented
    in ASCII, 246 of the 256 possible byte values are never used.
    A very simple compression method, binary-coded decimal, stores each
    decimal digit in 4 bits, leaving 6 unused values for each 4-bit unit.
    A slightly more sophisticated compression method could use 10 bits
    for each group of 3 decimal digits, using 1000 of the 1024 possible
    values. Both of these work only for decimal digits.
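A sketch of that second scheme: each group of three decimal digits is a value in 0..999, which fits in 10 bits. The pack3/unpack3 names below are purely illustrative, not from any existing library:

```python
def pack3(digits: str) -> bytes:
    """Pack a string of decimal digits, 3 at a time, into 10-bit groups.

    For brevity this sketch requires len(digits) to be a multiple of 3.
    """
    assert len(digits) % 3 == 0
    bits = nbits = 0
    out = bytearray()
    for i in range(0, len(digits), 3):
        bits = (bits << 10) | int(digits[i:i + 3])  # 0..999 fits in 10 bits
        nbits += 10
        while nbits >= 8:            # emit whole bytes as they fill up
            nbits -= 8
            out.append((bits >> nbits) & 0xFF)
    if nbits:                        # left-justify the final partial byte
        out.append((bits << (8 - nbits)) & 0xFF)
    return bytes(out)


def unpack3(data: bytes, ndigits: int) -> str:
    """Reverse pack3, recovering ndigits decimal digits."""
    bits = nbits = 0
    groups = []
    for byte in data:
        bits = (bits << 8) | byte
        nbits += 8
        if nbits >= 10:              # a complete 10-bit group is available
            nbits -= 10
            groups.append(f"{(bits >> nbits) & 0x3FF:03d}")
    return "".join(groups)[:ndigits]
```

Ten bits per three digits is about 3.33 bits per digit, already very close to the log2(10) ≈ 3.32-bit theoretical floor.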

    An ideal compression algorithm could store each decimal digit in
    log2(10) bits, or about 3.32 bits per digit, yielding an output about
    41.5% the size of the input (plus metadata). A general-purpose
    compression algorithm might come reasonably close to that, though
    the best I've managed is about 44% with `xz -9`. (I don't have a
    rar compressor.) (I used /dev/urandom, not Excel's randbetween,
    to generate the input.)
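That experiment is easy to approximate with Python's standard lzma module (the same compression family as xz; the precise ratio moves a little from run to run):

```python
import lzma
import math
import random

n = 1_000_000
digits = "".join(random.choices("0123456789", k=n)).encode("ascii")

bound = math.log2(10) / 8  # ~0.415: best possible ratio for random digits
ratio = len(lzma.compress(digits, preset=9)) / n

print(f"entropy bound: {bound:.3f}")
print(f"lzma -9 ratio: {ratio:.3f}")  # a few percent above the bound
```

The compressed size always lands above the entropy bound, which is the whole point of the thread: no compressor, perpetual or otherwise, can get a random digit stream below log2(10) bits per digit.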

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for XCOM Labs
    void Void(void) { Void(); } /* The recursive call of the void */
