Inspired by the recent thread about pseudorandom number generators on python-ideas (where I also mistakenly first wrote this message), I began
to wonder: suppose that I had a pseudorandom number generator that
attempted to generate a nonuniform distribution. Suppose for instance
that it was to generate a 0 bit 2/3 of the time, and a 1 bit 1/3 of the
time.
How would one go about testing this PRNG against an idealized (similarly biased) PRNG?
|One of the oldest interpretations is the /limit frequency/
|interpretation. If the conditioning event /C/ can lead
|to either A or "not A", and if in /n/ repetitions of such
|a situation the event A occurs /m/ times, then it is asserted
|that P(A|C) = lim n-->oo (m/n). This provides not only
|an interpretation of probability, but also a definition
|of probability in terms of a numerical frequency ratio.
|Hence the axioms of abstract probability theory can
|be derived as theorems of the frequency theory.
|
|In spite of its superficial appeal, the limit frequency
|interpretation has been widely discarded, primarily because
|there is no assurance that the above limit really exists for
|the actual sequences of events to which one wishes to apply
|probability theory.
|
"Quantum Mechanics" (1998) - Leslie E. Ballentine
ram@zedat.fu-berlin.de (Stefan Ram) writes:
So, in this case, careful code reviews might be better than
tests. For example, assuming random.randint(0, 2) works
as advertised, we can be pretty sure that
0 if random.randint(0, 2) else 1
fulfills the requirement.
In practice, tests should still be done. When such a function
returns the same result a hundred times, it does make sense
to become suspicious and investigate it.
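A quick sketch of such a sanity check in Python (the sample size and tolerance here are arbitrary illustrative choices, not prescriptions):

```python
import random

def biased_bit():
    # Candidate generator from the review above: randint(0, 2) is
    # uniform over {0, 1, 2}, so it is nonzero 2/3 of the time,
    # giving a 0 bit with probability 2/3 and a 1 bit with 1/3.
    return 0 if random.randint(0, 2) else 1

def looks_plausible(gen, n=100_000, p_one=1/3, tol=0.01):
    """Crude frequency check: the observed fraction of 1 bits
    should land close to the expected probability.  With n=100_000
    the standard deviation of the fraction is about 0.0015, so a
    0.01 tolerance is a very generous margin."""
    ones = sum(gen() for _ in range(n))
    return abs(ones / n - p_one) < tol

print(looks_plausible(biased_bit))
```

Of course a frequency check like this only catches gross bias; it says nothing about correlations between successive bits.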
As far as I know, the state-of-the-art in statistical tests against
PRNGs is the TestU01 library, available at
http://simul.iro.umontreal.ca/testu01/tu01.html
On Wed, Dec 07, 2022 at 03:28:47PM -0300, Sabrina Almodóvar wrote:
As far as I know, the state-of-the-art in statistical tests against
PRNGs is the TestU01 library, available at
 http://simul.iro.umontreal.ca/testu01/tu01.html
I'm familiar with this type of test. But as far as I can tell and have
seen, these tests only test against *uniform* PRNGs. I am not aware of
any written tests against nonuniform PRNGs.
I suspect it would be possible to mirror a lot of the ideas. For
example, one common PRNG statistical test is to make many matrices
of various sizes and to study the ranks of these matrices. Presumably
one could do a similar statistical analysis against what would be
expected for any particular probability distribution. Running a
candidate PRNG through this test will produce some sort of
distribution, after all.
But it would be much nicer if work on statistical tests against
nonuniform PRNGs had already been done somewhere.
Inspired by the recent thread about pseudorandom number generators on python-ideas (where I also mistakenly first wrote this message), I
began to wonder: suppose that I had a pseudorandom number generator
that attempted to generate a nonuniform distribution. Suppose for
instance that it was to generate a 0 bit 2/3 of the time, and a 1 bit
1/3 of the time.
How would one go about testing this PRNG against an idealized
(similarly biased) PRNG?
- DLD
One thing you could do is to apply von Neumann de-biasing to convert a
string of output bits from your biased PRNG to an unbiased string, and
test the de-biased output.
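A sketch of that extractor in Python (the 01 → 0, 10 → 1 convention used here is one common choice; the output is unbiased only if the input bits are independent with a fixed bias):

```python
def von_neumann_debias(bits):
    """Von Neumann extractor: read bits in non-overlapping pairs,
    emit the first bit of each unequal pair (so 01 -> 0, 10 -> 1),
    and discard the equal pairs 00 and 11.  P(01) == P(10) for
    independent identically biased bits, so the output is uniform."""
    it = iter(bits)
    # zip(it, it) walks the same iterator twice, yielding
    # non-overlapping pairs.
    return [a for a, b in zip(it, it) if a != b]
```

The de-biased stream can then be fed to any of the uniform test batteries (such as TestU01), at the cost of discarding a large fraction of the input bits.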