Hello,
I want to generate a real random number in [0, 10^-6]. I have been unable to do so using the following logic:
call random_number(u)
n + (m+1-n)*u (where n = 0, m = 10^-6)
But it didn't work.
Can anyone help me with this issue?
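For reference, a minimal sketch of the usual fix (assuming a default-precision real, and that the half-open range [0, 1e-6), which excludes the exact upper bound, is acceptable):

```fortran
program scale_random
  implicit none
  real :: u
  call random_number(u)   ! u is uniform on [0, 1)
  u = u * 1.0e-6          ! scales the range to [0, 1e-6)
  print *, u
end program scale_random
```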
On Wednesday, November 23, 2022 at 9:19:04 PM UTC-8, Pratik Patel wrote:
> I want to generate a real random number in [0, 10^-6]. I have been unable to do so using the following logic:
> call random_number(u)
> n + (m+1-n)*u (where n = 0, m = 10^-6)

I believe this formula is right for m and n integers, and an integer range. And more specifically, the result is truncated to an integer value.
In addition, note that for a continuous probability distribution, the probability of any given (real) value is zero. There is, then, no difference between that and [0, 10^-6), which you would get from m*u (assuming n = 0). And especially, be sure that m and n are not integer variables.
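To illustrate the distinction drawn above, a sketch contrasting the integer recipe with the real one (the values n = 1, m = 6 are illustrative, not from the original question):

```fortran
program int_vs_real
  implicit none
  integer :: n = 1, m = 6, k
  real :: u, x
  call random_number(u)
  ! the quoted formula, truncated: an integer in n..m (here a die roll)
  k = n + int(real(m + 1 - n) * u)
  ! the real case needs only a scale factor
  x = 1.0e-6 * u
  print *, k, x
end program int_vs_real
```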
> I want to generate a real random number in [0, 10^-6].

call random_number(u)
u = u * 1.0e-6

Am I missing something ??
On Tuesday, November 29, 2022 at 11:49:41 PM UTC-8, pehache wrote:
> (snip)
> call random_number(u)
> u = u * 1.0e-6
> Am I missing something ??

The OP may, or may not, have wanted a non-zero lower value. But also, the OP wanted an inclusive interval. Mostly I believe that one shouldn't worry about that.
In the case of a continuous distribution, the probability of any individual value is zero (unless there is a delta function in the distribution). In a digital computer, distributions aren't continuous, but they are close. Close enough.
First note that for a range that isn't a half-open interval whose length is a power of two, the actual probability of a given representable value isn't uniform.
An IEEE double has a 53-bit significand, so a typical PRNG returns one of about 2**53 equally spaced values in [0, 1); you might have to see that many values before seeing any specific one. You would need many more than that to know that a specific value never occurred.
So, yes, u*1e-6.
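The granularity in question can be inspected with the standard intrinsics; a small sketch (the comments assume IEEE double precision):

```fortran
program granularity
  use, intrinsic :: iso_fortran_env, only: real64
  implicit none
  real(real64) :: x = 1.0_real64
  print *, digits(x)            ! 53 significand bits
  print *, spacing(x)           ! gap to the next double above 1.0: 2**(-52)
  print *, spacing(0.5_real64)  ! spacing halves in the binade below: 2**(-53)
end program granularity
```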
> So, yes, u*1e-6.

Try u*1d-6.
On Wednesday, November 30, 2022 at 7:19:19 PM UTC-8, Robin Vowels wrote:
> (snip, someone wrote)
> (and I wrote)
> So, yes, u*1e-6.
> try u*1d-6

Maybe, or maybe not. This whole problem depends on rounding more than usual.
First, neither 1e-6 nor 1d-6 has an exact binary representation. (But maybe the OP has a machine with decimal floating point, where it would be exact?)
Second, we don't know the type of u. Third, we don't know the type needed for the final value. And finally, we don't know the exact range of values needed.
There will be rounding after the multiply, and possibly again on assignment to a result variable. Either or both of 1e-6 and 1d-6 might be more than the actual desired value. It might be important that the result possibly be larger than either the true or represented value of the constant, or that it never be larger.
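The inexactness is easy to see by printing the constants with more digits than the decimal strings suggest; a sketch:

```fortran
program not_exact
  use, intrinsic :: iso_fortran_env, only: real32, real64
  implicit none
  ! neither constant is exactly 10**(-6); the stored values differ
  ! from it (and from each other) in the trailing digits
  print '(es18.10)', 1.0e-6_real32
  print '(es25.17)', 1.0e-6_real64
end program not_exact
```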
First, let's assume that the OP meant to use the range [0.0_wp, 1.e-6_wp] for some yet-to-be-specified working precision. The lower bound has no ambiguity, because the mathematical expression maps exactly to the floating point value. The upper bound has some mapping to an exact floating point value, so let's assume that that value is the one intended to be used, and that he wants to return that upper bound value.
The PRNG will return a value u in the range [0.0_wp, 1.0_wp). I think that upper bound is reliable, meaning that u < 1.0_wp will always be satisfied.
That means the product relation u*1.0e-6_wp < 1.0e-6_wp will always be satisfied if all the bits are computed correctly in the fp multiplication. That means that the multiplication would never return the desired upper bound. Is that acceptable? Who knows? Fortran does not guarantee correctly rounded fp multiplication, of course, but if the upper bound is not guaranteed, then every product must be tested against the upper bound anyway. If that test is really necessary, then an expression like min(u*1.0e-6_wp, 1.0e-6_wp) might as well be used from the start.
But what if the OP was a little imprecise, and he really wanted the range [0.0_wp, 1.e-6_wp)? That is, he does not want the upper bound value to ever be generated. In this case, one could remember that we aren't really working with real numbers, we are working with just a finite subset of the rational numbers. Standard fortran gives us an easy way to specify the correct upper bound of the desired range, and it is nearest(1.0e-6_wp, -1.0_wp).
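A sketch of that upper bound in use (wp is chosen as double precision here just for illustration):

```fortran
program half_open_bound
  use, intrinsic :: iso_fortran_env, only: real64
  implicit none
  integer, parameter :: wp = real64
  real(wp) :: top
  ! nearest(x, -1.0_wp) is the largest representable value strictly
  ! below x, i.e. the inclusive upper end of the half-open [0, 1e-6)
  top = nearest(1.0e-6_wp, -1.0_wp)
  print '(es25.17)', 1.0e-6_wp
  print '(es25.17)', top
end program half_open_bound
```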
I have always thought it would be interesting if instead PRNGs would cycle over all possible floating point values. All the fp values in each exponent range would eventually be selected, but with probabilities associated with the exponent value. That is, elements from the [.5, 1.0) range would occur with twice the frequency of those from the [.25, .5) range, and so on. I do not keep up with the PRNG literature, so maybe this approach has been implemented somewhere already. Anyone know?
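One way to sketch that idea (not a production generator): choose the binade geometrically, halving with probability 1/2, then place the value uniformly inside it. Values in [0.5, 1.0) then occur twice as often as values in [0.25, 0.5), and so on, while the result remains uniform on [0, 1):

```fortran
program dense_uniform
  implicit none
  real :: u, x
  integer :: i
  do i = 1, 5
     call random_number(u)
     x = 0.5 + 0.5*u          ! uniform within the top binade [0.5, 1.0)
     call random_number(u)
     do while (u < 0.5 .and. x > 0.0)
        x = x * 0.5           ! descend one binade with probability 1/2
        call random_number(u)
     end do
     print *, x
  end do
end program dense_uniform
```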
I don't know that that is so obvious. If the multiply rounds, it might be able to round up. Before IEEE 754, each system decided how it would do rounding. IEEE 754 allows one to choose the rounding mode, but even so, I am not sure about this case.
A fairly common use for random numbers wants a Gaussian distribution. There is a formula (the Box-Muller transform, for example) to convert a uniform distribution to a Gaussian one. I haven't thought about that for a while, but one complication is that it doesn't always get the tails right. Gaussians should have a small probability of getting very far out. Given the usual [0, 1) input values, they don't get all that far, but it might be that they could do better.
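A minimal sketch of the Box-Muller transform mentioned above (the max() guard is needed because random_number can return exactly zero; it also caps the tails, illustrating the complaint about them):

```fortran
program box_muller
  implicit none
  real, parameter :: two_pi = 6.2831853
  real :: u1, u2, z1, z2
  call random_number(u1)
  call random_number(u2)
  u1 = max(u1, tiny(u1))            ! avoid log(0.0)
  z1 = sqrt(-2.0*log(u1)) * cos(two_pi*u2)
  z2 = sqrt(-2.0*log(u1)) * sin(two_pi*u2)
  print *, z1, z2                   ! two independent N(0,1) deviates
end program box_muller
```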
Most PRNGs will produce numbers that are equally distributed in the
interval [0,1). They do this by taking random integers with the
appropriate number of bits and scaling them by the floating point output range. So this means that some subset of the floating point numbers in
that range will be selected, presumably with equal probability, and the remaining subset will never be selected, no matter how long you wait for
them to occur.
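A sketch of that construction, with the random integer replaced by a fixed value for illustration: 53 bits scaled by 2**(-53) give one of the 2**53 equally spaced doubles in [0, 1), and no other double is ever produced:

```fortran
program bit_scaling
  use, intrinsic :: iso_fortran_env, only: int64, real64
  implicit none
  integer(int64) :: bits
  real(real64)   :: u
  bits = 6004799503160661_int64             ! stand-in for 53 random bits
  u = real(bits, real64) * 2.0_real64**(-53)
  print *, u                                ! about 0.6667 for this value
end program bit_scaling
```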
In article <f2dde822-6a92-40ba...@googlegroups.com>,
There are formulae for the conversion.
For Gaussian distributions, generate N uniform-deviate random numbers,
add them and divide by N. That gives you your first Gaussian random
number. Repeat as often as you need to get more Gaussian random
numbers. Efficient? No.
On 12/3/22 11:30 AM, Phillip Helbig (undress to reply) wrote:
> For Gaussian distributions, generate N uniform-deviate random numbers, add them and divide by N. That gives you your first Gaussian random number.

This would actually be a binomial distribution. While it is true that binomial distributions approach a gaussian shape for large N, they have some important differences, e.g. in the tail regions, as mentioned in some previous posts.
$.02 -Ron Shepard
Not a binomial distribution. For example, starting from an exact uniform distribution at N=1, for N=2 you get a triangular distribution. You could find the exact formulae for larger N in various books, but there would be little point in that. To get an approximate standard Gaussian, you would take the sum, subtract N/2, and divide the result by the square root of the variance, which is N/12 ... which leads to using the value N=12 as a common choice when this approach was in use.
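A sketch of that recipe with N = 12, where the square root of the variance is exactly 1 (an approximation only; note the tails are clipped at ±6):

```fortran
program clt_gaussian
  implicit none
  real :: u(12), z
  call random_number(u)   ! 12 uniform deviates
  ! the sum has mean 6 and variance 12*(1/12) = 1
  z = sum(u) - 6.0        ! approximately N(0,1)
  print *, z
end program clt_gaussian
```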