On Mon, 3 Mar 2014 15:15:52 -0800 (PST), varinsacha@yahoo.fr wrote:
Hi,
It is me again!
I have 2 questions this time about bootstrap.
Many thanks for your precious help.
1) One way of carrying out the bootstrap is to average equally over all possible bootstrap samples from the original data set (where two bootstrap data sets are different if they have the same data points but in different order). Unlike the usual implementation of the bootstrap, this method has the advantage of not introducing extra noise due to resampling randomly.
To carry out this implementation on a data set with n data points, how many bootstrap data sets would we need to average over?
If you are referring to the usual sort of bootstrap,
where N cases are drawn with replacement from the
sample of N, then "all possible samples" is N raised to
the Nth power.
An N of 10 is nearly the max, for modern computers.
Depending on what statistics you are bootstrapping,
you might have to figure what you want to do for
those exceptional samples where the same case is
drawn all 10 times.
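For very small N this exhaustive averaging can be done directly, which also makes the N^N count concrete. A minimal Python sketch (my own illustration, not from the thread; the function name is made up):

```python
# Exhaustive bootstrap: average a statistic over ALL n**n ordered
# bootstrap samples. Feasible only for tiny n (here n = 3 gives 27 samples).
from itertools import product

def exhaustive_bootstrap_means(data):
    """Return (number of samples, average of the sample means) over every
    possible ordered bootstrap sample drawn with replacement from `data`."""
    n = len(data)
    samples = list(product(data, repeat=n))  # n**n ordered samples
    means = [sum(s) / n for s in samples]
    return len(samples), sum(means) / len(means)

count, avg = exhaustive_bootstrap_means([1.0, 2.0, 3.0])
# count is 3**3 = 27; by symmetry, the average of the bootstrap
# means equals the original sample mean (2.0).
```

Note that `product` also generates the degenerate samples Rich mentions, e.g. `(1.0, 1.0, 1.0)`, where one case is drawn every time.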
2) If we have n data points, what is the probability that a given data point does not appear in a bootstrap sample?
The chance that it is not drawn first is (1-1/N).
Ditto, for each next draw; so raise that quantity to N.
--
Rich Ulrich
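Rich's formula (1 - 1/N)^N is easy to evaluate; a short sketch (illustrative only, the function name is mine) also shows the well-known fact that it approaches 1/e ≈ 0.368 as N grows:

```python
import math

def prob_point_missing(n):
    """P(a fixed data point is absent from a bootstrap sample of size n):
    each of the n independent draws misses it with probability 1 - 1/n,
    so all n draws miss it with probability (1 - 1/n)**n."""
    return (1 - 1 / n) ** n

for n in (10, 100, 10_000):
    print(n, prob_point_missing(n))
print("limit 1/e =", math.exp(-1))  # the values approach 1/e as n grows
```

For n = 10 this gives 0.9^10 ≈ 0.349, already close to the limit.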
2) If we have n data points, what is the probability that a given
data point does not appear in a bootstrap sample?
The chance that it is not drawn first is (1-1/N).
Ditto, for each next draw; so raise that quantity to N.
--
Rich Ulrich
I don't understand the 2nd part, where Professor Ulrich said to raise
that quantity to N. I understand that the probability of getting a
point in one draw is 1/N, and of not getting it is 1 - 1/N, but I
don't understand what we mean by raising it to N.
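The "raise to N" step comes from the N draws being independent: the sample misses the point only if every single draw misses it, and independent probabilities multiply, giving (1 - 1/N) x ... x (1 - 1/N) = (1 - 1/N)^N. A quick Monte Carlo sketch (my own illustration, not from the thread) confirms this:

```python
import random

def simulate_missing(n, trials=200_000, seed=0):
    """Monte Carlo estimate of the probability that data point 0 is
    absent from a bootstrap sample of size n (n draws with replacement
    from n points, labelled 0..n-1)."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(trials):
        # The sample misses point 0 only if every one of the n independent
        # draws avoids it; each draw avoids it with probability 1 - 1/n,
        # which is why the exact answer is (1 - 1/n)**n.
        if all(rng.randrange(n) != 0 for _ in range(n)):
            misses += 1
    return misses / trials

estimate = simulate_missing(10)
exact = (1 - 1 / 10) ** 10  # = 0.9**10, about 0.3487
```

The simulated frequency lands within sampling error of the exact value, matching Rich's answer.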