Bootstrap and Jackknifing sampling Methods, Lecture notes of Statistics

Brief introduction about some re-sampling methods

Typology: Lecture notes (2018/2019), uploaded on 03/26/2019 by om-parkash-sheoran

Resampling Methods
In statistics, classical hypothesis testing requires that the sample drawn
from the population is normally distributed, or more generally that the sampling distribution of the test statistic is known.
Although many statistics and hypothesis tests are claimed to be reasonably robust, situations
arise in which the distribution of a statistic is unknown or no normal-theory approach
is available. The alternative in such cases is resampling methods.
Resampling statistics refers to the use of the observed data, or of a data-generating
mechanism, to produce new hypothetical samples (resamples) that mimic or simulate the
underlying population. The samples so generated are then analysed to draw inferences.
Resampling techniques are widely used in many disciplines, such as the life sciences,
where parametric approaches are difficult to employ or otherwise do not apply.
Resampled data are derived by a mechanical procedure that simulates many pseudo-trials.
These approaches were difficult to use before the 1980s because they require many
repetitions. With computers, the trials can be simulated in a few minutes,
and the methods have therefore come into wide use.
Uses of Resampling Techniques
The most practical use of resampling methods is to derive confidence intervals
and test hypotheses. This is accomplished by drawing simulated samples
(resamples) from the data themselves, or from a reference distribution based on
the data, and then observing how the statistic of interest behaves in these
resamples.
Resampling approaches can substitute for traditional statistical
approaches, or be used when a traditional approach is difficult to apply.
These methods are widely used because of their ease of use.
They generally require minimal mathematical formulas and only a small
amount of mathematical (algebraic) knowledge.
They are easy to understand and help you avoid choosing an
incorrect formula in your analysis.
Application of Resampling Techniques
Analysis of Null models, competition, and community structure
Detecting Density Dependence
Characterizing Spatial Patterns and Processes
Estimating Population Size and Vital Rates
Environmental Modeling
Evolutionary Processes and Rates
Phylogeny Analysis
Types of Resampling Methods
I. Monte Carlo Simulation – This method derives data from a mechanism (such as a
model of the population) that represents the process you wish to understand. It produces
new samples of simulated data, which can be examined as possible results. After many
repetitions, a Monte Carlo test produces a p-value that can be interpreted as an error rate;
increasing the number of repetitions sharpens the critical region.
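As a sketch, the Monte Carlo recipe above can be written in a few lines of Python. The helper names, the Normal(0, 1) null model, and the add-one correction are illustrative choices, not from the notes:

```python
import random
import statistics

def monte_carlo_p_value(observed_stat, simulate_sample, statistic, reps=10_000, seed=0):
    """Approximate a one-sided p-value by simulating the statistic under a null model."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        sample = simulate_sample(rng)
        if statistic(sample) >= observed_stat:
            hits += 1
    # Add-one correction keeps the estimate a valid p-value even when hits == 0.
    return (hits + 1) / (reps + 1)

def null_sample(rng):
    # Hypothetical null model: 20 observations from a Normal(0, 1) population.
    return [rng.gauss(0.0, 1.0) for _ in range(20)]

# How often does the simulated sample mean reach the observed value 0.5?
p = monte_carlo_p_value(0.5, null_sample, statistics.mean, reps=2000)
```

More repetitions reduce the simulation noise in `p`, which is the sense in which repeats "sharpen the critical region".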
II. Randomization (Permutation) Test – This is a type of statistical significance test in which
a reference distribution is obtained by calculating all possible values of the test statistic under
rearrangements of the labels on the observed data points. Like the bootstrap and the Monte Carlo approach, permutation methods for significance testing also produce exact p-values. These tests are the oldest, simplest, and most common form of resampling tests and are suitable whenever the null hypothesis makes all permutations of the observed data equally likely. In this method, data are reassigned randomly without replacement. Such tests are usually based on the Student t and Fisher's F statistics, and most non-parametric tests are based on permutations of the rank orderings of the data. The method has become practical because of computers; without them, it may be impossible to enumerate all the possible permutations. It should be employed when you are dealing with an unknown distribution.

III. Bootstrapping – This approach is based on the fact that all we know about the underlying population is what we have observed in our sample. Now the most widely used resampling method, it estimates the sampling distribution of an estimator by sampling with replacement from the original sample, most often with the purpose of deriving robust estimates of standard errors and confidence intervals for a population parameter. Like all Monte Carlo based methods, this approach can be used to construct confidence intervals and to test hypotheses. It is useful for sidestepping problems with non-normality or when the distribution parameters are unknown, and it can also be used to calculate an appropriate sample size for experimental design.

IV. Jackknife – This method is used in statistical inference to estimate the bias and standard error of a statistic when a random sample of observations is used to calculate it. It provides a systematic method of resampling with a mild amount of calculation, and it offers an "improved" estimate of the sample parameter with less sampling bias.
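The randomization test of II above — random reassignment of group labels without replacement — can be sketched as follows. The two groups and the helper name are hypothetical:

```python
import random
import statistics

def permutation_p_value(group_a, group_b, reps=5000, seed=1):
    """Two-sided permutation test for a difference in means.
    Labels are reshuffled without replacement, as in a randomization test."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(reps):
        rng.shuffle(pooled)  # random reassignment of labels, no replacement
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (reps + 1)

# Hypothetical data: two small treatment groups.
a = [5.1, 4.9, 6.2, 5.8, 5.5]
b = [4.2, 4.0, 4.4, 4.6, 4.1]
p = permutation_p_value(a, b)
```

This sketch samples random rearrangements rather than enumerating all of them; with small samples the full enumeration (here C(10,5) = 252 splits) would give the exact p-value.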
The basic idea behind the jackknife estimator lies in systematically recomputing the statistic, leaving out one observation at a time from the sample. From these new "improved" sample statistics, the bias and variance of the statistic can be estimated.

Bootstrapping
The bootstrap was given by Efron (1979). It is a data-based simulation technique for drawing statistical inference. The method is based on the idea of computing the statistic of interest and estimating its sampling distribution without any distributional assumption. It is described as follows.

Suppose $x = (x_1, x_2, \ldots, x_n)$ is a given sample. A bootstrap sample $x^{*} = (x^{*}_1, \ldots, x^{*}_n)$ is drawn by random sampling $n$ times with replacement from the original sample. For example, if $n = 5$, the bootstrap samples are

$$x^{*1}, x^{*2}, \ldots, x^{*B},$$

each consisting of five values drawn with replacement from $(x_1, \ldots, x_5)$, where $B$ is the number of bootstrap samples. Let us suppose that $\theta$ is the population parameter of interest, estimated by the statistic $\hat{\theta} = s(x)$; then on the first bootstrap sample

$$\hat{\theta}^{*1} = s(x^{*1}),$$

and in a similar way we have $\hat{\theta}^{*2} = s(x^{*2})$, $\hat{\theta}^{*3} = s(x^{*3})$, \ldots, where $\hat{\theta}^{*1}, \hat{\theta}^{*2}, \ldots$ are called bootstrap replications. In general,

$$\hat{\theta}^{*b} = s(x^{*b}), \qquad b = 1, 2, \ldots, B.$$
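The bootstrap replications $\hat{\theta}^{*b}$ can be computed with a short sketch. The sample values, $B$, and the percentile-interval construction are illustrative choices, not from the notes:

```python
import random
import statistics

def bootstrap_replications(x, stat, B=2000, seed=2):
    """Draw B bootstrap samples (with replacement) and return the
    replications theta*_b = s(x*_b)."""
    rng = random.Random(seed)
    n = len(x)
    return [stat([x[rng.randrange(n)] for _ in range(n)]) for _ in range(B)]

x = [2.1, 3.4, 1.9, 4.2, 3.3, 2.8, 3.9, 2.5]  # hypothetical sample
reps = bootstrap_replications(x, statistics.mean)

se_boot = statistics.stdev(reps)  # bootstrap standard error of the mean
reps.sort()
# 95% percentile confidence interval from the sorted replications.
ci = (reps[int(0.025 * len(reps))], reps[int(0.975 * len(reps))])
```

The spread of the replications estimates the sampling distribution of the statistic, which is how the bootstrap yields standard errors and confidence intervals without distributional assumptions.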
Jackknife
Let $x_{(i)} = (x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n)$ denote the $i$th jackknife sample, i.e. the original sample with the $i$th observation removed. The $i$th jackknife replication is defined as the value of the estimator $s(\cdot)$ evaluated at the $i$th jackknife sample:

$$\hat{\theta}_{(i)} = s(x_{(i)}), \qquad i = 1, 2, \ldots, n.$$

The jackknife standard error is defined as

$$\widehat{se}_{\text{jack}} = \sqrt{\frac{n-1}{n} \sum_{i=1}^{n} \left( \hat{\theta}_{(i)} - \hat{\theta}_{(\cdot)} \right)^{2}},$$

where $\hat{\theta}_{(\cdot)}$ is the average of the jackknife replications and is given by

$$\hat{\theta}_{(\cdot)} = \frac{1}{n} \sum_{i=1}^{n} \hat{\theta}_{(i)}.$$
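A minimal sketch of the jackknife standard error, using the same hypothetical sample as before. For the sample mean this formula reproduces the familiar $s/\sqrt{n}$ exactly:

```python
import math
import statistics

def jackknife_se(x, stat):
    """Jackknife standard error: recompute the statistic leaving one
    observation out at a time, then apply the (n-1)/n scaling."""
    n = len(x)
    reps = [stat(x[:i] + x[i + 1:]) for i in range(n)]   # theta_(i)
    mean_rep = sum(reps) / n                             # theta_(.)
    return math.sqrt((n - 1) / n * sum((r - mean_rep) ** 2 for r in reps))

x = [2.1, 3.4, 1.9, 4.2, 3.3, 2.8, 3.9, 2.5]  # hypothetical sample
se = jackknife_se(x, statistics.mean)
```

Unlike the bootstrap, the jackknife needs only $n$ recomputations and no random number generation, which is the "mild amount of calculation" mentioned above.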
Jackknife estimates reduce bias
Let $\hat{\theta}$ be the original estimator of $\theta$, which has bias. Hence

$$E(\hat{\theta}) = \theta + \frac{A}{n} + \frac{B}{n^{2}} + O\!\left(\frac{1}{n^{3}}\right),$$

where $A$ and $B$ are functions of $\theta$ (and of the underlying distribution) but not of $n$. Thus, since each jackknife replication is computed from $n-1$ observations,

$$E(\hat{\theta}_{(i)}) = \theta + \frac{A}{n-1} + \frac{B}{(n-1)^{2}} + O\!\left(\frac{1}{n^{3}}\right).$$

Define the pseudo-values $\tilde{\theta}_{i} = n\hat{\theta} - (n-1)\hat{\theta}_{(i)}$. Now the expected value of each pseudo-value becomes

$$E(\tilde{\theta}_{i}) = n\,E(\hat{\theta}) - (n-1)\,E(\hat{\theta}_{(i)}) = \theta + B\left(\frac{1}{n} - \frac{1}{n-1}\right) + O\!\left(\frac{1}{n^{2}}\right) = \theta - \frac{B}{n(n-1)} + O\!\left(\frac{1}{n^{2}}\right).$$

Hence it can be seen that the first-order bias term vanishes. More generally, bias of order $1/n^{k}$ is removed by using the delete-$k$ jackknife.

Variance of the Estimation
The variance of the jackknife estimate is obtained from the jackknife standard error, which is already described above.
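The pseudo-value construction above can be sketched directly. As a check, applying it to the plug-in variance estimator (biased by a factor $(n-1)/n$) recovers the unbiased sample variance exactly; the function names and sample values are illustrative:

```python
def jackknife_bias_corrected(x, stat):
    """Bias-corrected estimate via pseudo-values:
    theta~_i = n*theta_hat - (n-1)*theta_(i); the corrected
    estimate is the average of the pseudo-values."""
    n = len(x)
    theta_hat = stat(x)
    pseudo = [n * theta_hat - (n - 1) * stat(x[:i] + x[i + 1:]) for i in range(n)]
    return sum(pseudo) / n

def biased_var(x):
    """Plug-in variance estimator, biased by a factor (n-1)/n."""
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

x = [2.1, 3.4, 1.9, 4.2, 3.3, 2.8, 3.9, 2.5]
corrected = jackknife_bias_corrected(x, biased_var)
# For the plug-in variance, the correction yields the unbiased sample variance.
```

Here the bias of the plug-in variance is exactly $-\sigma^{2}/n$, i.e. of order $1/n$ with no higher-order terms, so the delete-1 jackknife removes it completely rather than just to order $1/n^{2}$.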