Understanding Bernoulli Trials & Related Distributions: Binomial, Geometric, & Negative Binomial (Study notes, Probability and Statistics)

These notes describe Bernoulli trials, a type of experiment with two possible outcomes, and their connection to the binomial, geometric, and negative binomial distributions. They cover the rules and parameters of each distribution, providing examples and computations using R. Students will gain a solid understanding of these concepts, which are essential for advanced statistics and probability courses.


Bernoulli Trials and Related Distributions

A single Bernoulli trial is an experiment with two possible outcomes, S and F, such that P(S) = p and P(F) = 1 – p = q.

In practice, the parameter p is often unknown.

Note: Use of p instead of a Greek letter is a violation of the usual convention. Some books use π, but that can lead to confusion with π = 3.14159.

If we have several Bernoulli trials in an experiment, we will assume they are independent and that p is the same on each trial.

Bernoulli trials are the basis of four families of distributions: Bernoulli, Binomial, Geometric, and Negative binomial.

A Bernoulli distribution is simply based on one trial:

P(X = 0) = q, P(X = 1) = p. E(X) = 0(q) + 1(p) = p.
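
For illustration, here is a small sketch in R of the Bernoulli pmf and its mean (the value p = 1/3 used here is just an example, matching the biased coin considered below):

  p <- 1/3; q <- 1 - p                          # illustrative value of p
  x <- c(0, 1)                                  # possible values of X
  probs <- c(q, p)                              # P(X = 0) = q, P(X = 1) = p
  sum(x * probs)                                # E(X) = p
  sample(x, 10, replace = TRUE, prob = probs)   # ten simulated Bernoulli trials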

Now we look at Binomial Distributions.

Binomial Experiment.

Consists of a known number n of Bernoulli trials.

The random variable of interest is X = # of Ss in n trials.

Explicitly, the rules for a binomial experiment are:

  1. Trials independent.
  2. Two possible outcomes S and F on each trial.
  3. P(S) = p is the same on each trial.
  4. Known number n of trials.
  5. The binomial random variable is X = # of Ss.

Rules 1-3 simply describe repeated Bernoulli trials. Rules 4 & 5 describe how Bernoulli trials are used in a binomial experiment.
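
A minimal R sketch of a binomial experiment, run as individual Bernoulli trials (n = 5 and p = 1/3 are illustrative values, matching the BINOM(5, 1/3) example below):

  set.seed(1)                              # for reproducible simulation
  n <- 5; p <- 1/3                         # illustrative parameters
  trials <- rbinom(n, size = 1, prob = p)  # one S/F result per trial (1 = S)
  X <- sum(trials)                         # X = # of Ss in the n trials
  rbinom(1, size = n, prob = p)            # the same experiment in a single call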

The parameters of a binomial experiment are n and p.

In statistical practice, often n is known and p is not.

Both parameters must be known in order to find the distribution of X.

A Bernoulli distribution is a special case of a binomial distribution in which n = 1.
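
This can be checked in R, where the binomial pmf with size = 1 reduces to the Bernoulli pmf (p = 1/3 again as an illustrative value):

  p <- 1/3
  dbinom(0:1, size = 1, prob = p)   # returns q and p, the Bernoulli probabilities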

Here are the graphs for the distribution BINOM(5, 1/3) that arises from tossing a biased coin with p = 1/3 in n = 5 trials.

Notice that the table in the back of WMS does not give this distribution.

Show that the mean of this distribution BINOM(5, 1/3) is 1.6667.
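
One way, sketched in R, is to tabulate the pmf of BINOM(5, 1/3), draw a rough version of the graphs above, and compute the mean directly; it agrees with np = 5/3 ≈ 1.6667:

  n <- 5; p <- 1/3
  x <- 0:n
  px <- dbinom(x, size = n, prob = p)                         # pmf of BINOM(5, 1/3)
  round(px, 4)
  barplot(px, names.arg = x, xlab = "x", ylab = "P(X = x)")   # bar chart of the pmf
  sum(x * px)                                                 # 1.6667, which equals n * p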

Examples of distributions that are not binomial. Each of the rules for a binomial experiment is important. Different families of distributions result when rules fail.

A. Suppose there is no fixed n. We toss a fair coin until we see the first Head. X = the number of the trial on which we see the first Head. This is an example of the Geometric Distribution.
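
A quick R sketch for this geometric example (fair coin, p = 1/2): note that R's dgeom counts the failures before the first success, so the trial number x of the first Head corresponds to x - 1 failures.

  p <- 1/2                    # fair coin
  x <- 1:5                    # trial on which the first Head appears
  dgeom(x - 1, prob = p)      # P(X = x) = q^(x-1) * p: 1/2, 1/4, 1/8, ...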

B. Suppose there is no fixed n. We toss a fair coin until we see the 3rd Head. X = the number of the trial on which we see the 3rd Head. This is an example of the Negative Binomial Distribution.
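
Similarly for the negative binomial example (fair coin again): R's dnbinom gives the probability of a given number of failures before the 3rd success, so the 3rd Head landing on trial t corresponds to t - 3 failures.

  p <- 1/2                             # fair coin
  t <- 3:8                             # trial on which the 3rd Head appears
  dnbinom(t - 3, size = 3, prob = p)   # P(X = t)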

C. An urn contains 8 balls, 4 marked S and 4 marked F. We withdraw 5 balls without replacement. The draws are not Bernoulli trials because they are dependent. On each draw, P(S) = 1/2 (unconditionally on the past draws). X = # of Ss drawn. But the distribution of X is not BINOM(5, 1/2); in particular, P(X = 0) = P(X = 5) = 0. This is an example of the Hypergeometric Distribution.
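
A sketch of the hypergeometric probabilities for this urn in R, using dhyper; it confirms that P(X = 0) = P(X = 5) = 0 because only 4 balls of either type are available:

  # dhyper(x, m, n, k): m = # of Ss in the urn, n = # of Fs, k = # drawn
  dhyper(0:5, m = 4, n = 4, k = 5)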

D. A process begins in the AM with P(S) = .99, but during the day P(S) gradually shifts to .95. We sample n = 5 items during the day. The number of Ss observed is not binomial because p is not the same on all trials. (This is a messy situation; there is no "named" distribution to model it.)

Copyright © 2004 by Bruce E. Trumbo. All rights reserved. Department of Statistics, CSU Hayward.

WMS = Wackerly, et al.: Mathematical Statistics with Applications, 6th ed., Duxbury, 2002.