Probability distribution
A random variable has a probability distribution, which assigns a probability to each of its possible values and describes the results we expect when an experiment is repeated many times. As a simple example, consider a coin-toss experiment. Although we cannot predict the result of any individual toss, we expect the results to average out to heads half the time and tails half the time (assuming a fair coin).
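To make this averaging concrete, here is a minimal Python sketch that simulates repeated tosses of a fair coin; the function name coin_toss_experiment and the specific toss counts are illustrative choices, not part of the article.

 import random

 def coin_toss_experiment(num_tosses, seed=0):
     """Simulate num_tosses fair coin tosses and return the fraction of heads."""
     rng = random.Random(seed)
     heads = sum(1 for _ in range(num_tosses) if rng.random() < 0.5)
     return heads / num_tosses

 # The fraction of heads should settle near 1/2 as the number of tosses grows.
 for n in (10, 100, 10_000, 1_000_000):
     print(n, coin_toss_experiment(n))

As the number of tosses increases, the printed fraction of heads approaches 0.5, matching the fair-coin expectation described above.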
The following are several important probability distributions:
Bernoulli - Each experiment yields 1 with probability p or 0 with probability 1-p. For example, when tossing a fair coin you can assign the value 1 to heads and 0 to tails; because the coin is fair, p = 1/2 for heads and 1-p = 1/2 for tails, so after many tosses you would expect roughly equal numbers of heads and tails (see the sampling sketch after this list).
Binomial
Geometric
Negative Binomial
Poisson
Uniform
Exponential
Gaussian (or normal)
Gamma
Rayleigh
Cauchy
Laplacian
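As a rough illustration of how several of the distributions in this list can be explored in practice, the following sketch assumes NumPy is available and draws samples from a few of them, comparing each sample mean with the corresponding theoretical mean; the parameter values (p = 0.3, lambda = 4, and so on) are arbitrary choices made for the example.

 import numpy as np

 rng = np.random.default_rng(seed=42)
 n = 100_000  # number of samples drawn per distribution

 # (name, samples, theoretical mean) for a few of the distributions above;
 # a Bernoulli(p) sample is drawn as a binomial with a single trial.
 cases = [
     ("Bernoulli(p=0.3)",      rng.binomial(1, 0.3, size=n),   0.3),
     ("Binomial(n=10, p=0.3)", rng.binomial(10, 0.3, size=n),  3.0),
     ("Poisson(lambda=4)",     rng.poisson(4.0, size=n),       4.0),
     ("Uniform(0, 1)",         rng.uniform(0.0, 1.0, size=n),  0.5),
     ("Exponential(scale=2)",  rng.exponential(2.0, size=n),   2.0),
     ("Normal(mu=1, sigma=2)", rng.normal(1.0, 2.0, size=n),   1.0),
 ]

 for name, samples, mean in cases:
     print(f"{name:24s} sample mean = {samples.mean():.4f} (theory: {mean})")

Each sample mean comes out close to the theoretical mean, again reflecting the idea that a probability distribution describes the long-run behaviour of repeated experiments.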