Markov chain
{{subpages}}
A '''Markov chain''' is a [[Markov process]] with a discrete time parameter<ref name="Neal1993">Neal, R.M. (1993). ''Probabilistic Inference using Markov Chain Monte Carlo Methods''. Technical Report CRG-TR-93-1. Department of Computer Science, University of Toronto. http://www.cs.toronto.edu/~radford/review.abstract.html</ref>. The Markov chain is a useful way to model systems with no long-term memory of previous states. That is, the state of the system at time <math>t + 1</math> is solely a function of the state at time <math>t</math>, and not of any previous states<ref name="Lee2004">Lee, Peter M. (2004). ''Bayesian Statistics: An Introduction''. New York: Hodder Arnold. 368 p.</ref>.
==A Formal Model==
The influence of the values of <math>X^{\left(t\right)}</math> for <math>t \in E</math> on the distribution of <math>X^{\left(n\right)}</math> can be formally modelled as:
{|border="0" cellpadding="5"
|valign="middle" align="left"|
|valign="middle" align="left"|<math>\Pr\left(X^{\left(n\right)} = x \mid X^{\left(t\right)} = x_t\ \forall t \in E\right) = \Pr\left(X^{\left(n\right)} = x \mid X^{\left(\max E\right)} = x_{\max E}\right)</math>
|valign="middle" align="right"|''Eq. 1''
|}
In this model, <math>E</math> is any desired subset of the series <math>\left\{ 0, \ldots, n-1 \right\}</math>. These <math>t</math> indexes commonly represent the time component, and the range of <math>X^{\left(t\right)}</math> is the Markov chain's ''state space''<ref name="Neal1993" />. For example, taking <math>E = \left\{ 0, \ldots, n-1 \right\}</math>, the distribution of <math>X^{\left(n\right)}</math> given the entire history depends only on the most recent state <math>X^{\left(n-1\right)}</math>.
==Probability Density==
The Markov chain can also be specified using a series of probabilities. If the initial probability of the state <math>x</math> is <math>p_{\left(0\right)}\left(x\right)</math>, then the probability of state <math>y</math> occurring at time <math>n + 1</math> can be expressed as:
{|border="0" cellpadding="5" | {|border="0" cellpadding="5" | ||
|valign="middle" align="left"| | |valign="middle" align="left"| | ||
|valign="middle" align="left"|<math>p_{\left(n+1\right)}\left(y\right) = \sum_{x} p_{\left(n\right)}\left(x\right | |valign="middle" align="left"|<math>p_{\left(n+1\right)}\left(y\right) = \sum_{x} p_{\left(n\right)}\left(x\right) p\left(y \mid x\right)</math> | ||
|valign="middle" align="right"|''Eq. 2'' | |valign="middle" align="right"|''Eq. 2'' | ||
|} | |} | ||
In words, this states that the probability of the system entering state <math>y</math> at time <math>n + 1</math> is the sum, over all states <math>x</math>, of the products of the probability density at time <math>n</math> and the transition probability of state <math>y</math> given state <math>x</math><ref name="Lee2004" />.
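
As a concrete sketch of Eq. 2, with illustrative numbers not taken from the cited sources, the update is a sum of products over the current states:
<syntaxhighlight lang="python">
# Hypothetical two-state example of Eq. 2: p_(n+1)(y) = sum_x p_(n)(x) p(y|x).
p_n = [0.7, 0.3]          # p_(n)(x) over states {0, 1} (assumed values)
P = [[0.9, 0.1],          # P[x][y] = p(y | x); each row sums to 1
     [0.5, 0.5]]

p_next = [sum(p_n[x] * P[x][y] for x in range(2)) for y in range(2)]
print(p_next)             # [0.78, 0.22] -- p_(n+1)(y), still sums to 1
</syntaxhighlight>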
==Invariant Distributions==
In many cases, the density will approach a limit <math>p\left(y\right)</math> that is uniquely determined by the transition probabilities <math>p\left(y \mid x\right)</math> (and not by the initial density <math>p_{\left(0\right)}\left(x\right)</math>). This limiting distribution is referred to as the ''invariant'' (or ''stationary'') distribution over the states of the Markov chain. Once such a distribution is reached, it persists forever<ref name="Lee2004" />.
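
An invariant distribution is a fixed point of Eq. 2, i.e. <math>p\left(y\right) = \sum_{x} p\left(x\right) p\left(y \mid x\right)</math>. The sketch below, continuing the illustrative two-state example above (not from the cited sources), iterates Eq. 2 until the density settles:
<syntaxhighlight lang="python">
# Hypothetical two-state example: iterate Eq. 2 to approach the invariant
# distribution. For this P the limit is [5/6, 1/6] from any starting density.
P = [[0.9, 0.1],
     [0.5, 0.5]]

p = [1.0, 0.0]            # assumed initial density; the limit does not depend on it
for _ in range(1000):
    p = [sum(p[x] * P[x][y] for x in range(2)) for y in range(2)]
print(p)                  # approx [0.8333, 0.1667]
</syntaxhighlight>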
==References==
<references />