If the initial distribution is a stationary distribution, then $X_0, X_1, X_2, \ldots$ is a sequence of identically distributed random variables. This does not, however, mean that the random variables are independent.
Lemma 3.1: Limiting Distributions are Stationary Distributions
The converse is false. Counterexample:
$$P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \pi = \left(\tfrac{1}{2}, \tfrac{1}{2}\right).$$
Here $\pi$ is stationary, but $P^n$ alternates between $P$ and the identity matrix, so the chain has no limiting distribution.
3.2.2 Regular Matrices
A natural question to ask: under what conditions does a Markov chain have a limiting distribution, and when is that limiting distribution the stationary distribution?
Markov chains satisfying the following property meet this requirement.
Regular Transition Matrix: A transition matrix $P$ is said to be regular if some power of $P$ is positive. That is, $P^n > 0$ for some $n \geq 1$.
We then have the following theorem:
Theorem 3.2: A Markov chain whose transition matrix is regular has a limiting distribution, which is the unique, positive, stationary distribution of the chain.
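As a quick numerical illustration (a minimal sketch; the matrix below is an invented example, not one from the text), we can check regularity directly and watch $P^n$ converge to a matrix with identical rows, each equal to the stationary distribution:

```python
import numpy as np

# An invented 3-state transition matrix (rows sum to 1).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
])

# Regularity check: some power of P is strictly positive.
# Here P itself is already positive, so P is regular.
print((P > 0).all())  # True

# By Theorem 3.2, P^n converges to a matrix whose rows are all
# equal to the unique stationary distribution.
Pn = np.linalg.matrix_power(P, 50)
print(Pn)  # each row approximates the limiting distribution

pi = Pn[0]
print(np.allclose(pi @ P, pi))  # True: pi satisfies pi P = pi
```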
Stationary Distribution for Random Walk on a Weighted Graph
Let $G$ be a weighted graph with edge weight function $w(i, j)$. For random walk on $G$, the stationary distribution is proportional to the sum of the edge weights incident to each vertex. That is,
$$\pi_v = \frac{w(v)}{\sum_{z \in V} w(z)}, \quad \text{where } w(v) = \sum_{u : u \sim v} w(u, v).$$
On a simple graph:
Stationary Distribution for Simple Random Walk on a Graph
For simple random walk on a graph, set $w(i, j) = 1$ for all edges; then $w(v) = \deg(v)$, which gives
$$\pi_v = \frac{\deg(v)}{\sum_{z \in V} \deg(z)} = \frac{\deg(v)}{2|E|}.$$
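A small sanity check (a sketch with a made-up graph): build the random walk's transition matrix and compare against the degree formula. The weighted case is identical with the adjacency matrix replaced by the weight matrix.

```python
import numpy as np

# Adjacency matrix of a small simple graph: 4 vertices, 4 edges.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

deg = A.sum(axis=1)             # vertex degrees: (2, 2, 3, 1)
P = A / deg[:, None]            # simple random walk: P(i,j) = 1/deg(i)

pi = deg / deg.sum()            # claimed stationary dist: deg(v) / 2|E|
print(pi)                       # [0.25, 0.25, 0.375, 0.125]
print(np.allclose(pi @ P, pi))  # True: pi is stationary
```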
Ex 3.9-3.10: worked examples of how to compute this.
3.2.4 The Eigenvalue Connection
Transposing $\pi P = \pi$ gives $P^{\mathsf{T}} \pi^{\mathsf{T}} = \pi^{\mathsf{T}}$, which reveals the connection with eigenvalues: the stationary distribution is a left eigenvector of $P$ with eigenvalue 1.
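This gives another way to compute $\pi$: take an ordinary eigendecomposition of $P^{\mathsf{T}}$ and normalize the eigenvector for eigenvalue 1 (a sketch, reusing the invented matrix from above):

```python
import numpy as np

P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
])

# pi P = pi  <=>  P^T pi^T = pi^T, so pi is an ordinary
# eigenvector of P^T with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(vals - 1))   # locate the eigenvalue closest to 1
pi = np.real(vecs[:, k])
pi = pi / pi.sum()                # normalize to a probability vector
print(pi, np.allclose(pi @ P, pi))
```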
Ex 3.10: a theoretical example: random walk on a regular graph (where the stationary distribution is uniform).
3.3 Can you find the way to state $a$?
3.3.1 Accessibility and Communication of States
Say that state $j$ is accessible from state $i$ if $P^n(i, j) > 0$ for some $n \geq 0$. That is, there is positive probability of reaching $j$ from $i$ in a finite number of steps. States $i$ and $j$ communicate if $j$ is accessible from $i$ and $i$ is accessible from $j$.
Ex 3.11: this example shows how transition graphs display communication classes.
3.3.2 Irreducibility
Irreducibility: A Markov chain is called irreducible if it has exactly one communication class. That is, all states communicate with each other.
Ex 3.12: an example of an irreducible chain.
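Accessibility and irreducibility can be tested mechanically: on a finite state space, $j$ is accessible from $i$ iff $(I + P + \cdots + P^{n-1})(i, j) > 0$. A sketch (the chain below is invented, with an absorbing state that breaks irreducibility):

```python
import numpy as np

# Two communication classes: {0, 1} and {2}. State 2 is accessible
# from 0 and 1, but nothing is accessible from 2 except itself.
P = np.array([
    [0.5, 0.4, 0.1],
    [0.3, 0.6, 0.1],
    [0.0, 0.0, 1.0],
])

n = P.shape[0]
# Accessibility: paths of length < n suffice on n states.
M = sum(np.linalg.matrix_power(P, k) for k in range(n))
reach = M > 0                       # reach[i, j]: j accessible from i

communicate = reach & reach.T       # accessibility in both directions
print(communicate)
print("irreducible:", communicate.all())  # False for this chain
```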
3.3.3 Recurrence and Transience
Given a Markov chain $(X_n)$, let $T_j = \min\{n > 0 : X_n = j\}$ be the first passage time to state $j$. If $X_n \neq j$ for all $n > 0$, set $T_j = \infty$. Let $f_j = P(T_j < \infty \mid X_0 = j)$ be the probability that the chain started in $j$ eventually returns to $j$.
State $j$ is said to be recurrent if the Markov chain started in $j$ eventually revisits $j$. That is, $f_j = 1$.
State $j$ is said to be transient if there is positive probability that the Markov chain started in $j$ never returns to $j$. That is, $f_j < 1$.
Theorem 3.3: The states of a communication class are either all recurrent or all transient. Corollary 3.4: For a finite irreducible Markov chain, all states are recurrent.
Ex 3.13: the next example is simple one-dimensional random walk; it can be generalized to higher dimensions.
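Recurrence of the symmetric walk can be felt numerically (a hedged Monte Carlo sketch; the horizons and trial count are illustrative): the fraction of walks returning to 0 within a finite horizon creeps toward 1 as the horizon grows, slowly, because the return time has infinite mean.

```python
import numpy as np

rng = np.random.default_rng(0)

def return_fraction(horizon, trials=10_000):
    """Fraction of symmetric walks from 0 revisiting 0 within `horizon` steps."""
    steps = rng.choice((-1, 1), size=(trials, horizon))
    paths = steps.cumsum(axis=1)            # positions after each step
    return (paths == 0).any(axis=1).mean()  # did the walk hit 0 again?

for h in (10, 100, 1000):
    print(h, return_fraction(h))            # increases toward 1 with h
```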
3.3.4 Canonical Decomposition
Closed Communication Class: A communication class $C$ is closed if no state outside $C$ is accessible from any state in $C$.
Lemma 3.5: A communication class is closed if it consists of all recurrent states. A finite communication class is closed only if it consists of all recurrent states.
This is proved by contradiction. With it, we can finally state the decomposition we are after:
The state space $S$ of a finite Markov chain can be partitioned into transient and recurrent states as $S = T \cup R_1 \cup \cdots \cup R_m$, where $T$ is the set of all transient states and $R_1, \ldots, R_m$ are closed communication classes of recurrent states. This is called the canonical decomposition.
Note: since communication classes are equivalence classes, reordering the states of the transition matrix in this way yields a matrix equivalent to the original.
Given a canonical decomposition, the state space can be reordered so that the Markov transition matrix has the block matrix form
$$P = \begin{pmatrix} P_1 & 0 & \cdots & 0 & 0 \\ 0 & P_2 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & P_m & 0 \\ * & * & \cdots & * & Q \end{pmatrix},$$
where $P_1, \ldots, P_m$ are the transition matrices of the recurrent classes $R_1, \ldots, R_m$, $Q$ holds the transitions among the transient states, and $*$ marks possibly nonzero transitions from transient states into the recurrent classes.
Ex 3.14: a concrete case.
Going further, powers of $P$ preserve this block structure: $P^n$ has diagonal blocks $P_1^n, \ldots, P_m^n$ and $Q^n$ in the transient corner.
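A tiny check of that claim (a sketch; the chain is made up, with states already in canonical order):

```python
import numpy as np

# States 0-1 form a closed recurrent class (P1); state 2 is transient.
P = np.array([
    [0.4, 0.6, 0.0],   # recurrent rows: zero probability of leaving
    [0.7, 0.3, 0.0],
    [0.2, 0.3, 0.5],   # transient row: * = (0.2, 0.3), Q = (0.5)
])

for n in (2, 5, 20):
    Pn = np.linalg.matrix_power(P, n)
    # The zeros above Q persist, and the transient corner is exactly
    # Q^n = 0.5**n, which vanishes as n grows.
    print(n, Pn[0, 2], Pn[1, 2], Pn[2, 2], 0.5 ** n)
```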
3.7 Time Reversibility
3.7.1 Definition
The property of time reversibility can be explained intuitively as follows: if you were to take a movie of a Markov chain moving forward in time and then run the movie backwards, you could not tell the difference between the two.
In mathematical language: assume the Markov chain is in its stationary distribution $\pi$. Then the backwards transition probabilities are, by Bayes' rule,
$$P(X_n = j \mid X_{n+1} = i) = \frac{P(X_{n+1} = i \mid X_n = j)\, P(X_n = j)}{P(X_{n+1} = i)} = \frac{\pi_j P_{ji}}{\pi_i},$$
where the denominator $P(X_{n+1} = i) = \sum_k \pi_k P_{ki} = \pi_i$ follows from the law of total probability and stationarity.
Time Reversibility
An irreducible Markov chain with transition matrix $P$ and stationary distribution $\pi$ is time reversible if
$$\pi_i P_{ij} = \pi_j P_{ji} \quad \text{for all } i, j.$$
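Reversibility is easy to test numerically (a sketch; the chain below is simple random walk on the path graph 0 - 1 - 2, which anticipates Ex 3.24): check the detailed balance equations entrywise.

```python
import numpy as np

def is_reversible(P, pi, tol=1e-12):
    """Check detailed balance: pi_i P_ij == pi_j P_ji for all i, j."""
    flow = pi[:, None] * P        # flow[i, j] = pi_i P_ij
    return np.allclose(flow, flow.T, atol=tol)

# Simple random walk on the path graph 0 - 1 - 2.
P = np.array([
    [0.0, 1.0, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 1.0, 0.0],
])
pi = np.array([0.25, 0.5, 0.25])  # degrees (1, 2, 1) / 2|E|
print(is_reversible(P, pi))       # True
```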
Ex 3.23;
Ex 3.24: simple random walk on a graph is time reversible.
3.7.2 Reversible Markov Chains and Random Walks
Every reversible Markov chain can be considered as a random walk on a weighted graph, with edge weights $w(i, j) = \pi_i P_{ij}$.
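The correspondence is concrete: detailed balance makes the weight matrix symmetric, and the random walk on that weighted graph has exactly transition matrix $P$ (a sketch, using the path-graph chain from the previous snippet):

```python
import numpy as np

P = np.array([
    [0.0, 1.0, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 1.0, 0.0],
])
pi = np.array([0.25, 0.5, 0.25])

W = pi[:, None] * P                 # edge weights w(i,j) = pi_i P_ij
print(np.allclose(W, W.T))          # True: symmetric by detailed balance

# The random walk on the weighted graph (normalize rows) recovers P.
print(np.allclose(W / W.sum(axis=1, keepdims=True), P))  # True
```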
Ex 3.25: a computational example.
3.7.3 The Key Benefit of Reversibility
Proposition 3.9: Let $P$ be the transition matrix of a Markov chain. If $\pi$ is a probability distribution which satisfies $\pi_i P_{ij} = \pi_j P_{ji}$ for all $i, j$, then $\pi$ is the stationary distribution, and the Markov chain is reversible.
Ex 3.26 Birth-and-death chain
Case: random walk with a partially reflecting boundary.
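For a birth-and-death chain the detailed balance equations telescope, so Proposition 3.9 yields $\pi$ by a one-pass recursion (a sketch; the up/down probabilities below, including the partially reflecting behaviour at 0, are illustrative, not the book's numbers):

```python
import numpy as np

# Birth-and-death chain on {0, ..., N}: from i move up with prob p[i],
# down with prob q[i], stay with prob 1 - p[i] - q[i].
N = 5
p = np.full(N + 1, 0.4); p[N] = 0.0   # cannot move up from N
q = np.full(N + 1, 0.4); q[0] = 0.0   # partially reflecting at 0:
                                      # move up w.p. 0.4, stay w.p. 0.6

# Detailed balance: pi[i] p[i] = pi[i+1] q[i+1], so ratios telescope.
pi = np.ones(N + 1)
for i in range(N):
    pi[i + 1] = pi[i] * p[i] / q[i + 1]
pi /= pi.sum()                        # normalize to a distribution

# Verify against the full transition matrix.
P = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N: P[i, i + 1] = p[i]
    if i > 0: P[i, i - 1] = q[i]
    P[i, i] = 1 - p[i] - q[i]
print(pi, np.allclose(pi @ P, pi))    # True
```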
3.8 Absorbing Chains
3.8.1 Definition
Absorbing State, Absorbing Chain
State $a$ is an absorbing state if $P_{aa} = 1$. A Markov chain is called an absorbing chain if it has at least one absorbing state.
By this definition, the canonical decomposition of an absorbing Markov chain can be written as
$$P = \begin{pmatrix} I & \mathbf{0} \\ R & Q \end{pmatrix},$$
where $I$ is the identity over the absorbing states, $Q$ contains the transitions among transient states, and $R$ the transitions from transient to absorbing states.
Ex 3.31
3.8.2 Expected Number of Visits to Transient States
Theorem 3.11: Consider an absorbing Markov chain with $t$ transient states. Let $F$ be the $t \times t$ matrix indexed by transient states, where $F_{ij}$ is the expected number of visits to $j$ given that the chain starts in $i$. Then
$$F = (I - Q)^{-1},$$
called the fundamental matrix of the chain.
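Numerically, $F$ is a single matrix inversion (or linear solve); a minimal sketch with an invented $Q$ block:

```python
import numpy as np

# Transient-to-transient block Q of an absorbing chain with two
# transient states (each row's missing 0.3 goes to absorbing states).
Q = np.array([
    [0.2, 0.5],
    [0.4, 0.3],
])

# Fundamental matrix F = (I - Q)^{-1}: F[i, j] is the expected number
# of visits to transient state j starting from transient state i.
F = np.linalg.inv(np.eye(2) - Q)
print(F)
```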
3.8.3 Expected Time to Absorption
Absorbing Markov Chains: Consider an absorbing Markov chain with all states either transient or absorbing, and let $F = (I - Q)^{-1}$ be the fundamental matrix. 1. (Absorption probability) The probability that from transient state $i$ the chain is absorbed in absorbing state $j$ is $(FR)_{ij}$. 2. (Absorption time) The expected number of steps from transient state $i$ until the chain is absorbed in some absorbing state is the $i$th row sum of $F$, i.e. $(F\mathbf{1})_i$.
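Putting the pieces together for gambler's ruin on $\{0, \ldots, 4\}$ with fair bets (a sketch; this classic chain is my choice of example): states 0 and 4 are absorbing, states 1-3 transient.

```python
import numpy as np

Q = 0.5 * np.array([               # transient -> transient (states 1,2,3)
    [0, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
])
R = 0.5 * np.array([               # transient -> absorbing (columns: 0, 4)
    [1, 0],
    [0, 0],
    [0, 1],
])

F = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
print(F @ R)           # absorption probabilities: rows (3/4,1/4), (1/2,1/2), (1/4,3/4)
print(F.sum(axis=1))   # expected absorption times: 3, 4, 3
```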