```r
###### Simulate discrete-time Markov chain ######################
# Simulates n steps of a Markov chain
# markov(init, mat, n, labels)
# Generates X0, ..., Xn for a Markov chain with initial
# distribution init and transition matrix mat
# labels can be a character vector of state names; default is 1, ..., k
markov <- function(init, mat, n, labels) {
  if (missing(labels)) labels <- 1:length(init)
  simlist <- numeric(n + 1)
  states <- 1:length(init)
  simlist[1] <- sample(states, 1, prob = init)
  for (i in 2:(n + 1)) {
    simlist[i] <- sample(states, 1, prob = mat[simlist[i - 1], ])
  }
  labels[simlist]
}
####################################################

P <- matrix(c(0.1, 0.2, 0.4, 0.3,
              0.4, 0.0, 0.4, 0.2,
              0.3, 0.3, 0.0, 0.4,
              0.2, 0.1, 0.4, 0.3), nrow = 4, byrow = TRUE)
lab <- c("Aerobics", "Massage", "Weights", "Yoga")
rownames(P) <- lab
colnames(P) <- lab
P
init <- c(1/4, 1/4, 1/4, 1/4)  # initial distribution
states <- c("a", "m", "w", "y")
# simulate the chain for 100 steps
simlist <- markov(init, P, 100, states)
simlist
table(simlist) / 100
steps <- 1000000
simlist <- markov(init, P, steps, states)
table(simlist) / steps
```
3.2 Stationary Distribution
3.2.1 Definition
Note that the definition of a stationary distribution involves no limit.
If the initial distribution is a stationary distribution, then X0, X1, ⋯, Xn is a sequence of identically distributed random variables. This does not mean, however, that the random variables are independent.
Lemma 3.1: A limiting distribution is a stationary distribution.
The converse is false. Counterexamples:

$$P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},$$

whose powers alternate between P and the identity, so no limiting distribution exists even though π = (1/2, 1/2) is stationary; and

$$P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$$

for which every distribution is stationary, so the "limit" depends entirely on the initial distribution.
3.2.2 Regular Matrices
A natural question to ask: under what conditions does a Markov chain have a limiting distribution, with that limiting distribution equal to the stationary distribution?

Markov chains with the following property meet this requirement.
Regular Transition Matrix A transition matrix P is said to be regular if some power of P is positive. That is, P^n > 0 (entrywise) for some n ≥ 1.
This yields the theorem:
Theorem 3.2: A Markov chain whose transition matrix P is regular has a limiting distribution, which is the unique, positive, stationary distribution of the chain.
Ex 3.3–3.4: concrete worked examples.
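A sketch of Theorem 3.2 in Python (NumPy assumed): raising the gym transition matrix P from the R example above to a high power makes every row converge to the same vector, the stationary distribution.

```python
import numpy as np

# Gym transition matrix from the R example above (regular: P^2 > 0).
P = np.array([[0.1, 0.2, 0.4, 0.3],
              [0.4, 0.0, 0.4, 0.2],
              [0.3, 0.3, 0.0, 0.4],
              [0.2, 0.1, 0.4, 0.3]])

# For a regular matrix, every row of P^n converges to the unique
# stationary distribution as n grows.
Pn = np.linalg.matrix_power(P, 50)
pi = Pn[0]          # any row works in the limit
print(np.round(pi, 4))

# Sanity check: pi is stationary, i.e. pi P = pi.
assert np.allclose(pi @ P, pi)
```

The exponent 50 is arbitrary; convergence is geometric at a rate set by the second-largest eigenvalue modulus of P.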
3.2.3 Finding the stationary distribution
In essence, this is an eigenvalue problem.
```r
### Stationary distribution of discrete-time Markov chain
### (uses eigenvectors)
stationary <- function(mat) {
  x <- eigen(t(mat))$vectors[, 1]
  as.double(x / sum(x))
}
```
Ex 3.5–3.6: a computational trick: set x1 = 1, solve for the remaining coordinates, then normalize.
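The x1 = 1 trick can be sketched numerically (Python/NumPy assumed; the matrix reuses the gym example above). Since the equations x(P − I) = 0 are linearly dependent, one of them can be replaced by the pinning condition x1 = 1 to get a uniquely solvable system.

```python
import numpy as np

P = np.array([[0.1, 0.2, 0.4, 0.3],
              [0.4, 0.0, 0.4, 0.2],
              [0.3, 0.3, 0.0, 0.4],
              [0.2, 0.1, 0.4, 0.3]])
n = P.shape[0]

# x(P - I) = 0  <=>  (P - I)^T x = 0.  The rows of this system are
# linearly dependent, so replace the first equation by x_1 = 1.
A = (P - np.eye(n)).T
A[0] = 0.0
A[0, 0] = 1.0
b = np.zeros(n)
b[0] = 1.0

x = np.linalg.solve(A, b)   # unnormalized solution with x_1 = 1
pi = x / x.sum()            # stationary distribution
print(np.round(pi, 4))
assert np.allclose(pi @ P, pi)
```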
Ex 3.7 The Ehrenfest dog-flea model
Ex 3.8 Random walk on a graph;
On a weighted graph
Stationary Distribution for Random Walk on a Weighted Graph
Let G = (V, E) be a weighted graph with edge weight function w(i, j). For random walk on G, the stationary distribution π is proportional to the sum of the edge weights incident to each vertex. That is,

$$\pi_v = \frac{w(v)}{\sum_{u \in V} w(u)},$$

where $w(v) = \sum_{z \sim v} w(v, z)$.
On a simple graph
Stationary Distribution for Simple Random Walk on a Graph
For simple random walk on a graph, specialize the weighted case by setting w(i, j) = 1 for every edge; then w(v) = deg(v), which gives

$$\pi_v = \frac{\deg(v)}{2|E|}.$$
Ex 3.9–3.10: worked examples of these computations.
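A quick numerical check of both boxed results (Python/NumPy; the weight matrix below is a made-up four-vertex example, not one from the text):

```python
import numpy as np

# Symmetric weight matrix of an assumed weighted graph on vertices 0..3;
# W[i, j] = w(i, j), with 0 meaning "no edge".
W = np.array([[0, 2, 1, 0],
              [2, 0, 1, 3],
              [1, 1, 0, 1],
              [0, 3, 1, 0]], dtype=float)

w = W.sum(axis=1)            # w(v) = total edge weight incident to v
P = W / w[:, None]           # random walk: P[i, j] = w(i, j) / w(i)
pi = w / w.sum()             # claimed stationary distribution

assert np.allclose(pi @ P, pi)   # pi is indeed stationary

# Simple random walk: all weights 1, so pi(v) = deg(v) / (2|E|).
A = (W > 0).astype(float)        # unweighted adjacency matrix
deg = A.sum(axis=1)
pi_simple = deg / deg.sum()      # deg.sum() = 2|E|
assert np.allclose(pi_simple @ (A / deg[:, None]), pi_simple)
```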
3.2.4 The Eigenvalue Connection
Transposing πP = π gives Pᵀπᵀ = πᵀ, which exhibits the stationary distribution as an eigenvector of Pᵀ with eigenvalue 1.
Ex 3.10: a theoretical example: random walk on a regular graph.
3.3 Can You Find the Way to State a?
3.3.1 Accessibility and Communication of States
Say that state j is accessible from state i if P^n_{ij} > 0 for some n ≥ 0. That is, there is positive probability of reaching j from i in a finite number of steps. States i and j communicate if i is accessible from j and j is accessible from i.
Ex 3.11: this example uses transition graphs to display communication classes.
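Communication classes can also be computed mechanically from P: take the transitive closure of the one-step relation to get "i can reach j", then intersect it with its reverse. A sketch in Python (NumPy assumed; the 3-state matrix is hypothetical):

```python
import numpy as np

# Hypothetical chain: states 0 and 1 communicate; state 2 can reach them
# but nothing comes back, so {2} is its own class.
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.7, 0.0],
              [0.2, 0.3, 0.5]])
n = P.shape[0]

reach = (P > 0) | np.eye(n, dtype=bool)    # one step (and i reaches i)
for k in range(n):                          # Warshall transitive closure
    reach |= reach[:, [k]] & reach[[k], :]

comm = reach & reach.T                      # i and j communicate
classes = {frozenset(map(int, np.flatnonzero(comm[i]))) for i in range(n)}
print(sorted(sorted(c) for c in classes))   # -> [[0, 1], [2]]
```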
3.3.2 Irreducibility
Irreducibility A Markov chain is called irreducible if it has exactly one communication class. That is, all states communicate with each other.
Ex 3.12: an example of an irreducible chain.
3.3.3 Recurrence and Transience
Given a Markov chain X0, X1, …, let Tj = min {n > 0 : Xn = j} be the first passage time to state j. If Xn ≠ j for all n > 0, set Tj = ∞. Let fj = P(Tj < ∞ | X0 = j) be the probability that the chain started in j eventually returns to j.
State j is said to be recurrent if the Markov chain started in j eventually revisits j; that is, fj = 1.

State j is said to be transient if there is positive probability that the Markov chain started in j never returns to j; that is, fj < 1.
Theorem 3.3 The states of a communication class are either all recurrent or all transient.

Corollary 3.4 For a finite irreducible Markov chain, all states are recurrent.
Ex 3.13: the next example is simple one-dimensional random walk; it generalizes to higher dimensions.
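A rough simulation sketch of fj for simple symmetric random walk on the integers (Python; the cutoff, trial count, and seed are arbitrary choices). Recurrence means f0 = 1, so the estimated probability of returning within the cutoff creeps toward 1 as the cutoff grows, even though the expected return time is infinite.

```python
import random

random.seed(1)

def returned(p=0.5, cutoff=1000):
    """Did the walk from 0 hit 0 again within `cutoff` steps?"""
    x = 0
    for _ in range(cutoff):
        x += 1 if random.random() < p else -1
        if x == 0:
            return True
    return False

trials = 2000
est = sum(returned() for _ in range(trials)) / trials
print(round(est, 3))   # close to (but below) 1 for the symmetric walk
```

With p ≠ 0.5 the walk is transient and the same estimate levels off strictly below 1.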
3.3.4 Canonical Decomposition
Closed Communication Class A communication class C is closed if no state outside C is accessible from C; that is, Pij = 0 whenever i ∈ C and j ∉ C, so once the chain enters C it never leaves.
Lemma 3.5 A communication class is closed if it consists of all recurrent states. A finite communication class is closed only if it consists of all recurrent states.
The proof is by contradiction. With this we obtain the decomposition we set out to define:
The state space S of a finite Markov chain can be partitioned into transient and recurrent states as S = T ∪ R1 ∪ ⋯ ∪ Rm, where T is the set of all transient states and the Ri are closed communication classes of recurrent states. This is called the canonical decomposition.
Note: since the Ri are equivalence classes, reordering the states this way merely permutes rows and columns, giving a transition matrix equivalent to the original.
Given a canonical decomposition, the state space can be reordered so that the Markov transition matrix has the block matrix form

$$P = \begin{pmatrix} P_1 & 0 & \cdots & 0 & 0 \\ 0 & P_2 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & P_m & 0 \\ S_1 & S_2 & \cdots & S_m & Q \end{pmatrix},$$

where $P_k$ is the transition matrix of the chain restricted to the closed class $R_k$, $Q$ holds the transitions among transient states, and $S_k$ holds the transitions from transient states into $R_k$.
3.7 Time Reversibility

The property of time reversibility can be explained intuitively as follows: if you were to take a movie of the Markov chain moving forward in time and then run the movie backwards, you could not tell the difference between the two.
In mathematical language: if the Markov chain is in its stationary distribution, then
P(X0 = i, X1 = j) = P(X0 = j, X1 = i)
Since P(X0 = i, X1 = j) = πiPij at stationarity, this is equivalent to the condition in the following definition.
Time Reversibility
An irreducible Markov chain with transition matrix P and stationary distribution π is called time reversible if πiPij = πjPji for all i, j.
Ex 3.23;
Ex 3.24: simple random walk on a graph is time reversible.
3.7.2 Reversible Markov Chains and Random Walks
Every reversible Markov chain can be considered as a random walk on a weighted graph.
Ex 3.25: worked example.
3.7.3 The key benefit of reversibility
Proposition 3.9 Let P be the transition matrix of a Markov chain. If x is a probability distribution that satisfies xiPij = xjPji for all i, j, then x is the stationary distribution, and the Markov chain is reversible.
Ex 3.26 Birth-and-death chain
Case: random walk with a partially reflecting boundary.
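A sketch of how Proposition 3.9 is used on a birth-and-death chain with partially reflecting boundaries (Python/NumPy; the up/down probabilities are assumed for illustration): guess x from the detailed-balance relations x_i P[i, i+1] = x_{i+1} P[i+1, i], then verify it is stationary.

```python
import numpy as np

# Assumed birth-and-death chain on states 0..3.
p = [0.6, 0.6, 0.6]   # up-probabilities from states 0, 1, 2
q = [0.4, 0.4, 0.4]   # down-probabilities from states 1, 2, 3
n = 4

# Tridiagonal transition matrix; leftover mass on the diagonal
# gives the partially reflecting boundaries.
P = np.zeros((n, n))
for i in range(n - 1):
    P[i, i + 1] = p[i]
    P[i + 1, i] = q[i]
P += np.diag(1 - P.sum(axis=1))

# Detailed balance x_i p_i = x_{i+1} q_i  =>  x_{i+1} = x_i * p_i / q_i.
x = [1.0]
for i in range(n - 1):
    x.append(x[-1] * p[i] / q[i])
pi = np.array(x) / sum(x)

assert np.allclose(pi @ P, pi)   # Proposition 3.9: x is stationary
M = pi[:, None] * P              # M[i, j] = pi_i P_ij
assert np.allclose(M, M.T)       # symmetric <=> chain is reversible
```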
3.8 Absorbing Chains
3.8.1 Definition
Absorbing State, Absorbing Chain
State i is an absorbing state if Pii = 1. A Markov chain is called an absorbing chain if it has at least one absorbing state.
By this definition, the canonical decomposition of an absorbing Markov chain can be written as

$$P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix},$$

where Q holds the transitions among transient states, R the transitions from transient states to absorbing states, and I is an identity block for the absorbing states.
Ex 3.31
3.8.2 Expected Number of Visits to Transient States
Theorem 3.11 Consider an absorbing Markov chain with t transient states. Let F be the t × t matrix, indexed by transient states, whose entry Fij is the expected number of visits to j given that the chain starts in i. Then F = (I − Q)⁻¹.
3.8.3 Expected Time to Absorption
Absorbing Markov Chains For an absorbing Markov chain with all states either transient or absorbing, let F = (I − Q)⁻¹.

1. (Absorption probability) The probability that from transient state i the chain is absorbed in state j is (FR)ij.
2. (Absorption time) The expected number of steps from transient state i until the chain is absorbed in some absorbing state is (F1)i.
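These formulas can be exercised on gambler's ruin (a standard absorbing-chain example, chosen here for illustration): states {0, 1, 2, 3, 4}, absorbing at 0 and 4, fair coin.

```python
import numpy as np

p = 0.5   # probability of moving up one state

# Canonical order: transient states 1, 2, 3 first.
# Q: transient -> transient; R: transient -> absorbing {0, 4}.
Q = np.array([[0,     p, 0],
              [1 - p, 0, p],
              [0, 1 - p, 0]])
R = np.array([[1 - p, 0],
              [0,     0],
              [0,     p]])

F = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix F = (I - Q)^-1
absorb_prob = F @ R                # (FR)_ij: absorbed in j starting from i
absorb_time = F.sum(axis=1)        # (F1)_i: expected steps to absorption

print(np.round(absorb_prob, 3))   # middle state 2: ruin/win 0.5 each
print(np.round(absorb_time, 3))   # classic k(N - k): 3, 4, 3 steps
```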