# Markov Chains

We will now study stochastic processes: experiments in which the outcome of each event depends on the outcome of previous events. Such a process is called a Markov chain or Markov process, named after the Russian mathematician Andrei A. Markov, who first studied these processes in the early 1900s.

A small town is served by two telephone companies, Mama Bell and Papa Bell. Due to their aggressive sales tactics, each month 40% of Mama Bell customers switch to Papa Bell; the other 60% stay with Mama Bell. On the other hand, 30% of the Papa Bell customers switch to Mama Bell, while 70% stay. This information can be expressed in a matrix that lists the probabilities of going from one state into another:

    T = [ 0.60  0.40 ]
        [ 0.30  0.70 ]

Here the rows represent the current company and the columns next month's company, in the order Mama Bell, Papa Bell. Such a matrix is called a transition matrix. The reader should observe that a transition matrix is always a square matrix, because every possible state must appear both as a row and as a column. All entries in a transition matrix are non-negative, as they represent probabilities. Furthermore, since all possible outcomes are considered in the Markov process, each row sums to 1.
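The defining properties of a transition matrix — non-negative entries and rows that sum to 1 — are easy to check by machine. The following sketch (our own illustration in Python with NumPy, not part of the original text) encodes the phone-company matrix and verifies both properties:

```python
import numpy as np

# Transition matrix for the phone-company example.
# Rows: current company; columns: next month's company
# (order: Mama Bell, Papa Bell).
T = np.array([[0.60, 0.40],
              [0.30, 0.70]])

# Every entry is a probability...
assert np.all(T >= 0)
# ...and each row sums to 1, since all outcomes are accounted for.
assert np.allclose(T.sum(axis=1), 1.0)
```

The same two checks apply to any transition matrix, whatever its size.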

Example 7.4.1

Professor Symons either walks to school, or he rides his bicycle. If he walks to school one day, then the next day, he will walk or cycle with equal probability. But if he bicycles one day, then the probability that he will walk the next day is 1/4. Express this information in a transition matrix.
Solution
We obtain the following transition matrix by properly placing the row and column entries. Note that if, for example, Professor Symons bicycles one day, then the probability that he will walk the next day is 1/4, and therefore the probability that he will bicycle the next day is 3/4. With rows and columns ordered Walk, Bicycle, the transition matrix is

    T = [ 1/2  1/2 ]
        [ 1/4  3/4 ]

Example 7.4.2

In the previous example, if it is assumed that the first day is Monday, write a matrix that gives probabilities of a transition from Monday to Wednesday.
Solution

Let W denote that Professor Symons walks and B denote that he rides his bicycle. We use the following tree diagram to compute the probabilities.

[Tree diagram of the transitions from Monday through Tuesday to Wednesday.]

The probability that Professor Symons walked on Wednesday given that he walked on Monday can be found from the tree diagram, as listed below.

P(Walked Wednesday | Walked Monday) = P(WWW) + P(WBW) = 1/4 + 1/8 = 3/8.

P(Bicycled Wednesday | Walked Monday) = P(WWB) + P(WBB) = 1/4 + 3/8 = 5/8.

P(Walked Wednesday | Bicycled Monday) = P(BWW) + P(BBW) = 1/8 + 3/16 = 5/16.

P(Bicycled Wednesday | Bicycled Monday) = P(BWB) + P(BBB) = 1/8 + 9/16 = 11/16.

We represent the results in the following matrix:

    [ 3/8   5/8   ]
    [ 5/16  11/16 ]

Alternately, this result can be obtained by squaring the original transition matrix.

We list both the original transition matrix T and T² as follows:

    T = [ 1/2  1/2 ]        T² = [ 3/8   5/8   ]
        [ 1/4  3/4 ]             [ 5/16  11/16 ]

The reader should compare this result with the probabilities obtained from the tree diagram. Consider the following case, for example:

P(Walked Wednesday | Bicycled Monday) = P(BWW) + P(BBW) = 1/8 + 3/16 = 5/16.

It makes sense because to find the probability that Professor Symons will walk on Wednesday given that he bicycled on Monday, we sum the probabilities of all paths that begin with B and end in W. There are two such paths, and they are BWW and BBW.
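The claim that squaring T reproduces the tree-diagram sums can be verified directly. Here is a short sketch (our own, using Python's `fractions` module so the arithmetic stays exact) that multiplies the transition matrix by itself and compares the result with the four probabilities computed above:

```python
from fractions import Fraction as F

# Transition matrix from Example 7.4.1 (rows/columns ordered Walk, Bicycle).
T = [[F(1, 2), F(1, 2)],
     [F(1, 4), F(3, 4)]]

def matmul(A, B):
    """Multiply two 2x2 matrices of exact fractions."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Two-step (Monday-to-Wednesday) transition probabilities.
T2 = matmul(T, T)

# These match the tree-diagram sums exactly:
# P(W Wed | W Mon) = 3/8,  P(B Wed | W Mon) = 5/8,
# P(W Wed | B Mon) = 5/16, P(B Wed | B Mon) = 11/16.
assert T2 == [[F(3, 8), F(5, 8)],
              [F(5, 16), F(11, 16)]]
```

Each entry of T² sums, over the intermediate day, a product of two one-step probabilities — exactly what the tree diagram does path by path.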

Certain Markov chains, called regular Markov chains, tend to stabilize in the long run. It so happens that the transition matrix we have used in the above examples is just such a Markov chain. The next example deals with the long-term trend, or steady-state situation, for that matrix.

Example 7.4.3

Suppose Professor Symons continues to walk and bicycle according to the transition matrix given in Example 7.4.1. In the long run, how often will he walk to school, and how often will he bicycle?
Solution
As we take higher and higher powers of our matrix T, it stabilizes; for example,

    T² = [ .3750  .6250 ]    T⁴ ≈ [ .3359  .6641 ]    Tⁿ → [ 1/3  2/3 ]
         [ .3125  .6875 ]         [ .3320  .6680 ]          [ 1/3  2/3 ]

Therefore, in the long run, Professor Symons will walk to school 1/3 of the time and bicycle 2/3 of the time.
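The stabilization of high powers can be observed numerically. A minimal sketch (our own, using NumPy's `matrix_power`) raises T to the 20th power and checks that both rows have converged to the same vector:

```python
import numpy as np

# Walk/Bicycle transition matrix from Example 7.4.1.
T = np.array([[0.50, 0.50],
              [0.25, 0.75]])

# Raise T to a high power; both rows converge to the same row vector,
# so the starting state no longer matters.
T20 = np.linalg.matrix_power(T, 20)

assert np.allclose(T20, [[1/3, 2/3],
                         [1/3, 2/3]])
```

No matter whether Professor Symons starts by walking or bicycling, after many days the probabilities are the same: 1/3 walk, 2/3 bicycle.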

When this happens, we say that the system is in steady-state, or a state of equilibrium. In this situation, all row vectors are equal. If the original matrix is an n by n matrix, we get n rows that are all the same. We call this common row a fixed probability vector, or the equilibrium vector, E. In the above problem, the fixed probability vector E is [ 1/3  2/3 ]. Furthermore, if the equilibrium vector E is multiplied by the original matrix T, the result is the equilibrium vector E. That is,

ET = E

or,

    [ 1/3  2/3 ] [ 1/2  1/2 ] = [ 1/3  2/3 ]
                 [ 1/4  3/4 ]

Regular Markov Chains

A Markov chain reaches a state of equilibrium if it is a regular Markov chain. A Markov chain is said to be regular if some power of its transition matrix has only positive entries.

As we take higher powers Tⁿ of the transition matrix T, the matrix approaches a state of equilibrium as n becomes large. The equilibrium distribution vector E can be found by solving the equation ET = E.
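Solving ET = E is a small linear-algebra problem: the condition ET = E, together with the requirement that the entries of E sum to 1, pins E down uniquely for a regular chain. One way to set this up (our own sketch with NumPy, using a least-squares solve of the stacked system) is:

```python
import numpy as np

# Walk/Bicycle transition matrix from Example 7.4.1.
T = np.array([[0.50, 0.50],
              [0.25, 0.75]])
n = T.shape[0]

# E T = E  <=>  (T^T - I) E^T = 0.  Stack a row of ones underneath
# to enforce sum(E) = 1, then solve the combined system.
A = np.vstack([T.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
E, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(E, [1/3, 2/3])
```

This recovers the same equilibrium vector [ 1/3  2/3 ] that the high powers of T approached, without computing any powers at all.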

Example 7.4.4

A small town is served by two telephone companies, Mama Bell and Papa Bell. Due to their aggressive sales tactics, each month 40% of Mama Bell customers switch to Papa Bell; the other 60% stay with Mama Bell. On the other hand, 30% of the Papa Bell customers switch to Mama Bell. The transition matrix is given below.

    T = [ 0.60  0.40 ]
        [ 0.30  0.70 ]

If the initial market share for Mama Bell is 20% and for Papa Bell 80%, we’d like to know the long-term market share for each company.

Let matrix T denote the transition matrix for this Markov chain, and M denote the row matrix that represents the initial market share. Then T and M are as follows:

    T = [ 0.60  0.40 ]    and    M = [ 0.20  0.80 ]
        [ 0.30  0.70 ]

Since each month the town's people switch according to the transition matrix T, after one month the distribution for each company is

    MT = [ 0.36  0.64 ]

After two months, the market share for each company is

    MT² = [ 0.408  0.592 ]

After three months the distribution is

    MT³ = [ 0.4224  0.5776 ]

After four months the market share is

    MT⁴ = [ 0.4267  0.5733 ]

After 30 months the market share is

    MT³⁰ ≈ [ 0.4286  0.5714 ]
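The month-by-month calculation above is just a repeated vector–matrix product, which makes it easy to reproduce. A short sketch (our own, in Python with NumPy) iterates the market share for 30 months:

```python
import numpy as np

# Phone-company transition matrix (rows/columns: Mama Bell, Papa Bell).
T = np.array([[0.60, 0.40],
              [0.30, 0.70]])

# Initial market share: 20% Mama Bell, 80% Papa Bell.
share = np.array([0.20, 0.80])

# Each month, customers redistribute according to T.
for month in range(30):
    share = share @ T

# After 30 months the share has stabilized at [3/7, 4/7] ≈ [0.4286, 0.5714].
assert np.allclose(share, [3/7, 4/7])
```

Printing `share` inside the loop reproduces the intermediate distributions (0.36/0.64 after one month, 0.408/0.592 after two, and so on) before the values settle.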

The market share after 30 months has stabilized to [ 0.4286  0.5714 ], that is, [ 3/7  4/7 ].

This means that once the market share reaches an equilibrium state, it stays the same; that is, ET = E.

Can the equilibrium vector E be found without raising the transition matrix to large powers? Yes: since ET = E and we know the matrix T, we can determine E directly from the equation ET = E. The example below illustrates this approach.
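For a two-state chain, ET = E reduces to a single linear equation once we write E = [ e  1 − e ]. The sketch below (our own, using exact fractions for the phone-company matrix) carries out that one-equation solve:

```python
from fractions import Fraction as F

# Write E = [e, 1 - e]. The first component of E T = E gives
#   0.60 e + 0.30 (1 - e) = e   =>   0.30 = 0.70 e   =>   e = 3/7.
e = F(30, 100) / F(70, 100)
E = (e, 1 - e)

# Equilibrium market share: 3/7 for Mama Bell, 4/7 for Papa Bell.
assert E == (F(3, 7), F(4, 7))

# Sanity check: E really is fixed by the transition matrix.
assert e * F(60, 100) + (1 - e) * F(30, 100) == e
```

This agrees with the value the powers of T converged to, 3/7 ≈ 0.4286.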

Example 7.4.5

Continuing with the transition matrix from the previous example, suppose E = [ e  1 − e ]. Then ET = E gives us

    [ e  1 − e ] [ 0.60  0.40 ] = [ e  1 − e ]
                 [ 0.30  0.70 ]

Equating the first entries gives 0.60e + 0.30(1 − e) = e, so 0.30 = 0.70e and e = 3/7. Therefore,

    E = [ 3/7  4/7 ]

# Practice questions

1. A survey of American car buyers indicates that if a person buys a Ford, there is a 60% chance that their next purchase will be a Ford, while owners of a GM will buy a GM again with a probability of 0.80. Express the buying habits of these consumers in a transition matrix.

2. A hockey player decides to either shoot the puck (S) or pass it to a teammate (P) according to the following transition matrix. Find the following:

a. If the player shot on the first play, what is the probability that he will pass on the third play?

b. What is the long-term shoot vs. pass distribution of this player?

3. The local police department conducts a campaign to reduce the rates of texting and driving in the community. The effects of the campaign are summarized in the transition matrix below: If 35% of people in the community reported texting and driving before the campaign:

a. What is the percentage of people in the community that reported texting and driving after the campaign?

b. If the campaign were to be repeated multiple times, what is the long-range trend in terms of the lowest rate that texting and driving can be reduced to in this community?

4. A large company conducted a training program with their employees to reduce the incidence of slips, trips and falls in the workplace. About 15% of workers reported a slip, trip or fall accident the previous year (year 1). After the training program (year 2), 75% of those who previously reported an accident reported no further accidents, while 5% of those who didn’t report a previous accident reported one this year.

a. Create a transition matrix for this scenario.

b. If the company employs 8500 workers, how many slip, trip and fall accidents were reported in year 2?

c. If the program continued for another year, how many accidents would be reported in year 3?

d. If the training program were to be repeated for many years, what is the lowest prevalence of slip, trip or fall accidents that could be achieved? 