Custom Homework

Chapter 1

Problem C1.1

Consider the Markov chain with state space $S = \{0, 1, 2, \ldots \}$ and transition probabilities: \[ p(i, j) =\begin{cases} p, &j = i + 1\\ q, &j = 0\\ 0, &\text{otherwise} \end{cases} \] where $p, q > 0$ and $p + q = 1$. This Markov chain counts the lengths of runs of heads in a sequence of independent coin tosses. Let \[ T_y = \min\{n\ge 1: X_n=y\} \] be the time of the first return to $y$.

  1. Show that the chain is irreducible.
  2. Find $P_0(T_0 = n)$ for $n = 1, 2, \ldots$. What is the name of this distribution? (See the simulation check below.)
  3. Is the chain recurrent? Explain.
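A minimal simulation sketch in Python (numpy only) for checking part 2 numerically; the value $p = 0.6$ and the helper name sample_T0 are arbitrary choices of mine, not part of the problem:

      import numpy as np

      rng = np.random.default_rng(0)
      p = 0.6                # arbitrary choice for the check
      q = 1 - p

      def sample_T0():
          # Run the chain from X_0 = 0 until the first return to 0.
          x, n = 0, 0
          while True:
              n += 1
              x = x + 1 if rng.random() < p else 0
              if x == 0:
                  return n

      samples = np.array([sample_T0() for _ in range(100_000)])
      for n in range(1, 6):
          # empirical frequency vs conjectured pmf p^(n-1) q
          print(n, (samples == n).mean(), p**(n - 1) * q)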

Problem C1.2

Consider the Ehrenfest chain with $N$ particles. Suppose $X_0 = 0$, i.e., initially all the particles are on the right. Let $T_0$ be the first time all the particles end up on the right again. Let \[ T_y = \min\{n\ge 1: X_n=y\} \] be the time of the first return to $y$. Calculate $E_0(T_0) = \E(T_0 | X_0 = 0)$, the conditional expectation of $T_0$ given $X_0 = 0$.

Hint. For this problem, you may assume that the stationary distribution of the Ehrenfest chain is the binomial distribution $Bin(N, 1/2)$. This can be checked directly from the definition of a stationary distribution; it also follows from the detailed balance condition.
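The hint can be verified numerically for a small $N$. The sketch below (Python, numpy) assumes the usual Ehrenfest transition probabilities $p(k, k+1) = (N-k)/N$ and $p(k, k-1) = k/N$, where $k$ is the number of particles on the left; $N = 6$ is an arbitrary test value:

      import numpy as np
      from math import comb

      N = 6                                   # small test case
      P = np.zeros((N + 1, N + 1))
      for k in range(N + 1):
          if k < N:
              P[k, k + 1] = (N - k) / N       # a particle moves to the left side
          if k > 0:
              P[k, k - 1] = k / N             # a particle moves to the right side
      pi = np.array([comb(N, k) for k in range(N + 1)]) / 2**N   # Bin(N, 1/2)
      print(np.allclose(pi @ P, pi))          # True: pi P = pi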

Problem C1.3

Consider a Markov chain $X_n$ with state space $S = \{0, 1, 2, \ldots \}$ and transition probabilities \[ \begin{align*} p(x, x + 1) &= p_x,\\ p(x, x - 1) &= q_x \text{ if $x > 0$,}\\ p(0, 0) &= q_0. \end{align*} \] (All other transitions have zero probability.) In the above, $p_0, p_1, \ldots$ are numbers in $(0, 1)$, and $q_x = 1 - p_x$. This generalizes the reflected random walk example.
  1. Suppose the chain has a stationary distribution $\pi$. What equation does it satisfy?
  2. Using the fact that $\lim_{x\to\infty} \pi(x) = 0$ (which holds because $\sum_{x=0}^\infty \pi(x) = 1$), show that if there is a stationary distribution, then it must satisfy detailed balance.
  3. In terms of the ratios $p_x /q_{x+1}$, give a condition that guarantees the existence of a stationary distribution. (See the sketch below.)
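For intuition on parts 2 and 3, here is a numerical sketch (Python, numpy): detailed balance gives $\pi(x)\,p_x = \pi(x+1)\,q_{x+1}$, so the weights $\pi(x+1) = \pi(x)\,p_x/q_{x+1}$ determine $\pi$ up to normalization. The constant choice $p_x = 0.4$ and the truncation at xmax are assumptions of mine for illustration only:

      import numpy as np

      def stationary(p_fn, xmax):
          # Unnormalized weights from detailed balance:
          # pi(x) p_x = pi(x+1) q_{x+1}, with q_x = 1 - p_x.
          w = np.ones(xmax + 1)
          for x in range(xmax):
              w[x + 1] = w[x] * p_fn(x) / (1 - p_fn(x + 1))
          return w / w.sum()                  # normalize over the truncated range

      pi = stationary(lambda x: 0.4, 200)     # p_x = 0.4 for all x (illustrative)
      print(pi[:5])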

Problem C1.4

For the reflected random walk with $p = q = 1/2$, let $N > 0$ be fixed, and consider \[ g(x) = E_x V_{0,N} \] where $E_x(\cdot) = \E(\cdot | X_0=x)$ and \[ V_{0,N} = \min\{n \ge 0 | X_n = 0\text{ or } X_n = N \}.\] Show that $g(x) = Nx - x^2$ for $0 < x < N$.
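One way to check the claim numerically is to solve the first-step equations $g(x) = 1 + \tfrac12 g(x-1) + \tfrac12 g(x+1)$, $g(0) = g(N) = 0$, as a linear system. A sketch (Python, numpy; $N = 10$ is an arbitrary test value):

      import numpy as np

      N = 10
      # First-step analysis: g(x) = 1 + (g(x-1) + g(x+1)) / 2 for 0 < x < N,
      # with boundary values g(0) = g(N) = 0 absorbed into the system.
      A = np.eye(N - 1)
      for i in range(N - 1):          # row i corresponds to state x = i + 1
          if i > 0:
              A[i, i - 1] = -0.5
          if i < N - 2:
              A[i, i + 1] = -0.5
      g = np.linalg.solve(A, np.ones(N - 1))
      x = np.arange(1, N)
      print(np.allclose(g, N * x - x**2))     # True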

Problem C1.5

Show that if a Markov chain with transition probability function $p(x, y)$ is irreducible and has a stationary distribution $\pi(x)$, then $\pi(x) > 0$ for all states $x$.

Problem C1.6

For this problem, it is useful to know that two events $A$ and $B$ are conditionally independent given a third event $C$ if \[ \P(A \cap B|C) = \P(A|C) \cdot \P(B|C).\]

  1. Show that if $X_0, X_1, \ldots , X_n, \ldots$ is a Markov chain, then for every $m, n \ge 0$ and $x_i \in S$, \[ \begin{align*} &\P(\overbrace{X_0 = x_0, X_1 = x_1,\cdots,X_{n-1}=x_{n-1}}^{\text{the past}}, \overbrace{X_{n+1}=x_{n+1},\cdots,X_{n+m}=x_{n+m}}^{\text{the future}} |\overbrace{X_n=x_n}^{\text{the present}})\\ &= \P(\underbrace{X_0 = x_0, X_1 = x_1, \cdots , X_{n-1}= x_{n-1}}_{\text{the past}} | \underbrace{X_n = x_n}_{\text{the present}})\\ &\times \P(\underbrace{X_{n+1} = x_{n+1}, \cdots , X_{n+m} = x_{n+m}}_{\text{the future}} | \underbrace{X_n = x_n}_{\text{the present}}). \end{align*} \] In words: for a Markov chain, the future and the past are conditionally independent given the present.
  2. Let \[ T_y = \min\{n\ge 1: X_n=y\} \] be the time of the first return to $y$. Suppose a chain $X_n$ is irreducible. Show that for all $n \ge 0$ and states $x, y, z$ with $y \neq x$, the events $\{T_x > n\}$ and $\{X_{n+1} = z\}$ are conditionally independent given $\{X_n = y\}$.

Problem C1.7

Let $P$ be a stochastic matrix whose eigenvalues are listed below. Say whether the Markov chain described by $P$

  1. has a unique stationary distribution, and
  2. converges to equilibrium.
Briefly justify. (See the numerical illustration after the list below.)
  1. $\{\pm1, \frac{1}{2}\pm\frac{i\sqrt{3}}{2}, -\frac{1}{2}\pm\frac{i\sqrt{3}}{2}\}$
  2. $\{ 1, 1, \pm\frac{1}{2}\}$
  3. $\{\pm 1, \pm \frac{1}{3}\}$
  4. $\{1, -\frac{1}{2},-\frac{1}{2}\}$
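For orientation (standard Perron-Frobenius facts, not part of the problem statement): for a finite stochastic matrix, the multiplicity of the eigenvalue $1$ governs uniqueness of the stationary distribution, while eigenvalues of modulus $1$ other than $1$ itself signal periodicity and obstruct convergence. A tiny numerical illustration in Python (numpy); the flip chain below is my own example:

      import numpy as np

      # A 2-state chain that flips deterministically: eigenvalues {1, -1}.
      # It has a unique stationary distribution (1/2, 1/2) but, being
      # periodic, its powers never converge.
      P = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
      print(np.linalg.eigvals(P))                   # [ 1. -1.]
      print(np.linalg.matrix_power(P, 100)[0])      # [1. 0.]
      print(np.linalg.matrix_power(P, 101)[0])      # [0. 1.] -- oscillates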

Problem C1.8

Let $P$ be an $N \times N$ doubly stochastic matrix, and suppose the corresponding Markov chain $(X_n)$ is irreducible. Show that $(X_n)$ satisfies the detailed balance condition if and only if $P$ is a symmetric matrix.
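A quick numerical illustration (Python, numpy) of one direction of the claim, using the standard fact that an irreducible doubly stochastic chain has the uniform distribution as its stationary distribution; the $3\times 3$ matrix below is my own example:

      import numpy as np

      # A symmetric doubly stochastic matrix; with the uniform pi, detailed
      # balance pi(x) p(x,y) = pi(y) p(y,x) reduces to p(x,y) = p(y,x).
      P = np.array([[0.5, 0.3, 0.2],
                    [0.3, 0.4, 0.3],
                    [0.2, 0.3, 0.5]])
      pi = np.full(3, 1 / 3)
      F = pi[:, None] * P                  # flow matrix: F[x, y] = pi(x) p(x, y)
      print(np.allclose(pi @ P, pi))       # uniform pi is stationary: True
      print(np.allclose(F, F.T))           # detailed balance holds: True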

Chapter 2

Problem C2.1

This generalizes Example 2.2 from Durrett's book.

A submarine has $n$ parts, which fail according to independent exponential distributions with rates $\lambda_j$, $j=1,2,\ldots,n$. The submarine fails when two or more of those parts fail.
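Note that the submarine's failure time is the second smallest of the $n$ part lifetimes (the second order statistic). A simulation sketch (Python, numpy; the rates below are placeholder values of mine):

      import numpy as np

      rng = np.random.default_rng(0)
      rates = np.array([1.0, 2.0, 0.5])            # placeholder values of lambda_j
      # Each row is one realization of the n part lifetimes.
      lifetimes = rng.exponential(1 / rates, size=(100_000, rates.size))
      failure = np.sort(lifetimes, axis=1)[:, 1]   # time of the second failure
      print(failure.mean())                        # Monte Carlo estimate of the mean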

Problem C2.2

A gadget has $4$ parts, which fail according to independent exponential distributions with rates $\lambda_j$, $j=1,2,3,4$. The gadget fails when all 4 parts fail.

Problem C2.3

Betty needs to get to work quickly. She runs to the bus stop, where she can catch a bus or take a taxi. Since the buses arrive every 12 minutes, the time to the next bus arrival is uniformly distributed on $[0, 12]$ minutes. The taxis arrive according to a Poisson process with a rate of 5 taxicabs per hour; thus, the time to the next taxicab arrival is exponentially distributed.

Let $T_1$ and $T_2$ be the times to the arrival of the next bus and the next taxicab, respectively. Let $V$ be the time Betty waits for her transportation, and let $I \in \{1, 2\}$ indicate whether she took the bus ($I=1$) or the taxicab ($I=2$).

  1. Find the joint pdf of $T_1$ and $T_2$.
  2. Express $V$ in terms of $T_1$ and $T_2$.
  3. Find the pdf of $V$.
  4. Find the probabilities $\P(I=1)$ and $\P(I=2)$.
  5. Find the conditional probability $\P(I=i|V=v)$, $0\le v\le 12$.
  6. Is $\P(I=i|V=v)$ defined for $v>12$?
  7. Upon arriving at the office, Betty told her friend Ann that she had waited at the bus stop for 3 minutes. What is the probability that Betty took the bus to work?
  8. Another day, Betty told Ann that she had waited at least 5 minutes at the bus stop. What is the probability that Betty took the bus to work that day?
  9. Are the random variables $V$ and $I$ independent? Justify your answer. (See the Monte Carlo sketch below.)
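A Monte Carlo sketch (Python, numpy) that can be used to spot-check parts 4, 5 and 9; the rate conversion (5 taxis/hour means a mean interarrival time of 12 minutes) and all names below are mine:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 1_000_000
      t1 = rng.uniform(0, 12, n)               # minutes until the next bus
      t2 = rng.exponential(12, n)              # 5 taxis/hour -> mean 12 minutes
      v = np.minimum(t1, t2)                   # waiting time V
      i = np.where(t1 <= t2, 1, 2)             # 1 = bus, 2 = taxi
      print((i == 1).mean())                   # estimate of P(I = 1)
      near3 = (2.9 < v) & (v < 3.1)
      print((i[near3] == 1).mean())            # rough estimate of P(I = 1 | V = 3)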

Problem C2.9

For any integer $n > 0$ and real $t > 0$, let $U_1, U_2,\ldots , U_n$ be independent uniform random variables on $[0, t]$. Let $V_n = \max(U_1,U_2, \ldots, U_n)$.

  1. Compute $\P(V_n \le s)$ for $0 \le s \le t$.
  2. Find $\E(V_n)$. (See the check below.)
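A short numeric check of both parts (Python, numpy); the formulas $(s/t)^n$ and $nt/(n+1)$ in the comments are conjectured answers to verify, and the parameter values are arbitrary:

      import numpy as np

      rng = np.random.default_rng(0)
      n, t, s = 5, 2.0, 1.5
      v = rng.uniform(0, t, size=(200_000, n)).max(axis=1)
      print((v <= s).mean(), (s / t)**n)       # empirical vs conjectured CDF at s
      print(v.mean(), n * t / (n + 1))         # empirical vs conjectured mean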

Programming Problems

Problem C100.1

A crooked die is one that yields the score on one of its 6 faces with uneven probability. We roll the die repeatedly and independently, modeled by an IID sequence $X_1$, $X_2$, $\ldots$. Let \[ S_n = \sum_{j=1}^n X_j \] be the total score of $n$ rolls. Let \[ p_k = \P(X_1 = k), \quad k=1,2,\ldots,6 \] be the relevant portion of the pmf of $X_n$ and, for fixed $n$, let \[ q_k = \P(S_n=k),\quad k=n,n+1,\ldots,6n \] be the relevant portion of the pmf of $S_n$.

Write a function in R, Python, or Octave that computes the pmf of the total score of several independent rolls of this die.

The function accepts two arguments:

  1. a vector $(p_j)_{j=0}^6$ holding the probabilities of the $6$ faces on the die, and
  2. a positive integer $n$, the number of rolls
and it returns a vector $(q_j)_{j=n}^{6n}$ such that $q_j$ is the probability that the total of $n$ rolls equals $j$.

As Python has 0-based indexing, assume that the vectors passed are actually $(p_j)_{j=0}^6$ and $(q_j)_{j=0}^{6n}$, padded with zeros at the beginning. For instance, a standard fair die has


      p == [0, 1/6, 1/6, 1/6, 1/6, 1/6, 1/6]
    
    
Both R and Octave/MATLAB have 1-based indexing. However, you will still be passed the padded vectors, i.e., for both R and Octave the array $p$ is defined by: \[ p[k] = \P(X_1=k-1),\quad k=1,2,\ldots. \] Since $\P(X_1=0) = 0$, this means that $p[1] == 0$.

When returning $q$ from your function you must observe a similar convention, whereby the first $n$ entries of $q$ are $0$ (regardless of the programming language).
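An indexing sanity check (not a full solution): with the zero-padding convention above, the pmf of the sum of two rolls is the discrete convolution of the padded vector with itself, and numpy.convolve preserves the meaning of the index. The fair-die example below is mine:

      import numpy as np

      p = np.array([0] + [1 / 6] * 6)          # fair die, p[k] = P(X_1 = k)
      q2 = np.convolve(p, p)                   # q2[j] = P(S_2 = j), j = 0..12
      print(q2[7])                             # P(S_2 = 7) = 6/36 = 0.1666...
      print(q2[:2])                            # first n = 2 entries are 0, as required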

Your file must be called "crooked.py", "crooked.R", or "crooked_die.m", and it must contain the following function. In Python (auxiliary functions may also be present):


      import numpy

      def crooked_die(p, n):
          # Your code here
          # ...
          return q
    
and in R:
      crooked.die <- function(p, n) {
        # Your code here
        # ...
        q
      }
    
And in Octave:

      function ret = crooked_die(p, n)
      % Your code here
      % ...
      ret = something
      end
    

Your code will not be able to install or require any add-on packages; only the base code of R and Python is available, with one exception: for Python I provided the 'numpy' library.

Gradescope submission instructions

Submit the file to Gradescope assignment 'Week0-programming-py' or 'Week0-programming-r' depending on the language of your choice. Upon submission, your program will be autograded. You will see the results and will be able to resubmit until all errors go away.

Problem C100.2

Let $X_n$, $n=0,1,2,\ldots$, be the branching process of Example 1.8 on page 6 in the book.

Write a function in your chosen programming language that finds the probability of becoming absorbed at $0$ by generation $n$, assuming that generation $0$ has $m$ members. That is, what is the probability that the family name is extinct in generation $n$, given that $m$ males carry the family name at time $0$?

Assume that an individual may have $Y$ descendants, where \[ \P(Y=k) = p_k, \qquad k=0,1,2,\ldots,K, \] for some finite $K$. For testing purposes, you may assume $K\le 10$.
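A sketch of the underlying recursion (the helper name is my own, not the required signature): if $u_k = \P(\text{extinct by generation } k \mid \text{one founder})$, then $u_0 = 0$ and $u_k = \phi(u_{k-1})$, where $\phi(s) = \sum_k p_k s^k$ is the offspring pgf; with $m$ independent founders the answer is $u_n^m$:

      import numpy as np

      def extinction_by_n(p, m, n):
          # Iterate the offspring pgf phi(s) = sum_k p[k] s^k, starting at 0.
          u = 0.0
          for _ in range(n):
              u = np.polynomial.polynomial.polyval(u, p)
          return u ** m                        # m independent family lines

      print(extinction_by_n([0.25, 0.25, 0.5], m=2, n=10))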

The signature of your function should be:


      import numpy

      def prob_of_extinction(p, m, n):
          # Your code here
          # ...
          return q
and in R:
      prob.of.extinction <- function(p, m, n) {
        # Your code here
        # ...
        q
      }
    
And in Octave:

      function ret = prob_of_extinction(p, m, n)
      % Your code here
      % ...
      ret = something
      end

Your submission file should be named "branching.py" (Python) or "branching.R" (R) or "prob_of_extinction.m" (Octave).

Problem C100.3

NOTE: This problem is related to Example 1.31 on page 36 in the book.

The geometric distribution (one of the two variants) is given by the formula for its pmf: \[ \P(X = x) = \pi(x)=\theta^x(1-\theta),\qquad x=0,1,2,\ldots. \] where $\theta \in (0,1)$ is the probability of failure.

The book outlines a method of sampling from this distribution when $\theta$ is close to $1$. The essence of the method is to use the Metropolis-Hastings algorithm with the uniform proposal distribution \[ q(x, x+i) = \frac{1}{2L+1}, \qquad i=-L,-L+1,\ldots,L-1,L, \] where $L = O\left(\frac{\theta}{1-\theta}\right)$ is on the order of the expectation of $X$.
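For reference, a generic Metropolis-Hastings sketch with this symmetric proposal (Python, numpy; this is not the required geom_sampler code, and it uses numpy's generator rather than the instructor's LCG): since the proposal is symmetric, a proposed move $x \to y$ is accepted with probability $\min(1, \pi(y)/\pi(x))$, and proposals outside the support are always rejected.

      import numpy as np

      rng = np.random.default_rng(0)

      def mh_geometric(theta, n_steps, L):
          pi = lambda x: theta**x * (1 - theta) if x >= 0 else 0.0
          x = 0
          out = np.empty(n_steps, dtype=int)
          for k in range(n_steps):
              y = x + rng.integers(-L, L + 1)      # uniform jump on {-L, ..., L}
              if rng.random() < min(1.0, pi(y) / pi(x)):
                  x = y                            # accept; otherwise stay at x
              out[k] = x
          return out

      theta = 0.9
      xs = mh_geometric(theta, 200_000, L=round(theta / (1 - theta)))
      print(xs.mean(), theta / (1 - theta))        # sample mean vs E(X)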

Find the formula for the transition matrix $p(x,y)$, $x,y\in\ZZ$ (thus, design a Markov chain with infinite state space $S=\ZZ$) such that $\pi(x)$ satisfies the detailed balance condition for $p(x,y)$: \[ \pi(x)\,p(x,y) = \pi(y)\,p(y,x). \]

Coding part

Your solution will be in a file "geomsampler.py" or "geomsampler.R". You will write two functions.

The first function will find the probability $p(x,y)$, with signature in Python:


      import numpy as np

      def geom_sampler_p(theta, x, y, L=None):
          # Python defaults cannot reference other parameters, so the
          # intended default L = round(theta/(1-theta)) is computed here.
          if L is None:
              L = int(np.round(theta / (1 - theta)))
          # Your code here that makes ret = p(x,y) as described
          # ...
          return ret
    
and in R:
      geom.sampler.p <- function(theta, x, y, L = round(theta/(1-theta))) {
        # Your code here that makes ret = p(x,y) as described
        # ...
        ret
      }
    

The second function will sample from the distribution $\pi(x)$.


      import numpy as np

      def geom_sampler_sample(theta, n, L=None):
          # Default L = round(theta/(1-theta)), computed in the body since
          # Python defaults cannot reference other parameters.
          if L is None:
              L = int(np.round(theta / (1 - theta)))
          # Your code here that makes ret
          # a vector of n random numbers distributed according to pi(x).
          # ...
          return ret
    
and in R:
      geom.sampler.sample <- function(theta, n, L = round(theta/(1-theta))) {
        # Your code here that makes ret
        # a vector of n random numbers distributed
        # according to pi(x).
        # ...
        ret
      }
    

Seeding your sampler

Somewhere you will store the current state, say $x$. Start with $x=0$. If $n\le 0$ is passed to your sampler function, then instead of generating a random sample, reset the state to $x=0$ and return an empty vector. This is a way to seed your sampler without introducing another function.

Random number generation

In your code you must use the Linear Congruential Generator provided by the instructor, so that the results are reproducible in both Python and R with all (binary) digits identical. When choosing the jump size $i$ to jump from $x$ to $x+i$, where $i\in\{l,l+1,\ldots,r\}$, follow this procedure:

Note that for $l=-1$, $r=1$ I used precisely this method in my demo code. NOTE: One exception to the above rule is that I handled $j=0$ last.

Important do's and don'ts

Do not seed the random number generator; I will seed it for you. Do not use burn-in, and do not skip samples (unlike in my demo code for Example 1.31). Do start at state $x=0$.

Problem C100.4

Consider the following variation of the M/M/Q queue. A bank has $s$ tellers. Customers arrive at rate $\lambda$. The service time is exponentially distributed with rate $\mu$. When all tellers are busy, a customer waits in line for the next available teller.

The new rule is that a customer who is waiting in line may choose to leave at any time, with rate $\eta$. That is, if a customer is currently waiting in line, she may leave after time $t \sim Expon(\eta)$.

Write a function markovsim which simulates this continuous-time Markov chain. Follow the calling convention of the provided simulator for the standard M/M/Q queue.

Detailed function prototypes

The R code should have a function


      markovsim <- function(y0=0, t=1, lambda=1, mu=1, eta = 1, n.servers = 2) {
      ...
      list(T=T,Y=Y)
      }
    
    
implemented in the file "markovsim.R" which shall be submitted through Gradescope.

Likewise, Python should have a function


      def markovsim(y0=0, t=1, lambd=1, mu=1, eta=1, n_servers = 2):
      ...
      return [T,Y]
    
    

in the file "markovsim.py" to be submitted through Gradescope.

NOTE: the name "lambd" is not misspelled! "lambda" is a reserved keyword in Python, so you cannot use it as a variable name.

The returned vectors $T$ and $Y$ should contain the event times and states as in the starter code.
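For orientation, a sketch of the jump rates implied by the description (Python, numpy; this is not the markovsim contract, and one_step is a name of mine): in state $y$ with $s$ tellers, the up-rate is $\lambda$ (an arrival) and the down-rate is $\min(y,s)\mu + \max(y-s,0)\eta$ (a service completion or an abandonment).

      import numpy as np

      rng = np.random.default_rng(0)

      def one_step(y, lambd, mu, eta, s):
          up = lambd                                   # arrival rate
          down = min(y, s) * mu + max(y - s, 0) * eta  # service or abandonment
          total = up + down
          wait = rng.exponential(1 / total)            # holding time in state y
          y_next = y + 1 if rng.random() < up / total else y - 1
          return wait, y_next

      print(one_step(5, lambd=1.0, mu=1.0, eta=1.0, s=2))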