


Consider a gambler who starts with an initial fortune of $1 and then on each successive gamble either wins $1 or loses $1, independent of the past, with probabilities p and q = 1 − p respectively. Let R_n denote the total fortune after the nth gamble. The gambler's objective is to reach a total fortune of $N, without first getting ruined (running out of money). If the gambler succeeds, then the gambler is said to win the game. In any case, the gambler stops playing after winning or getting ruined, whichever happens first. There is nothing special about starting with $1; more generally, the gambler starts with $i, where 0 < i < N. While the game proceeds, {R_n : n ≥ 0} forms a simple random walk
R_n = \Delta_1 + \cdots + \Delta_n, \quad R_0 = i,
where {\Delta_n} forms an i.i.d. sequence of r.v.s. distributed as P(\Delta = 1) = p, P(\Delta = −1) = q = 1 − p, and represents the earnings on the successive gambles. Since the game stops when either R_n = 0 or R_n = N, let
\tau_i = \min\{n \geq 0 : R_n \in \{0, N\} \mid R_0 = i\},
denote the time at which the game stops when R_0 = i. If R_{\tau_i} = N, then the gambler wins; if R_{\tau_i} = 0, then the gambler is ruined. Let P_i = P(R_{\tau_i} = N) denote the probability that the gambler wins when R_0 = i. Clearly P_0 = 0 and P_N = 1 by definition, and we next proceed to compute P_i for 1 ≤ i ≤ N − 1. The key idea is to condition on the outcome of the first gamble, \Delta_1 = 1 or \Delta_1 = −1, yielding
P_i = pP_{i+1} + qP_{i-1}. \qquad (1)
The derivation of this recursion is as follows: If \Delta_1 = 1, then the gambler's total fortune increases to R_1 = i + 1 and so, by the Markov property, the gambler will now win with probability P_{i+1}. Similarly, if \Delta_1 = −1, then the gambler's fortune decreases to R_1 = i − 1 and so, by the Markov property, the gambler will now win with probability P_{i-1}. The probabilities corresponding to the two outcomes are p and q, yielding (1). Since p + q = 1, (1) can be re-written as pP_i + qP_i = pP_{i+1} + qP_{i-1}, yielding
P_{i+1} - P_i = \frac{q}{p}(P_i - P_{i-1}).
In particular, P_2 − P_1 = (q/p)(P_1 − P_0) = (q/p)P_1 (since P_0 = 0), so that P_3 − P_2 = (q/p)(P_2 − P_1) = (q/p)^2 P_1, and more generally
P_{i+1} - P_i = \left(\frac{q}{p}\right)^i P_1, \quad 0 < i < N.
Thus
P_{i+1} - P_1 = \sum_{k=1}^{i} (P_{k+1} - P_k) = \sum_{k=1}^{i} \left(\frac{q}{p}\right)^k P_1,
yielding
P_{i+1} = P_1 + P_1 \sum_{k=1}^{i} \left(\frac{q}{p}\right)^k = P_1 \sum_{k=0}^{i} \left(\frac{q}{p}\right)^k =
\begin{cases}
P_1 \dfrac{1 - (q/p)^{i+1}}{1 - (q/p)}, & \text{if } p \neq q; \\
P_1 (i + 1), & \text{if } p = q = 0.5.
\end{cases} \qquad (2)
(Here we are using the geometric series identity \sum_{n=0}^{i} a^n = \dfrac{1 - a^{i+1}}{1 - a}, valid for any number a \neq 1 and any integer i \geq 1.) Choosing i = N − 1 and using the fact that P_N = 1 yields
1 = P_N = \begin{cases}
P_1 \dfrac{1 - (q/p)^N}{1 - (q/p)}, & \text{if } p \neq q; \\
P_1 N, & \text{if } p = q = 0.5,
\end{cases}
from which we conclude that
P_1 = \begin{cases}
\dfrac{1 - (q/p)}{1 - (q/p)^N}, & \text{if } p \neq q; \\
\dfrac{1}{N}, & \text{if } p = q = 0.5,
\end{cases}
thus obtaining from (2) (after algebra) the solution
P_i = \begin{cases}
\dfrac{1 - (q/p)^i}{1 - (q/p)^N}, & \text{if } p \neq q; \\
\dfrac{i}{N}, & \text{if } p = q = 0.5.
\end{cases} \qquad (3)
(Note that 1 − P_i is the probability of ruin.)
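For readers who want to sanity-check (3) numerically, the following minimal Python sketch evaluates the closed form and compares it against a Monte Carlo simulation of the underlying random walk (the function names win_prob_exact and win_prob_mc are ours, chosen only for this illustration):

    import random

    def win_prob_exact(i, N, p):
        # Closed-form P_i from (3): probability of reaching N before 0 when starting at i.
        if p == 0.5:
            return i / N
        r = (1 - p) / p                   # r = q/p
        return (1 - r**i) / (1 - r**N)

    def win_prob_mc(i, N, p, trials=100_000):
        # Monte Carlo estimate of P_i: simulate +-1 gambles until the walk hits 0 or N.
        wins = 0
        for _ in range(trials):
            fortune = i
            while 0 < fortune < N:
                fortune += 1 if random.random() < p else -1
            wins += (fortune == N)
        return wins / trials

    print(win_prob_exact(3, 10, 0.6))     # about 0.716
    print(win_prob_mc(3, 10, 0.6))        # should agree with the exact value up to sampling error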
If p > 0.5, then q/p < 1, and thus from (3)
\lim_{N \to \infty} P_i = 1 - (q/p)^i > 0, \quad p > 0.5. \qquad (4)
If p ≤ 0.5, then q/p ≥ 1, and thus from (3)
\lim_{N \to \infty} P_i = 0, \quad p \leq 0.5. \qquad (5)
To interpret the meaning of (4) and (5), suppose that the gambler, starting with R_0 = i, wishes to continue gambling forever until (if at all) ruined, with the intention of earning as much money as possible. So there is no winning value N; the gambler will only stop if ruined. What will happen? (4) says that if p > 0.5 (each gamble is in his favor), then there is a positive probability that the gambler will never get ruined but instead will become infinitely rich. (5) says that if p ≤ 0.5 (each gamble is not in his favor), then with probability one the gambler will get ruined.
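For instance, with p = 0.6 and initial fortune i = 3, (4) gives a probability of 1 − (2/3)^3 = 19/27 ≈ 0.70 of never being ruined, whereas for any p ≤ 0.5 ruin is certain no matter how large the initial fortune i is.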
As an application, suppose the gambler starts with $b and wishes to win an additional $a before getting ruined. Applying (3) with i = b and N = a + b, the probability p(a) of doing so is

p(a) = \begin{cases}
\dfrac{1 - (q/p)^b}{1 - (q/p)^{a+b}}, & \text{if } p \neq q; \\
\dfrac{b}{a+b}, & \text{if } p = q = 0.5.
\end{cases}
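As a rough numerical illustration (the helper gain_before_ruin and the win probability 0.47 below are our own choices, not values from the text), the formula can be evaluated directly to see how quickly even a small edge against the gambler erodes the chance of success:

    def gain_before_ruin(a, b, p):
        # Probability of winning $a more before losing an initial stake of $b,
        # i.e., (3) evaluated at i = b and N = a + b.
        if p == 0.5:
            return b / (a + b)
        r = (1 - p) / p
        return (1 - r**b) / (1 - r**(a + b))

    print(gain_before_ruin(10, 10, 0.5))     # 0.5 for a fair game
    print(gain_before_ruin(10, 10, 0.47))    # roughly 0.23 when each gamble is slightly unfavorable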
When we restrict the random walk to remain within the set of states {0, 1, ..., N}, {R_n} yields a Markov chain (MC) on the state space S = {0, 1, ..., N}. The transition probabilities are given by P(R_{n+1} = i + 1 | R_n = i) = p_{i,i+1} = p, P(R_{n+1} = i − 1 | R_n = i) = p_{i,i-1} = q, for 0 < i < N, and both 0 and N are absorbing states, p_{00} = p_{NN} = 1.^1 For example, when N = 4 the transition matrix is given by
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
q & 0 & p & 0 & 0 \\
0 & q & 0 & p & 0 \\
0 & 0 & q & 0 & p \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}.
Thus the gambler's ruin problem can be viewed as a special case of a first passage time problem: Compute the probability that a Markov chain, initially in state i, hits state j_1 before state j_2.
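To make the first passage computation concrete, here is a short Python/NumPy sketch (the function name absorption_probs is ours) that builds the transition matrix above for general N and solves the standard linear system for the probability of hitting N before 0; for a fair game it reproduces P_i = i/N:

    import numpy as np

    def absorption_probs(N, p):
        # Probability of hitting state N before state 0 from each starting state 1..N-1,
        # obtained by solving h = r + Q h on the transient states.
        q = 1 - p
        T = np.zeros((N + 1, N + 1))
        T[0, 0] = T[N, N] = 1.0              # 0 and N are absorbing
        for i in range(1, N):
            T[i, i + 1] = p                  # win $1
            T[i, i - 1] = q                  # lose $1
        Q = T[1:N, 1:N]                      # transitions among the transient states 1..N-1
        r = T[1:N, N]                        # one-step probabilities of absorption at N
        return np.linalg.solve(np.eye(N - 1) - Q, r)

    print(absorption_probs(4, 0.5))          # [0.25 0.5 0.75], i.e. P_i = i/N for the fair game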
^1 There are three communication classes: C_1 = {0}, C_2 = {1, ..., N − 1}, C_3 = {N}. C_1 and C_3 are recurrent whereas C_2 is transient.