



One method to solve and analyze nonlinear dynamic stochastic models is to approximate the nonlinear equations characterizing the equilibrium with log-linear ones. The strategy is to use a first order Taylor approximation around the steady state to replace the equations with approximations, which are linear in the log-deviations of the variables. Let $X_t$ be a strictly positive variable, $X$ its steady state and
$$x_t \equiv \log X_t - \log X \qquad (1)$$
the logarithmic deviation. First notice that, for $X$ small, $\log(1 + X) \approx X$, thus:
$$x_t \equiv \log(X_t) - \log(X) = \log\left(\frac{X_t}{X}\right) = \log(1 + \%\,\text{change}) \approx \%\,\text{change}.$$
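As a quick numerical sanity check (a minimal sketch; the steady-state level and the 3% deviation are illustrative values, and NumPy is used only for the logarithm), the log-deviation is indeed close to the percentage change when the deviation is small:

    import numpy as np

    X_ss = 1.0                              # illustrative steady-state level
    X_t = 1.03                              # 3% above steady state
    log_dev = np.log(X_t) - np.log(X_ss)    # x_t = log X_t - log X
    pct_change = X_t / X_ss - 1             # exact percentage change
    print(log_dev, pct_change)              # roughly 0.0296 vs 0.03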
Suppose that we have an equation of the following form:
$$f(X_t, Y_t) = g(Z_t). \qquad (2)$$
where $X_t$, $Y_t$ and $Z_t$ are strictly positive variables. This equation is clearly also valid at the steady state:
$$f(X, Y) = g(Z). \qquad (3)$$
To find the log-linearized version of (2), rewrite the variables using the identity $X_t = \exp(\log(X_t))$, which allows us to obtain an equation in the log-deviations, and then take logs on both sides:
$$\log\left(f\left(e^{\log(X_t)}, e^{\log(Y_t)}\right)\right) = \log\left(g\left(e^{\log(Z_t)}\right)\right). \qquad (4)$$
Now take a first order Taylor approximation around the steady state $(\log(X), \log(Y), \log(Z))$. After some calculations, we can write the left hand side as
$$\log(f(X, Y)) + \frac{1}{f(X, Y)}\left[f_1(X, Y)\,X\,(\log(X_t) - \log(X)) + f_2(X, Y)\,Y\,(\log(Y_t) - \log(Y))\right]. \qquad (5)$$
Similarly, the right hand side can be written as
$$\log(g(Z)) + \frac{1}{g(Z)}\left[g'(Z)\,Z\,(\log(Z_t) - \log(Z))\right]. \qquad (6)$$
Equating (5) and (6), and using (3) and (1), yields the following log-linearized equation:
$$f_1(X, Y)\,X\,x_t + f_2(X, Y)\,Y\,y_t \approx g'(Z)\,Z\,z_t. \qquad (7)$$
Notice that this is a linear equation in the deviations! Generalizing, the log-linearization of an equation of the form
$$f(X^1_t, \ldots, X^n_t) = g(Y^1_t, \ldots, Y^m_t)$$
is:
$$\sum_{i=1}^{n} f_i(X^1, \ldots, X^n)\,X^i\,x^i_t \approx \sum_{j=1}^{m} g_j(Y^1, \ldots, Y^m)\,Y^j\,y^j_t.$$
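The derivation above can also be checked symbolically. The minimal SymPy sketch below uses an arbitrarily chosen $f(X_t, Y_t) = X_t\sqrt{Y_t}$ and $g(Z_t) = Z_t^2$ (both purely illustrative): it builds the first order Taylor expansion of (4) in the log-deviations directly and confirms that, once the steady-state relation (3) removes the constant terms, what remains is exactly expression (5) minus expression (6):

    import sympy as sp

    X, Y, Z = sp.symbols('X Y Z', positive=True)   # steady-state levels
    x, y, z = sp.symbols('x y z', real=True)       # log-deviations

    f = lambda a, b: a * sp.sqrt(b)                # illustrative choice of f
    g = lambda c: c**2                             # illustrative choice of g

    # Both sides of (4), with X_t = X e^{x_t}, Y_t = Y e^{y_t}, Z_t = Z e^{z_t}
    lhs = sp.log(f(X * sp.exp(x), Y * sp.exp(y)))
    rhs = sp.log(g(Z * sp.exp(z)))

    def first_order(expr, devs):
        # first order Taylor expansion in the log-deviations around zero
        at_ss = expr.subs({d: 0 for d in devs})
        return at_ss + sum(sp.diff(expr, d).subs({dd: 0 for dd in devs}) * d for d in devs)

    approx = first_order(lhs, (x, y)) - first_order(rhs, (z,))

    # Expressions (5) and (6) without their constant log terms
    side5 = (sp.diff(f(X, Y), X) * X * x + sp.diff(f(X, Y), Y) * Y * y) / f(X, Y)
    side6 = sp.diff(g(Z), Z) * Z * z / g(Z)

    # The difference is the constant log f(X,Y) - log g(Z), which vanishes by (3)
    print(sp.simplify(approx - (side5 - side6) - (sp.log(f(X, Y)) - sp.log(g(Z)))))  # -> 0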
However, in the large majority of cases there is no need for explicit differentiation of the functions $f$ and $g$. Instead, the log-linearized equation can usually be obtained with a simpler method. Let's see. Notice first that you can write
$$X_t = X\left(\frac{X_t}{X}\right) = X e^{\log(X_t/X)} = X e^{x_t}.$$
Taking a first order Taylor approximation around the steady state yields
$$X e^{x_t} \approx X e^{0} + X e^{0}(x_t - 0) = X(1 + x_t).$$
By the same logic, you can write
$$X_t Y_t \approx X(1 + x_t)\,Y(1 + y_t) = XY(1 + x_t + y_t + x_t y_t) \approx XY(1 + x_t + y_t),$$
since $x_t y_t \approx 0$ when $x_t$ and $y_t$ are numbers close to zero. Second, notice that
$$f(X_t) \approx f(X) + f'(X)(X_t - X) = f(X) + f'(X)\,X\left(\frac{X_t}{X} - 1\right) \approx f(X) + f(X)\,\eta\,(1 + x_t - 1) = f(X)(1 + \eta x_t),$$
where $\eta \equiv f'(X)X/f(X)$ is the elasticity of $f$ at the steady state and we used $X_t/X \approx 1 + x_t$.
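The three building-block rules above are easy to check numerically. In the sketch below the steady-state levels, the deviations, and the function $f(v) = v^{0.3}$ (so that $\eta = 0.3$) are all arbitrary illustrative choices:

    import numpy as np

    X, Y = 2.0, 5.0                 # illustrative steady-state levels
    x_t, y_t = 0.02, -0.01          # small log-deviations
    X_t, Y_t = X * np.exp(x_t), Y * np.exp(y_t)

    # Rule 1: X_t ~ X(1 + x_t)
    print(X_t, X * (1 + x_t))                   # ~2.0404 vs 2.0400

    # Rule 2: X_t Y_t ~ XY(1 + x_t + y_t)
    print(X_t * Y_t, X * Y * (1 + x_t + y_t))   # ~10.1005 vs 10.1000

    # Rule 3: f(X_t) ~ f(X)(1 + eta x_t), with eta = f'(X)X/f(X)
    f = lambda v: v**0.3                        # illustrative f; here eta = 0.3 for every X
    eta = 0.3
    print(f(X_t), f(X) * (1 + eta * x_t))       # ~1.23855 vs 1.23853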
Notice that at the steady state
$$R^{\sigma - 1}\beta^{\sigma} = 1 - \Pi$$
and
$$\Pi = 1 - R^{\sigma - 1}\beta^{\sigma}.$$
Using (8) and (9) we can write the nonlinear difference equation as
$$R^{\sigma - 1}\beta^{\sigma}\left(1 + (\sigma - 1) r_{t+1} + \pi_t - \pi_{t+1}\right) \approx 1 - (1 - R^{\sigma - 1}\beta^{\sigma})(1 + \pi_t).$$
Canceling out constants yields
$$R^{\sigma - 1}\beta^{\sigma}\left[(\sigma - 1) r_{t+1} + \pi_t - \pi_{t+1}\right] \approx -(1 - R^{\sigma - 1}\beta^{\sigma})\,\pi_t.$$
Rearranging, we obtain
$$\frac{R^{\sigma - 1}\beta^{\sigma} - 1}{R^{\sigma - 1}\beta^{\sigma}}\,\pi_t \approx (\sigma - 1) r_{t+1} + \pi_t - \pi_{t+1}$$
and, finally,
$$\pi_t \approx R^{\sigma - 1}\beta^{\sigma}\left[(1 - \sigma) r_{t+1} + \pi_{t+1}\right].$$
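The algebra from the equation after canceling constants down to this last expression can be verified symbolically. The sketch below (illustrative; it simply treats the $\approx$ signs as equalities and uses SymPy) solves the linear equation for $\pi_t$ and confirms the result matches the expression above:

    import sympy as sp

    R, beta, sigma, r1, pi_t, pi_1 = sp.symbols('R beta sigma r1 pi_t pi_1')
    k = R**(sigma - 1) * beta**sigma      # shorthand for R^(sigma-1) beta^sigma

    # Log-linearized equation after canceling constants, with ~ treated as =
    eq = sp.Eq(k * ((sigma - 1) * r1 + pi_t - pi_1), -(1 - k) * pi_t)

    # Solve for pi_t and compare with pi_t = R^(sigma-1) beta^sigma [(1 - sigma) r1 + pi_1]
    sol = sp.solve(eq, pi_t)[0]
    print(sp.simplify(sol - k * ((1 - sigma) * r1 + pi_1)))   # -> 0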
2.1.3 The Euler equation
The consumption Euler equation is
$$1 = R_{t+1}\,\beta\left(\frac{C_{t+1}}{C_t}\right)^{-\gamma}.$$
Using (9) and (10) we can write it as
$$1 \approx R\beta\left(1 + r_{t+1} - \gamma(c_{t+1} - c_t)\right).$$
Canceling out constants yields
$$0 \approx r_{t+1} - \gamma(c_{t+1} - c_t)$$
and, rearranging,
$$c_t \approx -\sigma r_{t+1} + c_{t+1},$$
where $\sigma = 1/\gamma$ is the intertemporal elasticity of substitution.
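A numerical check of this log-linearization follows (a sketch with illustrative parameter values; the steady state is taken to satisfy $R\beta = 1$, which is what makes the constants cancel above). Plugging the levels implied by the log-linear rule back into the exact Euler equation gives a residual of essentially zero; in fact, since the Euler equation contains only multiplicative terms, the log-linear form is exact, as discussed in the next subsection:

    import numpy as np

    beta, gamma = 0.99, 2.0            # illustrative parameter values
    R_ss, C_ss = 1 / beta, 1.0         # steady state with R beta = 1
    sigma = 1 / gamma                  # intertemporal elasticity of substitution

    r_next, c_next = 0.005, 0.01       # small log-deviations of R_{t+1} and C_{t+1}
    c_t = -sigma * r_next + c_next     # log-linearized Euler equation

    # Back out levels and plug them into the exact equation 1 = R_{t+1} beta (C_{t+1}/C_t)^(-gamma)
    R_next = R_ss * np.exp(r_next)
    C_t, C_next = C_ss * np.exp(c_t), C_ss * np.exp(c_next)
    print(R_next * beta * (C_next / C_t) ** (-gamma) - 1)   # 0 up to floating-point error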
2.1.4 Multiplicative equations
If the equation to log-linearize contains only multiplicative terms, there is a faster procedure. Suppose we have the following equation:
$$\frac{X_t Y_t}{Z_t} = \alpha$$
where $\alpha$ is a constant. To log-linearize, first divide through by the steady-state relation $XY/Z = \alpha$:
$$\frac{\left(\frac{X_t}{X}\right)\left(\frac{Y_t}{Y}\right)}{\left(\frac{Z_t}{Z}\right)} = \frac{\alpha}{\alpha} = 1.$$
Now take logs:
$$\log\left(\frac{X_t}{X}\right) + \log\left(\frac{Y_t}{Y}\right) - \log\left(\frac{Z_t}{Z}\right) = \log(1) = 0.$$
Using (1), we then easily arrive at the log-linearized equation:
$$x_t + y_t - z_t = 0.$$
Notice that in this case the log-linearized equation is not an approximation!