Fall Semester ’07-’08
Akila Weerapana

LECTURE 8: MULTIVARIABLE STATIC OPTIMIZATION

I. INTRODUCTION

  • Many economic models are derived from the behavior of individuals, firms or policy makers who are trying to maximize some measure of welfare subject to constraints on their behavior. Optimization is, therefore, the most important mathematical concept in economics.
  • Our study of optimization encompasses three categorizations of problems. The first distinction is between univariate and multivariate optimization, i.e. finding extreme values of functions with one variable vs. finding extreme values of functions with many variables. Read Chapter 9 of Klein and bring yourself back up to speed quickly on the first and second order conditions for a solution to a univariate maximization problem.
  • The second distinction is between constrained and unconstrained optimization. You should be familiar with multivariate optimization from Econ 201. What we do here should add to that knowledge.
  • The final distinction is between static and dynamic optimization, i.e. between one-shot optimization decisions and decisions in which your current choice may affect a subsequent optimization decision.

II. UNIVARIATE OPTIMIZATION

  • Optimization lies at the heart of economic behavior. Therefore, it is vital that we be able to figure out what the extreme values of a function are. Here, I will first derive the necessary first order (FOC) and sufficient second order (SOC) conditions for finding the extreme value(s) of a univariate function.
  • I will be using differentials to find the extreme value(s) of a function. This will allow for an easier extension of this analysis from the univariate case to the multivariate case.

First Order (Necessary) Conditions

  • Given a function of the form y = f(x) we can calculate the differential as dy = f′(x)dx. If x∗ is a candidate for an extreme point then the value of y should not change for any small change dx in the vicinity of x∗. In other words, we should see that dy ≡ f′(x∗)dx = 0 for all dx in the vicinity of x∗. This only holds true when f′(x∗) = 0, which is the traditional first order condition.
  • Keep in mind that the FOC is only a necessary condition for an extreme point, not a sufficient one. It is possible that f′(x∗) = 0 even when x∗ is neither a minimum nor a maximum. Consider the function y = x³. Since f′(x) = 3x², we can show that f′(0) = 0. However, x = 0 is neither a local minimum nor a local maximum of f(x).
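As a quick illustration, here is a minimal sketch using Python's sympy library (a tooling choice of this write-up, not part of the original notes) that applies the FOC to the y = x³ example and shows why the condition alone cannot classify the point:

```python
# Minimal sketch: the FOC finds x = 0 as a stationary point of f(x) = x^3,
# but the FOC alone cannot tell us whether it is a max, a min, or neither.
import sympy as sp

x = sp.symbols('x')
f = x**3

fprime = sp.diff(f, x)              # f'(x) = 3x^2
print(sp.solve(fprime, x))          # [0]: x = 0 satisfies the FOC
print(sp.diff(f, x, 2).subs(x, 0))  # f''(0) = 0: the point is neither a min nor a max
```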

Second Order (Sufficient) Conditions

  • The FOC does not tell us whether the extreme point is a minimum or a maximum. For that, we need to think about SOCs. To determine the nature of the extreme point, we look at the behavior of the differential around the extreme point. To do this we calculate the second order differential of y as d²y = d[f′(x)dx] = f′′(x)dx².
  • If y = f(x) has a local maximum at x = x∗ then not only will dy be zero at x∗, dy must also be declining when x changes in the neighborhood of x∗. In other words, in the vicinity of a peak we should have d²y ≡ f′′(x∗)dx² < 0. So the second order sufficient condition for a local maximum is that f′′(x∗) < 0.
  • If y = f(x) has a local minimum at x = x∗ then not only will dy be zero at x∗, dy must also be increasing when x changes in the neighborhood of x∗. In other words, in the vicinity of a valley we should have d²y ≡ f′′(x∗)dx² > 0. So the second order sufficient condition for a local minimum is that f′′(x∗) > 0.
  • If x∗ is neither a local maximum nor a local minimum then dy will be increasing when x changes in one direction and decreasing when x changes in the other direction in the neighborhood of x∗. In that case d²y ≡ f′′(x∗)dx² = 0.
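Continuing the sketch above, the second-derivative test can classify stationary points. The function below, f(x) = x³ − 3x, is an illustrative choice of this write-up, not from the lecture:

```python
# Hedged sketch: classify each stationary point of f(x) = x^3 - 3x by the sign of f''.
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x

for x_star in sp.solve(sp.diff(f, x), x):   # stationary points: x = -1 and x = 1
    f2 = sp.diff(f, x, 2).subs(x, x_star)   # f''(x*)
    kind = 'local max' if f2 < 0 else ('local min' if f2 > 0 else 'inconclusive')
    print(x_star, f2, kind)                 # -1: f'' = -6, local max; 1: f'' = 6, local min
```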

Concavity, Convexity and Global Extrema

  • We can also distinguish between local and global extreme points. If the function f(x) is concave, then any extreme point is a global maximum. In other words, if f′′(x) ≤ 0 everywhere, then x∗ is a global maximum.
  • Furthermore, if the function f(x) is strictly concave, then any extreme point is a unique global maximum. In other words, if f′′(x) < 0 everywhere, then x∗ is a unique global maximum.
  • Similarly, if the function f(x) is convex, then any extreme point is a global minimum. In other words, if f′′(x) ≥ 0 everywhere, then x∗ is a global minimum.
  • Furthermore, if the function f(x) is strictly convex, then any extreme point is a unique global minimum. In other words, if f′′(x) > 0 everywhere, then x∗ is a unique global minimum.
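As a one-line worked example (the function is an illustrative choice, not from the notes), consider f(x) = −(x − 1)², which is strictly concave:

$$f'(x) = -2(x-1) = 0 \;\Rightarrow\; x^* = 1, \qquad f''(x) = -2 < 0 \text{ everywhere} \;\Rightarrow\; x^* = 1 \text{ is the unique global maximum.}$$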

III. MULTIVARIATE OPTIMIZATION

First Order (Necessary) Conditions

  • We can extend this analysis to thinking about multivariate optimization decision problems using differentials.
  • Given a generic multivariate function of the form z = f(x₁, x₂, · · ·, xₙ) we can calculate the differential of z as

$$dz = \sum_{i=1}^{n} f_i(x_1, x_2, \cdots, x_n)\,dx_i$$

    At an extreme point x∗ = (x∗₁, x∗₂, · · ·, x∗ₙ), we must have

$$dz \equiv \sum_{i=1}^{n} f_i(x_1^*, x_2^*, \cdots, x_n^*)\,dx_i = 0$$

    or else we would be able to reach a higher or lower point in the immediate vicinity.
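To make the multivariate FOC concrete, here is a minimal sympy sketch (tooling assumed, as before) that solves the system of first-order conditions for z = −x² + xy − y² + 3x, the function used in the worked example later in this lecture:

```python
# Solve the multivariate FOC: set every first partial derivative to zero simultaneously.
import sympy as sp

x, y = sp.symbols('x y')
z = -x**2 + x*y - y**2 + 3*x

grad = [sp.diff(z, v) for v in (x, y)]  # [-2x + y + 3, x - 2y]
print(sp.solve(grad, [x, y]))           # {x: 2, y: 1}: the unique stationary point
```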

Second Order (Sufficient) Conditions

  • The function z = f(x₁, x₂, · · ·, xₙ) has a local maximum at x∗ ≡ (x∗₁, x∗₂, · · ·, x∗ₙ) if d²z < 0 at x∗. In this case d²z is said to be negative definite.
  • The function z = f(x₁, x₂, · · ·, xₙ) has a local minimum at x∗ ≡ (x∗₁, x∗₂, · · ·, x∗ₙ) if d²z > 0 at x∗. In this case d²z is said to be positive definite.
  • The extreme point is neither a minimum nor a maximum (a saddle point, the multivariable analog to a point of inflection) if d²z > 0 for some directions and d²z < 0 for others near x∗.
  • The intuition is as follows. Suppose we have found a prospective candidate for an extreme point, i.e. some point z∗ = f(x∗₁, x∗₂, · · ·, x∗ₙ) at which dz = 0. Suppose we move away from z∗ in all directions and find that dz becomes negative. Since dz was zero at z∗ and < 0 in the vicinity of z∗, we know that z∗ is a local maximum. In other words, if dz is declining in the neighborhood of z∗, given that it is zero at z∗, then z∗ is a peak. We can also express this as follows: if d²z < 0 at z∗ then z∗ is a local maximum.
  • Conversely, suppose we move away from z∗ in all directions and find that dz becomes positive. Since dz was zero at z∗ and > 0 in the vicinity of z∗, we know that z∗ is a local minimum. In other words, if dz is increasing in the neighborhood of z∗, given that it is zero at z∗, then z∗ is a valley. We can also express this as follows: if d²z > 0 at z∗ then z∗ is a local minimum.
  • If dz could be either increasing or decreasing depending on which direction we move in, given that it equaled zero at z∗, then z∗ is said to be a saddle point (the multivariable analog to a point of inflection). We can also express this as follows: if d²z ≷ 0 at z∗ (depending on the direction of movement) then z∗ is a saddle point.
  • To check the second order condition, we need to calculate

$$d^2z \equiv \sum_{i=1}^{n} f_{ii}\,dx_i^2 + 2\sum_{i=1}^{n}\sum_{j=i+1}^{n} f_{ij}\,dx_i\,dx_j$$

    and check its value in the vicinity of (x∗₁, x∗₂, · · ·, x∗ₙ). This can be pretty tedious, so there is an easier way to test the SOC by using a special matrix called the Hessian matrix.

  • Given a function z = f(x₁, x₂, · · ·, xₙ), the Hessian matrix associated with that function is

$$H = \begin{bmatrix} f_{11} & f_{12} & \cdots & f_{1n} \\ f_{21} & f_{22} & \cdots & f_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ f_{n1} & f_{n2} & \cdots & f_{nn} \end{bmatrix}$$

  • From this Hessian matrix, we can construct a sequence of determinants known as Principal Minors. These are distinct from the minors that we talked about earlier. The n principal minors of H are

$$|H_1| = |f_{11}|, \quad |H_2| = \begin{vmatrix} f_{11} & f_{12} \\ f_{21} & f_{22} \end{vmatrix}, \quad |H_3| = \begin{vmatrix} f_{11} & f_{12} & f_{13} \\ f_{21} & f_{22} & f_{23} \\ f_{31} & f_{32} & f_{33} \end{vmatrix}, \quad \cdots, \quad |H_n| = \begin{vmatrix} f_{11} & f_{12} & \cdots & f_{1n} \\ f_{21} & f_{22} & \cdots & f_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ f_{n1} & f_{n2} & \cdots & f_{nn} \end{vmatrix}$$
  • Basically, the jth principal minor is the determinant of the matrix constructed by taking the first j rows and j columns of the Hessian. The SOC for multivariable static optimization is as follows (the proof is very nasty, come see me if you are masochistic):
  • z∗ is a local maximum (d²z is negative definite) if |H₁| < 0, |H₂| > 0, |H₃| < 0, · · ·, (−1)ⁿ|Hₙ| > 0, i.e. the principal minors, evaluated at x = x∗, are of alternating signs with the first one being negative.
  • z∗ is a local minimum (d²z is positive definite) if |H₁| > 0, |H₂| > 0, |H₃| > 0, · · ·, |Hₙ| > 0, i.e. the principal minors, evaluated at x = x∗, are all positive.
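A minimal numerical sketch of this test (numpy assumed; the function name is a hypothetical helper of this write-up, not from the lecture):

```python
# Classify a stationary point from its Hessian H evaluated at x*, using the
# signs of the leading principal minors |H_1|, ..., |H_n| described above.
import numpy as np

def classify_stationary_point(H):
    n = H.shape[0]
    minors = [np.linalg.det(H[:k, :k]) for k in range(1, n + 1)]
    if all((-1) ** k * m > 0 for k, m in enumerate(minors, start=1)):
        return 'local maximum'   # alternating signs, first minor negative
    if all(m > 0 for m in minors):
        return 'local minimum'   # all leading principal minors positive
    return 'inconclusive (possibly a saddle point)'

print(classify_stationary_point(np.array([[-2., 1.], [1., -2.]])))  # local maximum
```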

Example:

In the case z = −x² + xy − y² + 3x, we showed that the stationary point of z was x = 2 and y = 1. The Hessian is

$$H = \begin{bmatrix} z_{xx} & z_{xy} \\ z_{yx} & z_{yy} \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix}$$

The principal minors are

$$|H_1| = -2, \qquad |H_2| = \begin{vmatrix} -2 & 1 \\ 1 & -2 \end{vmatrix} = 3$$

Given the alternating signs of the principal minors, we can conclude that z∗ is a maximum.
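The same calculation can be checked symbolically; this is a hedged sketch using sympy's built-in hessian helper (a tooling choice of this write-up, not the method of the original notes):

```python
# Verify the worked example: Hessian and principal minors of z = -x^2 + xy - y^2 + 3x.
import sympy as sp

x, y = sp.symbols('x y')
z = -x**2 + x*y - y**2 + 3*x

H = sp.hessian(z, (x, y))        # Matrix([[-2, 1], [1, -2]])
print(H[:1, :1].det(), H.det())  # |H_1| = -2, |H_2| = 3: alternating signs, a maximum
```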

Concavity, Convexity and Global Extrema

  • The final piece of theory relates to the concavity and convexity of a multivariate function. Recall that in the univariate case, a function f(x) was said to be concave if the following inequality holds for any two points x₁ and x₂ in the function’s domain, where 0 < λ < 1:

$$f(\lambda x_1 + (1-\lambda)x_2) \geq \lambda f(x_1) + (1-\lambda)f(x_2)$$

  • Similarly, a function f(x) was said to be convex if the following inequality holds:

$$f(\lambda x_1 + (1-\lambda)x_2) \leq \lambda f(x_1) + (1-\lambda)f(x_2)$$

  • The same definition holds in the multivariate case, except that x is now a vector (x₁, x₂, · · ·, xₙ) instead of a single variable. Calculating whether a multivariate function is concave or convex is difficult: we can’t draw a graph very easily, and it is hard to think about points of a function lying above planes, which would be the multivariate analog to how we visually identified whether a univariate function was concave or not.
  • We can use the Hessian and the principal minors to test for concavity or convexity of a function. The definitions are as follows:
  • A function z = f(x₁, x₂, · · ·, xₙ) is said to be concave if d²z is everywhere negative semidefinite, i.e. if |H₁| ≤ 0, |H₂| ≥ 0, |H₃| ≤ 0, · · ·, (−1)ⁿ|Hₙ| ≥ 0.
  • A function z = f(x₁, x₂, · · ·, xₙ) is said to be convex if d²z is everywhere positive semidefinite, i.e. if |H₁| ≥ 0, |H₂| ≥ 0, |H₃| ≥ 0, · · ·, |Hₙ| ≥ 0.
  • A concave function has a global maximum, but it is not necessarily unique, i.e. the function may have more than one peak. A convex function has a global minimum, but it is not necessarily unique, so the function may have more than one low point.
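As a small illustration (the function is chosen here, not taken from the lecture), a quadratic has a constant Hessian, so the everywhere-semidefiniteness test only needs to be checked once:

```python
# Check convexity of z = x^2 + y^2: its Hessian is constant, so the principal
# minor conditions hold everywhere if they hold anywhere.
import sympy as sp

x, y = sp.symbols('x y')
z = x**2 + y**2

H = sp.hessian(z, (x, y))                   # Matrix([[2, 0], [0, 2]])
print(H[:1, :1].det() >= 0, H.det() >= 0)   # True True: positive semidefinite, so convex
```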

Price Discrimination

  • Consider a slightly different variation. This time we are dealing with a firm that sells a single good in two different markets, and needs to decide how much of that good to sell in each market. An example might be an airline which sells a single product, e.g. seats for air travel, in two markets: business fliers and leisure fliers.
  • We will also use a general cost function and revenue function with parameters rather than specific numeric values. Finally, we will assume that the market is in fact a monopoly (so Air France may be the more appropriate example instead of Delta Airlines).
  • Assume that the revenue functions of the firm in the two markets are RA(QA) and RB(QB). The total cost function is given by C(Q) where Q = QA + QB. Also assume that the cost function is convex in Q and that the revenue functions are concave.
  • The profit function of the airline can be written as Π(QA, QB) = RA(QA) + RB(QB) − C(Q).
  • The producer’s maximization decision is $\max_{Q_A, Q_B} \Pi(Q_A, Q_B)$ and the FOC are

$$R_A'(Q_A) - C'(Q)\frac{\partial Q}{\partial Q_A} \equiv R_A'(Q_A) - C'(Q) = 0$$

$$R_B'(Q_B) - C'(Q)\frac{\partial Q}{\partial Q_B} \equiv R_B'(Q_B) - C'(Q) = 0$$

    since ∂Q/∂QA = ∂Q/∂QB = 1.
  • The first order conditions state that marginal revenue in each market equals the marginal cost of production, which is as we would expect with any maximizing decision.
  • We can check the SOC by calculating the Hessian as

$$H = \begin{bmatrix} R_A''(Q_A) - C''(Q) & -C''(Q) \\ -C''(Q) & R_B''(Q_B) - C''(Q) \end{bmatrix}$$

    which in turn implies that the principal minors are

$$|H_1| = R_A''(Q_A) - C''(Q), \qquad |H_2| = \begin{vmatrix} R_A''(Q_A) - C''(Q) & -C''(Q) \\ -C''(Q) & R_B''(Q_B) - C''(Q) \end{vmatrix} = R_A'' R_B'' - C''\,(R_A'' + R_B'')$$

  • Given the assumptions that the cost function is strictly convex and the revenue functions are strictly concave, we know that C′′ > 0 and R′′A, R′′B < 0 everywhere.
  • Then the sign of |H₁| is negative and the sign of |H₂| is positive. Thus, if we have a strictly concave revenue function and a strictly convex cost function, we get a unique global solution to our maximization problem.
  • If we had a concave revenue function and a convex cost function, we would get a global solution to our maximization problem that may not be unique.
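To see the whole argument end to end, here is a hedged sketch with assumed quadratic functional forms and illustrative parameter values (none of these specific forms or numbers appear in the lecture):

```python
# Illustrative price-discrimination problem: strictly concave revenues,
# strictly convex cost, so the solution below is a unique global maximum.
import sympy as sp

QA, QB = sp.symbols('Q_A Q_B', positive=True)

RA = 10*QA - QA**2                                  # revenue in market A (assumed form)
RB = 10*QB - QB**2                                  # revenue in market B (assumed form)
C = 2*(QA + QB) + sp.Rational(1, 2)*(QA + QB)**2    # total cost (assumed form)
profit = RA + RB - C

foc = [sp.diff(profit, v) for v in (QA, QB)]        # MR_A = MC and MR_B = MC
print(sp.solve(foc, [QA, QB], dict=True)[0])        # {Q_A: 2, Q_B: 2}

H = sp.hessian(profit, (QA, QB))
print(H[:1, :1].det(), H.det())                     # |H_1| = -3 < 0, |H_2| = 8 > 0: a maximum
```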