1.1. Understanding and representing technology. Economists are interested in the technology used by
the firm. This is usually represented by a set or a function. For example we might describe the technology
in terms of the input correspondence, which maps output vectors in R^m_+ into subsets of R^n_+ (that is, into elements of 2^{R^n_+}); the elements of V(y) are the vectors of inputs that will produce the given output vector y. This correspondence is given by

  V: R^m_+ → 2^{R^n_+}   (1)

Alternatively we could represent the technology by a function such as the production function

  y = f(x_1, x_2, ..., x_n)
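As a concrete running example for these notes, here is a minimal sketch in Python, assuming a hypothetical two-input Cobb-Douglas production function f(x_1, x_2) = x_1^0.3 x_2^0.5; the functional form and all numbers are illustrative assumptions, not taken from the notes. The input requirement set V(y) is represented implicitly by a membership test rather than by enumerating the (infinite) set.

```python
# A minimal sketch, assuming a hypothetical Cobb-Douglas technology
# f(x1, x2) = x1**0.3 * x2**0.5 (illustrative; not from the notes).
import numpy as np

def f(x):
    """Assumed production function."""
    return x[0] ** 0.3 * x[1] ** 0.5

def in_V(x, y):
    """Membership test for the input requirement set V(y) = {x : f(x) >= y}."""
    return f(np.asarray(x, dtype=float)) >= y

print(in_V([4.0, 9.0], 4.0))  # True: this bundle produces at least y = 4
print(in_V([1.0, 1.0], 4.0))  # False: too few inputs for y = 4
```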
We typically postulate or assume certain properties for various representations of technology. For example we typically make the following assumptions on the input correspondence.
1.1.1. V.1 No Free Lunch.
a: V(0) = R^n_+
b: 0 ∉ V(y), y > 0.
1.1.2. V.2 Weak Input Disposability. ∀ y ∈ R^m_+, x ∈ V(y) and λ ≥ 1 ⇒ λx ∈ V(y).
1.1.3. V.2.S Strong Input Disposability. ∀ y ∈ R^m_+, x ∈ V(y) and x′ ≥ x ⇒ x′ ∈ V(y).
1.1.4. V.3 Weak Output Disposability. ∀ y ∈ R^m_+, 0 ≤ θ ≤ 1 ⇒ V(y) ⊆ V(θy).
1.1.5. V.3.S Strong Output Disposability. ∀ y, y′ ∈ R^m_+, y′ ≥ y ⇒ V(y′) ⊆ V(y).
1.1.6. V.4 Boundedness for vector y. If ‖y^ℓ‖ → +∞ as ℓ → +∞, then

  ∩_{ℓ=1}^{+∞} V(y^ℓ) = ∅

If y is a scalar,

  ∩_{y ∈ (0,+∞)} V(y) = ∅
1.1.7. V.5 V(y) is a closed set. V: R^m_+ → 2^{R^n_+} is a closed correspondence.
1.1.8. V.6 Attainability. If x ∈ V(y), y ≥ 0 and x ≥ 0, the ray {λx : λ ≥ 0} intersects all V(θy), θ ≥ 0.
1.1.9. V.7 Quasi-concavity. V is quasi-concave on R^m_+, which means ∀ y, y′ ∈ R^m_+, 0 ≤ θ ≤ 1, V(y) ∩ V(y′) ⊆ V(θy + (1−θ)y′).
1.1.10. V.8 Convexity of V(y). V(y) is a convex set for all y ∈ R^m_+.
1.1.11. V.9 Convexity of T(x). V is convex on R^m_+, which means ∀ y, y′ ∈ R^m_+, 0 ≤ θ ≤ 1, θV(y) + (1−θ)V(y′) ⊆ V(θy + (1−θ)y′).
Date : October 4, 2005.
1.2. Understanding and representing behavior. Economists are also very interested in the behavioral relationships that arise from firms' optimizing decisions. These decisions result in reduced form expressions such as supply functions y∗ = y(p, w), input demand functions x∗ = x(p, w), or Hicksian (cost minimizing) demand functions x∗ = x(w, y). One of the most basic of these decisions is the cost minimizing decision.
2.1. Definition of cost function. The cost function is defined for all possible output vectors and all positive input price vectors w = (w_1, w_2, ..., w_n). An output vector y is producible if y belongs to the effective domain of V(y), i.e.,

  Dom V = {y ∈ R^m_+ : V(y) ≠ ∅}

The cost function does not exist if there is no technical way to produce the output in question. The cost function is defined by

  C(y, w) = min_x {wx : x ∈ V(y)}, y ∈ Dom V, w > 0   (2)
or, in the case of a single output,

  C(y, w) = min_x {wx : f(x) ≥ y}   (3)

The cost function exists because a continuous function on a nonempty closed bounded set achieves a minimum in the set (Debreu [6, p. 16]). In figure 1, the set V(y) is closed and nonempty for y in the producible output set. The function wx is continuous. Because V(y) is non-empty, it contains at least one input bundle x′. We can thus restrict attention to points of V(y) that satisfy wx ≤ wx′. This restricted set is closed and bounded given that w is strictly positive. Thus the function wx will attain a minimum on the set at some point x″.
2.2. Solution to the cost minimization problem. The solution to the cost minimization problem (2) is a vector x which depends on the output vector y and the input price vector w. We denote this solution by x(y, w). This demand for inputs, for a fixed level of output and fixed input prices, is often called a Hicksian demand curve.
2.3. Properties of the cost function.
2.3.1. C.1.1. Non-negativity: C(y, w) ≥ 0 for w > 0.
2.3.2. C.1.2. Nondecreasing in w: If w ≥ w’ then C(y, w) ≥ C(y, w’)
2.3.3. C.2. Positively linearly homogeneous in w: C(y, λw) = λC(y, w), λ > 0, w > 0.
2.3.4. C.3. C is concave and continuous in w
2.3.5. C.4.1. No fixed costs: C(0, w) = 0, ∀ w > 0. We assume this when we have a long run problem.
2.3.6. C.4.2. No free lunch: C(y, w) > 0, w > 0, y > 0.
2.3.7. C.5. Nondecreasing in y (proportional): C(θy, w) ≤ C(y, w), 0 ≤ θ ≤ 1, w > 0.
2.3.8. C.5.S. Nondecreasing in y: C(y’, w) ≤ C(y, w), y’ ≤ y, w > 0.
2.3.9. C.6. For any sequence y^ℓ such that ‖y^ℓ‖ → ∞ as ℓ → ∞ and w > 0, C(y^ℓ, w) → ∞ as ℓ → ∞.
2.3.10. C.7. C(y,w) is lower semi-continuous in y, given w > 0.
2.4.3. C.2. Positively linearly homogeneous in w: C(y, λw) = λC(y, w), λ > 0, w > 0.

Let the cost minimization problem with prices w be given by

  C(y, w) = min_x {wx : x ∈ V(y)}, y ∈ Dom V, w > 0   (4)

The x vector that solves this problem will be a function of y and w, and is usually denoted x(y, w). The cost function is then given by

  C(y, w) = w · x(y, w)   (5)

Now consider the problem with prices tw (t > 0):

  C(y, tw) = min_x {twx : x ∈ V(y)} = t min_x {wx : x ∈ V(y)}, y ∈ Dom V, w > 0   (6)

The x vector that solves this problem will be the same as the vector which solves the problem in equation (4), i.e., x(y, w). The cost function for the revised problem is then given by

  Ĉ(y, tw) = tw · x(y, w) = tC(y, w)   (7)
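A quick numerical check of C.2 under the same assumed technology: scaling all input prices by t should scale the minimized cost by t.

```python
# Check of C.2 (linear homogeneity in w): C(y, t*w) should equal t*C(y, w).
t = 2.5
C1, _ = cost_function(y=4.0, w=w)
C2, _ = cost_function(y=4.0, w=t * w)
print(np.isclose(C2, t * C1, rtol=1e-4))  # expect True up to solver tolerance
```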
2.4.4. C.3. C is concave and continuous in w.

To demonstrate concavity, let (w, x) and (w′, x′) be two cost-minimizing price-input combinations and let w″ = tw + (1−t)w′ for any 0 ≤ t ≤ 1. Concavity implies that C(w″, y) ≥ tC(w, y) + (1−t)C(w′, y). We can prove this as follows. We have C(w″, y) = w″·x″ = tw·x″ + (1−t)w′·x″, where x″ is the optimal choice of x at prices w″. Because x″ is not necessarily the cheapest way to produce y at prices w or w′, we have w·x″ ≥ C(w, y) and w′·x″ ≥ C(w′, y), so that by substitution C(w″, y) ≥ tC(w, y) + (1−t)C(w′, y). The point is that if w·x″ and w′·x″ are each at least as large as the corresponding minimized cost, then C(w″, y) is at least as large as the linear combination. Because C(y, w) is concave in w, it is also continuous in w: Rockafellar [14, p. 82] shows that a concave function defined on an open set (w > 0) is continuous.
Consider figure 2. Let x∗ be the cost minimizing bundle at prices w∗. Let the price of the ith input change. At input prices w∗, costs are at the level C(w∗). If we hold input levels fixed at x∗ and change w_i, we move along the tangent line denoted by C̃(w_i, w̄∗, x̄∗, y), where w̄∗ and x̄∗ represent all the prices and inputs other than the ith. Costs are higher along this line than along the cost function because we are not adjusting inputs. Along the cost function, as the price of input i increases, we typically use less of input x_i and more of the other inputs.
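A numerical spot check of C.3 under the same assumptions: the cost at a price mix tw_a + (1−t)w_b should be at least the corresponding mix of the two costs.

```python
# Check of C.3 (concavity in w): C(t*w_a + (1-t)*w_b, y) >= t*C_a + (1-t)*C_b.
w_a, w_b = np.array([2.0, 3.0]), np.array([5.0, 1.0])
C_a, _ = cost_function(y=4.0, w=w_a)
C_b, _ = cost_function(y=4.0, w=w_b)
for t in (0.25, 0.5, 0.75):
    C_mix, _ = cost_function(y=4.0, w=t * w_a + (1 - t) * w_b)
    print(C_mix >= t * C_a + (1 - t) * C_b - 1e-6)  # expect True each time
```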
2.4.5. C.4.1. No fixed costs: C(0, w) = 0, ∀ w > 0.

We assume this axiom if the problem is long run; in the short run, fixed costs may be positive with zero output. Specifically, V.1a implies that to produce zero output, any input vector in R^n_+ will do, including the zero vector with zero costs.
2.4.6. C.4.2. No free lunch: C(y, w) > 0, w > 0, y > 0.
Because we cannot produce outputs without inputs (V.1b: no free lunch with the technology), costs for
any positive output are positive for strictly positive input prices.
FIGURE 2. The Cost Function is Concave
2.4.7. C.5. Nondecreasing in y (proportional): C(θy, w) ≤ C(y, w), 0 ≤ θ ≤ 1, w > 0.
If outputs go down proportionately, costs cannot rise. This is clear because V(y 1 ) is a subset of V(y 2 ) if
y 1 ≥ y 2 from V.3, then C(y 1 , w) = min wx|x∈V(y 1 ) ≥ min wx | x∈ V(y 2 ) = C(y 2 , w). The point is that if we
have a smaller set of possible x’s to choose from then cost must increase.
2.4.8. C.5.S. Nondecreasing in y: C(y’, w) ≤ C(y, w), y’ ≤ y, w > 0.
If any output goes down, costs cannot increase.
2.4.9. C.6. For any sequence y^ℓ such that ‖y^ℓ‖ → ∞ as ℓ → ∞ and w > 0, C(y^ℓ, w) → ∞ as ℓ → ∞.
This axiom implies that if outputs increase without bound, so will costs.
2.4.10. C.7. C(y,w) is lower semi-continuous in y, given w > 0.
The cost function may not be continuous in output as it is in input prices, but if there are any jump points,
it will take the lower value at these points.
2.4.11. C.8. If the graph of the technology (GR) or T, is convex, C(y,w) is convex in y, w > 0.
If the technology is convex (more or less decreasing returns to scale as long as V.1 holds), costs will rise
at an increasing rate as y increases.
2.5. Shephard's Lemma.

  ∂C(y, w)/∂w_i = x_i(y, w)   (18)
2.5.3. A Silberberg [17] type proof of Shephard's lemma. Set up a function L as follows

  L(y, w, x̂) = w·x̂ − C(y, w)   (19)

where x̂ is the cost minimizing choice of inputs at prices ŵ. Because C(y, w) is the cheapest way to produce y at prices w, L ≥ 0. If w = ŵ, then L will be equal to zero. Because this is the minimum value of L over w, the derivative of L with respect to w at this point is zero, so

  ∂L(y, ŵ, x̂)/∂w_i = x̂_i − ∂C(y, ŵ)/∂w_i = 0  ⇒  x̂_i = ∂C(y, ŵ)/∂w_i   (20)

The second order necessary conditions for minimizing L imply that [∂²L/∂w_i∂w_j] = −[∂²C/∂w_i∂w_j] is positive semi-definite, so that [∂²C/∂w_i∂w_j] is negative semi-definite, which implies that C is concave in w.
2.5.4. Graphical representation of Shephard's lemma. In figure 3 we hold all input prices except the jth fixed at ŵ. We assume that when the jth price is ŵ_j, the optimal input vector is (x̂_1, x̂_2, ..., x̂_j, ..., x̂_n). The cost function lies below the tangent line in figure 3 but coincides with the tangent line at ŵ_j. By differentiability, the slope of the cost function at this point is the slope of its tangent, i.e.,

  ∂C(y, ŵ)/∂w_j = ∂(tangent)/∂w_j = x̂_j   (23)
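A numerical check of Shephard's lemma (18) for the running example: a central finite difference of C(y, w) in w_1 should recover the Hicksian demand x_1(y, w).

```python
# Shephard's lemma (18): dC/dw_1 from a central difference should equal x_1(y, w).
h = 1e-4
w0 = np.array([2.0, 3.0])
C_hi, _ = cost_function(y=4.0, w=w0 + np.array([h, 0.0]))
C_lo, _ = cost_function(y=4.0, w=w0 - np.array([h, 0.0]))
dC_dw1 = (C_hi - C_lo) / (2 * h)
_, x_hicks = cost_function(y=4.0, w=w0)
print(dC_dw1, x_hicks[0])  # approximately equal
```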
2.6. Sensitivity analysis.

2.6.1. Demand slopes. If the Hessian of the cost function is negative semi-definite, the diagonal elements must all be non-positive (Hadley [11, pp. 260-262]), so we have

  ∂²C(y, w)/∂w_i² = ∂x_i(y, w)/∂w_i ≤ 0, ∀i   (24)
This implies then that Hicksian demand curves slope down because the diagonal elements of the Hessian
of the cost function are just the derivatives of input demands with respect to their own prices.
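Using the same finite differences, the own-price slope in (24) can be checked directly for the running example.

```python
# Demand slope (24): the own-price response of the Hicksian demand x_1(y, w)
# should be non-positive.
_, x_hi = cost_function(y=4.0, w=w0 + np.array([h, 0.0]))
_, x_lo = cost_function(y=4.0, w=w0 - np.array([h, 0.0]))
dx1_dw1 = (x_hi[0] - x_lo[0]) / (2 * h)
print(dx1_dw1 <= 0.0)  # expect True: Hicksian demands slope down
```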
FIGURE 3. Shephard's Lemma
2.6.2. Cross price effects. By homogeneity of degree zero of the input demands in prices and Euler's theorem we have

  Σ_{j=1}^n (∂x_i/∂w_j) w_j = Σ_{j≠i} (∂x_i/∂w_j) w_j + (∂x_i/∂w_i) w_i = 0   (25)

And we know that

  ∂x_i/∂w_i ≤ 0   (26)

by the concavity of C. Therefore

  Σ_{j≠i} (∂x_i/∂w_j) w_j ≥ 0   (27)

This implies that at least one cross price effect is non-negative (and positive whenever the own-price effect is strictly negative).
2.6.3. Symmetry of input demand response to input prices. By Young's theorem,

  ∂²C/∂w_i∂w_j = ∂²C/∂w_j∂w_i  ⇒  ∂x_i/∂w_j = ∂x_j/∂w_i

So cross derivatives in input prices are symmetric.
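A sketch checking symmetry and the Euler relation (25) for the running example, by building the Jacobian of x(y, w) in w with central differences; tolerances are loose because the inner optimizer is only approximately exact.

```python
# Symmetry and Euler relation (25): the Jacobian of x(y, w) in w should be
# symmetric and should annihilate the price vector w.
def hicksian(y, w):
    return cost_function(y, w)[1]

J = np.zeros((2, 2))  # J[i, j] = dx_i/dw_j by central differences
for j in range(2):
    e = np.zeros(2)
    e[j] = h
    J[:, j] = (hicksian(4.0, w0 + e) - hicksian(4.0, w0 - e)) / (2 * h)
print(np.isclose(J[0, 1], J[1, 0], atol=1e-3))  # symmetry (Young's theorem)
print(np.allclose(J @ w0, 0.0, atol=1e-3))      # homogeneity of degree zero
```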
By Euler's theorem, a function that is homogeneous of degree one satisfies

  Σ_{i=1}^n (∂f/∂x_i) x_i = f(x)   (37)

Applying this to marginal cost we obtain

  ∂MC(y, w)/∂w_i = ∂λ(y, w)/∂w_i = ∂²C(y, w)/∂w_i∂y = ∂²C(y, w)/∂y∂w_i = ∂x_i(y, w)/∂y

  ⇒ Σ_i (∂λ(y, w)/∂w_i) w_i = Σ_i (∂²C(y, w)/∂w_i∂y) w_i = Σ_i (∂²C(y, w)/∂y∂w_i) w_i
    = (∂/∂y) Σ_i (∂C(y, w)/∂w_i) w_i
    = (∂/∂y) C(y, w), by homogeneity of C(y, w) in w
    = MC(y, w) = λ(y, w)

That is, marginal cost is itself homogeneous of degree one in input prices.
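A numerical check of this conclusion for the running example, using ∂MC/∂w_i = ∂x_i/∂y from the derivation above.

```python
# Marginal cost is homogeneous of degree one in w: sum_i (dMC/dw_i) w_i = MC.
hy = 1e-3
mc = (cost_function(4.0 + hy, w0)[0] - cost_function(4.0 - hy, w0)[0]) / (2 * hy)
dx_dy = (hicksian(4.0 + hy, w0) - hicksian(4.0 - hy, w0)) / (2 * hy)
print(np.isclose(dx_dy @ w0, mc, rtol=1e-3))  # expect True
```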
3.1. First order conditions for cost minimization. In the case of a single output, the cost function can be obtained by carrying out the minimization problem

  C(y, w) = min_x {wx : f(x) − y = 0}   (39)

with associated Lagrangian function

  L = wx − λ(f(x) − y)   (40)

The first order conditions are as follows:

  ∂L/∂x_1 = w_1 − λ ∂f/∂x_1 = 0
  ∂L/∂x_2 = w_2 − λ ∂f/∂x_2 = 0
    ⋮
  ∂L/∂x_n = w_n − λ ∂f/∂x_n = 0   (41a)

  ∂L/∂λ = −(f(x) − y) = 0   (41b)
If we take the ratio of any two of the first order conditions we obtain

  (∂f/∂x_j)/(∂f/∂x_i) = w_j/w_i   (42)

This implies that the RTS between inputs i and j is equal to the negative inverse factor price ratio, because

  ∂x_i/∂x_j = −(∂f/∂x_j)/(∂f/∂x_i)   (43)

Substituting, we obtain

  ∂x_i/∂x_j = −w_j/w_i   (44)
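A numerical check of the tangency condition (42) at the cost minimizing bundle of the running example; grad_f is the analytic gradient of the assumed Cobb-Douglas f.

```python
# Tangency condition (42) at the optimum: (df/dx1)/(df/dx2) = w1/w2.
def grad_f(x):
    """Analytic gradient of the assumed Cobb-Douglas f."""
    return np.array([0.3 * f(x) / x[0], 0.5 * f(x) / x[1]])

_, x_opt = cost_function(y=4.0, w=w0)
g = grad_f(x_opt)
print(g[0] / g[1], w0[0] / w0[1])  # approximately equal at the cost minimum
```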
Graphically this implies that the slope of an isocost line is equal to the slope of the lower boundary of V(y). Note that an isocost line is given by

  cost = w_1x_1 + w_2x_2 + ... + w_nx_n

  ⇒ w_ix_i = cost − w_1x_1 − w_2x_2 − ... − w_{i−1}x_{i−1} − w_{i+1}x_{i+1} − ... − w_jx_j − ... − w_nx_n

  ⇒ x_i = cost/w_i − (w_1/w_i)x_1 − (w_2/w_i)x_2 − ... − (w_{i−1}/w_i)x_{i−1} − (w_{i+1}/w_i)x_{i+1} − ... − (w_j/w_i)x_j − ... − (w_n/w_i)x_n

  ⇒ slope of isocost in the x_i–x_j plane = ∂x_i/∂x_j = −w_j/w_i   (45)
FIGURE 5. Cost Minimizing Input Combinations at Alternative Output Levels
[Figure: lower boundaries of the input requirement sets V(y_1), V(y_2), V(y_3) in the (x_i, x_j) plane, each tangent to an isocost line with slope −w_j/w_i.]
3.2. Notes on quasiconcavity.

Definition 1. A real valued function f, defined on a convex set X ⊂ R^n, is said to be quasiconcave if

  f(λx¹ + (1 − λ)x²) ≥ min[f(x¹), f(x²)]   (46)

for all x¹, x² ∈ X and 0 ≤ λ ≤ 1. A function f is said to be quasiconvex if −f is quasiconcave.
Theorem 1. Let f be a real valued function defined on a convex set X ⊂ R^n. The upper contour sets S(f, α) = {x : x ∈ X, α ≤ f(x)} of f are convex for every α ∈ R if and only if f is a quasiconcave function.
Proof. Suppose that S(f, α) is a convex set for every α ∈ R, and let x¹ ∈ X, x² ∈ X, ᾱ = min[f(x¹), f(x²)]. Then x¹ ∈ S(f, ᾱ) and x² ∈ S(f, ᾱ), and because S(f, ᾱ) is convex, (λx¹ + (1 − λ)x²) ∈ S(f, ᾱ) for arbitrary λ ∈ [0, 1]. Hence

  f(λx¹ + (1 − λ)x²) ≥ ᾱ = min[f(x¹), f(x²)]   (47)

Conversely, let S(f, α) be any upper contour set of f. Let x¹ ∈ S(f, α) and x² ∈ S(f, α). Then

  f(x¹) ≥ α, f(x²) ≥ α   (48)

and because f is quasiconcave, we have

  f(λx¹ + (1 − λ)x²) ≥ α   (49)

and (λx¹ + (1 − λ)x²) ∈ S(f, α), so S(f, α) is convex.
Theorem 2. Let f be differentiable on an open convex set X ⊂ R^n. Then f is quasiconcave if and only if for any x¹ ∈ X, x² ∈ X such that

  f(x¹) ≥ f(x²)   (50)

we have

  (x¹ − x²)′ ∇f(x²) ≥ 0   (51)
Definition 2. The kth-order bordered determinant D_k(f, x) of a twice differentiable function f at a point x ∈ R^n is defined as

$$D_k(f, x) \;=\; \det \begin{bmatrix}
\frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_k} & \frac{\partial f}{\partial x_1} \\
\frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_k} & \frac{\partial f}{\partial x_2} \\
\vdots & \vdots & & \vdots & \vdots \\
\frac{\partial^2 f}{\partial x_k \partial x_1} & \frac{\partial^2 f}{\partial x_k \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_k^2} & \frac{\partial f}{\partial x_k} \\
\frac{\partial f}{\partial x_1} & \frac{\partial f}{\partial x_2} & \cdots & \frac{\partial f}{\partial x_k} & 0
\end{bmatrix}, \qquad k = 1, 2, \ldots, n \quad (52)$$
Definition 3. Some authors define the kth-order bordered determinant D_k(f, x) of a twice differentiable function f at a point x ∈ R^n in a different fashion, where the first derivatives of the function f border the Hessian of the function on the top and left, as compared to the bottom and right as in equation 52:

$$D_k(f, x) \;=\; \det \begin{bmatrix}
0 & \frac{\partial f}{\partial x_1} & \frac{\partial f}{\partial x_2} & \cdots & \frac{\partial f}{\partial x_k} \\
\frac{\partial f}{\partial x_1} & \frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_k} \\
\frac{\partial f}{\partial x_2} & \frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_k} \\
\vdots & \vdots & \vdots & & \vdots \\
\frac{\partial f}{\partial x_k} & \frac{\partial^2 f}{\partial x_k \partial x_1} & \frac{\partial^2 f}{\partial x_k \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_k^2}
\end{bmatrix}, \qquad k = 1, 2, \ldots, n \quad (53)$$
The determinant in equation 52 and the determinant in equation 53 will be the same. If we interchange any two rows or any two columns of a determinant, the determinant changes sign but keeps its absolute value; moving the border from the bottom and right to the top and left takes k row interchanges and k column interchanges, so the determinant is multiplied by (−1)^{2k} = 1 and is unchanged.
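A numerical check that the two border placements agree for the running example; hess_f is the analytic Hessian of the assumed Cobb-Douglas f.

```python
# Definitions (52) and (53) give the same determinant; check at a sample point.
def hess_f(x):
    """Analytic Hessian of the assumed Cobb-Douglas f."""
    fx = f(x)
    return np.array([
        [0.3 * (0.3 - 1.0) * fx / x[0] ** 2, 0.3 * 0.5 * fx / (x[0] * x[1])],
        [0.3 * 0.5 * fx / (x[0] * x[1]), 0.5 * (0.5 - 1.0) * fx / x[1] ** 2],
    ])

xp = np.array([4.0, 9.0])
H, g = hess_f(xp), grad_f(xp)
zero = np.zeros((1, 1))
D_bottom = np.block([[H, g[:, None]], [g[None, :], zero]])  # border bottom/right, (52)
D_top = np.block([[zero, g[None, :]], [g[:, None], H]])     # border top/left, (53)
print(np.isclose(np.linalg.det(D_bottom), np.linalg.det(D_top)))  # expect True
```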
Now consider the general problem of maximizing a function f(x_1, x_2, ..., x_n) subject to a constraint, where g(x) = 0 denotes the constraint. We can also write this as
  max_{x_1, x_2, ..., x_n} f(x_1, x_2, ..., x_n)
  subject to g(x_1, x_2, ..., x_n) = 0   (57)

The solution can be obtained using the Lagrangian function

  L(x; λ) = f(x_1, x_2, ..., x_n) − λ g(x)   (58)

Notice that the gradient of L will involve a set of derivatives, i.e.,

  ∇_x L = ∇_x f(x) − λ ∇_x g(x)

There will be one equation for each x. There will also be an equation involving the derivative of L with respect to λ. The necessary conditions for an extremum of f with the equality constraint g(x) = 0 are that

  ∇L(x∗, λ∗) = 0   (59)
where it is implicit that the gradient in (59) is with respect to both x and λ. The typical sufficient conditions for a maximum or minimum of f(x_1, x_2, ..., x_n) subject to g(x_1, x_2, ..., x_n) = 0 require that f and g be twice continuously differentiable real-valued functions on R^n. Then if there exist vectors x∗ ∈ R^n and λ∗ ∈ R¹ such that

  ∇L(x∗, λ∗) = 0   (60)

and for every non-zero vector z ∈ R^n satisfying

  z′ ∇g(x∗) = 0   (61)

it follows that

  z′ ∇²_x L(x∗, λ∗) z > 0   (62)

then f has a strict local minimum at x∗, subject to g(x) = 0. If the inequality in (62) is reversed, then f has a strict local maximum at x∗.
For the cost minimization problem the Lagrangian is given by

  L = wx − λ(f(x) − y)   (63)

where the objective function is wx and the constraint is f(x) − y = 0. Differentiating equation 63 twice with respect to x we obtain

  ∇²_x L(x∗, λ∗) = [−λ ∂²f(x)/∂x_i∂x_j] = −λ [∂²f(x)/∂x_i∂x_j]

And so the condition in equation 62 will imply that

  −λ z′ [∂²f(x)/∂x_i∂x_j] z > 0  ⇒  z′ [∂²f(x)/∂x_i∂x_j] z < 0

for all z satisfying z′∇f(x) = 0. This is also the condition for f to be a quasi-concave function (Avriel [3, p. 149]). Thus these sufficient conditions imply that f must be quasi-concave.
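A sketch of the second order check (62) at the cost minimizing bundle of the running example: with z spanning the tangent space {z : z′∇f(x∗) = 0}, the quadratic form in ∇²_x L = −λ ∇²f should be positive.

```python
# Second order check (62) at the cost minimum: for z with z'grad f(x*) = 0,
# z' (grad^2_x L) z = z' (-lam * hess_f) z should be positive.
_, x_star = cost_function(y=4.0, w=w0)
gf = grad_f(x_star)
z = np.array([-gf[1], gf[0]])   # spans {z : z'grad f = 0} in two dimensions
lam = w0[0] / gf[0]             # lambda recovered from first order condition
print(z @ (-lam * hess_f(x_star)) @ z > 0)  # expect True: strict local minimum
```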
3.3.2. Checking the sufficient conditions for cost minimization. Consider the general constrained minimization problem where f and g are twice continuously differentiable real valued functions. If there exist vectors x∗ ∈ R^n, λ∗ ∈ R^m such that

  ∇L(x∗, λ∗) = 0   (66)

and if

$$D(p) \;=\; (-1) \det \begin{bmatrix}
\frac{\partial^2 L(x^*, \lambda^*)}{\partial x_1 \partial x_1} & \cdots & \frac{\partial^2 L(x^*, \lambda^*)}{\partial x_1 \partial x_p} & \frac{\partial g(x^*)}{\partial x_1} \\
\vdots & & \vdots & \vdots \\
\frac{\partial^2 L(x^*, \lambda^*)}{\partial x_p \partial x_1} & \cdots & \frac{\partial^2 L(x^*, \lambda^*)}{\partial x_p \partial x_p} & \frac{\partial g(x^*)}{\partial x_p} \\
\frac{\partial g(x^*)}{\partial x_1} & \cdots & \frac{\partial g(x^*)}{\partial x_p} & 0
\end{bmatrix} > 0 \quad (67)$$

for p = 2, 3, ..., n, then f has a strict local minimum at x∗ such that

  g(x∗) = 0   (68)
We can also write this as follows, after multiplying both sides by negative one: if

$$D(p) \;=\; \det \begin{bmatrix}
\frac{\partial^2 L(x^*, \lambda^*)}{\partial x_1 \partial x_1} & \cdots & \frac{\partial^2 L(x^*, \lambda^*)}{\partial x_1 \partial x_p} & \frac{\partial g(x^*)}{\partial x_1} \\
\vdots & & \vdots & \vdots \\
\frac{\partial^2 L(x^*, \lambda^*)}{\partial x_p \partial x_1} & \cdots & \frac{\partial^2 L(x^*, \lambda^*)}{\partial x_p \partial x_p} & \frac{\partial g(x^*)}{\partial x_p} \\
\frac{\partial g(x^*)}{\partial x_1} & \cdots & \frac{\partial g(x^*)}{\partial x_p} & 0
\end{bmatrix} < 0 \quad (69)$$

for p = 2, 3, ..., n, then f has a strict local minimum at x∗ such that

  g(x∗) = 0   (70)
We check the determinants in (69) starting with the one that has 2 elements in each row and column of
the Hessian and 1 element in each row or column of the derivative of the constraint with respect to x.
For the cost minimization problem, D(p) is given by

$$D(p) \;=\; (-1)^{p+1} \lambda^{p-1} \det \begin{bmatrix}
\frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_p} & \frac{\partial f}{\partial x_1} \\
\frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_p} & \frac{\partial f}{\partial x_2} \\
\vdots & \vdots & & \vdots & \vdots \\
\frac{\partial^2 f}{\partial x_p \partial x_1} & \frac{\partial^2 f}{\partial x_p \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_p^2} & \frac{\partial f}{\partial x_p} \\
\frac{\partial f}{\partial x_1} & \frac{\partial f}{\partial x_2} & \cdots & \frac{\partial f}{\partial x_p} & 0
\end{bmatrix} < 0 \quad (75)$$

or, equivalently (dividing by λ^{p−1} > 0),

$$(-1)^{p} \det \begin{bmatrix}
\frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_p} & \frac{\partial f}{\partial x_1} \\
\frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_p} & \frac{\partial f}{\partial x_2} \\
\vdots & \vdots & & \vdots & \vdots \\
\frac{\partial^2 f}{\partial x_p \partial x_1} & \frac{\partial^2 f}{\partial x_p \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_p^2} & \frac{\partial f}{\partial x_p} \\
\frac{\partial f}{\partial x_1} & \frac{\partial f}{\partial x_2} & \cdots & \frac{\partial f}{\partial x_p} & 0
\end{bmatrix} > 0 \quad (76)$$
This condition in equation 76 is the condition for the quasi-concavity of the production function f from equation 55 (Avriel [3, p. 149]).
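A check of the sign pattern (76) for the running example; (76) is required for p = 2, ..., n for cost minimization, and p = 1 is the first quasi-concavity determinant.

```python
# Sign pattern (76): (-1)^p det(B_p) > 0, where B_p borders the leading
# p x p block of the Hessian of f with its first derivatives.
def bordered(x, p):
    H, g = hess_f(x)[:p, :p], grad_f(x)[:p]
    return np.block([[H, g[:, None]], [g[None, :], np.zeros((1, 1))]])

for p in (1, 2):
    print(p, (-1) ** p * np.linalg.det(bordered(x_star, p)) > 0)  # expect True
```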
3.3.3. Example problem with two variable inputs. The Lagrangian function is given by

  L = w_1x_1 + w_2x_2 − λ(f(x_1, x_2) − y)   (77)

The first order conditions are as follows:

  ∂L/∂x_1 = w_1 − λ ∂f/∂x_1 = 0   (78a)
  ∂L/∂x_2 = w_2 − λ ∂f/∂x_2 = 0   (78b)
  ∂L/∂λ = −(f(x) − y) = 0   (78c)
The bordered Hessian for the Lagrangian is given by

$$H_B \;=\; \begin{bmatrix}
\frac{\partial^2 L(x^*, \lambda^*)}{\partial x_1 \partial x_1} & \frac{\partial^2 L(x^*, \lambda^*)}{\partial x_1 \partial x_2} & \frac{\partial g(x^*)}{\partial x_1} \\
\frac{\partial^2 L(x^*, \lambda^*)}{\partial x_2 \partial x_1} & \frac{\partial^2 L(x^*, \lambda^*)}{\partial x_2 \partial x_2} & \frac{\partial g(x^*)}{\partial x_2} \\
\frac{\partial g(x^*)}{\partial x_1} & \frac{\partial g(x^*)}{\partial x_2} & 0
\end{bmatrix}
\;=\;
\begin{bmatrix}
-\lambda \frac{\partial^2 f}{\partial x_1^2} & -\lambda \frac{\partial^2 f}{\partial x_1 \partial x_2} & \frac{\partial f}{\partial x_1} \\
-\lambda \frac{\partial^2 f}{\partial x_2 \partial x_1} & -\lambda \frac{\partial^2 f}{\partial x_2^2} & \frac{\partial f}{\partial x_2} \\
\frac{\partial f}{\partial x_1} & \frac{\partial f}{\partial x_2} & 0
\end{bmatrix}$$
The determinant of this matrix must be negative for the solution to be a minimum. To see how this relates to the bordered Hessian of the production function, write f_i = ∂f/∂x_i and f_{ij} = ∂²f/∂x_i∂x_j, multiply the last row and the last column by −λ, and multiply the whole determinant by 1/λ² as follows:

$$\det \begin{bmatrix}
-\lambda f_{11} & -\lambda f_{12} & f_1 \\
-\lambda f_{21} & -\lambda f_{22} & f_2 \\
f_1 & f_2 & 0
\end{bmatrix}
\;=\; \frac{1}{\lambda^2} \det \begin{bmatrix}
-\lambda f_{11} & -\lambda f_{12} & -\lambda f_1 \\
-\lambda f_{21} & -\lambda f_{22} & -\lambda f_2 \\
-\lambda f_1 & -\lambda f_2 & 0
\end{bmatrix}
\;=\; \frac{(-\lambda)^3}{\lambda^2} \det \begin{bmatrix}
f_{11} & f_{12} & f_1 \\
f_{21} & f_{22} & f_2 \\
f_1 & f_2 & 0
\end{bmatrix}$$

With λ > 0 this gives

$$\det H_B \;=\; -\lambda \det \begin{bmatrix}
f_{11} & f_{12} & f_1 \\
f_{21} & f_{22} & f_2 \\
f_1 & f_2 & 0
\end{bmatrix} < 0
\quad\Longrightarrow\quad
\det \begin{bmatrix}
f_{11} & f_{12} & f_1 \\
f_{21} & f_{22} & f_2 \\
f_1 & f_2 & 0
\end{bmatrix} > 0$$

for a minimum. This is the condition for a quasi-concave function with two variables. If there were three variables, the determinant of the next bordered Hessian would be negative.
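A worked numerical version of this two-input example, confirming at the cost minimum of the running example that det H_B < 0 and that it equals −λ times the bordered determinant of f, as derived above.

```python
# Bordered Hessian (79) of the Lagrangian at the cost minimum: its determinant
# should be negative and equal -lam times the bordered determinant of f.
Hf, gf = hess_f(x_star), grad_f(x_star)
lam = w0[0] / gf[0]
zero = np.zeros((1, 1))
HB = np.block([[-lam * Hf, gf[:, None]], [gf[None, :], zero]])
Bf = np.block([[Hf, gf[:, None]], [gf[None, :], zero]])
print(np.linalg.det(HB) < 0)                                    # minimum condition
print(np.isclose(np.linalg.det(HB), -lam * np.linalg.det(Bf)))  # -lam factorization
```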