Lecture Notes on Linear System Theory
John Lygeros∗ and Federico A. Ramponi†
∗Automatic Control Laboratory, ETH Zurich
CH-8092, Zurich, Switzerland
lygeros@control.ee.ethz.ch
†Department of Information Engineering, University of Brescia
Via Branze 38, 25123, Brescia, Italy
federico.ramponi@unibs.it
January 3, 2015


Contents

  • 1 Introduction
    • 1.1 Objectives of the course
    • 1.2 Proof methods
    • 1.3 Functions and maps
  • 2 Introduction to Algebra
    • 2.1 Groups
    • 2.2 Rings and fields
    • 2.3 Linear spaces
    • 2.4 Subspaces and bases
    • 2.5 Linear maps
    • 2.6 Linear maps generated by matrices
    • 2.7 Matrix representation of linear maps
    • 2.8 Change of basis
  • 3 Introduction to Analysis
    • 3.1 Norms and continuity
    • 3.2 Equivalent norms
    • 3.3 Infinite-dimensional normed spaces
    • 3.4 Completeness
    • 3.5 Induced norms and matrix norms
    • 3.6 Ordinary differential equations
    • 3.7 Existence and uniqueness of solutions
      • 3.7.1 Background lemmas
      • 3.7.2 Proof of existence
      • 3.7.3 Proof of uniqueness
  • 4 Time varying linear systems: Solutions
    • 4.1 Motivation: Linearization about a trajectory
    • 4.2 Existence and structure of solutions
    • 4.3 State transition matrix
  • 5 Time invariant linear systems: Solutions and transfer functions
    • 5.1 Time domain solution
    • 5.2 Semi-simple matrices
    • 5.3 Jordan form
    • 5.4 Laplace transforms
  • 6 Stability
    • 6.1 Nonlinear systems: Basic definitions
    • 6.2 Linear time varying systems
    • 6.3 Linear time invariant systems
    • 6.4 Systems with inputs and outputs
    • 6.5 Lyapunov equation
  • 7 Inner product spaces
    • 7.1 Inner product
    • 7.2 The space of square-integrable functions
    • 7.3 Orthogonal complement
    • 7.4 Adjoint of a linear map
    • 7.5 Finite rank lemma
    • 7.6 Application: Matrix pseudo-inverse
  • 8 Controllability and observability
    • 8.1 Nonlinear systems
    • 8.2 Linear time varying systems: Controllability
    • 8.3 Linear time varying systems: Minimum energy control
    • 8.4 Linear time varying systems: Observability and duality
    • 8.5 Linear time invariant systems: Observability
    • 8.6 Linear time invariant systems: Controllability
    • 8.7 Kalman decomposition
  • 9 State Feedback and Observer Design
    • 9.1 Revision: Change of basis
    • 9.2 Linear state feedback for single input systems
    • 9.3 Linear state observers for single output systems
    • 9.4 Output feedback and the separation principle
    • 9.5 The multi-input, multi-output case
  • A Notation
    • A.1 Shorthands
    • A.2 Sets
    • A.3 Logic
  • B Basic linear algebra
  • C Basic calculus

Chapter 1

Introduction

1.1 Objectives of the course

This course has two main objectives. The first (and more obvious) is for students to learn something about linear systems. Most of the course will be devoted to linear time varying systems that evolve in continuous time t ∈ R+. These are dynamical systems whose evolution is defined through state space equations of the form

ẋ(t) = A(t)x(t) + B(t)u(t), y(t) = C(t)x(t) + D(t)u(t),

where x(t) ∈ Rn denotes the system state, u(t) ∈ Rm denotes the system inputs, y(t) ∈ Rp denotes the system outputs, A(t) ∈ Rn×n, B(t) ∈ Rn×m, C(t) ∈ Rp×n, and D(t) ∈ Rp×m are matrices of appropriate dimensions, and where, as usual, ẋ(t) = dx/dt(t) denotes the derivative of x(t) with respect to time.

Time varying linear systems are useful in many application areas. They frequently arise as models of mechanical or electrical systems whose parameters (for example, the stiffness of a spring or the inductance of a coil) change in time. As we will see, time varying linear systems also arise when one linearizes a non-linear system around a trajectory. This is very common in practice. Faced with a nonlinear system one often uses the full nonlinear dynamics to design an optimal trajectory to guide the system from its initial state to a desired final state. However, ensuring that the system will actually track this trajectory in the presence of disturbances is not an easy task. One solution is to linearize the nonlinear system (i.e. approximate it by a linear system) around the optimal trajectory; the approximation is accurate as long as the nonlinear system does not drift too far away from the optimal trajectory. The result of the linearization is a time varying linear system, which can be controlled using the methods developed in this course. If the control design is done well, the state of the nonlinear system will always stay close to the optimal trajectory, hence ensuring that the linear approximation remains valid.

A special class of linear time varying systems are linear time invariant systems, usually referred to by the acronym LTI. LTI systems are described by state equations of the form

ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t),

where the matrices A ∈ Rn×n, B ∈ Rn×m, C ∈ Rp×n, and D ∈ Rp×m are constant for all times t ∈ R+. LTI systems are somewhat easier to deal with and will be treated in the course as a special case of the more general linear time varying systems.
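The state equations above can also be integrated numerically. The following sketch (ours, not part of the notes) approximates the solution of ẋ(t) = Ax(t) + Bu(t) by forward-Euler integration; the harmonic-oscillator example and the step size are illustrative choices.

```python
import math

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * vj for m, vj in zip(row, v)) for row in M]

def simulate_lti(A, B, u, x0, t_final, dt=1e-4):
    """Forward-Euler approximation of x'(t) = A x(t) + B u(t)."""
    x, t = list(x0), 0.0
    while t < t_final:
        dx = matvec(A, x)
        bu = matvec(B, u(t))
        x = [xi + dt * (di + bi) for xi, di, bi in zip(x, dx, bu)]
        t += dt
    return x

# Harmonic oscillator: x1' = x2, x2' = -x1, no input; the exact solution
# from x(0) = (1, 0) is (cos t, -sin t).
A = [[0.0, 1.0], [-1.0, 0.0]]
B = [[0.0], [0.0]]
x = simulate_lti(A, B, lambda t: [0.0], [1.0, 0.0], math.pi / 2)
print(x)  # close to [0.0, -1.0]
```

Forward Euler is the crudest possible scheme; it merely illustrates that the state equations determine the trajectory once x(0) and u are fixed.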

1.2 Proof methods

Proofs are arguments, accepted by the mathematical community, that let us say that a proposition is true, given that others are true. A “Theorem” is indeed a logical statement that can be proven: This means that the truth of such a statement can be established by applying our proof methods to other statements that we already accept as true, either because they have been proven before, or because we postulate so (for example the “axioms” of logic), or because we assume so in a certain context (for example, when we say “Let V be a vector space... ” we mean “Assume that the set V verifies the axioms of a vector space... ”). Theorems of minor importance, or theorems whose main point is to establish an intermediate step in the proof of another theorem, will be called “Lemmas”, “Facts”, or “Propositions”; an immediate consequence of a theorem that deserves to be highlighted separately is usually called a “Corollary”. A logical statement that we think may be true but cannot prove is called a “Conjecture”.

The logical statements we will most be interested in typically take the form

p ⇒ q

(p implies q). p is called the hypothesis and q the consequence.

Example (No smoke without fire) It is generally accepted that when there is smoke, there must be a fire somewhere. This knowledge can be encoded by the logical implication

If there is smoke then there is a fire:  p ⇒ q.

This is a statement of the form p ⇒ q with p the statement “there is smoke” and q the statement “there is a fire”.

Hypotheses and consequences may typically depend on one or more free variables, that is, objects that in the formulation of hypotheses and consequences are left free to change.

Example (Greeks) Despite recent economic turbulence, it is generally accepted that Greek citizens are also Europeans. This knowledge can be encoded by the logical implication

If X is a Greek then X is a European:  p(X) ⇒ q(X).

A sentence like “X is a... ” is the verbal way of saying something belongs to a set; for example the above statement can also be written as

X ∈ Greeks ⇒ X ∈ Europeans,

where “Greeks” and “Europeans” are supposed to be sets; the assertion that this implication is true for arbitrary X (∀X, X ∈ Greeks ⇒ X ∈ Europeans) is equivalent to the set-theoretic statement of inclusion: Greeks ⊆ Europeans.
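The inclusion reading of the implication can be spot-checked in code. In this toy illustration (the sets are invented for the example), the element-wise statement and the subset statement are the same fact:

```python
# "For all X, X ∈ Greeks ⇒ X ∈ Europeans" read as Greeks ⊆ Europeans.
greeks = {"Socrates", "Plato"}
europeans = {"Socrates", "Plato", "Descartes"}

assert greeks <= europeans                      # Greeks ⊆ Europeans
assert all(x in europeans for x in greeks)      # element-wise form
print("Greeks ⊆ Europeans holds for the toy sets")
```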

You can visualize the implication and its set-theoretic interpretation in Figure 1.1.

There are several ways of proving that logical statements are true. The most obvious one is a direct proof: Start from p and establish a finite sequence of intermediate implications, p1, p2, ..., pn, leading to q:

p ⇒ p1 ⇒ p2 ⇒ ... ⇒ pn ⇒ q.

We illustrate this proof technique using a statement about the natural numbers.

Definition 1.1 A natural number n ∈ N is called odd if and only if there exists k ∈ N such that n = 2k + 1. It is called even if and only if there exists k ∈ N such that n = 2k.


Figure 1.1: Set theoretic interpretation of logical implication.

One can indeed show that all natural numbers are either even or odd, and no natural number is both even and odd (Problem 1.1).

Theorem 1.2 If n is odd then n^2 is odd.

Proof:

n is odd ⇔ ∃k ∈ N : n = 2k + 1
⇒ ∃k ∈ N : n^2 = (2k + 1)(2k + 1)
⇒ ∃k ∈ N : n^2 = 4k^2 + 4k + 1
⇒ ∃k ∈ N : n^2 = 2(2k^2 + 2k) + 1
⇒ ∃l ∈ N : n^2 = 2l + 1 (namely, l = 2k^2 + 2k ∈ N)
⇒ n^2 is odd
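A proof establishes the claim for every n; a script can only spot-check finitely many cases, but such checks are a useful companion to Theorem 1.2 (the ranges below are arbitrary choices of ours):

```python
# Empirical spot-check of Theorem 1.2: n odd => n^2 odd.
def is_odd(n):
    return n % 2 == 1

assert all(is_odd(n * n) for n in range(1, 1000, 2))

# The witness l = 2k^2 + 2k from the proof: n = 2k + 1 gives n^2 = 2l + 1.
for k in range(100):
    n = 2 * k + 1
    l = 2 * k * k + 2 * k
    assert n * n == 2 * l + 1
print("checked all odd n < 1000")
```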

This proof principle can also be exploited to perform proof by induction. Proof by induction concerns propositions, pk, indexed by the natural numbers, k ∈ N, and statements of the form

∀k ∈ N, pk is true.

One often proves such statements by showing that p0 is true and then establishing an infinite sequence of implications p0 ⇒ p1 ⇒ p2 ⇒ ....

Clearly proving these implications one by one is impractical. It suffices, however, to establish that pk ⇒ pk+1 for all k ∈ N, or in other words

[p0 ∧ (pk ⇒ pk+1, ∀k ∈ N)] ⇒ [pk, ∀k ∈ N].

We demonstrate this proof style with another statement about the natural numbers.

Definition 1.2 The factorial, n!, of a natural number n ∈ N is the natural number n! = n · (n − 1) · ... · 2 · 1. By convention, if n = 0 we set n! = 1.

Theorem 1.3 For all m, k ∈ N, (m + k)! ≥ m!k!.

Proof: It is easy to check that the statement holds for the special cases m = k = 0, m = 0 and k = 1, and m = 1 and k = 0. For the case m = k = 1, (m + k)! = 2! ≥ 1!1! = m!k!. For the general case, fix m ∈ N and proceed by induction on k: assume (m + k)! ≥ m!k! for some k ∈ N. Then

(m + k + 1)! = (m + k + 1) · (m + k)! ≥ (k + 1) · m!k! = m!(k + 1)!,

since m + k + 1 ≥ k + 1. Hence the statement holds for all k ∈ N and, since m was arbitrary, for all m ∈ N.
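As a sanity check (not a proof), both the inequality of Theorem 1.3 and the induction step can be verified for small values with Python's math.factorial; the ranges are arbitrary:

```python
from math import factorial

# Theorem 1.3 for small m, k.
assert all(factorial(m + k) >= factorial(m) * factorial(k)
           for m in range(8) for k in range(8))

# The induction step on k, for fixed m: (m + k + 1)! = (m + k + 1)(m + k)!
# >= (k + 1) m! k! = m! (k + 1)!  since m + k + 1 >= k + 1.
for m in range(8):
    for k in range(8):
        assert factorial(m + k + 1) == (m + k + 1) * factorial(m + k)
        assert (m + k + 1) * factorial(m) * factorial(k) >= factorial(m) * factorial(k + 1)
print("inequality and induction step hold for m, k < 8")
```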

Another common method that can be used to indirectly prove that p ⇒ q is to suppose that p is true, to suppose that q is false, and to apply other proof methods to derive a contradiction. A contradiction is a proposition of the form r ∧ ¬r (like “There is smoke and there is no smoke”, or “n is even and n is odd”); all such statements are postulated to be false by virtue of their mere structure, and irrespective of the proposition r. If, by assuming p is true and q is false we are able to reach a false assertion, we must admit that if p is true the consequence q cannot be false, in other words that p implies q. This method is known as proof by contradiction.

Example (Greeks and Chinese) Suppose the following implications: for all X,

X is a Greek ⇒ X is a European
X is a Chinese ⇒ X is an Asian
X is an Asian ⇒ X is not a European

We show by contradiction that every Greek is not a Chinese, more formally

If X is a Greek then X is not a Chinese:  p(X) ⇒ q(X)

Indeed, suppose p(X) and the negation of q(X), that is, X is a Chinese. By direct deduction,

X is a Greek ∧ X is a Chinese
⇓
X is a European ∧ X is an Asian
⇓
X is a European ∧ X is not a European

Since the conclusion is a contradiction for all X, we must admit that p(X) ∧ ¬q(X) is false or, which is the same, that p(X) ⇒ q(X). The set-theoretic interpretation is as follows: By postulate,

Europeans ∩ non-Europeans = ∅

On the other hand, by deduction,

(Greeks ∩ Chinese) ⊆ (Europeans ∩ non-Europeans)

It follows that Greeks ∩ Chinese is also equal to the empty set. Therefore (here is the point of the above proof), Greeks ⊆ non-Chinese.

Exercise 1.2 Visualize this set theoretic interpretation by a picture similar to Figure 1.1.

We will illustrate this fundamental proof technique with another statement, about rational numbers.

Definition 1.3 The real number x ∈ R is called rational if and only if there exist integers n, m ∈ Z with m ≠ 0 such that x = n/m.

Theorem 1.5 (Pythagoras) √2 is not rational.

Proof: (Euclid) Assume, for the sake of contradiction, that √2 is rational. Then there exist n, m ∈ Z with m ≠ 0 such that √2 = n/m. Since √2 > 0, without loss of generality we can take n, m ∈ N; if they happen to be both negative multiply both by −1 and replace them by the resulting numbers.

Without loss of generality, we can further assume that m and n have no common divisor; if they do, divide both by their common divisors until there are no common divisors left and replace m and n by the resulting numbers. Now

√2 = n/m ⇒ 2 = n^2/m^2
⇒ n^2 = 2m^2
⇒ n^2 is even
⇒ n is even (Theorem 1.4 and Problem 1.1)
⇒ ∃k ∈ N : n = 2k
⇒ ∃k ∈ N : 2m^2 = n^2 = 4k^2
⇒ ∃k ∈ N : m^2 = 2k^2
⇒ m^2 is even
⇒ m is even (Theorem 1.4 and Problem 1.1).

Therefore, n and m are both even and, according to Definition 1.1, 2 divides both. This contradicts the fact that n and m have no common divisor. Therefore √2 cannot be rational.
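The theorem cannot be verified by computation, but a finite search (the bound is an arbitrary choice of ours) for integers with n^2 = 2m^2 comes up empty, consistent with the proof:

```python
from math import isqrt

def sqrt2_as_fraction(max_m):
    """Return (n, m) with n^2 == 2*m^2 and 1 <= m <= max_m, or None."""
    for m in range(1, max_m + 1):
        n = isqrt(2 * m * m)       # integer square root of 2*m^2
        if n * n == 2 * m * m:     # would mean sqrt(2) = n/m exactly
            return (n, m)
    return None

print(sqrt2_as_fraction(10_000))   # None: no such fraction with m <= 10000
```

No finite search proves irrationality; only the contradiction argument does.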

Exercise 1.3 What is the statement p in Theorem 1.5? What is the statement q? What is the statement r in the logical contradiction r ∧ ¬r reached at the end of the proof?

Two statements are equivalent if one implies the other and vice versa,

(p ⇔ q) is the same as (p ⇒ q) ∧ (q ⇒ p)

Usually showing that two statements are equivalent is done in two steps: Show that p ⇒ q and then show that q ⇒ p. For example, consider the following statement about the natural numbers.

Theorem 1.6 n^2 is odd if and only if n is odd.

Proof: n is odd implies that n^2 is odd (by Theorem 1.2) and n^2 is odd implies that n is odd (by Theorem 1.4). Therefore the two statements are equivalent.

This argument is related to the canonical way of proving that two sets are equal: one proves the two set inclusions A ⊆ B and B ⊆ A. To prove these inclusions one proves two implications:

X ∈ A ⇒ X ∈ B
X ∈ B ⇒ X ∈ A

or, in other words, X ∈ A ⇔ X ∈ B.

Finally, let us close this brief discussion on proof techniques with a subtle caveat: If p is a false statement then any implication of the form p ⇒ q is true, irrespective of what q is.

Example (Maximal natural number) Here is a proof that there is no number larger than 1.

Theorem 1.7 Let N ∈ N be the largest natural number. Then N = 1.

Proof: Assume, for the sake of contradiction, that N > 1. Then N^2 is also a natural number and N^2 > N. This contradicts the fact that N is the largest natural number. Therefore we must have N = 1.

Obviously the “theorem” in this example is saying something quite silly. The problem, however, is not that the proof is incorrect, but that the starting hypothesis “let N be the largest natural number” is false, since there is no largest natural number.


Figure 1.3: Commutative diagram of function inverses.

Definition 1.6 Consider a function f : X → Y.

  1. The function gL : Y → X is called a left inverse of f if and only if gL ◦ f = 1X.
  2. The function gR : Y → X is called a right inverse of f if and only if f ◦ gR = 1Y.
  3. The function g : Y → X is called an inverse of f if and only if it is both a left inverse and a right inverse of f , i.e. (g ◦ f = 1X ) ∧ (f ◦ g = 1Y ).

f is called invertible if an inverse of f exists.

The commutative diagrams for the different types of inverses are shown in Figure 1.3. It turns out that these different notions of inverse are intimately related to the injectivity and surjectivity of the function f.

Theorem 1.8 Consider two sets X and Y and a function f : X → Y.

  1. f has a left inverse if and only if it is injective.
  2. f has a right inverse if and only if it is surjective.
  3. f is invertible if and only if it is bijective.
  4. If f is invertible then any two inverses (left-, right- or both) coincide.

Proof: Parts 1-3 are left as an exercise (Problem 1.2). For part 4, assume, for the sake of contradiction, that f is invertible but there exist two different inverses, g1 : Y → X and g2 : Y → X (a similar argument applies to left- and right-inverses). Since the inverses are different, there must exist y ∈ Y such that g1(y) ≠ g2(y). Let x1 = g1(y) and x2 = g2(y) and note that x1 ≠ x2. Then

x1 = g1(y) = 1X ◦ g1(y)
= (g2 ◦ f) ◦ g1(y) (g2 inverse of f)
= g2 ◦ (f ◦ g1)(y) (composition is associative)
= g2 ◦ 1Y (y) (g1 inverse of f)
= g2(y) = x2.

This contradicts the assumption that x1 ≠ x2.
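Theorem 1.8 can be illustrated on a finite example (the sets and maps below are toy data of ours): a map that is injective but not surjective has a left inverse but cannot have a right inverse.

```python
# f : X -> Y is injective but not surjective ("d" is never reached).
X = [0, 1, 2]
Y = ["a", "b", "c", "d"]
f = {0: "a", 1: "b", 2: "c"}

# A left inverse gL must satisfy gL(f(x)) = x for all x in X; its value
# on the unreached point "d" is arbitrary (0 is chosen here).
gL = {"a": 0, "b": 1, "c": 2, "d": 0}
assert all(gL[f[x]] == x for x in X)        # gL o f = 1_X

# A right inverse gR would need f(gR(y)) = y for every y in Y, but no
# x satisfies f(x) = "d", so no right inverse exists.
assert not any(f[x] == "d" for x in X)
print("left inverse exists; right inverse cannot")
```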

Problems for chapter 1

Problem 1.1 (Even and odd numbers) Show that every n ∈ N is either even or odd, but not both.

Problem 1.2 (Inverses of functions) Consider two sets X and Y and a function f : X → Y. Show that:

  1. f has a left inverse if and only if it is injective.
  2. f has a right inverse if and only if it is surjective.
  3. f is invertible if and only if it is bijective.

Proof: To show the first statement, assume, for the sake of contradiction, that there exist two identity elements e, e′ ∈ G with e ≠ e′. Then for all a ∈ G, e ∗ a = a ∗ e = a and e′ ∗ a = a ∗ e′ = a. Then:

e = e ∗ e′ = e′

which contradicts the assumption that e ≠ e′.

To show the second statement, assume, for the sake of contradiction, that there exists a ∈ G with two inverse elements, say a1 and a2 with a1 ≠ a2. Then

a1 = a1 ∗ e = a1 ∗ (a ∗ a2) = (a1 ∗ a) ∗ a2 = e ∗ a2 = a2,

which contradicts the assumption that a1 ≠ a2.
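Both uniqueness statements can be checked exhaustively for a small concrete group (our example): the integers {0, ..., 4} under addition modulo 5.

```python
# The group (Z5, +): integers mod 5 under addition.
G = list(range(5))
op = lambda a, b: (a + b) % 5

# Exactly one identity element.
identities = [e for e in G if all(op(e, a) == a == op(a, e) for a in G)]
assert identities == [0]

# Every element has exactly one inverse.
for a in G:
    inverses = [b for b in G if op(a, b) == 0 == op(b, a)]
    assert len(inverses) == 1
print("unique identity and inverses in (Z5, +)")
```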

2.2 Rings and fields

Definition 2.2 A ring (R, +, ·) is a set R equipped with two binary operations, + : R × R → R (called addition) and · : R × R → R (called multiplication) such that:

  1. Addition satisfies the following properties:
    • It is associative: ∀a, b, c ∈ R, a + (b + c) = (a + b) + c.
    • It is commutative: ∀a, b ∈ R, a + b = b + a.
    • There exists an identity element: ∃ 0 ∈ R, ∀a ∈ R, a + 0 = a.
    • Every element has an inverse element: ∀a ∈ R, ∃(−a) ∈ R, a + (−a) = 0.
  2. Multiplication satisfies the following properties:
    • It is associative: ∀a, b, c ∈ R, a · (b · c) = (a · b) · c.
    • There exists an identity element: ∃ 1 ∈ R, ∀a ∈ R, 1 · a = a · 1 = a.
  3. Multiplication is distributive with respect to addition: ∀a, b, c ∈ R, a · (b + c) = a · b + a · c and (b + c) · a = b · a + c · a.

(R, +, ·) is called commutative if in addition ∀a, b ∈ R, a · b = b · a.

Example (Common rings)

(R, +, ·) is a commutative ring.

(Rn×n, +, ·) with the usual operations of matrix addition and multiplication is a non-commutative ring.

The set of rotations

{ [ cos(θ)  −sin(θ)
    sin(θ)   cos(θ) ] | θ ∈ (−π, π] }

with the same operations is not a ring, since it is not closed under addition.

(R[s], +, ·), the set of polynomials of s with real coefficients, i.e. aₙsⁿ + aₙ₋₁sⁿ⁻¹ + ... + a₀ for some n ∈ N and a₀, ..., aₙ ∈ R, is a commutative ring.

(R(s), +, ·), the set of rational functions of s with real coefficients, i.e.

(aₘsᵐ + aₘ₋₁sᵐ⁻¹ + ... + a₀) / (bₙsⁿ + bₙ₋₁sⁿ⁻¹ + ... + b₀)

for some n, m ∈ N and a₀, ..., aₘ, b₀, ..., bₙ ∈ R with bₙ ≠ 0, is a commutative ring. We implicitly assume here that the numerator and denominator polynomials are co-prime, that is they do not have any common factors; if they do one can simply cancel these factors until the two polynomials are co-prime. For example, it is easy to see that with such cancellations any rational function of the form

0 / (bₙsⁿ + bₙ₋₁sⁿ⁻¹ + ... + b₀)

can be identified with the rational function 0/1, which is the identity element of addition for this ring.

(Rp(s), +, ·), the set of proper rational functions of s with real coefficients, i.e.

(aₙsⁿ + aₙ₋₁sⁿ⁻¹ + ... + a₀) / (bₙsⁿ + bₙ₋₁sⁿ⁻¹ + ... + b₀)

for some n ∈ N with a₀, ..., aₙ, b₀, ..., bₙ ∈ R with bₙ ≠ 0, is a commutative ring. Note that aₙ = 0 is allowed, i.e. it is possible for the degree of the numerator polynomial to be less than or equal to that of the denominator polynomial.
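The ring axioms for polynomials can be spot-checked numerically (this sketch is ours, not part of the notes); a polynomial a₀ + a₁s + ... + aₙsⁿ is stored as its coefficient list.

```python
# Polynomials a0 + a1*s + ... + an*s^n as coefficient lists [a0, ..., an].
def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# Spot-check of the commutative-ring properties on small examples.
p, q, r = [1.0, 2.0], [0.0, 1.0, 3.0], [5.0]
assert poly_add(p, q) == poly_add(q, p)                         # + commutes
assert poly_mul(p, q) == poly_mul(q, p)                         # · commutes
assert poly_mul(p, poly_add(q, r)) == poly_add(poly_mul(p, q),
                                               poly_mul(p, r))  # distributive
print("R[s] ring axioms hold on the sampled polynomials")
```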

Exercise 2.1 Show that for every ring (R, +, ·) the identity elements 0 and 1 are unique. Moreover, for all a ∈ R the inverse element (−a) is unique.

Fact 2.2 If (R, +, ·) is a ring then:

  1. For all a ∈ R, a · 0 = 0 · a = 0.
  2. For all a, b ∈ R, (−a) · b = −(a · b) = a · (−b).

Proof: To show the first statement note that

a + 0 = a ⇒ a · (a + 0) = a · a
⇒ a · a + a · 0 = a · a
⇒ −(a · a) + a · a + a · 0 = −(a · a) + a · a
⇒ 0 + a · 0 = 0
⇒ a · 0 = 0.

The second equation is similar. For the second statement note that

0 = 0 · b = (a + (−a)) · b = a · b + (−a) · b ⇒ −(a · b) = (−a) · b.

The second equation is again similar.

Definition 2.3 A field (F, +, ·) is a commutative ring that in addition satisfies

  • Multiplication inverse: ∀a ∈ F with a ≠ 0, ∃a⁻¹ ∈ F, a · a⁻¹ = 1.

Example (Common fields)

(R, +, ·) is a field.

(Rn×n, +, ·) is not a field, since singular matrices have no inverse.

({A ∈ Rn×n | Det(A) ≠ 0}, +, ·) is not a field, since it is not closed under addition.

The set of rotations

{ [ cos(θ)  −sin(θ)
    sin(θ)   cos(θ) ] | θ ∈ (−π, π] }

is not a field; it is not even a ring.

2.3 Linear spaces

We now come to the algebraic object of greatest interest for linear systems, namely linear (or vector) spaces.

Definition 2.4 A linear space (V, F, ⊕, ⊙) is a set V (of vectors) and a field (F, +, ·) (of scalars) equipped with two binary operations, ⊕ : V × V → V (called vector addition) and ⊙ : F × V → V (called scalar multiplication) such that:

  1. Vector addition satisfies the following properties:
    • It is associative: ∀x, y, z ∈ V , x ⊕ (y ⊕ z) = (x ⊕ y) ⊕ z.
    • It is commutative: ∀x, y ∈ V , x ⊕ y = y ⊕ x.
    • There exists an identity element: ∃θ ∈ V , ∀x ∈ V , x ⊕ θ = x.
    • For every element there exists an inverse element: ∀x ∈ V , ∃(⊖x) ∈ V , x ⊕ (⊖x) = θ.
  2. Scalar multiplication satisfies the following properties:
    • It is associative: ∀a, b ∈ F , x ∈ V , a ⊙ (b ⊙ x) = (a · b) ⊙ x.
    • Multiplication by the multiplication identity of F leaves elements unchanged: ∀x ∈ V , 1 ⊙ x = x.
  3. Scalar multiplication is distributive with respect to vector addition: ∀a, b ∈ F , ∀x, y ∈ V , (a + b) ⊙ x = (a ⊙ x) ⊕ (b ⊙ x) and a ⊙ (x ⊕ y) = (a ⊙ x) ⊕ (a ⊙ y).

Exercise 2.4 Let (F, +, ·) be a field. Show that (F, F, +, ·) is a linear space.

As for groups, rings and fields the following fact is immediate.

Exercise 2.5 For every linear space (V, F, ⊕, ⊙) the identity element θ is unique. Moreover, for all x ∈ V there exists a unique inverse element ⊖x.

The following relations can also be established.

Fact 2.4 If (V, F, ⊕, ⊙) is a linear space and 0 is the addition identity element of F then for all x ∈ V , 0 ⊙ x = θ. Moreover, for all a ∈ F , x ∈ V , (−a) ⊙ x = ⊖(a ⊙ x) = a ⊙ (⊖x).

The proof is left as an exercise (Problem 2.3).

Once we have found a few linear spaces we can always generate more by forming the so called product spaces.

Definition 2.5 If (V, F, ⊕V , ⊙V ) and (W, F, ⊕W , ⊙W ) are linear spaces over the same field, the product space (V × W, F, ⊕, ⊙) is the linear space comprising all pairs (v, w) ∈ V × W with ⊕ defined by (v 1 , w 1 ) ⊕ (v 2 , w 2 ) = (v 1 ⊕V v 2 , w 1 ⊕W w 2 ), and ⊙ defined by a ⊙ (v, w) = (a ⊙V v, a ⊙W w).

Exercise 2.6 Show that (V ×W, F, ⊕, ⊙) is a linear space. What is the identity element for addition? What is the inverse element?

Two types of linear spaces will play a central role in these notes. The first is constructed by taking repeatedly the product of a field with itself.

Example (Finite product of a field) For any field (F, +, ·), consider the product space Fn. Let x = (x1, ..., xn) ∈ Fn, y = (y1, ..., yn) ∈ Fn and a ∈ F and define ⊕ : Fn × Fn → Fn by

x ⊕ y = (x1 + y1, ..., xn + yn)

and ⊙ : F × Fn → Fn by

a ⊙ x = (a · x1, ..., a · xn).

Note that both operations are well defined since a, x1, ..., xn, y1, ..., yn all take values in the same field, F.

Exercise 2.7 Show that (Fn, F, ⊕, ⊙) is a linear space. What is the identity element θ? What is the inverse element ⊖x of x ∈ Fn?

The most important instance of this type of linear space in these notes will be (Rn, R, +, ·) with the usual addition and scalar multiplication for vectors. The state, input, and output spaces of linear systems will be linear spaces of this type.

The second class of linear spaces that will play a key role in linear system theory are function spaces.

Example (Function spaces) Let (V, F, ⊕V , ⊙V ) be a linear space and D be any set. Let F(D, V ) denote the set of functions of the form f : D → V. Consider f, g ∈ F(D, V ) and a ∈ F and define ⊕ : F(D, V ) × F(D, V ) → F(D, V ) by

(f ⊕ g) : D → V such that (f ⊕ g)(d) = f (d) ⊕V g(d) ∀d ∈ D

and ⊙ : F × F(D, V ) → F(D, V ) by

(a ⊙ f ) : D → V such that (a ⊙ f )(d) = a ⊙V f (d) ∀d ∈ D

Note that both operations are well defined since a ∈ F , f (d), g(d) ∈ V and (V, F, ⊕V , ⊙V ) is a linear space.
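The pointwise definitions above translate directly into code. In this sketch (the choices of D and V are ours), V = R², represented by length-2 lists:

```python
# Pointwise vector-space operations on functions f : D -> V, here V = R^2.
def f_plus_g(f, g):
    """(f ⊕ g)(d) = f(d) ⊕_V g(d)."""
    return lambda d: [fi + gi for fi, gi in zip(f(d), g(d))]

def scale(a, f):
    """(a ⊙ f)(d) = a ⊙_V f(d)."""
    return lambda d: [a * fi for fi in f(d)]

f = lambda t: [t, 1.0]        # two sample functions D = R -> R^2
g = lambda t: [2 * t, -1.0]

h = f_plus_g(f, g)
assert h(3.0) == [9.0, 0.0]               # (f ⊕ g)(3) = f(3) + g(3)
assert scale(2.0, f)(3.0) == [6.0, 2.0]   # (2 ⊙ f)(3) = 2 · f(3)
print("pointwise vector-space operations check out")
```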

Exercise 2.8 Show that (F(D, V ), F, ⊕, ⊙) is a linear space. What is the identity element? What is the inverse element?

The most important instance of this type of linear space in these notes will be (F([t0, t1], Rn), R, +, ·) for real numbers t0 < t1. The trajectories of the state, input, and output of the dynamical systems we consider will take values in linear spaces of this type. The state, input and output trajectories will differ in terms of their “smoothness” as functions of time. We will use the following notation to distinguish the level of smoothness of the function in question:

  • C([t0, t1], Rn) will be the linear space of continuous functions f : [t0, t1] → Rn.
  • C^1([t0, t1], Rn) will be the linear space of differentiable functions f : [t0, t1] → Rn.
  • C^k([t0, t1], Rn) will be the linear space of k-times differentiable functions f : [t0, t1] → Rn.
  • C∞([t0, t1], Rn) will be the linear space of infinitely differentiable functions f : [t0, t1] → Rn.
  • Cω([t0, t1], Rn) will be the linear space of analytic functions f : [t0, t1] → Rn, i.e. functions which are infinitely differentiable and whose Taylor series expansion converges for all t ∈ [t0, t1].

Exercise 2.9 Show that all of these sets are linear spaces. You only need to check that they are closed under addition and scalar multiplication. E.g. if f and g are differentiable, then so is f ⊕ g.

Exercise 2.10 Show that for all k = 2, 3, ...

Cω([t0, t1], Rn) ⊂ C∞([t0, t1], Rn) ⊂ C^k([t0, t1], Rn) ⊂ C^(k−1)([t0, t1], Rn) ⊂ C([t0, t1], Rn) ⊂ F([t0, t1], Rn).