# Chapter 5 - Eigenstuff

## Section 5.1

We can calculate eigenvalues and eigenvectors exactly in Sage, and Sage also has the usual numerical approximation techniques of any such software.

M = matrix([[1,2,3],[1,2,3],[1,2,3]]); M
 [1 2 3]
 [1 2 3]
 [1 2 3]
M.eigenvalues()
 [6, 0, 0]
M.eigenspaces_right()
 [
 (6, Vector space of degree 3 and dimension 1 over Rational Field
 User basis matrix:
 [1 1 1]),
 (0, Vector space of degree 3 and dimension 2 over Rational Field
 User basis matrix:
 [   1    0 -1/3]
 [   0    1 -2/3])
 ]

Note that eigenvectors come with their full information in Sage: each eigenvalue is paired with a basis of eigenvectors and its multiplicity.

M.eigenvectors_right()
 [(6, [
 (1, 1, 1)
 ], 1), (0, [
 (1, 0, -1/3),
 (0, 1, -2/3)
 ], 2)]
for EV in M.eigenvectors_right():
    print(EV)
 (6, [(1, 1, 1)], 1)
 (0, [(1, 0, -1/3), (0, 1, -2/3)], 2)
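These notes run in Sage, but the arithmetic behind an eigenpair is easy to double-check by hand. Here is a stand-alone plain-Python sketch (using `fractions` for exact arithmetic) confirming that $Mv = \lambda v$ for each pair Sage reported:

```python
# Double-check Sage's answer: multiply M by each reported eigenvector
# and compare with the eigenvalue times that vector.
from fractions import Fraction as F

M = [[1, 2, 3], [1, 2, 3], [1, 2, 3]]

def mat_vec(A, v):
    """Multiply matrix A (list of rows) by vector v."""
    return [sum(F(a) * x for a, x in zip(row, v)) for row in A]

# (eigenvalue, eigenvector) pairs as reported by Sage above
pairs = [
    (6, [F(1), F(1), F(1)]),
    (0, [F(1), F(0), F(-1, 3)]),
    (0, [F(0), F(1), F(-2, 3)]),
]

for lam, v in pairs:
    assert mat_vec(M, v) == [lam * x for x in v]
```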

Working over "RDF" (real double-precision floats) is more or less what Matlab would do.

M = random_matrix(RDF,10); html(M)
 $\left(\begin{array}{rrrrrrrrrr} 0.329872532495 & -0.420533138537 & -0.658386203799 & 0.870434534339 & 0.502001566665 & -0.246070917128 & -0.415079953845 & -0.391654733139 & 0.0580135488508 & -0.270763379612 \\ -0.855092691239 & 0.607355068356 & 0.208719503527 & -0.617886289905 & -0.95089560658 & -0.665074322799 & 0.873821578156 & -0.664084077377 & -0.309640718321 & -0.526386643411 \\ 0.418507328358 & 0.268616824422 & -0.877455834077 & 0.947112196213 & 0.4740079877 & 0.280479453052 & 0.480670822843 & 0.15799466204 & -0.263086993325 & 0.672364042753 \\ -0.590865372645 & 0.831336818951 & -0.342414951559 & -0.242974015657 & 0.912028936094 & 0.830701347073 & -0.673517319285 & -0.960134837715 & -0.293910580928 & 0.0958946817302 \\ 0.130858568653 & 0.640913913939 & 0.463223394906 & 0.702778615839 & -0.877076039244 & -0.0033618670685 & -0.807164645002 & -0.174458857417 & 0.0290971840375 & -0.271452990016 \\ -0.447997647598 & 0.243296311371 & -0.49110902528 & -0.157654591415 & 0.81385536933 & 0.566143392424 & 0.592783788426 & -0.779978649249 & -0.098583604082 & -0.02729234436 \\ -0.28757796022 & 0.23452129879 & -0.540506398037 & 0.759715186578 & -0.653023257057 & -0.45840467107 & -0.587132249508 & 0.899425312731 & 0.0808087921085 & 0.968188535014 \\ 0.803531187599 & -0.491852937428 & -0.563757937178 & -0.536396721092 & -0.462901505189 & -0.677882862037 & 0.714870182516 & 0.61232795648 & 0.337867371671 & -0.94500191728 \\ -0.368422990241 & 0.394641952429 & -0.979050805736 & -0.112300068039 & 0.0464510286991 & 0.652480590441 & 0.449412416278 & 0.285046619027 & -0.946675356487 & -0.510121920517 \\ 0.362427947348 & -0.355636196788 & -0.444743091423 & -0.0934359983862 & -0.786666110382 & -0.155260180966 & -0.955286155844 & 0.635547317026 & -0.166260205419 & 0.315556474189 \end{array}\right)$
M.eigenvalues()
 [1.444320996077252 + 1.161659437598388*I, 1.444320996077252 - 1.161659437598388*I, -0.741602708299991 + 1.5194873545168985*I, -0.741602708299991 - 1.5194873545168985*I, 0.8885901062798977, 0.4827226030226191, -0.614618801174736 + 0.7367283355849592*I, -0.614618801174736 - 0.7367283355849592*I, -1.5093649218691567, -1.1382048316681947]

Here is #16.

A = matrix([[5,0,-1,0],[1,3,0,0],[2,-1,3,0],[4,-2,-2,4]]); A
 [ 5  0 -1  0]
 [ 1  3  0  0]
 [ 2 -1  3  0]
 [ 4 -2 -2  4]

Let's pick out the eigenvalue mentioned in the problem.

for ES in A.eigenspaces_right():
    if ES[0] == 4:   # each ES is a pair (eigenvalue, eigenspace)
        print(ES)
 (4, Vector space of degree 4 and dimension 2 over Rational Field
 User basis matrix:
 [1 1 1 0]
 [0 0 0 1])
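As a sanity check independent of Sage, we can confirm in plain Python that $A$ really does scale both basis vectors of that eigenspace by 4:

```python
# Verify A*v = 4*v for each basis vector of the lambda = 4 eigenspace.
A = [[5, 0, -1, 0],
     [1, 3, 0, 0],
     [2, -1, 3, 0],
     [4, -2, -2, 4]]

def mat_vec(A, v):
    """Multiply matrix A (list of rows) by vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

for v in ([1, 1, 1, 0], [0, 0, 0, 1]):
    assert mat_vec(A, v) == [4 * x for x in v]
```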

But there are other (complex) eigenvalues.

A.eigenspaces_right()
 [
 (4, Vector space of degree 4 and dimension 2 over Rational Field
 User basis matrix:
 [1 1 1 0]
 [0 0 0 1]),
 (3.500000000000000? - 0.866025403784439?*I, Vector space of degree 4 and dimension 1 over Algebraic Field
 User basis matrix:
 [1  0.500000000000000? + 0.866025403784439?*I  1.500000000000000? + 0.866025403784439?*I  3.000000000000000? + 1.732050807568878?*I]),
 (3.500000000000000? + 0.866025403784439?*I, Vector space of degree 4 and dimension 1 over Algebraic Field
 User basis matrix:
 [1  0.500000000000000? - 0.866025403784439?*I  1.500000000000000? - 0.866025403784439?*I  3.000000000000000? - 1.732050807568878?*I])
 ]

## Section 5.2

Sage can calculate characteristic polynomials, too.

A.characteristic_polynomial()
 x^4 - 15*x^3 + 85*x^2 - 216*x + 208
A.characteristic_polynomial().subs(x=A)
 [0 0 0 0]
 [0 0 0 0]
 [0 0 0 0]
 [0 0 0 0]
A^4-15*A^3+85*A^2-216*A+208*identity_matrix(4)
 [0 0 0 0]
 [0 0 0 0]
 [0 0 0 0]
 [0 0 0 0]
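The same Cayley-Hamilton computation can be reproduced outside Sage. Here is a stand-alone plain-Python sketch with hand-rolled matrix arithmetic, plugging $A$ into $x^4 - 15x^3 + 85x^2 - 216x + 208$:

```python
# Cayley-Hamilton by hand: p(A) should come out as the zero matrix.
A = [[5, 0, -1, 0],
     [1, 3, 0, 0],
     [2, -1, 3, 0],
     [4, -2, -2, 4]]
n = 4
I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]

def scale(c, X):
    return [[c * X[i][j] for j in range(n)] for i in range(n)]

A2 = mat_mul(A, A)
A3 = mat_mul(A2, A)
A4 = mat_mul(A3, A)

# p(A) = A^4 - 15 A^3 + 85 A^2 - 216 A + 208 I
result = mat_add(A4, mat_add(scale(-15, A3),
         mat_add(scale(85, A2), mat_add(scale(-216, A), scale(208, I)))))
assert result == [[0] * n for _ in range(n)]
```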

Let's think of Smiley Guy in terms of eigenstuff.

t = var('t')
@interact
def _(A=matrix(RDF,[[1,0],[0,1]]), auto_update=False):
    ID = matrix(RDF,[[1,0],[0,1]])
    def makeface(M):
        pll = M*vector((-0.5,0.5))
        plr = M*vector((-0.3,0.5))
        prl = M*vector((0.3,0.5))
        prr = M*vector((0.5,0.5))
        left_eye = line([pll,plr]) + point(pll,size=5) + point(plr,size=5)
        right_eye = line([prl,prr],color='green') + point(prl,size=5,color='green') + point(prr,size=5,color='green')
        mouth = (parametric_plot(M*vector([t, -0.15*sin(2*pi*t)-0.5]), (t,-0.5,0), color='red')
                 + parametric_plot(M*vector([t, -0.15*sin(2*pi*t)-0.5]), (t,0,0.5), color='orange'))
        face = (parametric_plot(M*vector([cos(t),sin(t)]), (t,0,pi/2), color='black')
                + parametric_plot(M*vector([cos(t),sin(t)]), (t,pi/2,pi), color='lavender')
                + parametric_plot(M*vector([cos(t),sin(t)]), (t,pi,3*pi/2), color='cyan')
                + parametric_plot(M*vector([cos(t),sin(t)]), (t,3*pi/2,2*pi), color='sienna', figsize=3))
        return right_eye + left_eye + face + mouth
    pretty_print(html('smiley guy, transformed by $A$'))
    G = graphics_array([[makeface(ID), makeface(A)]])
    for ev in A.eigenvalues():
        try:
            RR(ev)  # raises an error for non-real eigenvalues
            pretty_print(html("We have an eigenvalue $%s$ - can you see the eigenspace?" % ev))
            break
        except:
            pass
    G.show(aspect_ratio=1)


Eigenvalues can be hard to come by symbolically.  This is from #12.

var('lambda_')
show(((-1-lambda_)*(1-lambda_)*(2-lambda_)+6).solve(lambda_))
 $$\lambda = {\left(\frac{1}{3} \, \sqrt{58} + \frac{71}{27}\right)}^{\frac{1}{3}} + \frac{7}{9 \, {\left(\frac{1}{3} \, \sqrt{58} + \frac{71}{27}\right)}^{\frac{1}{3}}} + \frac{2}{3}$$
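We can at least confirm numerically that Sage's closed-form root satisfies the cubic. Expanding by hand, $(-1-\lambda)(1-\lambda)(2-\lambda)+6=0$ is the same as $\lambda^3 - 2\lambda^2 - \lambda - 4 = 0$, and a quick plain-Python check shows the formula above is a root:

```python
# Plug Sage's closed-form root back into the cubic and check the residual.
import math

c = (math.sqrt(58) / 3 + 71 / 27) ** (1 / 3)   # the cube root in Sage's answer
lam = c + 7 / (9 * c) + 2 / 3

# lambda^3 - 2 lambda^2 - lambda - 4 should vanish (up to roundoff)
assert abs(lam**3 - 2 * lam**2 - lam - 4) < 1e-9
```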

We can talk about the characteristic polynomial too!

A = matrix([[5,0,-1,0],[1,3,0,0],[2,-1,3,0],[4,-2,-2,4]]); A
 [ 5  0 -1  0]
 [ 1  3  0  0]
 [ 2 -1  3  0]
 [ 4 -2 -2  4]
A.characteristic_polynomial()
 x^4 - 15*x^3 + 85*x^2 - 216*x + 208
solve(SR(A.characteristic_polynomial()),x)
 [x == -1/2*I*sqrt(3) + 7/2, x == 1/2*I*sqrt(3) + 7/2, x == 4]

The algebraic multiplicity of $\lambda=4$ is two.

factor(A.characteristic_polynomial())
 (x - 4)^2 * (x^2 - 7*x + 13)
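To see that this factorization really matches the characteristic polynomial, we can expand it by hand, here with a small plain-Python polynomial multiplier:

```python
# Expand (x - 4)^2 (x^2 - 7x + 13) and compare coefficients with
# x^4 - 15x^3 + 85x^2 - 216x + 208.
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, constant term first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

x_minus_4 = [-4, 1]        # x - 4
quad = [13, -7, 1]         # x^2 - 7x + 13
product = poly_mul(poly_mul(x_minus_4, x_minus_4), quad)

# constant term first: 208 - 216x + 85x^2 - 15x^3 + x^4
assert product == [208, -216, 85, -15, 1]
```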
A.eigenvectors_right()
 [(4, [
 (1, 1, 1, 0),
 (0, 0, 0, 1)
 ], 2),
 (3.500000000000000? - 0.866025403784439?*I, [(1, 0.500000000000000? + 0.866025403784439?*I, 1.500000000000000? + 0.866025403784439?*I, 3.000000000000000? + 1.732050807568878?*I)], 1),
 (3.500000000000000? + 0.866025403784439?*I, [(1, 0.500000000000000? - 0.866025403784439?*I, 1.500000000000000? - 0.866025403784439?*I, 3.000000000000000? - 1.732050807568878?*I)], 1)]

And here's the Cayley-Hamilton Theorem.

A.characteristic_polynomial().subs(x=A)
 [0 0 0 0]
 [0 0 0 0]
 [0 0 0 0]
 [0 0 0 0]
A^4-15*A^3+85*A^2-216*A+208*identity_matrix(4)
 [0 0 0 0]
 [0 0 0 0]
 [0 0 0 0]
 [0 0 0 0]

## Sections 5.3 and 5.4

Here we come to the crux of it, and where I differ from the text in how it should be motivated.  What do eigenvalues give us, under the best circumstances?

t = var('t')
@interact
def _(A=matrix([[1,0],[0,1]]), auto_update=False):
    ID = matrix(RDF,[[1,0],[0,1]])
    def makeface(M):
        pll = M*vector((-0.5,0.5))
        plr = M*vector((-0.3,0.5))
        prl = M*vector((0.3,0.5))
        prr = M*vector((0.5,0.5))
        left_eye = line([pll,plr]) + point(pll,size=5) + point(plr,size=5)
        right_eye = line([prl,prr],color='green') + point(prl,size=5,color='green') + point(prr,size=5,color='green')
        mouth = (parametric_plot(M*vector([t, -0.15*sin(2*pi*t)-0.5]), (t,-0.5,0), color='red')
                 + parametric_plot(M*vector([t, -0.15*sin(2*pi*t)-0.5]), (t,0,0.5), color='orange'))
        face = (parametric_plot(M*vector([cos(t),sin(t)]), (t,0,pi/2), color='black')
                + parametric_plot(M*vector([cos(t),sin(t)]), (t,pi/2,pi), color='lavender')
                + parametric_plot(M*vector([cos(t),sin(t)]), (t,pi,3*pi/2), color='cyan')
                + parametric_plot(M*vector([cos(t),sin(t)]), (t,3*pi/2,2*pi), color='sienna'))
        return right_eye + left_eye + face + mouth
    pretty_print(html('smiley guy, transformed by $A$'))
    IDface = makeface(ID)
    Aface = makeface(A)
    for ev in A.eigenvectors_right():
        val = ev[0]          # each ev is (eigenvalue, [eigenvectors], multiplicity)
        for vect in ev[1]:
            try:
                vect.change_ring(RR)  # fails for non-real eigenvectors
                pretty_print(html("There is an eigenvector with eigenvalue $%s$" % val))
                IDface += plot(vect)
                Aface += plot(A*vect)
            except ValueError:
                pass
    G = graphics_array([[IDface, Aface]])
    G.show(aspect_ratio=1)


Notice that now we can see Smiley Guy's eigenvectors and how they are moved.

• With some matrices, of course, there is only one independent eigenvector, or none at all over the reals, but the premise remains. (Try a shear or a rotation.)
• Also note that if we change things to Matlab-style RDF, we can get extraneous eigenvectors!  (Try RDF of a shear matrix.)

But if we have two linearly independent eigenvectors (a basis), we can see exactly how things change.  You stretch each basis vector by its eigenvalue, and everything else follows suit!

• A nice example of this is $$\left[\begin{array}{cc}0 & 1\\-3 & 4\end{array}\right]$$ (this is #13 in section 5.4)
• Another one $$\left[\begin{array}{cc}2 & 3\\3 & 2\end{array}\right]$$ (this is #14 in section 5.4)
• By the way, I wonder why the basis in the second example is at right angles, but not the first... that couldn't possibly be important...

Now, try out a matrix like $$\left[\begin{array}{cc}5 & 0\\0 & -1\end{array}\right]$$  That seems to basically act the same way, right?  Only... it stretches a different basis.

If only there were a way to make our transformation look that simple.  If only...

Now let's try to look at a typical example.

t = var('t')
A = matrix([[7,2],[-4,1]])
ID = matrix(RDF,[[1,0],[0,1]])
def makeface(M):
    pll = M*vector((-0.5,0.5))
    plr = M*vector((-0.3,0.5))
    prl = M*vector((0.3,0.5))
    prr = M*vector((0.5,0.5))
    left_eye = line([pll,plr]) + point(pll,size=5) + point(plr,size=5)
    right_eye = line([prl,prr],color='green') + point(prl,size=5,color='green') + point(prr,size=5,color='green')
    mouth = (parametric_plot(M*vector([t, -0.15*sin(2*pi*t)-0.5]), (t,-0.5,0), color='red')
             + parametric_plot(M*vector([t, -0.15*sin(2*pi*t)-0.5]), (t,0,0.5), color='orange'))
    face = (parametric_plot(M*vector([cos(t),sin(t)]), (t,0,pi/2), color='black')
            + parametric_plot(M*vector([cos(t),sin(t)]), (t,pi/2,pi), color='lavender')
            + parametric_plot(M*vector([cos(t),sin(t)]), (t,pi,3*pi/2), color='cyan')
            + parametric_plot(M*vector([cos(t),sin(t)]), (t,3*pi/2,2*pi), color='sienna'))
    return right_eye + left_eye + face + mouth
pretty_print(html('smiley guy, transformed by $A$'))
IDface = makeface(ID)
Aface = makeface(A)
for ev in A.eigenvectors_right():
    val = ev[0]          # each ev is (eigenvalue, [eigenvectors], multiplicity)
    for vect in ev[1]:
        try:
            vect.change_ring(RR)  # fails for non-real eigenvectors
            pretty_print(html("There is an eigenvector with eigenvalue $%s$" % val))
            IDface += plot(vect)
            Aface += plot(A*vect)
        except ValueError:
            pass
G = graphics_array([[IDface, Aface]])
G.show(aspect_ratio=1)
 smiley guy, transformed by $A$
 There is an eigenvector with eigenvalue $5$
 There is an eigenvector with eigenvalue $3$

Notice how the images of the vectors are about the same length.  But that is sheer coincidence; the important point is that the eigenvectors are expanded by the eigenvalue.  Here is a different basis.

t = var('t')
A = matrix([[7,2],[-4,1]])
ID = matrix(RDF,[[1,0],[0,1]])
def makeface(M):
    pll = M*vector((-0.5,0.5))
    plr = M*vector((-0.3,0.5))
    prl = M*vector((0.3,0.5))
    prr = M*vector((0.5,0.5))
    left_eye = line([pll,plr]) + point(pll,size=5) + point(plr,size=5)
    right_eye = line([prl,prr],color='green') + point(prl,size=5,color='green') + point(prr,size=5,color='green')
    mouth = (parametric_plot(M*vector([t, -0.15*sin(2*pi*t)-0.5]), (t,-0.5,0), color='red')
             + parametric_plot(M*vector([t, -0.15*sin(2*pi*t)-0.5]), (t,0,0.5), color='orange'))
    face = (parametric_plot(M*vector([cos(t),sin(t)]), (t,0,pi/2), color='black')
            + parametric_plot(M*vector([cos(t),sin(t)]), (t,pi/2,pi), color='lavender')
            + parametric_plot(M*vector([cos(t),sin(t)]), (t,pi,3*pi/2), color='cyan')
            + parametric_plot(M*vector([cos(t),sin(t)]), (t,3*pi/2,2*pi), color='sienna'))
    return right_eye + left_eye + face + mouth
pretty_print(html('smiley guy, transformed by $A$'))
IDface = makeface(ID)
Aface = makeface(A)
R = -1
for ev in A.eigenvectors_right():
    val = ev[0]          # each ev is (eigenvalue, [eigenvectors], multiplicity)
    for vect in ev[1]:
        try:
            vect.change_ring(RR)  # fails for non-real eigenvectors
            vect = R*vect         # rescale each eigenvector differently
            R = R + 2
            pretty_print(html("There is an eigenvector with eigenvalue $%s$" % val))
            IDface += plot(vect)
            Aface += plot(A*vect)
        except ValueError:
            pass
G = graphics_array([[IDface, Aface]])
G.show(aspect_ratio=1)
 smiley guy, transformed by $A$
 There is an eigenvector with eigenvalue $5$
 There is an eigenvector with eigenvalue $3$

The point is that, in any eigenbasis, the matrix really should just be $$\left[\begin{array}{cc}5 & 0\\0 & 3\end{array}\right]$$  That is what diagonalization will do for us.  So all we have to do is figure out how to interpret this correctly - and that is by finding the matrix of a linear transformation with respect to a given basis, just like we found the representation of a vector with respect to different bases.
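The promised diagonalization can be previewed by hand in plain Python.  With the eigenvectors $(1,-1)$ for $5$ and $(1,-2)$ for $3$ as the columns of $P$ (these particular representatives are one choice among many possible scalings), $PDP^{-1}$ recovers $A$:

```python
# Check that A = P D P^{-1} for A = [[7,2],[-4,1]] with eigenvalues 5 and 3.
from fractions import Fraction as F

A = [[F(7), F(2)], [F(-4), F(1)]]
P = [[F(1), F(1)], [F(-1), F(-2)]]   # columns: eigenvectors for 5 and for 3
D = [[F(5), F(0)], [F(0), F(3)]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

assert mat_mul(mat_mul(P, D), inv2(P)) == A
```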

M = matrix([[1,1,1],[0,1,1],[0,0,1]])
M^-1
 [ 1 -1  0]
 [ 0  1 -1]
 [ 0  0  1]
M*matrix([[0,1,1],[0,0,2],[0,0,0]])*M^-1
 [0 1 2]
 [0 0 2]
 [0 0 0]
M^-1*matrix([[0,1,0],[0,0,2],[0,0,0]])*M
 [ 0  1 -1]
 [ 0  0  2]
 [ 0  0  0]
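These conjugations change the matrix but not the underlying transformation.  For instance, the middle matrix above is nilpotent (its cube is zero), and so is any matrix similar to it; a plain-Python check, using the $M$ and $M^{-1}$ from the cells above:

```python
# Conjugation M B M^{-1} preserves essential behavior: B is nilpotent
# (B^3 = 0), so its conjugate must be nilpotent too.
from fractions import Fraction as F

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M     = [[F(1), F(1), F(1)], [F(0), F(1), F(1)], [F(0), F(0), F(1)]]
M_inv = [[F(1), F(-1), F(0)], [F(0), F(1), F(-1)], [F(0), F(0), F(1)]]
B     = [[F(0), F(1), F(1)], [F(0), F(0), F(2)], [F(0), F(0), F(0)]]

C = mat_mul(mat_mul(M, B), M_inv)   # the conjugate computed above in Sage

zero = [[F(0)] * 3 for _ in range(3)]
assert mat_mul(mat_mul(B, B), B) == zero
assert mat_mul(mat_mul(C, C), C) == zero
```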