MAT 232 Chapter One

320 days ago by kcrisman

Welcome to Linear Algebra!

Here is a really typical application: compressing an image using the singular value decomposition.

import pylab
A_image = pylab.mean(pylab.imread(DATA + 'Jenks.png'), 2)
@interact
def svd_image(i=(5,(1..100))):
    u,s,v = pylab.linalg.svd(A_image)
    A = sum(s[j]*pylab.outer(u[0:,j],v[j,0:]) for j in range(i))
    g = graphics_array([[matrix_plot(A),matrix_plot(A_image)]])
    show(g, axes=False, figsize=[8,10])
    pretty_print(html('<h2>Compressed using %s singular values</h2>'%i))
       


Section One.I.1

So what kinds of solutions can happen, anyway?  As in Section One.I.2, we write things in the convenient matrix/vector format.

var('x1,x2')
@interact
def _(M = matrix(RR,[[1,5],[-2,-7]]), V = matrix(RR,[[7,-5]]), range=(5,(1,100)), show_sol=False):
    a, b, c, d = M[0,0], M[0,1], M[1,0], M[1,1]
    e, f = V[0,0], V[0,1]
    E1 = a*x1 + b*x2 == e
    E2 = c*x1 + d*x2 == f
    eq1 = solve(E1,x2)[0].rhs()
    eq2 = solve(E2,x2)[0].rhs()
    pretty_print(html("Solving:"))
    pretty_print(html('Eq 1: ' + latex(E1)))
    pretty_print(html('Eq 2: ' + latex(E2)))
    P = plot(eq1,(x1,-range,range)) + plot(eq2,(x1,-range,range),color='red',figsize=5)
    show(P)
    if show_sol:
        try:
            pretty_print(html(solve([E1,E2],[x1,x2])[0]))
        except:
            pretty_print(html('No solution?'))
       


var('x1,x2,x3')
@interact
def _(M = matrix(RR,[[1,5,2],[-2,-7,3],[0,1,1]]), V = matrix(RR,[[7,-5,0]]), range=(5,(1,100)), show_sol=False):
    a1, a2, a3, b1, b2, b3, c1, c2, c3 = M[0,0],M[0,1],M[0,2],M[1,0],M[1,1],M[1,2],M[2,0],M[2,1],M[2,2]
    e, f, g = V[0,0], V[0,1], V[0,2]
    E1 = a1*x1 + a2*x2 + a3*x3 == e
    E2 = b1*x1 + b2*x2 + b3*x3 == f
    E3 = c1*x1 + c2*x2 + c3*x3 == g
    eq1 = solve(E1,x3)[0].rhs()
    eq2 = solve(E2,x3)[0].rhs()
    eq3 = solve(E3,x3)[0].rhs()
    pretty_print(html('Eq 1: ' + latex(E1)))
    pretty_print(html('Eq 2: ' + latex(E2)))
    pretty_print(html('Eq 3: ' + latex(E3)))
    P = plot3d(eq1,(x1,-range,range),(x2,-range,range)) \
        + plot3d(eq2,(x1,-range,range),(x2,-range,range),color='red') \
        + plot3d(eq3,(x1,-range,range),(x2,-range,range),color='green')
    #show(P,viewer='canvas3d')
    show(P)
    if show_sol:
        try:
            pretty_print(html(solve([E1,E2,E3],[x1,x2,x3])[0]))
        except:
            pretty_print(html('No solution?'))
       


Section One.I.2

We can solve these systems, including ones with parameters, using Sage.  Here is the syntax: define your variables, then call solve with one list of equations and one list of the variables you want to solve for.

var('x1,x2,x3')
solve([6*x1+5*x2+3*x3==2, x1-4*x3==8], [x1,x2,x3])
       
[[x1 == 4*r1 + 8, x2 == -27/5*r1 - 46/5, x3 == r1]]

The "r1" syntax means "any real number" - that is the parameter. We use "==" to indicate equality because Python (the language underlying Sage) reserves the single "=" for variable assignment.
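We can sanity-check this family of solutions in plain Python - a sketch using exact fractions, with the coefficients copied from the cell above:

```python
from fractions import Fraction as F

# The reported family: x1 = 4*r1 + 8, x2 = -27/5*r1 - 46/5, x3 = r1.
# Substitute several sample values of the parameter r1 and confirm
# that both original equations hold exactly.
for r1 in [F(0), F(1), F(-3), F(7, 2)]:
    x1, x2, x3 = 4*r1 + 8, F(-27, 5)*r1 - F(46, 5), r1
    assert 6*x1 + 5*x2 + 3*x3 == 2
    assert x1 - 4*x3 == 8
```

Every choice of r1 passes, which is exactly what "r1 is a parameter" promises.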

Note that our example from class (correctly) has $x_1$ fixed at $9/5$.

solve([2*x1+x2-x3==5,3*x1-x2+x3==4],[x1,x2,x3]) 
       
[[x1 == (9/5), x2 == r2 + 7/5, x3 == r2]]

If we want to assume a certain variable is a parameter, we can indicate that by leaving it out of the list of variables to be solved for.

solve([2*x1+x2-x3==5,3*x1-x2+x3==4],[x1,x2]) 
       
[[x1 == (9/5), x2 == x3 + 7/5]]

Section One.I.3

It's not as easy to demonstrate the "General = Particular + Homogeneous" principle with a computer system, because it will not automatically check that two solution sets are equal (that can be an arbitrarily hard problem).

However, there is a way to represent at least some of this using Sage.

M = matrix([[2,1,-1],[3,-1,1]])
V = vector([5,4])
M\V
       
(9/5, 7/5, 0)

That was a particular solution; the next cell gives us the vector spanning the homogeneous solutions.

M.right_kernel() 
       
Free module of degree 3 and rank 1 over Integer Ring
Echelon basis matrix:
[0 1 1]

As you probably already suspect, we don't yet have the theoretical chops to deal with all of this, but at least you can see that it's possible to compute these things programmatically.
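As a sketch of the principle itself, plain Python with exact fractions can verify that the particular solution plus any multiple of the kernel vector really does solve the system (numbers copied from the cells above):

```python
from fractions import Fraction as F

# particular solution and homogeneous (kernel) vector found above
particular = (F(9, 5), F(7, 5), F(0))
homogeneous = (F(0), F(1), F(1))

for t in [F(0), F(1), F(-2), F(5, 3)]:
    x1, x2, x3 = (p + t*h for p, h in zip(particular, homogeneous))
    # the original system: 2*x1 + x2 - x3 = 5 and 3*x1 - x2 + x3 = 4
    assert 2*x1 + x2 - x3 == 5
    assert 3*x1 - x2 + x3 == 4
```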

Section One.II.1

Sage makes it easy to define vectors and do things with them.

V = vector([1,2]) 
       
V+V 
       
(2, 4)
-3*V 
       
(-3, -6)

Be sure to use the right syntax - brackets inside parentheses.

Here are a couple plots from class.

plot(vector([1,2]))+plot(vector([-2,1]))+plot(vector([-1,3]),color='green') 
       
plot(vector([-2,1]))+plot(vector([-6,3]),color='red') 
       

However, for the purposes of this text, it might be more interesting to plot some of the solutions that we put in vector notation in the text.  This is tricky, because Sage (unlike Hefferon) treats vectors as always being based at the origin.  So instead we can plot the solution sets parametrically, as ordered pairs of coordinates.

parametric_plot((1-2*x,2+x),(x,-3,3))+parametric_plot((-2*x,x),(x,-3,3),color='red',linestyle='--') 
       

I've put the homogeneous solution in red dashes.

We can do the same thing in three dimensions.

parametric_plot3d((9/5,7/5+x,x),(x,-3,3))+parametric_plot3d((0,x,x),(x,-3,3),color='red')+point((0,0,0),color='green',size=20) 
       

That was the "line as vectors" interpretation of the general and homogeneous solutions.

var('y,z')
parametric_plot3d((3-2*y-z,y,z),(y,-3,3),(z,-3,3))+parametric_plot3d((-2*y-z,y,z),(y,-3,3),(z,-3,3),color='red')+point((0,0,0),color='green',size=20)
       

Section One.II.2

Vectors have lengths.  And you can multiply them in this weird way.

V = vector([1,2])
W = vector([2,-3])
       

Sage calls this the "norm" for good reasons ("normalization", for instance).

norm(V) 
       
sqrt(5)
norm(W) 
       
sqrt(13)

The dot product is essentially the only product of two vectors that makes sense in every dimension, so Sage uses "*" for it:

V*W 
       
-4

Sage also allows the dot product in this way, but we haven't done much programming yet so the notation may seem odd.

V.dot_product(W) 
       
-4

Here's an example similar to one from the book.  Are these vectors orthogonal?

V = vector([1,3,2])
W = vector([5,-2,0])
       
plot(V)+plot(W) 
       

Gosh, they could be... is the angle $\pi/2$?

N(arccos((V*W)/(norm(V)*norm(W)))) 
       
1.62044588932890
N(pi/2) 
       
1.57079632679490

Looks like it's a bit bigger.  But you might not be able to tell that from the picture, eh?
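The same computation works in plain Python; here is a sketch using the usual formula $\cos\theta = \frac{V\cdot W}{|V||W|}$:

```python
import math

V = (1, 3, 2)
W = (5, -2, 0)

def norm(u):
    # Euclidean length: square root of the sum of squared components
    return math.sqrt(sum(c*c for c in u))

dot = sum(v*w for v, w in zip(V, W))      # 5 - 6 + 0 = -1
angle = math.acos(dot / (norm(V) * norm(W)))
print(angle)       # about 1.6204, a bit more than pi/2, so not orthogonal
```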

Section One.III.1

Sage has the row-reduced echelon form built in, but you have to do a little more work to get it.  Also, one tends to do it with just the matrix of coefficients, not the augmented one, for reasons which will become clear when we revisit these systems later.

M = matrix([[2,2],[4,3]]) 
       
       
[2 2]
[4 3]

This syntax gets you an echelon form, but not the reduced one.

M.echelon_form() 
       
[2 0]
[0 1]

For the fully reduced form, use ".rref()", which stands for "reduced row echelon form".

M.rref() 
       
[1 0]
[0 1]

We can confirm that all the examples in question are row equivalent: they share the same reduced echelon form.

M1 = matrix([[2,2],[0,-1]])
M2 = matrix([[1,1],[0,-1]])
M3 = matrix([[2,0],[0,-1]])
       
M1.rref(); M2.rref(); M3.rref() 
       
[1 0]
[0 1]
[1 0]
[0 1]
[1 0]
[0 1]

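What .rref() does can be sketched in plain Python with exact fractions.  This is a minimal Gauss-Jordan reduction for illustration, not Sage's actual implementation:

```python
from fractions import Fraction as F

def rref(rows):
    """Reduce a matrix (list of lists) to reduced row echelon form."""
    M = [[F(x) for x in row] for row in rows]
    nrows, ncols = len(M), len(M[0])
    lead = 0                     # column we are looking for a pivot in
    for r in range(nrows):
        while lead < ncols and all(M[i][lead] == 0 for i in range(r, nrows)):
            lead += 1            # no pivot in this column; move right
        if lead == ncols:
            break
        i = next(i for i in range(r, nrows) if M[i][lead] != 0)
        M[r], M[i] = M[i], M[r]                  # swap a pivot row up
        M[r] = [x / M[r][lead] for x in M[r]]    # scale the pivot to 1
        for j in range(nrows):
            if j != r:                           # clear the rest of the column
                M[j] = [a - M[j][lead]*b for a, b in zip(M[j], M[r])]
        lead += 1
    return M

# the matrices from above all reduce to the identity
assert rref([[2, 2], [4, 3]]) == [[1, 0], [0, 1]]
assert rref([[2, 2], [0, -1]]) == [[1, 0], [0, 1]]
```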
Topic: Accuracy of Computations

Suppose we try to solve some system as in the topic in the book.

var('x,y')
@interact
def _(number_zeros=(8,[1..20])):
    pretty_print(html("Solving $x+2y=3$ and $1." + "0"*number_zeros + "1x+2y=3." + "0"*number_zeros + "1$"))
    pretty_print(html(solve([x+2*y==3, (1+.1^number_zeros)*x+2*y==(3+.1^number_zeros)],[x,y])))
       


Notice that this goes from fine at 13 zeros to weird at 14 zeros to a free variable at 15 zeros!  What the heck?

solve([x+2*y==3, (1+.1^14)*x+2*y== (3+.1^14)],[x,y]) 
       
[[x == (100079991719344/97904339725445), y ==
(193633027456991/195808679450890)]]
[[x == (100079991719344/97904339725445), y == (193633027456991/195808679450890)]]

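The cutoff tracks the resolution of IEEE double-precision numbers, which we can see directly in plain Python:

```python
# A perturbation of 1e-14 still changes the coefficient 1, but below
# machine epsilon (about 2.2e-16) it is absorbed entirely, the two
# equations become identical, and a free variable suddenly appears.
print(1 + 1e-14 == 1)   # False: the coefficients still differ
print(1 + 1e-16 == 1)   # True: the perturbation is lost
```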
What follows is essentially the example on page 67 of what can happen if you try to do row reduction.

M = matrix([[.00000000000000000001,1,1],[1,-1,0]]) 
       
       
[1.00000000000000e-20     1.00000000000000     1.00000000000000]
[    1.00000000000000    -1.00000000000000    0.000000000000000]
M.add_multiple_of_row(1,0,-100000000000000000000) 
       
       
[1.00000000000000e-20     1.00000000000000     1.00000000000000]
[   0.000000000000000 -1.00000000000000e20 -1.00000000000000e20]

Oops, looks like $y=1$, eh?  But if $x=0$ it won't solve the original system, where clearly $x=y$.
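The same disaster is easy to reproduce in ordinary Python doubles; this sketch does the elimination step by hand, without pivoting:

```python
# system: 1e-20*x + y = 1  and  x - y = 0, so really x = y (both about 1)
r0 = [1e-20, 1.0, 1.0]
r1 = [1.0, -1.0, 0.0]

mult = r1[0] / r0[0]                       # 1e20: a huge multiplier
r1 = [a - mult*b for a, b in zip(r1, r0)]  # the -1 and 0 get swamped
y = r1[2] / r1[1]                          # 1.0, which is fine...
x = (r0[2] - r0[1]*y) / r0[0]              # ...but back-substitution gives 0.0
print(x, y)                                # 0.0 1.0 -- x is flat wrong
```

Partial pivoting (swapping the rows so the big entry is the pivot) avoids this.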

Here is how we might do the examples in the book, which assume far less computational accuracy:

M = matrix(RealField(10),[[0.001,1,1],[1,-1,0]]) 
       
 
       

I created a matrix with only ten bits of accuracy for each number, which corresponds to about three decimal digits of accuracy.  Not enough, apparently:

M.add_multiple_of_row(1,0,-1000) 
       
       
[ 0.0010     1.0     1.0]
[0.00098  -1000.  -1000.]

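A plain-Python sketch of that ten-bit arithmetic: chop every result to three significant digits and redo the elimination.

```python
def chop(x, sig=3):
    """Round x to `sig` significant digits -- a crude model of 10-bit floats."""
    return float(f"{x:.{sig}g}")

row0 = [0.001, 1.0, 1.0]    # 0.001*x + y = 1
row1 = [1.0, -1.0, 0.0]     # x - y = 0
row1 = [chop(a - 1000*b) for a, b in zip(row1, row0)]
print(row1)        # [0.0, -1000.0, -1000.0]: the -1 got chopped away
y = chop(row1[2] / row1[1])                # 1.0
x = chop((row0[2] - row0[1]*y) / row0[0])  # 0.0 again, not the true x = y
```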
What's the moral of this story?  YOU NEED TO KNOW THE WHY, NOT JUST THE HOW.