This is archived information for Math 221 Sect 101 (Fall, 2002).
Here's a summary of what I remember talking about for the review session tonight (Monday, September 30). I'll try to fill in more detail tomorrow (Tuesday).
Update: I've added a more detailed explanation of when you use coefficient matrices and when you use augmented matrices below. I've also put together a PDF file of examples.
There was a question about whether there is a faster way to tell if a vector w belongs to the span of some other vectors, say u and v, than writing out an augmented matrix and reducing it.
My answer was "sometimes". The span of those two vectors contains all their linear combinations, including all scalar multiples of each vector and sums and differences of scalar multiples of those vectors. If you can see, just by inspecting the vectors, a way of making a linear combination of u and v that adds up to w, you can conclude that w is in the span (for example, if you can see that w = u + v). In most cases, though, unless a combination is obvious, you'll need to use the augmented matrix method.
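If you want to check a guess like that on a computer, here's a minimal sketch using Python's sympy library (the vectors u, v, and w are made up for illustration, not from the course materials):

    from sympy import Matrix

    # made-up vectors for illustration
    u = Matrix([1, 0, 2])
    v = Matrix([0, 1, 1])
    w = Matrix([2, 3, 7])

    # if you can spot the combination w = 2u + 3v by inspection,
    # verifying the guess is just arithmetic
    print(w == 2*u + 3*v)   # True, so w is in Span{u, v}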
Then, there was a question about checking if a bunch of vectors are linearly independent or not. It seems like you throw these vectors together in a matrix and then you just sort of, out of nowhere, add the zero vector on the right. Someone wondered if you could always do this: if you have a coefficient matrix that isn't augmented, can you just augment it with zero as a kind of "default"?
The answer is that you can't do that in all situations. The zero vector is added in this case because the test for linear dependence comes down to whether the solution set of a homogeneous system contains only the trivial solution (so the vectors are linearly independent) or infinitely many solutions, including nontrivial ones (so the vectors are linearly dependent). Because it's the homogeneous equation we're looking at, its right-hand side really is the zero vector, and that's why we construct an augmented matrix with a zero vector on the far right. I'll try to write up a better explanation tomorrow.
Update: To make things a little clearer, we really have at least four different problems where row reducing a matrix to find its pivot positions is important:
If we have a linear system (or a vector or matrix equation), we might want to know if it is consistent and---if so---whether it has one solution or an infinite number of solutions. Usually, we want to go further and explicitly determine the set of all solutions, writing it either as a general solution or in vector parametric form.
To do this, we construct an augmented matrix. Here, it's usually obvious what to use for the rightmost column. It's the right-hand side of the linear system or vector or matrix equation. We row reduce this matrix to echelon form. If the rightmost column has a pivot position, we can stop because the equation is inconsistent. Otherwise, we continue to reduce the matrix to reduced echelon form, and we get the general solution: the pivot positions correspond to basic variables, and all other variables are free.
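As a rough illustration, here's how that procedure might look in Python using sympy (the system is made up; note that rref() goes all the way to reduced echelon form and also reports which columns hold the pivots):

    from sympy import Matrix

    # made-up augmented matrix [A | b]
    M = Matrix([[1, 2, 1, 4],
                [0, 0, 1, 1],
                [1, 2, 2, 5]])

    R, pivots = M.rref()   # reduced echelon form and pivot column indices
    print(R)               # [[1, 2, 0, 3], [0, 0, 1, 1], [0, 0, 0, 0]]
    print(pivots)          # (0, 2): no pivot in the rightmost column, so the
                           # system is consistent; column index 1 (the x2
                           # column) has no pivot, so x2 is free, and the
                           # general solution is x1 = 3 - 2*x2, x3 = 1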
Sometimes, we want to know if a vector is in the span of some other vectors. For example, we might want to know if the vector w is in the span of u and v. There isn't an obvious equation here, but there's one going on behind the scenes. Asking if w is in the span of u and v is exactly the same as asking if w can be written as a linear combination of u and v; in other words, are there some weights x1 and x2 such that the equation:
x1 u + x2 v = w
is true? But that's exactly the same as asking if that vector equation has a solution (any solution).
The bottom line is: to determine if one vector is in the span of some others, make an augmented matrix whose first few columns are the vectors doing the spanning and whose rightmost column is the vector that may or may not be in the span. Reduce this matrix to echelon form, and the vector is in the span iff the system is consistent.
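Here's the same recipe as a small sympy sketch, again with made-up vectors (w happens to equal 2u + v here):

    from sympy import Matrix

    u = Matrix([1, 2, 3])
    v = Matrix([0, 1, 1])
    w = Matrix([2, 5, 7])

    # augmented matrix [u v | w]
    M = Matrix.hstack(u, v, w)
    R, pivots = M.rref()

    # w is in Span{u, v} iff the rightmost column (index 2) is not a pivot column
    print(2 not in pivots)   # True here, since w = 2u + v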
For a good example, see the Quiz #1 Solutions problem 4(b). Note that the augmented matrix is formed with its first two columns the vectors doing the spanning and the third column the vector that may or may not be in the span. All we care about is whether or not the system is consistent.
In problems of the next type, we're interested in whether a set of vectors, say {v1, v2, v3}, is linearly independent or dependent. Again, it doesn't look like there's an obvious equation involved, but what we really want to know, from the definition of linear dependence, is whether or not
x1 v1 + x2 v2 + x3 v3 = 0
has only the trivial solution or also has at least one nontrivial solution. Here, unlike for the equation above, the right-hand side is always 0. And here, unlike the problem type above, we're not interested in whether or not the equation is consistent. We're only interested in whether it has one or an infinite number of solutions.
The bottom line is: to determine if a set of vectors is linearly dependent or independent, make an augmented matrix whose first columns are the vectors and whose rightmost column is all zeros. Reduce this matrix to echelon form, and the vectors are linearly independent if the solution is unique (no free variables) or linearly dependent if the solution is not unique (at least one free variable and maybe more).
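In sympy, that test might look like the sketch below (the vectors are made up; also notice that, since row operations never change an all-zero column, it's enough to reduce just the coefficient matrix in the computation):

    from sympy import Matrix

    v1 = Matrix([1, 0, 1])
    v2 = Matrix([2, 1, 0])
    v3 = Matrix([3, 1, 1])

    # the all-zero augmented column never changes under row operations,
    # so reducing the coefficient matrix alone is enough
    A = Matrix.hstack(v1, v2, v3)
    R, pivots = A.rref()

    # independent iff every column has a pivot (no free variables)
    print(len(pivots) == A.cols)   # False here, since v3 = v1 + v2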
For two good examples, see the Quiz #2 Solutions problems 2(b) and 2(c). Make sure you understand where the columns of the matrix come from. In 2(b), the first and second columns are the vectors we're interested in, and the last column is all zeros. In 2(c), the first, second, and third columns are the vectors we're interested in, and the rightmost column is all zeros. This time, we aren't interested in whether or not the system is consistent (because it's always consistent). All we care about is if the system has only the trivial solution (as in 2(b)) or nontrivial solutions as well (as in 2(c)).
Actually, in 2(c), we also want to find a linear dependence relation between the vectors. That is, we don't just want to know that there is a nontrivial solution (which would tell us the vectors are linearly dependent), we actually want to get a specific nontrivial solution. That's why we reduce the matrix all the way to reduced echelon form and find its general solution. We can then use the general solution to generate a specific, nontrivial solution---and any one would do; we could have used x3=2 or x3=5 instead of x3=1---and that will give us weights for a linear dependence relation.
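To actually produce the weights, you can read a nontrivial solution off the general solution as described above, or, as a sketch reusing the made-up vectors from before, let sympy hand you a basis for the solution set of the homogeneous equation:

    from sympy import Matrix

    v1 = Matrix([1, 0, 1])
    v2 = Matrix([2, 1, 0])
    v3 = Matrix([3, 1, 1])
    A = Matrix.hstack(v1, v2, v3)

    # nullspace() returns a basis for the solutions of A x = 0; any nonzero
    # solution gives weights for a linear dependence relation
    ns = A.nullspace()
    print(ns[0])   # Matrix([[-1], [-1], [1]]), i.e. -v1 - v2 + v3 = 0
    # (this takes the free variable x3 = 1; x3 = 2 would just double the weights)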
The final type of problem is a special case. This time, there's no augmented matrix visible. There's only a coefficient matrix.
Sometimes, instead of wanting to know if a specific vector is in the span of some other vectors, we want to know if all vectors (of the appropriate size) are in the span of those vectors. Usually, you'll see this problem stated in the form: does a given collection of vectors in R3 span all of R3? In other words, can every vector of size 3 be written as a linear combination of those vectors?
In this special case, Theorem 4 in the textbook applies. You don't have to make an augmented matrix. Instead, you just collect the vectors together into the columns of a coefficient matrix, you reduce the matrix to echelon form, and you say that your vectors span all of R3 iff every row of your matrix contains a pivot position.
Notice that we don't care about consistency or uniqueness or the number of solutions or the solution set. We don't even have an augmented matrix, so we aren't even talking about a linear system. We're just talking about a matrix that's been reduced using our algorithm, and we're only interested in whether every row contains a pivot position or whether one or more rows lack one.
The bottom line is: to determine if a set of vectors in Rm spans all of Rm, make a coefficient matrix whose columns are the vectors, row reduce the matrix to echelon form, and find the pivot positions. Your vectors span all of Rm iff there's a pivot position in every row.
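As a sketch (with made-up vectors again), the whole test is one pivot count in sympy:

    from sympy import Matrix

    # columns are the vectors; do they span all of R^3?
    A = Matrix([[1, 0, 2],
                [0, 1, 1],
                [1, 1, 3]])

    R, pivots = A.rref()
    print(len(pivots) == A.rows)   # False: only 2 pivots for 3 rows, since the
                                   # third column is 2*(first) + (second)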
A tricky example is solved in the Quiz #2 Solutions problem 5(a). We want to know when the columns of a matrix A fail to span R2. In other words, we want to know if A doesn't have a pivot position in every row. There's no linear system to solve or consistent/inconsistent business to check. Just reduce the matrix, and we discover that there will be a pivot position in every row unless the last row is all zeros, and that only happens if h=4/3 and k=6.
Then there was a question about how you work with an augmented matrix whose first column is all zeros, like 5(a) on Quiz #3, so I did a sample problem to show how to solve such a system and write its solution in parametric vector form. I'll post a complete solution to another example tomorrow.
Update: See example #1 in the file above.
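Here's a rough sympy sketch of the same kind of problem (the numbers are made up, not the quiz problem): when the first column of the augmented matrix is all zeros, x1 can never get a pivot, so it is automatically a free variable.

    from sympy import Matrix

    # made-up augmented matrix whose first column is all zeros
    M = Matrix([[0, 1, 2, 1],
                [0, 2, 3, 1],
                [0, 0, 1, 1]])

    R, pivots = M.rref()
    print(R)        # [[0, 1, 0, -1], [0, 0, 1, 1], [0, 0, 0, 0]]
    print(pivots)   # (1, 2): column 0 has no pivot, so x1 is free
    # general solution: x1 free, x2 = -1, x3 = 1;
    # parametric vector form: x = (0, -1, 1) + x1*(1, 0, 0)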
There was a question about working with augmented matrices where some of the entries aren't known for sure (like matrices with "h" and "k" entries). We did a couple of examples, like showing that all vectors (h,k) were in the span of some other vectors and determining for what values of k the vector (k,1,2) was in the span of some different vectors. I'll try to write up some examples tomorrow.
Update: See example #2 in the file above.
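For a computational sketch of that second kind of problem, sympy can row reduce with a symbolic entry. I spell out the row operations below rather than calling rref(), because a blind rref() may divide by expressions involving k while silently assuming they're nonzero. The spanning vectors here are made up, not the ones from the session:

    from sympy import Matrix, symbols, solve

    k = symbols('k')

    # augmented matrix [u1 u2 | w] with made-up u1 = (1,0,1) and u2 = (1,1,0),
    # and target vector w = (k, 1, 2)
    M = Matrix([[1, 1, k],
                [0, 1, 1],
                [1, 0, 2]])

    M[2, :] = M[2, :] - M[0, :]   # R3 <- R3 - R1
    M[2, :] = M[2, :] + M[1, :]   # R3 <- R3 + R2; bottom row is now (0, 0, 3 - k)

    # consistent (w in the span) exactly when the bottom-right entry is zero
    print(solve(M[2, 2], k))   # [3]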
Finally, though I probably missed some things, there was a question about how you quickly figure out if a linear transformation is one-to-one or onto. The key is Theorem 12 on page 82. If someone gives you a linear transformation and tells you its standard matrix (or if you can figure it out using Theorem 10, say), you're just about set. All you need to do is reduce the standard matrix to echelon form. (This is one of those rare cases where you're reducing a matrix that isn't an augmented matrix to echelon form.) Then, find the pivot positions and:

- the transformation is one-to-one iff there is a pivot position in every column (that is, the columns of the standard matrix are linearly independent);

- the transformation is onto iff there is a pivot position in every row (that is, the columns of the standard matrix span the codomain).

The hard part is keeping those straight. I'll try to write up some examples tomorrow to illustrate.
Update: See example #3 in the file above.
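Here's a small sketch of that pivot-counting test in sympy, with a made-up standard matrix for a transformation from R^2 to R^3:

    from sympy import Matrix

    # made-up standard matrix of T: R^2 -> R^3
    A = Matrix([[1, 0],
                [2, 1],
                [0, 3]])

    R, pivots = A.rref()
    print(len(pivots) == A.cols)   # one-to-one: True (pivot in every column)
    print(len(pivots) == A.rows)   # onto: False (only 2 pivots for 3 rows, so
                                   # a map from R^2 can never be onto R^3)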
Last revised: Wed Jan 29 12:48:08 PST 2003