Perpframes, Aligners and Hangers
Linear Systems, Pseudo-Inverse
Matrix Norm, Rank One Decomposition

Matrix Norm and Rank One Decomposition
When you add up the squares of all the entries of A, you get a certain number.
When you add up the squares of the singular values of A, you get the same number.
In math there are no accidents!
The Frobenius norm of a matrix A, written $\|A\|_F$, is defined as the square root of the sum of the squares of all its entries: $\|A\|_F = \sqrt{\sum_{i,j} a_{ij}^2}$.
Theorem: If A has singular values $\sigma_1, \sigma_2, \ldots, \sigma_r$, then $\|A\|_F = \sqrt{\sigma_1^2 + \sigma_2^2 + \cdots + \sigma_r^2}$.
Proof:
Let $A = U\,\Sigma\,V^T$ = (hanger)(stretcher)(aligner) be an SVD of A.
First note that for any matrix $M = [\,m_1 \mid m_2 \mid \cdots \mid m_n\,]$ given in terms of its columns, $\|M\|_F^2 = \|m_1\|^2 + \|m_2\|^2 + \cdots + \|m_n\|^2$.
(1) Let $w_1, w_2, \ldots, w_n$ be the columns of $\Sigma\,V^T$ = (stretcher)(aligner), so that $\|\Sigma\,V^T\|_F^2 = \|w_1\|^2 + \cdots + \|w_n\|^2$.
Since the hanger matrix $U$ simply rotates the columns $w_1, \ldots, w_n$ without changing their lengths, the two sides of $\|A\|_F = \|U\,\Sigma\,V^T\|_F = \|\Sigma\,V^T\|_F$ are equal.
(2) Transposing a matrix does not change its Frobenius norm, so $\|\Sigma\,V^T\|_F = \|V\,\Sigma\|_F$. The matrix $V$ simply rotates the columns of $\Sigma$ without changing their lengths, and the nonzero columns of $\Sigma$ have lengths $\sigma_1, \ldots, \sigma_r$. Hence $\|A\|_F = \|\Sigma\|_F = \sqrt{\sigma_1^2 + \cdots + \sigma_r^2}$. ∎
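The theorem is easy to check numerically. Here is a short Python/NumPy sketch; the 3×3 matrix below is a made-up example, not one from the text:

```python
import numpy as np

# Hypothetical test matrix (any matrix works).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Frobenius norm straight from the definition: sqrt of the sum of squared entries.
frob_from_entries = np.sqrt((A ** 2).sum())

# The same number from the singular values.
sigma = np.linalg.svd(A, compute_uv=False)
frob_from_sigmas = np.sqrt((sigma ** 2).sum())

print(np.isclose(frob_from_entries, frob_from_sigmas))  # True
```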
You've seen that the reduced SVD is $A = U_r\,\Sigma_r\,V_r^T$, using only the nonzero singular values $\sigma_1, \ldots, \sigma_r$.
Here's one more way to write A, called the rank one decomposition:
$A = \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T + \cdots + \sigma_r u_r v_r^T$.
How do you know this sum equals $A$?
As usual, it's the agreement on the basis $\{v_1, v_2, \ldots, v_n\}$.
Here's what happens when the rank one decomposition hits a basis vector $v_k$:
$\left(\sigma_1 u_1 v_1^T + \cdots + \sigma_r u_r v_r^T\right) v_k = \sigma_k u_k = A\,v_k$
(since $\{v_1, \ldots, v_n\}$ is orthonormal, $v_i^T v_k = 0$ for $i \neq k$ and $v_k^T v_k = 1$).
Since the rank one decomposition agrees with A on the basis $\{v_1, \ldots, v_n\}$, it must equal A.
The rank one decomposition gets its name from the fact that each summand $\sigma_i u_i v_i^T$ is a rank one matrix.
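You can verify both claims numerically. The following Python/NumPy sketch (with a made-up matrix) rebuilds A from the summands $\sigma_i u_i v_i^T$ and checks that each summand has rank one:

```python
import numpy as np

# Hypothetical test matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Reduced SVD: columns of U, singular values s, rows of Vt.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Each summand sigma_i * u_i v_i^T is a rank one matrix.
summands = [s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s))]
reconstructed = sum(summands)

print(np.allclose(A, reconstructed))                      # True
print([np.linalg.matrix_rank(M) for M in summands])       # [1, 1]
```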
When you compute the determinant of A you find Det[A]=16, so you know that A has rank three.
This big fat juicy determinant tells you that for every y, the system A x = y has the unique solution $x = A^{-1} y$.
Now check out the matrix B
and note that A is approximately equal to B.
In fact you can compute $\|A - B\|_F = 0.01$.
But when you check the determinant of B you find Det[B]=0.
A zero determinant tells you that for some y's the system B x = y will have infinitely many solutions, and for other y's there will be no solutions at all.
Is there a way to predict that the seemingly well-behaved matrix A has
such a nasty neighbor B so nearby?
Here's an SVD for A
The three non-zero singular values tell you that the matrix has rank 3.
But the value 0.01 is so small that A is nearly a rank two matrix.
In fact the matrix B was created by setting that last singular value, 0.01, to zero.
Now the rank one decomposition of A is $\sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T + (0.01)\,u_3 v_3^T$,
and the rank one decomposition of B is the same sum with the last term dropped: $\sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T$.
So you see that if A has a small singular value, then you can get a lower rank matrix B close to A by setting the small singular value to zero.
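Here's a numerical sketch of that recipe in Python/NumPy. The hanger and aligner below are random orthogonal matrices (hypothetical, not the A of the text), and the singular values 4, 2, 0.01 are chosen to mimic the example:

```python
import numpy as np

rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random hanger (orthogonal)
Q2, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random aligner (orthogonal)

sigma = np.array([4.0, 2.0, 0.01])                  # tiny third singular value
A = Q1 @ np.diag(sigma) @ Q2
B = Q1 @ np.diag([4.0, 2.0, 0.0]) @ Q2              # zero the small singular value

print(np.linalg.matrix_rank(A))                         # 3
print(np.linalg.matrix_rank(B))                         # 2
print(np.isclose(np.linalg.norm(A - B, 'fro'), 0.01))   # True
```

The last line works because $A - B = (0.01)\,u_3 v_3^T$, whose Frobenius norm is exactly the zeroed singular value.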
The next theorem says that this is the best way to find nearby neighbors of lower rank.
Theorem: Suppose A is an m×n matrix with singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$. If A has rank r, then for any k < r, a best rank k approximation to A is $A_k = \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T + \cdots + \sigma_k u_k v_k^T$.
For a proof of this theorem see page 414 of Steven J. Leon's excellent book, Linear Algebra with Applications, 5th Edition.
Best means that $\|A - A_k\|_F$ is minimized over all matrices of rank k.
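The theorem can be probed numerically. In this Python/NumPy sketch (with a random made-up matrix), the truncated-SVD approximation $A_k$ achieves error $\sqrt{\sigma_{k+1}^2 + \cdots + \sigma_r^2}$, and a randomly chosen rank k competitor does no better:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 4))                     # hypothetical test matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
# Best rank k approximation: keep the k largest singular value terms.
A_k = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))

best_err = np.linalg.norm(A - A_k, 'fro')
print(np.isclose(best_err, np.sqrt((s[k:] ** 2).sum())))   # True

# A randomly chosen rank k matrix approximates A no better.
C = rng.standard_normal((5, k)) @ rng.standard_normal((k, 4))
print(np.linalg.norm(A - C, 'fro') >= best_err)            # True
```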
2. The first two singular values of