SVD: Applications in Least Squares Estimation
All the information in this passage is from the course Robotics: Perception (University of Pennsylvania) on Coursera. If by any chance it violates any rights, I will delete it upon notification as soon as possible.

16:18
So now, let's look at this example of singular value decomposition on the following matrix. We will reconstruct this matrix in later lectures; it is called the Fundamental Matrix. It's a very simple matrix, three by three,
16:35
relating two pictures taken from different positions in the world.
16:43
Take this matrix F. We can compute the singular value decomposition (SVD) with the standard MATLAB svd function.
16:49
We can display the matrices U and V, and we see that their columns are perpendicular to each other.
16:57
We can also look at the diagonal elements of D.
17:01
We see the first value is pretty large, the second value is smaller but not 0, and the last value is on the order of 1e-16. In MATLAB, this is a value close enough to 0. We will use this fact as a way of cleaning up a matrix in the future: if I want to make a matrix of rank two, I simply compute the SVD, set the last singular value to 0, and reconstruct the matrix F.
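A minimal MATLAB sketch of this rank-two cleanup (the matrix F below is a random placeholder, not the actual fundamental matrix from the lecture):

```matlab
% Force a 3x3 matrix to rank 2 by zeroing its smallest singular value.
F = rand(3,3);           % placeholder standing in for an estimated F
[U, D, V] = svd(F);      % F = U*D*V', singular values on the diagonal of D
D(3,3) = 0;              % set the last (smallest) singular value to zero
F2 = U * D * V';         % reconstructed matrix, now rank 2
rank(F2)                 % returns 2
```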
17:32
If I want to take the inverse of a matrix A, this can also be done fairly easily. Take the SVD of A first, A = U D V transpose, and then obtain the inverse of A by simply inverting the diagonal elements of D,
17:51
taking each sigma to 1/sigma if sigma is larger than 0, and setting it to zero otherwise.
18:03
Then take the matrix V, this inverted D, and U transpose; their product is the (pseudo-)inverse of A.
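A sketch of this SVD-based pseudo-inverse in MATLAB; the tolerance used to decide which singular values count as zero is my own assumption (MATLAB's pinv uses a similar default):

```matlab
% Pseudo-inverse of A via SVD: invert only the nonzero singular values.
A = [1 2; 3 4; 5 6];                % hypothetical example matrix
[U, D, V] = svd(A);                 % A = U*D*V'
tol = max(size(A)) * eps(norm(A));  % assumed threshold for "zero"
Dplus = zeros(size(A'));            % inverted D has the transposed shape
for i = 1:min(size(A))
    if D(i,i) > tol
        Dplus(i,i) = 1 / D(i,i);    % sigma -> 1/sigma when sigma > 0
    end                             % otherwise leave it at zero
end
Aplus = V * Dplus * U';             % matches pinv(A)
```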

18:13
So now, we are ready to return to our topic of least squares problems. There are two types of least squares problems. The first type has the form of a matrix A times x equal to b; we write the quantity Ax - b and minimize its norm.
18:31
Here b is not zero.
18:34
The second type of least squares problem has the form Ax, where A is the matrix times the unknown x, and we want to minimize its norm so that it is as close as possible to zero. So the problem we are solving is Ax = 0.
18:51
For this problem, the trivial solution is very straightforward: x equals zero. But we are not interested in that vector. We are interested in a vector that is not zero, in fact one with norm one, such that A times this vector x is as close as possible to zero. We have already seen a hint of how to solve this in the singular value decomposition slides: we know that this vector sits in the null space of A.
19:25
We now illustrate these types of least squares through a simple line fitting process, where A comes from a simple line fitting equation.
19:34
In practice, there are many different types of A in different situations. We'll revisit many of those in subsequent classes.
19:43
For two-dimensional data, we have points, each with a location x and y.
19:50
And if a set of points happens to be situated almost on a line, we expect there to be a two-parameter line controlled by the slope c and offset d, such that cx + d = y.

20:11
So we can rewrite this process of looking for c and d as the following least squares problem, where we want to minimize the norm of y - cx - d. We want to minimize this squared quantity over every data point we have.
20:40
The first thing we want to do is construct this matrix A. So, how do we do that? For this equation, we put y = cx + d into a simple matrix form.
20:54
We take all the y's into a column vector, y1 to yn. We construct a matrix A, an n-by-2 matrix, where the first column is simply the x's, and the second column is just ones.
21:11
It's a simple two-column matrix, and the unknowns are collected in the 2-by-1 vector of c and d.
21:19
As we can see, we obtain this equation, where the vector b corresponds to the y's and the matrix A, an n-by-2 matrix, corresponds to the x's and the ones.
21:34
And we have obtained the generic form of least squares, where the norm of b - Ax needs to be minimized.
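Here is a small MATLAB sketch of this setup with made-up data (the slope 2, offset 1, and noise level are assumptions for illustration only):

```matlab
% Build the least squares system for fitting y = c*x + d to n points.
x = (1:10)';                     % hypothetical x coordinates
y = 2*x + 1 + 0.1*randn(10,1);   % hypothetical y's: slope 2, offset 1, plus noise
A = [x, ones(10,1)];             % n-by-2 matrix: x's in column 1, ones in column 2
b = y;                           % n-by-1 vector of y's
% We now look for the 2-by-1 vector [c; d] minimizing norm(A*[c; d] - b).
```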
21:44
Now, let's look at what happens if we have only one data point.
21:52
In fact, we have infinitely many lines.
21:55
We can fit any line we want through a single point. So, the solution is not defined, or there are many solutions.
22:03
If I have exactly two points, those two points uniquely determine the line, and the error will be zero.

22:12
Only when I have multiple points do we have the general problem: if the points are not all aligned on a line, we cannot have Ax exactly equal to b, and we want to minimize this norm, which measures the vertical distance from each point to the line.
22:34
And this can be computed through the simple derivation shown here.
22:39
And the solution is represented by this equation.
22:45
The unknown two-dimensional vector x of c and d is calculated through what I would call the pseudo-inverse of A.
22:56
So this slide summarizes the general solutions for the least squares problem Ax = b.
23:04
There are three cases. The first case is when the rank of A equals r, less than n.
23:13
In the previous case, n is two (two columns). So if the rank is less than two, we effectively had only one point, and in that case, we will have many solutions.
23:26
The solution for this problem is to take the pseudo-inverse of A, as we've shown before in the SVD slides, and add to it all possible linear combinations in the null space of A; so there are many solutions to this problem. When the rank of A is exactly equal to n, as in our previous line fitting case where the rank of A was two, meaning there were many different rows of A (different points) but they all fit exactly the same line, then we have the exact solution, which is simply A inverse times b.
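As a rough MATLAB sketch of the rank-deficient case (the matrix M and right-hand side below are hypothetical, not from the lecture):

```matlab
% Rank-deficient Mx = rhs: solutions are pinv(M)*rhs plus anything in the null space.
M   = [1 2; 2 4; 3 6];       % hypothetical rank-1 matrix (column 2 = 2 * column 1)
rhs = [1; 2; 3];             % a consistent right-hand side
x0  = pinv(M) * rhs;         % particular (minimum-norm) solution
N   = null(M);               % orthonormal basis of the null space of M
x1  = x0 + 5 * N(:,1);       % adding any null-space combination is still a solution
norm(M*x0 - rhs), norm(M*x1 - rhs)   % both residuals are (numerically) zero
```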
24:10
In the most common case, the one we encounter in most problems, Ax cannot be made exactly equal to b.
24:19
In fact, we have the general least squares problem, and the solution is what we have seen before through the pseudo-inverse. In MATLAB this is very simple: it's written A\b, and it computes the pseudo-inverse solution for us.
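Using the hypothetical A and b built for the line fit above, a minimal sketch comparing the two routes (they agree because that A has full column rank):

```matlab
% Two equivalent ways to get the least squares [c; d] for the line fit.
cd1 = A \ b;                 % MATLAB's backslash (least squares solve)
cd2 = pinv(A) * b;           % explicit pseudo-inverse route; same result up to round-off
residual = norm(A*cd1 - b);  % the minimized vertical-distance error
```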
24:35
So this summarizes the least squares problem Ax = b.

24:39
Now we return to the second type of least squares, of the form Ax = 0, where the norm of Ax needs to be minimized
24:49
with the constraint that x is not equal to zero, or more precisely that the norm of x is equal to one.
24:57
Return to our line fitting problem again.
25:00
Last time we saw the line fitting problem where we minimize the vertical distance from each point to the line.
25:08
This is a reasonable assumption to make if the line is reasonably oriented, but as we tilt the line towards the vertical direction, this computation becomes ill-defined. In fact, the better way is to measure the error not in the vertical direction, but in the direction perpendicular to the line itself.
25:35
And this can be done using the equation shown here, where we represent the line in homogeneous coordinates,
25:45
made of three coefficients e, f and g.
25:49
And then we represent a point (x, y) through the three-dimensional vector (x, y, 1).



25:55
Then we see that the dot product between the homogeneous point (x, y, 1) and the line coefficients should be equal to 0 if the line passes through that point.
26:07
Therefore, the dot product between the homogeneous coordinates of the point and the line is, in fact, a scaled perpendicular distance from that point to the line.
26:20
And this is the quantity we want to minimize.
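A small MATLAB sketch of this dot product; the point and line coefficients below are made up, and the value is only a scaled perpendicular distance unless (e, f) is normalized to unit length:

```matlab
% Signed, scaled perpendicular distance from a point to the line e*x + f*y + g = 0.
l = [1; -2; 3];                     % hypothetical line coefficients [e; f; g]
p = [4; 2; 1];                      % the point (4, 2) in homogeneous coordinates
d_scaled = dot(p, l);               % zero exactly when the line passes through the point
d_perp   = d_scaled / norm(l(1:2)); % true perpendicular distance
```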
26:24
We can rewrite this set of constraints, this set of equations, in the following form, where we take the homogeneous coordinates (x, y, 1) of
26:34
each point, stacked on top of each other, forming an n-by-3 matrix.
26:41
And that's going to be the matrix A that we're interested in.
26:45
And the unknown vector corresponds to the line: the 3-by-1 vector (e, f, g) corresponds to the homogeneous coordinates of the line.
26:56
And we need to minimize A times this line vector.
27:02
So we have formed an equation Ax = 0, or rather we're going to minimize the norm of the quantity Ax.
27:13
The trivial solution to this is obvious: x equal to zero.
27:18
We're not interested in this solution, and there is no such line as (0, 0, 0) in the homogeneous coordinate system.
27:24
We're interested in a non-zero vector such that A times this vector is as close as possible to 0. And this vector, you will see, is in fact the third column of V in the SVD of this matrix A. A, in this case, again, recall, is a three-column matrix: the x's in the first column, the y's in the second column, and ones in the third column. And it has n rows,
27:59
one corresponding to each data point we have.
28:03
We simply take this matrix A and compute the singular value decomposition U D V transpose, where the columns of U and V are perpendicular to each other.
28:13
We take the last row of V transpose, or equivalently the last column of V,
28:18
and that is the vector that minimizes the norm of Ax, our solution to Ax = 0.
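A sketch of this recipe in MATLAB for the perpendicular-distance line fit; the data points are made up, drawn near a line with slope 0.5 and offset 1:

```matlab
% Fit a line e*x + f*y + g = 0 by minimizing norm(A*l) subject to norm(l) = 1.
x = (1:10)';                      % hypothetical data near the line y = 0.5*x + 1
y = 0.5*x + 1 + 0.05*randn(10,1);
A = [x, y, ones(10,1)];           % n-by-3 matrix of homogeneous points
[~, ~, V] = svd(A);               % A = U*D*V'
l = V(:, end);                    % last column of V: unit vector minimizing norm(A*l)
efg = l / norm(l(1:2));           % optional rescale so (e, f) has unit length
```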
28:26
So once again, when we are looking for Ax = 0, there are three conditions.
28:32
The first condition is when the rank of A is less than n minus 1, where n is the number of unknowns.
28:39
In this case, again, we have ill-defined parameters; there are many solutions.
28:44
In fact, they correspond to any vectors in the null space of A.
28:49
Any linear combination of these null space vectors satisfies the constraint Ax = 0.
28:58
If the rank of A is exactly n minus 1,

29:04
as in our previous example where the rank is two, then we have the exact solution.
29:10
The solution is the vector vn, the last column of V in the singular value decomposition, or equivalently the vector spanning the null space, which gives us Ax = 0. And there's only one such direction in the null space in this case.
29:27
In the most general case, we encounter the situation where Ax is never exactly equal to 0. In this case, to minimize the norm of Ax,
29:36
we again pick vn, the last column of V from the singular value decomposition.
29:43
So if I want to pick one unit vector among all possible vectors, the one that minimizes the norm of Ax is the last column of V.
29:56
So this fact will be used many, many times. We will use Ax = 0 in many different least squares solutions, from line fitting to triangulation, and we will come back to this least squares problem many times.