Last time we set up the problem: we approximate each (centered) data point $x_n \in \mathbb{R}^D$ by $\tilde{x}_n = \sum_{i=1}^{M} \beta_{in} b_i$, where $(b_1, \dots, b_D)$ is an orthonormal basis of $\mathbb{R}^D$ whose first $M$ vectors span the principal subspace, and we minimize the average squared reconstruction error $J = \frac{1}{N} \sum_{n=1}^{N} \lVert x_n - \tilde{x}_n \rVert^2$. Differentiating with respect to the coordinates gives $\frac{\partial J}{\partial \beta_{in}} = -\frac{2}{N}\left(x_n^\top b_i - \beta_{in}\right)$.

So now we need to set this derivative to zero in order to find our $\beta_{in}$ parameters, and it is $0$ if and only if the $\beta_{in}$ parameters are given by $\beta_{in} = x_n^\top b_i$. What this means is that the optimal coordinates of $\tilde{x}_n$ with respect to our basis are the orthogonal projections of our original data point onto the $i$-th basis vector that spans our principal subspace.
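A quick numerical check of this step (a minimal sketch of my own, not from the lecture; the basis $B$ and the data are randomly generated purely for illustration): for an orthonormal basis, the coordinates $\beta_{in} = x_n^\top b_i$ coincide with the least-squares coefficients for reconstructing $x_n$ from $b_1, \dots, b_M$.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, N = 5, 2, 100                      # ambient dim, subspace dim, #points

# Orthonormal basis of R^D via QR; the first M columns span the subspace.
B, _ = np.linalg.qr(rng.normal(size=(D, D)))
X = rng.normal(size=(N, D))              # rows are the data points x_n

# Optimal coordinates from the derivation: beta_{in} = x_n^T b_i.
beta = X @ B[:, :M]                      # shape (N, M)
X_tilde = beta @ B[:, :M].T              # reconstructions x~_n

# Compare against a generic least-squares fit of the coordinates: they agree.
beta_ls = np.linalg.lstsq(B[:, :M], X.T, rcond=None)[0].T
assert np.allclose(beta, beta_ls)

J = np.mean(np.sum((X - X_tilde) ** 2, axis=1))
print("average reconstruction error J:", J)
```

The assert confirms that the closed-form projection coefficients match a least-squares solve, which is exactly what setting the derivative to zero promises.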

Reformulation of the objective

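As a hedged recap of where this reformulation lands (a sketch under the assumption of centered data; the dimensions and data below are arbitrary choices of mine): substituting $\beta_{in} = x_n^\top b_i$ back into $J$ collapses the objective to the variance the subspace ignores, $J = \sum_{j=M+1}^{D} b_j^\top S b_j$, where $S = \frac{1}{N} \sum_{n} x_n x_n^\top$ is the data covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
D, M, N = 5, 2, 1000

B, _ = np.linalg.qr(rng.normal(size=(D, D)))    # orthonormal basis of R^D
X = rng.normal(size=(N, D))
X = X - X.mean(axis=0)                          # center the data

# Reconstruction error with the optimal coordinates beta = X B_M.
X_tilde = X @ B[:, :M] @ B[:, :M].T
J = np.mean(np.sum((X - X_tilde) ** 2, axis=1))

# Reformulated objective: sum of b_j^T S b_j over the discarded directions.
S = (X.T @ X) / N                               # data covariance matrix
J_reformulated = sum(B[:, j] @ S @ B[:, j] for j in range(M, D))
assert np.isclose(J, J_reformulated)
print(J, J_reformulated)
```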

Finding the basis vectors that span the principal subspace

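To make the end of the derivation concrete (again a sketch of my own, not the lecture's code; the data and dimensions are arbitrary assumptions): minimizing $J$ over orthonormal bases picks, as the basis of the principal subspace, the $M$ eigenvectors of $S$ belonging to the largest eigenvalues, and the minimal loss is the sum of the discarded eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(2)
D, M, N = 5, 2, 1000
X = rng.normal(size=(N, D)) @ rng.normal(size=(D, D))   # correlated data
X = X - X.mean(axis=0)                                  # center the data

S = (X.T @ X) / N
eigvals, eigvecs = np.linalg.eigh(S)            # eigh returns ascending order
idx = np.argsort(eigvals)[::-1]                 # sort descending
eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]

B_M = eigvecs[:, :M]                            # top-M eigenvectors span the principal subspace
X_tilde = X @ B_M @ B_M.T
J = np.mean(np.sum((X - X_tilde) ** 2, axis=1))

# The minimal loss equals the sum of the discarded eigenvalues.
assert np.isclose(J, eigvals[M:].sum())
print("J:", J, "  sum of discarded eigenvalues:", eigvals[M:].sum())
```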
