DSC 140B
Problems tagged with lecture-07


Problem #071

Tags: lecture-07, quiz-04, pca

Let \(\vec{x}^{(1)}\) and \(\vec{x}^{(2)}\) be two points in a centered data set \(\mathcal{X}\) of 100 points in \(d\) dimensions. Suppose PCA is performed, but dimensionality is not reduced; that is, each point \(\vec{x}^{(i)}\) is transformed to the vector \(\vec{z}^{(i)} = U \vec{x}^{(i)}\), where \(U\) is a matrix whose \(d\) rows are the (orthonormal) eigenvectors of the data covariance matrix.

True or False: \(\|\vec{x}^{(1)} - \vec{x}^{(2)}\| = \|\vec{z}^{(1)} - \vec{z}^{(2)}\|\). That is, the distance between \(\vec{z}^{(1)}\) and \(\vec{z}^{(2)}\) in the new data set is necessarily the same as the distance between their corresponding points in the original data set.

Solution

True.

There's a mathematical way of seeing this, and an intuitive way.

Let's start with the intuitive way. We saw in lecture that when we perform PCA without reducing dimensionality, we are simply rotating the data to align with the directions of maximum variance (in the process, "decorrelating" the features). A rotation does not change distances between points, so the distances remain the same. This is illustrated in the figure below. On the left is the original data, and on the right is the same data after the PCA transformation (without dimensionality reduction). Notice that the shape of the data hasn't changed; if you pick any two points in the left plot, their distance is the same as the distance between the corresponding points in the right plot.

Now for the mathematical way. We want to show that:

\[\|\vec{x}^{(1)} - \vec{x}^{(2)}\| = \|\vec{z}^{(1)} - \vec{z}^{(2)}\|. \]

We know that \(\vec{z}^{(i)} = U \vec{x}^{(i)}\), so we can rewrite the right-hand side:

\[\|\vec{z}^{(1)} - \vec{z}^{(2)}\| = \|U\vec{x}^{(1)} - U\vec{x}^{(2)}\|. \]

Remember that the norm of a vector \(\vec{v}\) can be expressed as:

\[\|\vec{v}\|^2 = \vec{v}^T \vec{v}. \]

So:

$$\begin{align*}\|\vec{z}^{(1)} - \vec{z}^{(2)}\|^2 &= (U\vec{x}^{(1)} - U\vec{x}^{(2)})^T (U\vec{x}^{(1)} - U\vec{x}^{(2)}) \\&= (\vec{x}^{(1)} - \vec{x}^{(2)})^T U^T U (\vec{x}^{(1)} - \vec{x}^{(2)}) \\&= (\vec{x}^{(1)} - \vec{x}^{(2)})^T I (\vec{x}^{(1)} - \vec{x}^{(2)}) \\&= (\vec{x}^{(1)} - \vec{x}^{(2)})^T (\vec{x}^{(1)} - \vec{x}^{(2)}) \\&= \|\vec{x}^{(1)} - \vec{x}^{(2)}\|^2. \end{align*}$$

In the middle of this calculation, we used the fact that \(U\) is an orthonormal matrix, so \(U^T U = I\).
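
This identity is easy to check numerically. The sketch below uses hypothetical random data (not any data set from the course) and NumPy to build \(U\) from the eigenvectors of the covariance matrix and confirm that the distance between two points is unchanged:

```python
import numpy as np

# Hypothetical centered data set: 100 points in 4 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
X = X - X.mean(axis=0)              # center the data

# Rows of U are the orthonormal eigenvectors of the covariance matrix.
C = X.T @ X / len(X)
_, vecs = np.linalg.eigh(C)
U = vecs.T

# z^(i) = U x^(i) for every point (no dimensionality reduction).
Z = X @ U.T

# The distance between the first two points is preserved.
d_x = np.linalg.norm(X[0] - X[1])
d_z = np.linalg.norm(Z[0] - Z[1])
print(d_x, d_z)                     # the two distances agree
```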

Problem #085

Tags: reconstruction error, pca, covariance, eigenvalues, lecture-07, quiz-04

Suppose PCA is used to reduce the dimensionality of the centered data shown below from 2 dimensions to 1.

Part 1)

What will be the reconstruction error?

Solution

\(52\).

The first thing to figure out is what the first principal component (the first eigenvector of the covariance matrix) is, since this is the direction onto which the data will be projected. Remember that the first eigenvector points in the direction of maximum variance, and in this plot that appears to be straight up (or down). Thus, the first principal component is \((0, 1)^T\) (or \((0, -1)^T\)).

Next, we imagine projecting all of the data onto this line. Since the first eigenvector is vertical, all of the data is projected onto the \(x_2\)-axis. In the process, every point's \(x_1\)-coordinate is lost, and becomes zero, while the \(x_2\)-coordinate remains unchanged. The figure below shows these projected points in red (note that each red point is actually two points on top of each other, since both \((x_1, x_2)^T\) and \((-x_1, x_2)^T\) project to \((0, x_2)^T\)).

To compute the reconstruction error, we find the squared distance between each point's original position and its projected position. Starting with the upper-right-most point at \((3, 5)^T\), its projection is \((0, 5)^T\), and the squared distance between these two points is \((3 - 0)^2 + (5 - 5)^2 = 9\). For the point at \((2, 1)^T\), its projection is \((0, 1)^T\), and the squared distance is \((2 - 0)^2 + (1 - 1)^2 = 4\).

We could continue, manually calculating the squared distance for each point, but the symmetry of the data allows us to be more efficient. There are four points in the data set like the first one we considered, each contributing a reconstruction error of 9, and four points like the second, each contributing 4. Therefore, the total reconstruction error is:

\[ 4 \cdot 9 + 4 \cdot 4 = 36 + 16 = 52. \]
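
This calculation can be checked with a short sketch. Since the plot is not reproduced here, the eight points below are an assumption inferred from the solution's symmetry argument: four points at \((\pm 3, \pm 5)^T\) and four at \((\pm 2, \pm 1)^T\).

```python
import numpy as np

# Assumed data, inferred from the text (the actual plot isn't shown here).
X = np.array([[3, 5], [-3, 5], [3, -5], [-3, -5],
              [2, 1], [-2, 1], [2, -1], [-2, -1]], dtype=float)

# Project onto the first principal component (0, 1)^T: keep x2, drop x1.
u = np.array([0.0, 1.0])
proj = np.outer(X @ u, u)          # reconstructed (projected) points

# Reconstruction error: total squared distance to the projections.
error = np.sum((X - proj) ** 2)
print(error)                       # 52.0
```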

Part 2)

What is the smallest eigenvalue of the data's covariance matrix?

Solution

\(13/2\).

Remember that the smallest eigenvalue of the data's covariance matrix is equal to the variance of the second PCA feature. That is, it is the variance in the direction of the second principal component (the second eigenvector of the covariance matrix), which in this case is the vector \((1, 0)^T\) (or \((-1, 0)^T\)).

The variance of the data in this direction can be computed by

\[\text{Var}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2,\]

where \(x_i\) is the \(x_1\)-coordinate of the \(i\)-th data point, \(\mu\) is the mean of all the \(x_1\)-coordinates, and \(n\) is the number of data points. Since the data is centered, \(\mu = 0\). Therefore, we just need to compute the average of the squared \(x_1\)-coordinates.

Reading these off, we have four points whose \(x_1\)-coordinate is \(3\) or \(-3\), and four points whose \(x_1\)-coordinate is \(2\) or \(-2\). So the variance is:

\[\text{Var}(X) = \frac{1}{8}\left(4\cdot 3^2 + 4 \cdot 2^2\right)= \frac{1}{8}(36 + 16) = \frac{52}{8} = \frac{13}{2}. \]

Therefore, the smallest eigenvalue of the data's covariance matrix is \(13/2\).
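
As a sanity check, we can compute both eigenvalues directly. As in Part 1, the points below are assumed from the text's description (four at \((\pm 3, \pm 5)^T\), four at \((\pm 2, \pm 1)^T\)):

```python
import numpy as np

# Assumed data, inferred from the text (the actual plot isn't shown here).
X = np.array([[3, 5], [-3, 5], [3, -5], [-3, -5],
              [2, 1], [-2, 1], [2, -1], [-2, -1]], dtype=float)

C = X.T @ X / len(X)               # covariance matrix (data is centered)
eigenvalues = np.linalg.eigvalsh(C)
print(eigenvalues.min())           # 6.5, i.e. 13/2
```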

Problem #087

Tags: pca, covariance, eigenvalues, lecture-07, quiz-04

Consider the centered data set shown below. The data is symmetric across both axes.

Let \(C\) be the empirical covariance matrix. What is the largest eigenvalue of \(C\)?

Solution

\(45/2\).

Remember that the largest eigenvalue of the data's covariance matrix is equal to the variance of the first PCA feature. That is, it is the variance in the direction of the first principal component (the first eigenvector of the covariance matrix).

Since the data is symmetric across both axes, the eigenvectors of the covariance matrix are aligned with the coordinate axes. From the figure, the data is more spread out vertically than horizontally, so the first principal component is the vector \((0, 1)^T\) (or \((0, -1)^T\)).

The variance of the data in this direction can be computed by

\[\text{Var}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2,\]

where \(x_i\) is the \(x_2\)-coordinate of the \(i\)-th data point, \(\mu\) is the mean of all the \(x_2\)-coordinates, and \(n\) is the number of data points. Since the data is centered, \(\mu = 0\). Therefore, we just need to compute the average of the squared \(x_2\)-coordinates.

Reading these off, we have four points whose \(x_2\)-coordinate is \(6\) or \(-6\), and four points whose \(x_2\)-coordinate is \(3\) or \(-3\). So the variance is:

\[\text{Var}(X) = \frac{1}{8}\left(4\cdot 6^2 + 4 \cdot 3^2\right)= \frac{1}{8}(144 + 36) = \frac{180}{8} = \frac{45}{2}. \]

Therefore, the largest eigenvalue of the data's covariance matrix is \(45/2\).
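
The variance computation only needs the \(x_2\)-coordinates, which the solution reads off the (not reproduced) figure as \(\pm 6\) and \(\pm 3\). A quick sketch under that assumption:

```python
import numpy as np

# x2-coordinates as described in the text: four points at +/-6, four at +/-3.
x2 = np.array([6, 6, -6, -6, 3, 3, -3, -3], dtype=float)

# The data is centered, so the variance is just the mean of squares.
variance = np.mean(x2 ** 2)
print(variance)                    # 22.5, i.e. 45/2
```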

Problem #088

Tags: lecture-07, eigenvalues, variance, pca

Let \(\mathcal X = \{\vec{x}^{(1)}, \ldots, \vec{x}^{(100)}\}\) be a set of 100 points in \(3\) dimensions. The variance in the direction of \(\hat{e}^{(1)}\) is 20, the variance in the direction of \(\hat{e}^{(2)}\) is 12, and the variance in the direction of \(\hat{e}^{(3)}\) is 10.

Suppose PCA is performed, but dimensionality is not reduced; that is, each point \(\vec{x}^{(i)}\) is transformed to the vector \(\vec{z}^{(i)} = U \vec{x}^{(i)}\), where \(U\) is a matrix whose \(3\) rows are the (orthonormal) eigenvectors of the data covariance matrix. Let \(\mathcal Z = \{\vec{z}^{(1)}, \ldots, \vec{z}^{(100)}\}\) be the resulting data set.

Suppose \(C\) is the covariance matrix of the new data set, \(\mathcal Z\), and that the top two eigenvalues of \(C\) are 25 and 15. What is the third eigenvalue?

Solution

\(2\).

Although it wasn't discussed in lecture, it turns out that the total variance (the trace of the covariance matrix) is preserved under orthogonal transformations, since \(\operatorname{trace}(U C U^T) = \operatorname{trace}(C)\) when \(U^T U = I\). The total variance of the original data is:

\[ 20 + 12 + 10 = 42 \]

After PCA, the covariance matrix \(C\) of \(\mathcal Z\) is diagonal with the eigenvalues on the diagonal. The sum of eigenvalues equals the total variance:

\[\lambda_1 + \lambda_2 + \lambda_3 = 42 \]
\[ 25 + 15 + \lambda_3 = 42 \]
\[\lambda_3 = 42 - 40 = 2 \]
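
The trace-preservation fact can be illustrated numerically. The sketch below uses hypothetical random 3-dimensional data (not the data from this problem) and checks that the total variance is unchanged by the PCA rotation:

```python
import numpy as np

# Hypothetical centered data in 3 dimensions with unequal spread.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3)) @ np.diag([4.0, 2.0, 1.0])
X = X - X.mean(axis=0)

C = X.T @ X / len(X)               # covariance of the original data
_, vecs = np.linalg.eigh(C)
Z = X @ vecs                       # PCA transform, no dimensions dropped

C_z = Z.T @ Z / len(Z)             # covariance of the transformed data
print(np.trace(C), np.trace(C_z))  # the two totals agree
```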

Problem #089

Tags: reconstruction error, pca, covariance, eigenvalues, lecture-07, quiz-04

Consider the centered data set consisting of four points:

$$\begin{align*}\vec{x}^{(1)}&= (3, 3)^T\\\vec{x}^{(2)}&= (-3, -3)^T\\\vec{x}^{(3)}&= (-2, 2)^T\\\vec{x}^{(4)}&= (2, -2)^T\\\end{align*}$$

Suppose PCA is used to reduce the dimensionality of the data from 2 dimensions to 1.

Part 1)

What will be the reconstruction error?

Solution

\(16\).

The first thing to figure out is what the first principal component (the first eigenvector of the covariance matrix) is, since this is the direction onto which the data will be projected. Remember that the first eigenvector points in the direction of maximum variance. Looking at the data, the points \((3, 3)^T\) and \((-3, -3)^T\) are farther from the origin than the points \((-2, 2)^T\) and \((2, -2)^T\), so the direction of maximum variance is along the line \(x_2 = x_1\). Thus, the first principal component is \(\frac{1}{\sqrt{2}}(1, 1)^T\) (or \(\frac{1}{\sqrt{2}}(-1, -1)^T\)).

Next, we imagine projecting all of the data onto this line. The figure below shows these projected points in red.

Notice that the points \((3, 3)^T\) and \((-3, -3)^T\) already lie on the line \(x_2 = x_1\), so they project to themselves. Their reconstruction error is zero. The points \((-2, 2)^T\) and \((2, -2)^T\) lie on the line \(x_2 = -x_1\), which is perpendicular to the first principal component. These points both project to the origin \((0, 0)^T\).

To compute the reconstruction error, we find the squared distance between each point's original position and its projected position. For the point at \((-2, 2)^T\), its projection is \((0, 0)^T\), and the squared distance is \((-2 - 0)^2 + (2 - 0)^2 = 4 + 4 = 8\). Similarly, the point \((2, -2)^T\) projects to \((0, 0)^T\) with squared distance \(8\).

Therefore, the total reconstruction error is:

\[ 0 + 0 + 8 + 8 = 16. \]
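
Since all four points are given explicitly, this one is easy to verify with a short NumPy sketch that projects the data onto the first principal component and totals the squared distances:

```python
import numpy as np

# The four points from the problem statement.
X = np.array([[3, 3], [-3, -3], [-2, 2], [2, -2]], dtype=float)

# First principal component: (1, 1)^T / sqrt(2).
u = np.array([1.0, 1.0]) / np.sqrt(2)
proj = np.outer(X @ u, u)          # projections onto the line x2 = x1

error = np.sum((X - proj) ** 2)
print(error)                       # approximately 16
```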

Part 2)

What is the smallest eigenvalue of the data's covariance matrix?

Solution

\(4\).

Remember that the smallest eigenvalue of the data's covariance matrix is equal to the variance of the second PCA feature.

The second PCA feature corresponds to each point's projection onto the second principal component (the second eigenvector of the covariance matrix, and the dashed line in the figure above). In this case, we see that two of the points, \((3, 3)^T\) and \((-3, -3)^T\), project to the origin \((0, 0)^T\), and will have a second PCA feature value of \(0\). The other two points are \(\sqrt{8} = 2\sqrt{2}\) units away from the origin along this direction, so their second PCA feature values are \(2 \sqrt{2}\) and \(-2 \sqrt{2}\). You could also project these points onto the second principal component to verify this:

$$\begin{align*} z_2^{(3)}&= \vec{u}^{(2)}\cdot\vec{x}^{(3)} = \frac{1}{\sqrt{2}}(1, -1) \cdot(-2, 2) = \frac{1}{\sqrt{2}}(-2 - 2) = \frac{-4}{\sqrt{2}} = -2\sqrt{2}\\ z_2^{(4)}&= \vec{u}^{(2)}\cdot\vec{x}^{(4)} = \frac{1}{\sqrt{2}}(1, -1) \cdot(2, -2) = \frac{1}{\sqrt{2}}(2 + 2) = \frac{4}{\sqrt{2}} = 2\sqrt{2}\end{align*}$$

Therefore, the second PCA feature values for the four points are \(0\), \(0\), \(-2\sqrt{2}\), and \(2\sqrt{2}\). The variance of these (centered) values is:

\[\frac{1}{4}\left(0^2 + 0^2 + (2\sqrt{2})^2 + (-2\sqrt{2})^2 \right)= \frac{1}{4}(0 + 0 + 8 + 8) = \frac{16}{4} = 4. \]
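
This, too, can be checked directly: project the four given points onto the second principal component and take the variance of the results.

```python
import numpy as np

# The four points from the problem statement.
X = np.array([[3, 3], [-3, -3], [-2, 2], [2, -2]], dtype=float)

# Second principal component: (1, -1)^T / sqrt(2).
u2 = np.array([1.0, -1.0]) / np.sqrt(2)
z2 = X @ u2                        # second PCA feature of each point

# Its variance (data is centered) is the smallest eigenvalue.
print(np.mean(z2 ** 2))            # approximately 4
```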

Problem #090

Tags: lecture-07, quiz-04, dimensionality reduction, pca

Let \(\mathcal X_1\) and \(\mathcal X_2\) be two data sets containing 100 points each, and let \(\mathcal X\) be the combination of the two data sets into a data set of 200 points.

Suppose \(\vec x \in\mathcal X_1\) is a point in the first data set. Suppose PCA is performed on \(\mathcal X_1\) by itself, reducing each point to one dimension, and that the new representation of \(\vec x\) is \(z\).

The point \(\vec x\) is also in the combined data set, \(\mathcal X\). Suppose PCA is performed on the combined data set, \(\mathcal X\), reducing each point to one dimension, and that the new representation of \(\vec x\) after this PCA is \(z'\).

True or False: it is necessarily the case that \(z = z'\).

Solution

False.

The principal eigenvector of \(\mathcal X_1\) may be different from the principal eigenvector of the combined data set \(\mathcal X\). Since \(z\) and \(z'\) are computed by projecting \(\vec x\) onto different eigenvectors, they can be different values.

For example, if \(\mathcal X_1\) has maximum variance along the \(x\)-axis and \(\mathcal X_2\) has maximum variance along the \(y\)-axis, the combined data set might have a different principal direction altogether.
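
A minimal hypothetical example (the data sets below are made up for illustration) shows how this can happen: \(\mathcal X_1\)'s top direction is the \(x\)-axis, but the combined data's top direction is the \(y\)-axis, so the same point gets different one-dimensional representations.

```python
import numpy as np

# Hypothetical centered data sets: X1 spreads along x, X2 along y.
X1 = np.array([[3, 0], [-3, 0], [0, 1], [0, -1]], dtype=float)
X2 = np.array([[0, 5], [0, -5], [1, 0], [-1, 0]], dtype=float)
X = np.vstack([X1, X2])            # the combined data set

def top_component(data):
    """Eigenvector of the covariance matrix with the largest eigenvalue."""
    C = data.T @ data / len(data)
    _, vecs = np.linalg.eigh(C)
    return vecs[:, -1]

x = X1[0]                          # the point (3, 0) from X1
z = x @ top_component(X1)          # representation using X1 alone
z_prime = x @ top_component(X)     # representation using the combined data
print(z, z_prime)                  # |z| = 3 but z' = 0, so z != z'
```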

Problem #091

Tags: laplacian eigenmaps, intrinsic dimension, lecture-07, quiz-04, dimensionality reduction

Consider the plot shown below:

Part 1)

What is the ambient dimension?

Solution

3.

The ambient dimension is the dimension of the space in which the data lives. Here, the helix is embedded in 3D space (it has \(x\), \(y\), and \(z\) coordinates), so the ambient dimension is 3.

Part 2)

What is the intrinsic dimension of the curve?

Solution

1.

The intrinsic dimension is the number of parameters needed to describe a point's location on the manifold. For this helix, even though it lives in 3D space, you only need one parameter (e.g., the arc length along the curve, or equivalently, the angle of rotation) to specify any point on it. If you "unroll" the helix, it becomes a straight line, which is 1-dimensional. Therefore, the intrinsic dimension is 1.

Problem #092

Tags: laplacian eigenmaps, intrinsic dimension, lecture-07, quiz-04, dimensionality reduction

Consider the plot shown below:

Part 1)

What is the ambient dimension?

Solution

3.

The ambient dimension is the dimension of the space in which the data lives. The curve shown is embedded in 3D space (with \(x\), \(y\), and \(z\) axes visible), so the ambient dimension is 3.

Part 2)

What is the intrinsic dimension of the curve?

Solution

1.

The intrinsic dimension is the minimum number of coordinates needed to describe a point's position on the manifold itself. This zigzag path, despite twisting through 3D space, is fundamentally a 1-dimensional curve. You only need a single parameter (such as the distance traveled along the path from a starting point) to uniquely identify any location on it. If you were to "straighten out" the path, it would become a line segment, confirming its intrinsic dimension is 1.