DSC 140B
Problems tagged with "eigenvectors"

Problem #028

Tags: linear algebra, quiz-03, eigenvalues, eigenvectors, lecture-04

Let \(A = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}\) and let \(\vec v = (1, 1)^T\).

True or False: \(\vec v\) is an eigenvector of \(A\). If true, what is the corresponding eigenvalue?

Solution

True, with eigenvalue \(\lambda = 4\).

We compute:

\[ A \vec v = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 4 \\ 4 \end{pmatrix} = 4 \begin{pmatrix} 1 \\ 1 \end{pmatrix} = 4 \vec v \]

Since \(A \vec v = 4 \vec v\), the vector \(\vec v\) is an eigenvector with eigenvalue \(4\).
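Checks like this one (and the ones in the following problems) are easy to reproduce numerically. Below is a minimal NumPy sketch, not part of the original solution, that verifies whether a candidate vector is an eigenvector by reading the would-be eigenvalue off the first component:

```python
import numpy as np

# Matrix and candidate eigenvector from Problem #028
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
v = np.array([1.0, 1.0])

Av = A @ v
# If v is an eigenvector, A v is a scalar multiple of v; read the scalar
# off the first component and check that it works for every component
lam = Av[0] / v[0]
print(np.allclose(Av, lam * v), lam)
```

The same check applied to Problem #029's \(\vec v = (1, 2)^T\) would print `False`, since no single scalar works for both components.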

Problem #029

Tags: linear algebra, quiz-03, eigenvalues, eigenvectors, lecture-04

Let \(A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\) and let \(\vec v = (1, 2)^T\).

True or False: \(\vec v\) is an eigenvector of \(A\). If true, what is the corresponding eigenvalue?

Solution

False.

We compute:

\[ A \vec v = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 4 \\ 5 \end{pmatrix}\]

For \(\vec v\) to be an eigenvector, we would need \(A \vec v = \lambda\vec v\) for some scalar \(\lambda\). This would require:

\[\begin{pmatrix} 4 \\ 5 \end{pmatrix} = \lambda\begin{pmatrix} 1 \\ 2 \end{pmatrix}\]

From the first component, we would need \(\lambda = 4\). From the second component, we would need \(\lambda = \frac{5}{2}\). Since these two values are not equal, there is no such \(\lambda\) that satisfies both equations. Therefore, \(\vec v\) is not an eigenvector of \(A\).

Problem #030

Tags: linear algebra, quiz-03, eigenvalues, eigenvectors, lecture-04

Let \(A = \begin{pmatrix} 1 & 1 & 3 \\ 1 & 2 & 1 \\ 3 & 1 & 9 \end{pmatrix}\) and let \(\vec v = (1, 4, -1)^T\).

True or False: \(\vec v\) is an eigenvector of \(A\). If true, what is the corresponding eigenvalue?

Solution

True, with eigenvalue \(\lambda = 2\).

We compute:

\[ A \vec v = \begin{pmatrix} 1 & 1 & 3 \\ 1 & 2 & 1 \\ 3 & 1 & 9 \end{pmatrix}\begin{pmatrix} 1 \\ 4 \\ -1 \end{pmatrix} = \begin{pmatrix} 1 + 4 - 3 \\ 1 + 8 - 1 \\ 3 + 4 - 9 \end{pmatrix} = \begin{pmatrix} 2 \\ 8 \\ -2 \end{pmatrix} = 2 \begin{pmatrix} 1 \\ 4 \\ -1 \end{pmatrix} = 2 \vec v \]

Since \(A \vec v = 2 \vec v\), the vector \(\vec v\) is an eigenvector with eigenvalue \(2\).

Problem #031

Tags: linear algebra, quiz-03, eigenvalues, eigenvectors, lecture-04

Let \(A = \begin{pmatrix} 5 & 1 & 1 & 1 \\ 1 & 5 & 1 & 1 \\ 1 & 1 & 5 & 1 \\ 1 & 1 & 1 & 5 \end{pmatrix}\) and let \(\vec v = (1, 2, -2, -1)^T\).

True or False: \(\vec v\) is an eigenvector of \(A\). If true, what is the corresponding eigenvalue?

Solution

True, with eigenvalue \(\lambda = 4\).

We compute:

\[ A \vec v = \begin{pmatrix} 5 & 1 & 1 & 1 \\ 1 & 5 & 1 & 1 \\ 1 & 1 & 5 & 1 \\ 1 & 1 & 1 & 5 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \\ -2 \\ -1 \end{pmatrix} = \begin{pmatrix} 5 + 2 - 2 - 1 \\ 1 + 10 - 2 - 1 \\ 1 + 2 - 10 - 1 \\ 1 + 2 - 2 - 5 \end{pmatrix} = \begin{pmatrix} 4 \\ 8 \\ -8 \\ -4 \end{pmatrix} = 4 \vec v \]

Since \(A \vec v = 4 \vec v\), the vector \(\vec v\) is an eigenvector with eigenvalue \(4\).

Problem #032

Tags: linear algebra, quiz-03, eigenvalues, eigenvectors, lecture-04

Let \(A = \begin{pmatrix} 1 & 6 \\ 6 & 6 \end{pmatrix}\) and let \(\vec v = (3, -2)^T\).

True or False: \(\vec v\) is an eigenvector of \(A\). If true, what is the corresponding eigenvalue?

Solution

True, with eigenvalue \(\lambda = -3\).

We compute:

\[ A \vec v = \begin{pmatrix} 1 & 6 \\ 6 & 6 \end{pmatrix}\begin{pmatrix} 3 \\ -2 \end{pmatrix} = \begin{pmatrix} 3 - 12 \\ 18 - 12 \end{pmatrix} = \begin{pmatrix} -9 \\ 6 \end{pmatrix} = -3 \begin{pmatrix} 3 \\ -2 \end{pmatrix} = -3 \vec v \]

Since \(A \vec v = -3 \vec v\), the vector \(\vec v\) is an eigenvector with eigenvalue \(-3\).

Problem #033

Tags: linear algebra, quiz-03, eigenvalues, eigenvectors, lecture-04

Let \(\vec v\) be a unit vector in \(\mathbb R^d\), and let \(I\) be the \(d \times d\) identity matrix. Consider the matrix \(P\) defined as:

\[ P = I - 2 \vec v \vec v^T \]

True or False: \(\vec v\) is an eigenvector of \(P\).

Solution

True.

To verify this, we compute \(P \vec v\):

$$\begin{align*} P \vec v &= (I - 2 \vec v \vec v^T) \vec v \\&= I \vec v - 2 \vec v \vec v^T \vec v \\&= \vec v - 2 \vec v (\vec v^T \vec v) \\&= \vec v - 2 \vec v (1) \quad\text{($\vec v^T \vec v = \|\vec v\|^2$, and $\vec v$ is a unit vector)}\\&= \vec v - 2 \vec v \\&= -\vec v \end{align*}$$

Since \(P \vec v = -\vec v = (-1) \vec v\), we see that \(\vec v\) is an eigenvector of \(P\) with eigenvalue \(-1\).

Note: The matrix \(P = I - 2 \vec v \vec v^T\) is called a Householder reflection matrix, which reflects vectors across the hyperplane orthogonal to \(\vec v\).
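The identity \(P\vec v = -\vec v\) holds for any unit vector, which we can check numerically. A small sketch (the random vector and dimension are arbitrary choices, not from the problem):

```python
import numpy as np

# Any unit vector works for this identity; draw one at random in R^4
rng = np.random.default_rng(0)
v = rng.standard_normal(4)
v = v / np.linalg.norm(v)

# Householder reflection matrix P = I - 2 v v^T
P = np.eye(4) - 2 * np.outer(v, v)

# v is reflected to -v: it is an eigenvector with eigenvalue -1
assert np.allclose(P @ v, -v)
# reflections are also orthogonal matrices
assert np.allclose(P @ P.T, np.eye(4))
```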

Problem #034

Tags: linear algebra, quiz-03, eigenvalues, eigenvectors, lecture-04

Let \(A\) be the matrix:

\[ A = \begin{pmatrix} 5 & 2 \\ 2 & 2 \end{pmatrix}\]

It can be verified that the vector \(\vec{u}^{(1)} = (2, 1)^T\) is an eigenvector of \(A\).

Part 1)

What is the eigenvalue associated with \(\vec{u}^{(1)}\)?

Solution

The eigenvalue is \(\lambda_1 = 6\).

To find the eigenvalue, we compute \(A \vec{u}^{(1)}\):

$$\begin{align*} A \vec{u}^{(1)}&= \begin{pmatrix} 5 & 2 \\ 2 & 2 \end{pmatrix}\begin{pmatrix} 2 \\ 1 \end{pmatrix}\\&= \begin{pmatrix} 5 \cdot 2 + 2 \cdot 1 \\ 2 \cdot 2 + 2 \cdot 1 \end{pmatrix}\\&= \begin{pmatrix} 12 \\ 6 \end{pmatrix}\\&= 6 \begin{pmatrix} 2 \\ 1 \end{pmatrix} = 6 \vec{u}^{(1)}\end{align*}$$

Therefore, \(\lambda_1 = 6\).

Part 2)

Find another eigenvector \(\vec{u}^{(2)}\) of \(A\). Your eigenvector should have an eigenvalue that is different from \(\vec{u}^{(1)}\)'s eigenvalue. It does not need to be normalized.

Solution

\(\vec{u}^{(2)} = (1, -2)^T\) (or any scalar multiple).

Since \(A\) is symmetric, we know that we can always find two orthogonal eigenvectors. This suggests that we should find a vector orthogonal to \(\vec{u}^{(1)} = (2, 1)^T\) and make sure that it is indeed an eigenvector.

We know from the math review in Week 01 that the vector \((1, -2)^T\) is orthogonal to \((2, 1)^T\) (in general, \((a, b)^T\) is orthogonal to \((-b, a)^T\)).

We can verify:

\[ A \vec{u}^{(2)} = \begin{pmatrix} 5 & 2 \\ 2 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ -2 \end{pmatrix} = \begin{pmatrix} 5 - 4 \\ 2 - 4 \end{pmatrix} = \begin{pmatrix} 1 \\ -2 \end{pmatrix} = 1 \cdot\vec{u}^{(2)}\]

So the eigenvalue is \(\lambda_2 = 1\), which is indeed different from \(\lambda_1 = 6\).
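When no candidate eigenvector is given, `np.linalg.eigh` finds all eigenpairs of a symmetric matrix directly. A sketch for this problem's matrix (eigenvalues come back in ascending order, eigenvectors as columns):

```python
import numpy as np

A = np.array([[5.0, 2.0],
              [2.0, 2.0]])

# eigh is the routine for symmetric (Hermitian) matrices;
# it returns eigenvalues in ascending order
vals, vecs = np.linalg.eigh(A)

# the column paired with eigenvalue 1 is a unit vector parallel to (1, -2)^T
u2 = vecs[:, 0]
assert np.allclose(A @ u2, vals[0] * u2)
```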

Problem #035

Tags: linear algebra, quiz-03, eigenvalues, eigenvectors, lecture-04

Let \(D\) be the diagonal matrix:

\[ D = \begin{pmatrix} 2 & 0 & 0 \\ 0 & -5 & 0 \\ 0 & 0 & 7 \end{pmatrix}\]

Part 1)

What is the top eigenvalue of \(D\)? What eigenvector corresponds to this eigenvalue?

Solution

The top eigenvalue is \(7\).

For a diagonal matrix, the eigenvalues are exactly the diagonal entries. The diagonal entries are \(2, -5, 7\), and the largest is \(7\).

An eigenvector corresponding to this eigenvalue is \(\vec v = (0, 0, 1)^T\).

Part 2)

What is the bottom (smallest) eigenvalue of \(D\)? What eigenvector corresponds to this eigenvalue?

Solution

The bottom eigenvalue is \(-5\).

An eigenvector corresponding to this eigenvalue is \(\vec w = (0, 1, 0)^T\).

Problem #036

Tags: linear algebra, quiz-03, eigenvalues, eigenvectors, lecture-04, linear transformations

Consider the linear transformation \(\vec f : \mathbb{R}^2 \to\mathbb{R}^2\) that reflects vectors over the line \(y = -x\).

Find two orthogonal eigenvectors of this transformation and their corresponding eigenvalues.

Solution

The eigenvectors are \((1, -1)^T\) with \(\lambda = 1\), and \((1, 1)^T\) with \(\lambda = -1\).

Geometrically, vectors along the line \(y = -x\) (i.e., multiples of \((1, -1)^T\)) are unchanged by reflection over that line, so they have eigenvalue \(1\). Vectors perpendicular to the line \(y = -x\) (i.e., multiples of \((1, 1)^T\)) are flipped to point in the opposite direction, so they have eigenvalue \(-1\).

Problem #037

Tags: linear algebra, quiz-03, eigenvalues, eigenvectors, lecture-04, linear transformations

Consider the linear transformation \(\vec f : \mathbb{R}^2 \to\mathbb{R}^2\) that scales vectors along the line \(y = x\) by a factor of 2, and scales vectors along the line \(y = -x\) by a factor of 3.

Find two orthogonal eigenvectors of this transformation and their corresponding eigenvalues.

Solution

The eigenvectors are \((1, 1)^T\) with \(\lambda = 2\), and \((1, -1)^T\) with \(\lambda = 3\).

Geometrically, vectors along the line \(y = x\) (i.e., multiples of \((1, 1)^T\)) are scaled by a factor of 2, so they have eigenvalue \(2\). Vectors along the line \(y = -x\) (i.e., multiples of \((1, -1)^T\)) are scaled by a factor of 3, so they have eigenvalue \(3\).

Problem #038

Tags: linear algebra, quiz-03, eigenvalues, eigenvectors, lecture-04, linear transformations

Consider the linear transformation \(\vec f : \mathbb{R}^2 \to\mathbb{R}^2\) that rotates vectors by \(180°\).

Find two orthogonal eigenvectors of this transformation and their corresponding eigenvalues.

Solution

Any pair of orthogonal vectors works, such as \((1, 0)^T\) and \((0, 1)^T\). Both have eigenvalue \(\lambda = -1\).

Geometrically, rotating any vector by \(180°\) reverses its direction, so \(\vec f(\vec v) = -\vec v\) for all \(\vec v\). This means every nonzero vector is an eigenvector with eigenvalue \(-1\).

Since we need two orthogonal eigenvectors, any orthogonal pair will do.

Problem #039

Tags: linear algebra, quiz-03, eigenvalues, eigenvectors, lecture-04

Consider the diagonal matrix:

\[ A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 5 \end{pmatrix}\]

How many unit length eigenvectors does \(A\) have?

Solution

There are infinitely many.

The upper-left \(2 \times 2\) block of \(A\) is the identity matrix. Recall from lecture that the identity matrix has infinitely many eigenvectors: every nonzero vector is an eigenvector of the identity with eigenvalue 1.

Similarly, any vector of the form \((a, b, 0)^T\) where \(a\) and \(b\) are not both zero satisfies:

\[ A \begin{pmatrix} a \\ b \\ 0 \end{pmatrix} = \begin{pmatrix} a \\ b \\ 0 \end{pmatrix} = 1 \cdot\begin{pmatrix} a \\ b \\ 0 \end{pmatrix}\]

So any such vector is an eigenvector with eigenvalue 1. There are infinitely many unit vectors of this form (they form a circle in the \(x\)-\(y\) plane), so \(A\) has infinitely many unit length eigenvectors.

Additionally, \((0, 0, 1)^T\) is an eigenvector with eigenvalue 5.

Problem #040

Tags: linear algebra, quiz-03, spectral theorem, eigenvectors, lecture-04

Consider the matrix:

\[ A = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}\]

True or False: The spectral theorem guarantees that \(A\) has 2 orthogonal eigenvectors.

Solution

False.

The spectral theorem only applies to symmetric matrices. The matrix \(A\) is not symmetric because \(A^T \neq A\):

\[ A^T = \begin{pmatrix} 1 & 0 \\ 2 & 3 \end{pmatrix}\neq\begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix} = A \]

Since \(A\) is not symmetric, the spectral theorem does not apply, and we cannot use it to conclude anything about \(A\)'s eigenvectors.

Problem #041

Tags: linear algebra, quiz-03, spectral theorem, eigenvectors, lecture-04

Suppose \(A\) is a symmetric matrix, and \(\vec{u}^{(1)}\) and \(\vec{u}^{(2)}\) are both eigenvectors of \(A\).

True or False: \(\vec{u}^{(1)}\) and \(\vec{u}^{(2)}\) must be orthogonal.

Solution

False.

Consider the identity matrix \(I\). Every nonzero vector is an eigenvector of \(I\) with eigenvalue 1. For example, both \((1, 0)^T\) and \((1, 1)^T\) are eigenvectors of \(I\), but they are not orthogonal:

\[(1, 0) \cdot(1, 1) = 1 \neq 0 \]

The spectral theorem says that you can find \(n\) orthogonal eigenvectors for an \(n \times n\) symmetric matrix, but it does not say that every pair of eigenvectors is orthogonal.

Aside: It can be shown that if \(\vec{u}^{(1)}\) and \(\vec{u}^{(2)}\) have different eigenvalues, then they must be orthogonal.

Problem #042

Tags: linear algebra, quiz-03, spectral theorem, eigenvectors, diagonalization, lecture-04

Suppose \(A\) is a \(d \times d\) symmetric matrix.

True or False: There exists an orthonormal basis in which \(A\) is diagonal.

Solution

True.

By the spectral theorem, every \(d \times d\) symmetric matrix has \(d\) mutually orthogonal eigenvectors. If we normalize these eigenvectors, they form an orthonormal basis.

In this eigenbasis, the matrix \(A\) is diagonal: the diagonal entries are the eigenvalues of \(A\).
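This change of basis can be checked numerically: if the columns of \(Q\) are orthonormal eigenvectors of \(A\), then \(Q^T A Q\) is diagonal. A sketch using an arbitrary symmetric matrix (the particular matrix is an assumption for illustration):

```python
import numpy as np

# any symmetric matrix illustrates the claim
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, Q = np.linalg.eigh(A)   # columns of Q form an orthonormal eigenbasis

# expressed in the eigenbasis, A is diagonal with the eigenvalues on the diagonal
assert np.allclose(Q.T @ A @ Q, np.diag(vals))
```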

Problem #043

Tags: linear algebra, quiz-03, symmetric matrices, eigenvectors, lecture-04, linear transformations

The figure below shows a linear transformation \(\vec{f}\) applied to points on the unit circle. Each arrow shows the direction and relative magnitude of \(\vec{f}(\vec{x})\) for a point \(\vec{x}\) on the circle.

True or False: The linear transformation \(\vec{f}\) is symmetric.

Solution

False.

Recall from lecture that symmetric linear transformations have orthogonal axes of symmetry. In the visualization, this would appear as two perpendicular directions where the arrows point directly outward (or inward) from the circle.

In this figure, there are no such orthogonal axes of symmetry. The pattern of arrows does not exhibit the characteristic symmetry of a symmetric transformation.

Problem #044

Tags: linear algebra, quiz-03, diagonal matrices, eigenvectors, lecture-04, linear transformations

The figure below shows a linear transformation \(\vec{f}\) applied to points on the unit circle. Each arrow shows the direction and relative magnitude of \(\vec{f}(\vec{x})\) for a point \(\vec{x}\) on the circle.

True or False: The matrix representing \(\vec{f}\) with respect to the standard basis is diagonal.

Solution

True.

A matrix is diagonal if and only if the standard basis vectors are eigenvectors. In the visualization, eigenvectors correspond to directions where the arrows point radially (directly outward or inward).

Looking at the figure, the arrows at \((1, 0)\) and \((-1, 0)\) point horizontally, and the arrows at \((0, 1)\) and \((0, -1)\) point vertically. This means the standard basis vectors \(\hat e^{(1)} = (1, 0)^T\) and \(\hat e^{(2)} = (0, 1)^T\) are eigenvectors.

Since the standard basis vectors are eigenvectors, the matrix is diagonal in the standard basis.

Problem #045

Tags: linear algebra, eigenbasis, quiz-03, eigenvalues, eigenvectors, lecture-04

Let \(\vec{u}^{(1)}, \vec{u}^{(2)}, \vec{u}^{(3)}\) be three unit length orthonormal eigenvectors of a linear transformation \(\vec{f}\), with eigenvalues \(8\), \(4\), and \(-3\) respectively.

Suppose a vector \(\vec{x}\) can be written as:

\[\vec{x} = 2\vec{u}^{(1)} - 3\vec{u}^{(2)} + \vec{u}^{(3)}\]

What is \(\vec{f}(\vec{x})\), expressed in coordinates with respect to the eigenbasis \(\{\vec{u}^{(1)}, \vec{u}^{(2)}, \vec{u}^{(3)}\}\)?

Solution

\([\vec{f}(\vec{x})]_{\mathcal{U}} = (16, -12, -3)^T\).

Using linearity and the eigenvector property:

$$\begin{align*}\vec{f}(\vec{x}) &= \vec{f}(2\vec{u}^{(1)} - 3\vec{u}^{(2)} + \vec{u}^{(3)}) \\&= 2\vec{f}(\vec{u}^{(1)}) - 3\vec{f}(\vec{u}^{(2)}) + \vec{f}(\vec{u}^{(3)}) \\\end{align*}$$

Since \(\vec{u}^{(1)}\) is an eigenvector with eigenvalue \(8\), we have \(\vec{f}(\vec{u}^{(1)}) = 8\vec{u}^{(1)}\). Similarly, \(\vec{f}(\vec{u}^{(2)}) = 4\vec{u}^{(2)}\) and \(\vec{f}(\vec{u}^{(3)}) = -3\vec{u}^{(3)}\):

$$\begin{align*}&= 2(8\vec{u}^{(1)}) - 3(4\vec{u}^{(2)}) + (-3)\vec{u}^{(3)}\\&= 16\vec{u}^{(1)} - 12\vec{u}^{(2)} - 3\vec{u}^{(3)}\end{align*}$$

In the eigenbasis, this is simply \((16, -12, -3)^T\).
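In the eigenbasis, applying \(\vec f\) reduces to elementwise multiplication of the coordinates by the eigenvalues. A one-line NumPy sketch of this problem's computation:

```python
import numpy as np

# coordinates of x in the eigenbasis, and the eigenvalues, from Problem #045
x_coords = np.array([2.0, -3.0, 1.0])
eigenvalues = np.array([8.0, 4.0, -3.0])

# in the eigenbasis, f just scales each coordinate by its eigenvalue
fx_coords = eigenvalues * x_coords
print(fx_coords)   # the coordinates (16, -12, -3)
```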

Problem #046

Tags: linear algebra, eigenbasis, quiz-03, eigenvalues, eigenvectors, lecture-04

Let \(\vec{u}^{(1)}, \vec{u}^{(2)}, \vec{u}^{(3)}\) be three unit length orthonormal eigenvectors of a linear transformation \(\vec{f}\), with eigenvalues \(5\), \(-2\), and \(3\) respectively.

Suppose a vector \(\vec{x}\) can be written as:

\[\vec{x} = 4\vec{u}^{(1)} + \vec{u}^{(2)} - 2\vec{u}^{(3)}\]

What is \(\vec{f}(\vec{x})\), expressed in coordinates with respect to the eigenbasis \(\{\vec{u}^{(1)}, \vec{u}^{(2)}, \vec{u}^{(3)}\}\)?

Solution

\([\vec{f}(\vec{x})]_{\mathcal{U}} = (20, -2, -6)^T\).

Using linearity and the eigenvector property:

$$\begin{align*}\vec{f}(\vec{x}) &= \vec{f}(4\vec{u}^{(1)} + \vec{u}^{(2)} - 2\vec{u}^{(3)}) \\&= 4\vec{f}(\vec{u}^{(1)}) + \vec{f}(\vec{u}^{(2)}) - 2\vec{f}(\vec{u}^{(3)}) \\\end{align*}$$

Since \(\vec{u}^{(1)}\) is an eigenvector with eigenvalue \(5\), we have \(\vec{f}(\vec{u}^{(1)}) = 5\vec{u}^{(1)}\). Similarly, \(\vec{f}(\vec{u}^{(2)}) = -2\vec{u}^{(2)}\) and \(\vec{f}(\vec{u}^{(3)}) = 3\vec{u}^{(3)}\):

$$\begin{align*}&= 4(5\vec{u}^{(1)}) + (-2)\vec{u}^{(2)} - 2(3\vec{u}^{(3)}) \\&= 20\vec{u}^{(1)} - 2\vec{u}^{(2)} - 6\vec{u}^{(3)}\end{align*}$$

In the eigenbasis, this is simply \((20, -2, -6)^T\).

Problem #047

Tags: optimization, linear algebra, quiz-03, eigenvalues, eigenvectors, lecture-04

Let \(A\) be a symmetric matrix with eigenvalues \(6\) and \(-9\).

True or False: The maximum value of \(\|A\vec{x}\|\) over all unit vectors \(\vec{x}\) is \(6\).

Solution

False. The maximum is \(9\).

The maximum of \(\|A\vec{x}\|\) over unit vectors is achieved when \(\vec{x}\) is an eigenvector corresponding to the eigenvalue with the largest absolute value.

Here, \(|-9| = 9 > 6 = |6|\), so the maximum is achieved at the eigenvector with eigenvalue \(-9\).

If \(\vec{u}\) is a unit eigenvector with eigenvalue \(-9\), then:

\[\|A\vec{u}\| = \|(-9)\vec{u}\| = |-9| \cdot\|\vec{u}\| = 9 \cdot 1 = 9 \]
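The claim that the maximum stretch equals the largest absolute eigenvalue can be sanity-checked with any symmetric matrix having eigenvalues \(6\) and \(-9\); a diagonal matrix is the simplest such example (an assumption for the sketch, not the problem's matrix):

```python
import numpy as np

# a simple symmetric matrix with eigenvalues 6 and -9
A = np.diag([6.0, -9.0])

# for a symmetric matrix, max ||A x|| over unit vectors x is the
# spectral norm, i.e. the largest |eigenvalue|
max_stretch = np.linalg.norm(A, ord=2)
assert np.isclose(max_stretch, 9.0)
```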

Problem #048

Tags: optimization, linear algebra, quiz-03, eigenvalues, eigenvectors, quadratic forms, lecture-04

Let \(A\) be a \(3 \times 3\) symmetric matrix with the following eigenvectors and corresponding eigenvalues: \(\vec{u}^{(1)} = \frac{1}{3}(1, 2, 2)^T\) has eigenvalue \(4\), \(\vec{u}^{(2)} = \frac{1}{3}(2, 1, -2)^T\) has eigenvalue \(1\), and \(\vec{u}^{(3)} = \frac{1}{3}(2, -2, 1)^T\) has eigenvalue \(-10\).

Consider the quadratic form \(\vec{x}\cdot A\vec{x}\).

Part 1)

What unit vector \(\vec{x}\) maximizes \(\vec{x}\cdot A\vec{x}\)?

Solution

\(\vec{u}^{(1)} = \frac{1}{3}(1, 2, 2)^T\).

The quadratic form \(\vec{x}\cdot A\vec{x}\) is maximized by the eigenvector with the largest eigenvalue. Among \(4\), \(1\), and \(-10\), the largest is \(4\), so the maximizer is \(\vec{u}^{(1)}\).

Part 2)

What is the maximum value of \(\vec{x}\cdot A\vec{x}\) over all unit vectors?

Solution

\(4\).

The maximum value equals the largest eigenvalue. We can verify:

\[\vec{u}^{(1)}\cdot A\vec{u}^{(1)} = \vec{u}^{(1)}\cdot(4\vec{u}^{(1)}) = 4(\vec{u}^{(1)}\cdot\vec{u}^{(1)}) = 4 \cdot 1 = 4 \]

Part 3)

What unit vector \(\vec{x}\) minimizes \(\vec{x}\cdot A\vec{x}\)?

Solution

\(\vec{u}^{(3)} = \frac{1}{3}(2, -2, 1)^T\).

The quadratic form is minimized by the eigenvector with the smallest eigenvalue. Among \(4\), \(1\), and \(-10\), the smallest is \(-10\), so the minimizer is \(\vec{u}^{(3)}\).

Part 4)

What is the minimum value of \(\vec{x}\cdot A\vec{x}\) over all unit vectors?

Solution

\(-10\).

The minimum value equals the smallest eigenvalue. We can verify:

\[\vec{u}^{(3)}\cdot A\vec{u}^{(3)} = \vec{u}^{(3)}\cdot(-10\vec{u}^{(3)}) = -10(\vec{u}^{(3)}\cdot\vec{u}^{(3)}) = -10 \cdot 1 = -10 \]
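Since the eigenvectors and eigenvalues are given, we can reconstruct \(A\) and check all four parts numerically. A sketch, not part of the original solution:

```python
import numpy as np

# eigenvectors (as rows of U) and eigenvalues from Problem #048
U = np.array([[1.0, 2.0, 2.0],
              [2.0, 1.0, -2.0],
              [2.0, -2.0, 1.0]]) / 3.0
lam = np.array([4.0, 1.0, -10.0])

# reconstruct the symmetric matrix from its spectral decomposition
A = U.T @ np.diag(lam) @ U

u1, u3 = U[0], U[2]
assert np.isclose(u1 @ A @ u1, 4.0)     # max of the quadratic form over unit vectors
assert np.isclose(u3 @ A @ u3, -10.0)   # min of the quadratic form over unit vectors
```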

Problem #049

Tags: linear algebra, quiz-03, eigenvalues, eigenvectors, lecture-04

Let \(A\) be a symmetric matrix with top eigenvalue \(\lambda\). Let \(B = A + 5I\), where \(I\) is the identity matrix.

True or False: The top eigenvalue of \(B\) must be \(\lambda + 5\).

Solution

True.

The main thing to realize is that \(A\) and \(B\) have the same eigenvectors. If \(\vec v\) is an eigenvector of \(A\) with eigenvalue \(\lambda\), then:

$$\begin{align*} B \vec v &= (A + 5I) \vec v \\&= A \vec v + 5I \vec v \\&= \lambda\vec v + 5 \vec v \\&= (\lambda + 5) \vec v \end{align*}$$

Thus, \(\vec v\) is also an eigenvector of \(B\) with eigenvalue \(\lambda + 5\).

This means that the eigenvalues of \(B\) are simply the eigenvalues of \(A\) shifted by 5. Therefore, if \(\lambda\) is the top eigenvalue of \(A\), then \(\lambda + 5\) is the top eigenvalue of \(B\).
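The shift-by-\(5I\) fact is easy to confirm numerically for any symmetric matrix (the example matrix below is an arbitrary choice):

```python
import numpy as np

# any symmetric matrix illustrates the shift
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
B = A + 5 * np.eye(2)

# eigvalsh returns eigenvalues in ascending order, so adding 5 to each
# eigenvalue of A lines up entrywise with the eigenvalues of B
assert np.allclose(np.linalg.eigvalsh(B), np.linalg.eigvalsh(A) + 5)
```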

Problem #053

Tags: linear algebra, quiz-03, lecture-05, eigenvalues, eigenvectors, diagonalization

Let \(\vec f\) be a linear transformation with eigenvectors and eigenvalues:

$$\begin{align*}\hat{u}^{(1)}&= \frac{1}{\sqrt 2}(1, 1)^T & \lambda_1 &= 2\\\hat{u}^{(2)}&= \frac{1}{\sqrt 2}(-1, 1)^T & \lambda_2 &= -1 \end{align*}$$

What is \(\vec f(\vec x)\) for \(\vec x = (3, 1)^T\)?

Solution

\(\vec f(\vec x) = (3, 5)^T\).

We'll take the three-step approach of 1) finding the coordinates of \(\vec x\) in the eigenbasis, 2) applying the transformation in that basis (where the matrix of the transformation \(A_\mathcal{U}\) is diagonal and consists of the eigenvalues), and 3) converting back to the standard basis.

We saw in lecture that we can do this all in one go using the change of basis matrix \(U\):

\[\vec f(\vec x) = U^T A_\mathcal{U} U \vec x \]

where \(U\) is the change of basis matrix with the eigenvectors as rows and \(A_\mathcal{U}\) is the diagonal matrix with the eigenvalues on the diagonal. In this case, they are:

\[ U = \begin{pmatrix} \frac{1}{\sqrt 2} & \frac{1}{\sqrt 2}\\[0.5em] \frac{-1}{\sqrt 2} & \frac{1}{\sqrt 2} \end{pmatrix}\qquad A_\mathcal{U} = \begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix}\]

Then:

$$\begin{align*}\vec f(\vec x) &= U^T A_\mathcal{U} U \vec x \\&= \begin{pmatrix} \frac{1}{\sqrt 2} & \frac{-1}{\sqrt 2}\\[0.5em] \frac{1}{\sqrt 2} & \frac{1}{\sqrt 2} \end{pmatrix}\begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} \frac{1}{\sqrt 2} & \frac{1}{\sqrt 2}\\[0.5em] \frac{-1}{\sqrt 2} & \frac{1}{\sqrt 2} \end{pmatrix}\begin{pmatrix} 3 \\ 1 \end{pmatrix}\\&= \begin{pmatrix} 3 \\ 5 \end{pmatrix}\end{align*}$$
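The product \(U^T A_\mathcal{U} U \vec x\) can be evaluated directly in NumPy as a check on the arithmetic:

```python
import numpy as np

# change-of-basis matrix U (eigenvectors as rows) and diagonal eigenvalue matrix
U = np.array([[1.0, 1.0],
              [-1.0, 1.0]]) / np.sqrt(2)
A_u = np.diag([2.0, -1.0])
x = np.array([3.0, 1.0])

fx = U.T @ A_u @ U @ x
print(fx)   # approximately (3, 5)
```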

Problem #054

Tags: linear algebra, quiz-03, lecture-05, eigenvalues, eigenvectors, diagonalization

Let \(\vec f\) be a linear transformation with eigenvectors and eigenvalues:

$$\begin{align*}\hat{u}^{(1)}&= \frac{1}{\sqrt{2}}(1, 1, 0)^T & \lambda_1 &= 3\\\hat{u}^{(2)}&= \frac{1}{\sqrt{2}}(1, -1, 0)^T & \lambda_2 &= 2\\\hat{u}^{(3)}&= (0, 0, 1)^T & \lambda_3 &= 1 \end{align*}$$

What is \(\vec f(\vec x)\) for \(\vec x = (1, 1, 2)^T\)?

Solution

\(\vec f(\vec x) = (3, 3, 2)^T\).

We'll take the three-step approach of 1) finding the coordinates of \(\vec x\) in the eigenbasis, 2) applying the transformation in that basis (where the matrix of the transformation \(A_\mathcal{U}\) is diagonal and consists of the eigenvalues), and 3) converting back to the standard basis.

We saw in lecture that we can do this all in one go using the change of basis matrix \(U\):

\[\vec f(\vec x) = U^T A_\mathcal{U} U \vec x \]

where \(U\) is the change of basis matrix with the eigenvectors as rows and \(A_\mathcal{U}\) is the diagonal matrix with the eigenvalues on the diagonal. In this case, they are:

\[ U = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0\\[0.5em] \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} & 0\\[0.5em] 0 & 0 & 1 \end{pmatrix}\qquad A_\mathcal{U} = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}\]

Then:

$$\begin{align*}\vec f(\vec x) &= U^T A_\mathcal{U} U \vec x \\&= \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0\\[0.5em] \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} & 0\\[0.5em] 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0\\[0.5em] \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} & 0\\[0.5em] 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}\\&= \begin{pmatrix} 3 \\ 3 \\ 2 \end{pmatrix}\end{align*}$$

Problem #055

Tags: linear algebra, quiz-03, lecture-05, eigenvalues, eigenvectors, diagonalization

Let \(\vec f\) be a linear transformation with eigenvectors and eigenvalues:

$$\begin{align*}\hat{u}^{(1)}&= \frac{1}{2}(1, 1, 1, 1)^T & \lambda_1 &= 4\\\hat{u}^{(2)}&= \frac{1}{2}(1, 1, -1, -1)^T & \lambda_2 &= 2\\\hat{u}^{(3)}&= \frac{1}{2}(1, -1, 1, -1)^T & \lambda_3 &= 1\\\hat{u}^{(4)}&= \frac{1}{2}(1, -1, -1, 1)^T & \lambda_4 &= 0 \end{align*}$$

What is \(\vec f(\vec x)\) for \(\vec x = (4, 0, 0, 0)^T\)?

Solution

\(\vec f(\vec x) = (7, 5, 3, 1)^T\).

We'll take the three-step approach of 1) finding the coordinates of \(\vec x\) in the eigenbasis, 2) applying the transformation in that basis (where the matrix of the transformation \(A_\mathcal{U}\) is diagonal and consists of the eigenvalues), and 3) converting back to the standard basis.

We saw in lecture that we can do this all in one go using the change of basis matrix \(U\):

\[\vec f(\vec x) = U^T A_\mathcal{U} U \vec x \]

where \(U\) is the change of basis matrix with the eigenvectors as rows and \(A_\mathcal{U}\) is the diagonal matrix with the eigenvalues on the diagonal. In this case, they are:

\[ U = \frac{1}{2}\begin{pmatrix} 1 & 1 & 1 & 1\\ 1 & 1 & -1 & -1\\ 1 & -1 & 1 & -1\\ 1 & -1 & -1 & 1 \end{pmatrix}\qquad A_\mathcal{U} = \begin{pmatrix} 4 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}\]

Then:

$$\begin{align*}\vec f(\vec x) &= U^T A_\mathcal{U} U \vec x \\&= \frac{1}{2}\begin{pmatrix} 1 & 1 & 1 & 1\\ 1 & 1 & -1 & -1\\ 1 & -1 & 1 & -1\\ 1 & -1 & -1 & 1 \end{pmatrix}\begin{pmatrix} 4 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}\frac{1}{2}\begin{pmatrix} 1 & 1 & 1 & 1\\ 1 & 1 & -1 & -1\\ 1 & -1 & 1 & -1\\ 1 & -1 & -1 & 1 \end{pmatrix}\begin{pmatrix} 4 \\ 0 \\ 0 \\ 0 \end{pmatrix}\\&= \begin{pmatrix} 7 \\ 5 \\ 3 \\ 1 \end{pmatrix}\end{align*}$$

Problem #056

Tags: linear algebra, quiz-03, symmetric matrices, spectral theorem, eigenvalues, eigenvectors, lecture-05

Find a symmetric \(2 \times 2\) matrix \(A\) whose eigenvectors are \(\hat{u}^{(1)} = \frac{1}{\sqrt{5}}(1, 2)^T\) with eigenvalue \(6\) and \(\hat{u}^{(2)} = \frac{1}{\sqrt{5}}(2, -1)^T\) with eigenvalue \(1\).

Solution

\(A = \begin{pmatrix} 2 & 2 \\ 2 & 5 \end{pmatrix}\).

We saw in lecture that a symmetric matrix \(A\) can be written as \(A = U^T \Lambda U\), where \(U\) is the matrix whose rows are the eigenvectors of \(A\) and \(\Lambda\) is the diagonal matrix of eigenvalues.

This means that if we know the eigenvectors and eigenvalues of a symmetric matrix, we can "work backwards" to find the matrix itself.

In this case:

\[ U = \frac{1}{\sqrt{5}}\begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix}, \quad\Lambda = \begin{pmatrix} 6 & 0 \\ 0 & 1 \end{pmatrix}\]

Therefore:

$$\begin{align*} A &= U^T \Lambda U \\[0.5em]&= \frac{1}{\sqrt{5}}\begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix}\begin{pmatrix} 6 & 0 \\ 0 & 1 \end{pmatrix}\frac{1}{\sqrt{5}}\begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix}\\[0.5em]&= \frac{1}{5}\begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix}\begin{pmatrix} 6 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix}\\[0.5em]&= \frac{1}{5}\begin{pmatrix} 6 & 2 \\ 12 & -1 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix}\\[0.5em]&= \frac{1}{5}\begin{pmatrix} 6 + 4 & 12 - 2 \\ 12 - 2 & 24 + 1 \end{pmatrix}\\[0.5em]&= \frac{1}{5}\begin{pmatrix} 10 & 10 \\ 10 & 25 \end{pmatrix}\\[0.5em]&= \begin{pmatrix} 2 & 2 \\ 2 & 5 \end{pmatrix}\end{align*}$$
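The "work backwards" step is a one-line matrix product in NumPy, which makes for a quick check of the hand computation:

```python
import numpy as np

# eigenvectors as rows of U, eigenvalues on the diagonal of Lambda (Problem #056)
U = np.array([[1.0, 2.0],
              [2.0, -1.0]]) / np.sqrt(5)
Lam = np.diag([6.0, 1.0])

# reconstruct A from its spectral decomposition
A = U.T @ Lam @ U
assert np.allclose(A, [[2.0, 2.0], [2.0, 5.0]])
```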

Problem #057

Tags: linear algebra, quiz-03, symmetric matrices, spectral theorem, eigenvalues, eigenvectors, lecture-05

Find a symmetric \(3 \times 3\) matrix \(A\) whose eigenvectors are \(\hat{u}^{(1)} = \frac{1}{3}(1, 2, 2)^T\) with eigenvalue \(9\), \(\hat{u}^{(2)} = \frac{1}{3}(2, 1, -2)^T\) with eigenvalue \(18\), and \(\hat{u}^{(3)} = \frac{1}{3}(2, -2, 1)^T\) with eigenvalue \(-9\).

Solution

\(A = \begin{pmatrix} 5 & 10 & -8 \\ 10 & 2 & 2 \\ -8 & 2 & 11 \end{pmatrix}\).

We saw in lecture that a symmetric matrix \(A\) can be written as \(A = U^T \Lambda U\), where \(U\) is the matrix whose rows are the eigenvectors of \(A\) and \(\Lambda\) is the diagonal matrix of eigenvalues.

In this case:

\[ U = \frac{1}{3}\begin{pmatrix} 1 & 2 & 2 \\ 2 & 1 & -2 \\ 2 & -2 & 1 \end{pmatrix}, \quad\Lambda = \begin{pmatrix} 9 & 0 & 0 \\ 0 & 18 & 0 \\ 0 & 0 & -9 \end{pmatrix}\]

Therefore:

$$\begin{align*} A &= U^T \Lambda U \\[0.5em]&= \frac{1}{3}\begin{pmatrix} 1 & 2 & 2 \\ 2 & 1 & -2 \\ 2 & -2 & 1 \end{pmatrix}\begin{pmatrix} 9 & 0 & 0 \\ 0 & 18 & 0 \\ 0 & 0 & -9 \end{pmatrix}\frac{1}{3}\begin{pmatrix} 1 & 2 & 2 \\ 2 & 1 & -2 \\ 2 & -2 & 1 \end{pmatrix}\\[0.5em]&= \frac{1}{9}\begin{pmatrix} 1 & 2 & 2 \\ 2 & 1 & -2 \\ 2 & -2 & 1 \end{pmatrix}\begin{pmatrix} 9 & 0 & 0 \\ 0 & 18 & 0 \\ 0 & 0 & -9 \end{pmatrix}\begin{pmatrix} 1 & 2 & 2 \\ 2 & 1 & -2 \\ 2 & -2 & 1 \end{pmatrix}\\[0.5em]&= \frac{1}{9}\begin{pmatrix} 9 & 36 & -18 \\ 18 & 18 & 18 \\ 18 & -36 & -9 \end{pmatrix}\begin{pmatrix} 1 & 2 & 2 \\ 2 & 1 & -2 \\ 2 & -2 & 1 \end{pmatrix}\\[0.5em]&= \frac{1}{9}\begin{pmatrix} 9 + 72 - 36 & 18 + 36 + 36 & 18 - 72 - 18 \\ 18 + 36 + 36 & 36 + 18 - 36 & 36 - 36 + 18 \\ 18 - 72 - 18 & 36 - 36 + 18 & 36 + 72 - 9 \end{pmatrix}\\[0.5em]&= \frac{1}{9}\begin{pmatrix} 45 & 90 & -72 \\ 90 & 18 & 18 \\ -72 & 18 & 99 \end{pmatrix}\\[0.5em]&= \begin{pmatrix} 5 & 10 & -8 \\ 10 & 2 & 2 \\ -8 & 2 & 11 \end{pmatrix}\end{align*}$$

Problem #068

Tags: covariance, eigenvectors, quiz-04, variance, lecture-06

Let \(C\) be the sample covariance matrix of a centered data set, and suppose \(\vec{u}^{(1)}, \vec{u}^{(2)}, \vec{u}^{(3)}\) are normalized eigenvectors of \(C\) with eigenvalues \(\lambda_1 = 9, \lambda_2 = 3, \lambda_3=0\), respectively.

Suppose \(\vec x = \frac{1}{\sqrt 2}\vec{u}^{(1)} + \frac{1}{\sqrt 6}\vec{u}^{(2)} + \frac{1}{\sqrt 3}\vec{u}^{(3)}\). What is the variance in the direction of \(\vec x\)?

Solution

\(5\).

Remember that for any unit vector \(\vec u\), the variance in the direction of \(\vec u\) is given by \(\vec u^T C \vec u\). (Note that \(\vec x\) is indeed a unit vector, since \(\frac{1}{2} + \frac{1}{6} + \frac{1}{3} = 1\).) So, for the vector \(\vec x\), we have

$$\begin{align*}\operatorname{Var}(\vec x) &= \vec x^T C \vec x \\&= \left(\frac{1}{\sqrt 2}\vec{u}^{(1)} + \frac{1}{\sqrt 6}\vec{u}^{(2)} + \frac{1}{\sqrt 3}\vec{u}^{(3)}\right)^T C \left(\frac{1}{\sqrt 2}\vec{u}^{(1)} + \frac{1}{\sqrt 6}\vec{u}^{(2)} + \frac{1}{\sqrt 3}\vec{u}^{(3)}\right)\end{align*}$$

Since \(\vec{u}^{(1)}, \vec{u}^{(2)}, \vec{u}^{(3)}\) are eigenvectors of \(C\), the matrix multiplication simplifies:

$$\begin{align*}\operatorname{Var}(\vec x) &= \left(\frac{1}{\sqrt 2}\vec{u}^{(1)} + \frac{1}{\sqrt 6}\vec{u}^{(2)} + \frac{1}{\sqrt 3}\vec{u}^{(3)}\right)^T \left(\frac{9}{\sqrt 2}\vec{u}^{(1)} + \frac{3}{\sqrt 6}\vec{u}^{(2)} + \frac{0}{\sqrt 3}\vec{u}^{(3)}\right)\\&= \frac{1}{2}\cdot 9 + \frac{1}{6}\cdot 3 + \frac{1}{3}\cdot 0 \\&= \frac{9}{2} + \frac{1}{2} + 0 \\&= 5 \end{align*}$$
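The arithmetic can be reproduced with a stand-in covariance matrix. Taking the eigenvectors to be the standard basis vectors (an assumption for the sketch; any orthonormal eigenbasis with these eigenvalues behaves identically):

```python
import numpy as np

# stand-in covariance matrix with eigenvalues 9, 3, 0 from Problem #068
C = np.diag([9.0, 3.0, 0.0])
x = np.array([1 / np.sqrt(2), 1 / np.sqrt(6), 1 / np.sqrt(3)])

assert np.isclose(np.linalg.norm(x), 1.0)   # x is a unit vector
variance = x @ C @ x
print(variance)   # approximately 5
```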

Problem #070

Tags: lecture-06, quiz-04, eigenvectors, pca

Consider the data set shown below:

Which of the following could possibly be the top eigenvector of the data's sample covariance matrix?

Solution

\((3, 1)^T\).

The top eigenvector of the covariance matrix points in the direction of greatest variance. Looking at the scatter plot, the data is elongated along a direction that has a positive slope, rising more in the \(x_1\) direction than the \(x_2\) direction. The vector \((3, 1)^T\) points roughly in this direction.

\((1, 0)^T\) and \((0, 1)^T\) are along the axes, which don't align with the data's elongation. \((1, 1)^T\) has too steep a slope compared to the apparent direction of the data.

Problem #073

Tags: covariance, eigenvectors, quiz-04, variance, lecture-06

Let \(C\) be the sample covariance matrix of a centered data set, and suppose \(\vec{u}^{(1)}, \vec{u}^{(2)}, \vec{u}^{(3)}\) are normalized eigenvectors of \(C\) with eigenvalues \(\lambda_1 = 16, \lambda_2 = 12, \lambda_3=0\), respectively.

Suppose \(\vec x = \frac{1}{\sqrt 2}\vec{u}^{(1)} + \frac{1}{\sqrt 6}\vec{u}^{(2)} + \frac{1}{\sqrt 3}\vec{u}^{(3)}\). What is the variance in the direction of \(\vec x\)?

Solution

\(10\) Remember that for any unit vector \(\vec u\), the variance in the direction of \(\vec u\) is given by \(\vec u^T C \vec u\). Note that \(\vec x\) is indeed a unit vector: since the \(\vec{u}^{(i)}\) are orthonormal, \(\|\vec x\|^2 = \frac{1}{2} + \frac{1}{6} + \frac{1}{3} = 1\). So, for the vector \(\vec x\), we have

$$\begin{align*}\operatorname{Var}(\vec x) &= \vec x^T C \vec x \\&= \left(\frac{1}{\sqrt 2}\vec{u}^{(1)} + \frac{1}{\sqrt 6}\vec{u}^{(2)} + \frac{1}{\sqrt 3}\vec{u}^{(3)}\right)^T C \left(\frac{1}{\sqrt 2}\vec{u}^{(1)} + \frac{1}{\sqrt 6}\vec{u}^{(2)} + \frac{1}{\sqrt 3}\vec{u}^{(3)}\right)\end{align*}$$

Since \(\vec{u}^{(1)}, \vec{u}^{(2)}, \vec{u}^{(3)}\) are eigenvectors of \(C\), applying \(C\) scales each term by its eigenvalue, and because the eigenvectors are orthonormal, the cross terms in the resulting dot product vanish:

$$\begin{align*}\operatorname{Var}(\vec x) &= \left(\frac{1}{\sqrt 2}\vec{u}^{(1)} + \frac{1}{\sqrt 6}\vec{u}^{(2)} + \frac{1}{\sqrt 3}\vec{u}^{(3)}\right)^T \left(\frac{16}{\sqrt 2}\vec{u}^{(1)} + \frac{12}{\sqrt 6}\vec{u}^{(2)} + \frac{0}{\sqrt 3}\vec{u}^{(3)}\right)\\&= \frac{1}{2}\cdot 16 + \frac{1}{6}\cdot 12 + \frac{1}{3}\cdot 0 \\&= 8 + 2 + 0 \\&= 10 \end{align*}$$
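This computation is easy to sanity-check numerically. The sketch below takes the eigenvectors to be the standard basis vectors, a simplification that is valid here because only their orthonormality and the eigenvalues enter the calculation:

```python
import numpy as np

# Hypothetical simplification: take the eigenvectors to be the standard
# basis, so C is diagonal with the given eigenvalues on its diagonal.
lambdas = np.array([16.0, 12.0, 0.0])
C = np.diag(lambdas)

# Coordinates of x with respect to the eigenvector basis.
a = np.array([1 / np.sqrt(2), 1 / np.sqrt(6), 1 / np.sqrt(3)])
x = a  # the eigenvectors are the standard basis in this sketch

print(x @ C @ x)               # variance in the direction of x; approximately 10
print(np.sum(lambdas * a**2))  # same value via the eigenvalue formula
```

Both expressions agree, illustrating that the variance in the direction of a unit vector is the eigenvalue-weighted sum of its squared coordinates in the eigenvector basis.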

Problem #078

Tags: pca, dimensionality-reduction, eigenvectors, quiz-04, lecture-06

Let \(C\) be the sample covariance matrix of a data set in \(\mathbb{R}^3\), and suppose \(\vec{u}^{(1)}, \vec{u}^{(2)}, \vec{u}^{(3)}\) are orthonormal eigenvectors of \(C\) with eigenvalues \(\lambda_1 = 9, \lambda_2 = 4, \lambda_3 = 1\), respectively, where:

\[\vec{u}^{(1)} = \begin{pmatrix} 2/3 \\ 2/3 \\ 1/3 \end{pmatrix}, \quad\vec{u}^{(2)} = \begin{pmatrix} -2/3 \\ 1/3 \\ 2/3 \end{pmatrix}, \quad\vec{u}^{(3)} = \begin{pmatrix} 1/3 \\ -2/3 \\ 2/3 \end{pmatrix}\]

Suppose a data point is \(\vec{x} = \begin{pmatrix} 3 \\ 6 \\ 6 \end{pmatrix}\).

If PCA is performed to reduce the dimensionality from 3 to 2, what is the new representation of \(\vec{x}\)?

Solution

\(\begin{pmatrix} 8 \\ 4 \end{pmatrix}\) In PCA, to reduce from \(d\) dimensions to \(k\) dimensions, we project each data point onto the top \(k\) eigenvectors (those with the largest eigenvalues).

Here, the top 2 eigenvectors are \(\vec{u}^{(1)}\) (with \(\lambda_1 = 9\)) and \(\vec{u}^{(2)}\) (with \(\lambda_2 = 4\)).

The new representation is obtained by computing the dot product of \(\vec{x}\) with each of the top \(k\) eigenvectors:

$$\begin{align*}\vec{x}\cdot\vec{u}^{(1)}&= 3 \cdot\frac{2}{3} + 6 \cdot\frac{2}{3} + 6 \cdot\frac{1}{3} = 2 + 4 + 2 = 8 \\\vec{x}\cdot\vec{u}^{(2)}&= 3 \cdot\left(-\frac{2}{3}\right)+ 6 \cdot\frac{1}{3} + 6 \cdot\frac{2}{3} = -2 + 2 + 4 = 4 \end{align*}$$

Therefore, the new representation is:

\[\vec{z} = \begin{pmatrix} \vec{x} \cdot \vec{u}^{(1)} \\ \vec{x} \cdot \vec{u}^{(2)} \end{pmatrix} = \begin{pmatrix} 8 \\ 4 \end{pmatrix}\]
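The projection can be verified with a few lines of numpy (a sketch using the eigenvectors and data point given above):

```python
import numpy as np

# The top two eigenvectors and the data point from the problem.
u1 = np.array([2, 2, 1]) / 3
u2 = np.array([-2, 1, 2]) / 3
x = np.array([3.0, 6.0, 6.0])

# Stack the top-k eigenvectors as columns and project.
U = np.column_stack([u1, u2])
z = U.T @ x
print(z)  # approximately [8, 4]
```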

Problem #079

Tags: pca, dimensionality-reduction, eigenvectors, quiz-04, lecture-06

Let \(C\) be the sample covariance matrix of a data set in \(\mathbb{R}^4\), and suppose \(\vec{u}^{(1)}, \vec{u}^{(2)}, \vec{u}^{(3)}, \vec{u}^{(4)}\) are orthonormal eigenvectors of \(C\) with eigenvalues \(\lambda_1 = 16, \lambda_2 = 9, \lambda_3 = 4, \lambda_4 = 1\), respectively, where:

\[\vec{u}^{(1)} = \begin{pmatrix} 0 \\ 1/3 \\ 2/3 \\ 2/3 \end{pmatrix}, \quad\vec{u}^{(2)} = \begin{pmatrix} 1/3 \\ 0 \\ 2/3 \\ -2/3 \end{pmatrix}, \quad\vec{u}^{(3)} = \begin{pmatrix} -2/3 \\ -2/3 \\ 1/3 \\ 0 \end{pmatrix}, \quad\vec{u}^{(4)} = \begin{pmatrix} 2/3 \\ -2/3 \\ 0 \\ 1/3 \end{pmatrix}\]

Suppose a data point is \(\vec{x} = \begin{pmatrix} 3 \\ 3 \\ 6 \\ 6 \end{pmatrix}\).

If PCA is performed to reduce the dimensionality from 4 to 2, what is the new representation of \(\vec{x}\)?

Solution

\(\begin{pmatrix} 9 \\ 1 \end{pmatrix}\) In PCA, to reduce from \(d\) dimensions to \(k\) dimensions, we project each data point onto the top \(k\) eigenvectors (those with the largest eigenvalues).

Here, the top 2 eigenvectors are \(\vec{u}^{(1)}\) (with \(\lambda_1 = 16\)) and \(\vec{u}^{(2)}\) (with \(\lambda_2 = 9\)).

The new representation is obtained by computing the dot product of \(\vec{x}\) with each of the top \(k\) eigenvectors:

$$\begin{align*}\vec{x}\cdot\vec{u}^{(1)}&= 3 \cdot 0 + 3 \cdot\frac{1}{3} + 6 \cdot\frac{2}{3} + 6 \cdot\frac{2}{3} = 0 + 1 + 4 + 4 = 9 \\\vec{x}\cdot\vec{u}^{(2)}&= 3 \cdot\frac{1}{3} + 3 \cdot 0 + 6 \cdot\frac{2}{3} + 6 \cdot\left(-\frac{2}{3}\right)= 1 + 0 + 4 - 4 = 1 \end{align*}$$

Therefore, the new representation is:

\[\vec{z} = \begin{pmatrix} \vec{x} \cdot \vec{u}^{(1)} \\ \vec{x} \cdot \vec{u}^{(2)} \end{pmatrix} = \begin{pmatrix} 9 \\ 1 \end{pmatrix}\]

Problem #080

Tags: pca, dimensionality-reduction, eigenvectors, quiz-04, lecture-06

Let \(C\) be the sample covariance matrix of a data set in \(\mathbb{R}^5\), and suppose \(\vec{u}^{(1)}, \vec{u}^{(2)}, \vec{u}^{(3)}, \vec{u}^{(4)}, \vec{u}^{(5)}\) are orthonormal eigenvectors of \(C\) with eigenvalues \(\lambda_1 = 25, \lambda_2 = 16, \lambda_3 = 9, \lambda_4 = 4, \lambda_5 = 1\), respectively, where:

\[\vec{u}^{(1)} = \begin{pmatrix} 1/3 \\ 2/3 \\ 2/3 \\ 0 \\ 0 \end{pmatrix}, \quad\vec{u}^{(2)} = \begin{pmatrix} 2/3 \\ -1/3 \\ 0 \\ 2/3 \\ 0 \end{pmatrix}, \quad\vec{u}^{(3)} = \begin{pmatrix} -2/3 \\ 0 \\ 1/3 \\ 2/3 \\ 0 \end{pmatrix}, \quad\vec{u}^{(4)} = \begin{pmatrix} 0 \\ 2/3 \\ -2/3 \\ 1/3 \\ 0 \end{pmatrix}, \quad\vec{u}^{(5)} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}\]

Suppose a data point is \(\vec{x} = \begin{pmatrix} 3 \\ 6 \\ 6 \\ 3 \\ 2 \end{pmatrix}\).

If PCA is performed to reduce the dimensionality from 5 to 3, what is the new representation of \(\vec{x}\)?

Solution

\(\begin{pmatrix} 9 \\ 2 \\ 2 \end{pmatrix}\) In PCA, to reduce from \(d\) dimensions to \(k\) dimensions, we project each data point onto the top \(k\) eigenvectors (those with the largest eigenvalues).

Here, the top 3 eigenvectors are \(\vec{u}^{(1)}\) (with \(\lambda_1 = 25\)), \(\vec{u}^{(2)}\) (with \(\lambda_2 = 16\)), and \(\vec{u}^{(3)}\) (with \(\lambda_3 = 9\)).

The new representation is obtained by computing the dot product of \(\vec{x}\) with each of the top \(k\) eigenvectors:

$$\begin{align*}\vec{x}\cdot\vec{u}^{(1)}&= 3 \cdot\frac{1}{3} + 6 \cdot\frac{2}{3} + 6 \cdot\frac{2}{3} + 3 \cdot 0 + 2 \cdot 0 = 1 + 4 + 4 + 0 + 0 = 9 \\\vec{x}\cdot\vec{u}^{(2)}&= 3 \cdot\frac{2}{3} + 6 \cdot\left(-\frac{1}{3}\right)+ 6 \cdot 0 + 3 \cdot\frac{2}{3} + 2 \cdot 0 = 2 - 2 + 0 + 2 + 0 = 2 \\\vec{x}\cdot\vec{u}^{(3)}&= 3 \cdot\left(-\frac{2}{3}\right)+ 6 \cdot 0 + 6 \cdot\frac{1}{3} + 3 \cdot\frac{2}{3} + 2 \cdot 0 = -2 + 0 + 2 + 2 + 0 = 2 \end{align*}$$

Therefore, the new representation is:

\[\vec{z} = \begin{pmatrix} \vec{x} \cdot \vec{u}^{(1)} \\ \vec{x} \cdot \vec{u}^{(2)} \\ \vec{x} \cdot \vec{u}^{(3)} \end{pmatrix} = \begin{pmatrix} 9 \\ 2 \\ 2 \end{pmatrix}\]
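In matrix form, the same projection stacks the top-\(k\) eigenvectors as columns of a matrix \(U\) and computes \(U^T \vec x\). A numpy sketch using the values above, which also confirms that the given eigenvectors are orthonormal:

```python
import numpy as np

# Top three eigenvectors and the data point from the problem.
u1 = np.array([1, 2, 2, 0, 0]) / 3
u2 = np.array([2, -1, 0, 2, 0]) / 3
u3 = np.array([-2, 0, 1, 2, 0]) / 3
x = np.array([3.0, 6.0, 6.0, 3.0, 2.0])

U3 = np.column_stack([u1, u2, u3])   # 5 x 3 matrix of eigenvector columns
print(U3.T @ U3)                     # approximately the 3 x 3 identity (orthonormality)

z = U3.T @ x                         # projection onto the top 3 eigenvectors
print(z)                             # approximately [9, 2, 2]
```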

Problem #081

Tags: lecture-06, dimensionality-reduction, eigenvectors, pca

Consider the following data set of 5 points in \(\mathbb{R}^3\):

\[\vec{x}^{(1)} = \begin{pmatrix} 3 \\ 1 \\ 2 \end{pmatrix}, \quad\vec{x}^{(2)} = \begin{pmatrix} -1 \\ 2 \\ 0 \end{pmatrix}, \quad\vec{x}^{(3)} = \begin{pmatrix} 2 \\ -1 \\ 1 \end{pmatrix}, \quad\vec{x}^{(4)} = \begin{pmatrix} -2 \\ 0 \\ -1 \end{pmatrix}, \quad\vec{x}^{(5)} = \begin{pmatrix} -2 \\ -2 \\ -2 \end{pmatrix}\]

Perform PCA on this data set to reduce the dimensionality from 3 to 2. What are the new representations of each data point?

Note: You are not expected to compute the eigenvalues and eigenvectors by hand. Use software (such as numpy.linalg.eigh) to find them.

Solution

First, we form the data matrix and compute the sample covariance matrix:



>>> import numpy as np
>>> X = np.array([
...     [3, 1, 2],
...     [-1, 2, 0],
...     [2, -1, 1],
...     [-2, 0, -1],
...     [-2, -2, -2]
... ])
>>> mu = X.mean(axis=0)
>>> Z = X - mu
>>> C = 1 / len(X) * Z.T @ Z

Next, we compute the eigendecomposition of \(C\):



>>> eigenvalues, eigenvectors = np.linalg.eigh(C)
>>> idx = eigenvalues.argsort()[::-1]  # sort descending
>>> eigenvalues = eigenvalues[idx]
>>> eigenvectors = eigenvectors[:, idx]
>>> eigenvalues
array([6.49, 1.89, 0.01])

To reduce to 2 dimensions, we project onto the top 2 eigenvectors:



>>> U2 = eigenvectors[:, :2]
>>> Z_new = Z @ U2
>>> Z_new
array([[-3.74, -0.13],
       [ 0.34, -2.21],
       [-1.93,  1.51],
       [ 2.16, -0.57],
       [ 3.17,  1.40]])

The new representations are (rounded to two decimal places):

\[\vec{z}^{(1)} = \begin{pmatrix} -3.74 \\ -0.13 \end{pmatrix}, \quad\vec{z}^{(2)} = \begin{pmatrix} 0.34 \\ -2.21 \end{pmatrix}, \quad\vec{z}^{(3)} = \begin{pmatrix} -1.93 \\ 1.51 \end{pmatrix}, \quad\vec{z}^{(4)} = \begin{pmatrix} 2.16 \\ -0.57 \end{pmatrix}, \quad\vec{z}^{(5)} = \begin{pmatrix} 3.17 \\ 1.40 \end{pmatrix}\]

Note: The signs of the eigenvectors are not unique; flipping the sign of an eigenvector will flip the sign of the corresponding component in the new representation.

Problem #082

Tags: lecture-06, dimensionality-reduction, eigenvectors, pca

Consider the following data set of 5 points in \(\mathbb{R}^4\):

\[\vec{x}^{(1)} = \begin{pmatrix} 2 \\ 1 \\ 3 \\ 0 \end{pmatrix}, \quad\vec{x}^{(2)} = \begin{pmatrix} 0 \\ -2 \\ 1 \\ 1 \end{pmatrix}, \quad\vec{x}^{(3)} = \begin{pmatrix} -1 \\ 0 \\ -1 \\ 2 \end{pmatrix}, \quad\vec{x}^{(4)} = \begin{pmatrix} 1 \\ 2 \\ 0 \\ -1 \end{pmatrix}, \quad\vec{x}^{(5)} = \begin{pmatrix} -2 \\ -1 \\ -3 \\ -2 \end{pmatrix}\]

Perform PCA on this data set to reduce the dimensionality from 4 to 2. What are the new representations of each data point?

Note: You are not expected to compute the eigenvalues and eigenvectors by hand. Use software (such as numpy.linalg.eigh) to find them.

Solution

First, we form the data matrix and compute the sample covariance matrix:



>>> import numpy as np
>>> X = np.array([
...     [2, 1, 3, 0],
...     [0, -2, 1, 1],
...     [-1, 0, -1, 2],
...     [1, 2, 0, -1],
...     [-2, -1, -3, -2]
... ])
>>> mu = X.mean(axis=0)
>>> Z = X - mu
>>> C = 1 / len(X) * Z.T @ Z

Next, we compute the eigendecomposition of \(C\):



>>> eigenvalues, eigenvectors = np.linalg.eigh(C)
>>> idx = eigenvalues.argsort()[::-1]  # sort descending
>>> eigenvalues = eigenvalues[idx]
>>> eigenvectors = eigenvectors[:, idx]
>>> eigenvalues
array([6.35, 2.57, 1.06, 0.02])

To reduce to 2 dimensions, we project onto the top 2 eigenvectors:



>>> U2 = eigenvectors[:, :2]
>>> Z_new = Z @ U2
>>> Z_new
array([[-3.68,  0.43],
       [-0.40, -2.19],
       [ 0.96, -1.44],
       [-0.92,  2.18],
       [ 4.03,  1.02]])

The new representations are (rounded to two decimal places):

\[\vec{z}^{(1)} = \begin{pmatrix} -3.68 \\ 0.43 \end{pmatrix}, \quad\vec{z}^{(2)} = \begin{pmatrix} -0.40 \\ -2.19 \end{pmatrix}, \quad\vec{z}^{(3)} = \begin{pmatrix} 0.96 \\ -1.44 \end{pmatrix}, \quad\vec{z}^{(4)} = \begin{pmatrix} -0.92 \\ 2.18 \end{pmatrix}, \quad\vec{z}^{(5)} = \begin{pmatrix} 4.03 \\ 1.02 \end{pmatrix}\]

Note: The signs of the eigenvectors are not unique; flipping the sign of an eigenvector will flip the sign of the corresponding component in the new representation.

Problem #083

Tags: lecture-06, dimensionality-reduction, eigenvectors, pca

Consider the following data set of 5 points in \(\mathbb{R}^4\):

\[\vec{x}^{(1)} = \begin{pmatrix} 1 \\ 2 \\ 0 \\ 3 \end{pmatrix}, \quad\vec{x}^{(2)} = \begin{pmatrix} -1 \\ 0 \\ 2 \\ 1 \end{pmatrix}, \quad\vec{x}^{(3)} = \begin{pmatrix} 2 \\ -1 \\ 1 \\ 0 \end{pmatrix}, \quad\vec{x}^{(4)} = \begin{pmatrix} 0 \\ 1 \\ -2 \\ -2 \end{pmatrix}, \quad\vec{x}^{(5)} = \begin{pmatrix} -2 \\ -2 \\ -1 \\ -2 \end{pmatrix}\]

Perform PCA on this data set to reduce the dimensionality from 4 to 3. What are the new representations of each data point?

Note: You are not expected to compute the eigenvalues and eigenvectors by hand. Use software (such as numpy.linalg.eigh) to find them.

Solution

First, we form the data matrix and compute the sample covariance matrix:



>>> import numpy as np
>>> X = np.array([
...     [1, 2, 0, 3],
...     [-1, 0, 2, 1],
...     [2, -1, 1, 0],
...     [0, 1, -2, -2],
...     [-2, -2, -1, -2]
... ])
>>> mu = X.mean(axis=0)
>>> Z = X - mu
>>> C = 1 / len(X) * Z.T @ Z

Next, we compute the eigendecomposition of \(C\):



>>> eigenvalues, eigenvectors = np.linalg.eigh(C)
>>> idx = eigenvalues.argsort()[::-1]  # sort descending
>>> eigenvalues = eigenvalues[idx]
>>> eigenvectors = eigenvectors[:, idx]
>>> eigenvalues
array([5.72, 2.28, 1.34, 0.26])

To reduce to 3 dimensions, we project onto the top 3 eigenvectors:



>>> U3 = eigenvectors[:, :3]
>>> Z_new = Z @ U3
>>> Z_new
array([[-3.45, -1.17,  0.67],
       [-1.10,  1.83,  1.02],
       [-0.70,  0.83, -2.20],
       [ 1.84, -2.31, -0.10],
       [ 3.40,  0.82,  0.61]])

The new representations are (rounded to two decimal places):

\[\vec{z}^{(1)} = \begin{pmatrix} -3.45 \\ -1.17 \\ 0.67 \end{pmatrix}, \quad\vec{z}^{(2)} = \begin{pmatrix} -1.10 \\ 1.83 \\ 1.02 \end{pmatrix}, \quad\vec{z}^{(3)} = \begin{pmatrix} -0.70 \\ 0.83 \\ -2.20 \end{pmatrix}, \quad\vec{z}^{(4)} = \begin{pmatrix} 1.84 \\ -2.31 \\ -0.10 \end{pmatrix}, \quad\vec{z}^{(5)} = \begin{pmatrix} 3.40 \\ 0.82 \\ 0.61 \end{pmatrix}\]

Note: The signs of the eigenvectors are not unique; flipping the sign of an eigenvector will flip the sign of the corresponding component in the new representation.

Problem #084

Tags: pca, eigenvectors, quiz-04, dimensionality-reduction, lecture-06

Suppose \(C\) is a \(3 \times 3\) sample covariance matrix for a data set \(\mathcal X\), and that the top two eigenvectors of \(C\) are:

$$\begin{align*}\vec{u}^{(1)}&= \frac{1}{\sqrt2}(-1, 1, 0)^T,\\\vec{u}^{(2)}&= \frac{1}{\sqrt3}( 1, 1, 1)^T, \end{align*}$$

with eigenvalues \(\lambda_1 = 10\) and \(\lambda_2 = 4\), respectively.

Let \(\vec x = (1, 2, 3)^T\) be the coordinates of \(\vec x\) with respect to the standard basis. Let \(\vec z\) be the result of applying PCA to reduce the dimensionality of \(\vec x\) to 2. What is \(\vec z\)?

Solution

\(\vec z = \left(\frac{1}{\sqrt 2}, \frac{6}{\sqrt 3}\right)^T\).

We compute \(\vec z\) by projecting \(\vec x\) onto each eigenvector:

$$\begin{align*} z_1 &= \vec{u}^{(1)}\cdot\vec x = \frac{1}{\sqrt 2}(-1 + 2 + 0) = \frac{1}{\sqrt 2}\\ z_2 &= \vec{u}^{(2)}\cdot\vec x = \frac{1}{\sqrt 3}(1 + 2 + 3) = \frac{6}{\sqrt 3}\end{align*}$$
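A quick numpy check of this projection, using the eigenvectors and point from the problem:

```python
import numpy as np

# Top two eigenvectors and the data point from the problem.
u1 = np.array([-1, 1, 0]) / np.sqrt(2)
u2 = np.array([1, 1, 1]) / np.sqrt(3)
x = np.array([1.0, 2.0, 3.0])

# Each coordinate of z is a dot product with one eigenvector.
z = np.array([u1 @ x, u2 @ x])
print(z)  # approximately [0.707, 3.464], i.e. (1/sqrt(2), 6/sqrt(3))
```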

Problem #086

Tags: pca, eigenvectors, quiz-04, dimensionality-reduction, lecture-06

Suppose \(C\) is a \(3 \times 3\) sample covariance matrix for a data set \(\mathcal X\), and that the top two eigenvectors of \(C\) are:

$$\begin{align*}\vec{u}^{(1)}&= \frac{1}{\sqrt6}(-1, 1, 1)^T,\\\vec{u}^{(2)}&= \frac{1}{\sqrt2}( 0, 1, 1)^T, \end{align*}$$

with eigenvalues \(\lambda_1 = 5\) and \(\lambda_2 = 2\), respectively.

Let \(\vec x = (3,2,1)^T\) be the coordinates of \(\vec x\) with respect to the standard basis. Let \(\vec z\) be the result of applying PCA to reduce the dimensionality of \(\vec x\) to 2. What is \(\vec z\)?

Solution

\(\vec z = \left(0, \frac{3}{\sqrt 2}\right)^T\).

We compute \(\vec z\) by projecting \(\vec x\) onto each eigenvector:

$$\begin{align*} z_1 &= \vec{u}^{(1)}\cdot\vec x = \frac{1}{\sqrt 6}(-3 + 2 + 1) = \frac{0}{\sqrt 6} = 0\\ z_2 &= \vec{u}^{(2)}\cdot\vec x = \frac{1}{\sqrt 2}(0 + 2 + 1) = \frac{3}{\sqrt 2}\end{align*}$$

Problem #097

Tags: lecture-08, laplacian eigenmaps, quiz-05, eigenvalues, eigenvectors

Given a \(4 \times 4\) Graph Laplacian matrix \(L\), the 4 (eigenvalue, eigenvector) pairs of this matrix are given as \((3, \vec v_1)\), \((0, \vec v_2)\), \((8, \vec v_3)\), \((14, \vec v_4)\).

You need to pick one eigenvector. Which of the above eigenvectors will be a non-trivial solution to the spectral embedding problem?

Solution

\(\vec v_1\).

The embedding is the eigenvector with the smallest eigenvalue that is strictly greater than \(0\). The eigenvector \(\vec v_2\) corresponds to eigenvalue \(0\), which gives the trivial solution (all embeddings equal). Among the remaining eigenvectors, \(\vec v_1\) has the smallest eigenvalue (\(3\)), so it is the non-trivial solution to the spectral embedding problem.
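The selection rule (discard eigenvalue \(0\), then take the smallest remaining eigenvalue) can be sketched in a few lines of Python. The string labels below are placeholders standing in for the eigenvectors, which are not needed to make the choice:

```python
# (eigenvalue, label) pairs from the problem; the labels "v1".."v4" are
# placeholders for the eigenvectors themselves.
pairs = [(3, "v1"), (0, "v2"), (8, "v3"), (14, "v4")]

# Keep only strictly positive eigenvalues, then take the smallest.
nonzero = [p for p in pairs if p[0] > 0]
chosen = min(nonzero)
print(chosen)  # (3, 'v1')
```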

Problem #101

Tags: quiz-05, laplacian eigenmaps, eigenvectors, lecture-08

Suppose \(L\) is a graph's Laplacian matrix and let \(\vec f\) be a (nonzero) embedding vector whose embedding cost is zero according to the Laplacian eigenmaps cost function.

True or False: \(\vec f\) is an eigenvector of \(L\).

True False
Solution

True.

If the embedding cost is zero, then \(\vec{f}^T L \vec{f} = 0\). Since \(L\) is positive semidefinite, it can be written as \(L = M^T M\) for some matrix \(M\), so \(\vec{f}^T L \vec{f} = \|M \vec{f}\|^2 = 0\) forces \(M \vec{f} = \vec 0\), and hence \(L \vec{f} = M^T M \vec{f} = \vec 0\). This means \(\vec{f}\) is an eigenvector of \(L\) with eigenvalue \(0\).

Problem #102

Tags: quiz-05, laplacian eigenmaps, eigenvectors, lecture-08

Suppose a graph \(G\) with nodes \(u_1, \ldots, u_5\) is embedded into \(\mathbb R^2\) using Laplacian Eigenmaps. The bottom three eigenvectors of the graph's Laplacian matrix and their corresponding eigenvalues are:

$$\begin{align*}\vec{u}^{(1)}&= \frac{1}{\sqrt 5}(1, 1, 1, 1, 1)^T \text{ with }\lambda_1 = 0 \\\vec{u}^{(2)}&= \frac{1}{\sqrt 2}(1, 0, -1, 0, 0)^T \text{ with }\lambda_2 = 2 \\\vec{u}^{(3)}&= \frac{1}{\sqrt 6}(0, -2, 0, 1, 1)^T \text{ with }\lambda_3 = 5 \\\end{align*}$$

What is the embedding of node 2?

Solution

\((0, -\frac{2}{\sqrt{6}})^T\).

In Laplacian Eigenmaps, we embed into \(\mathbb{R}^2\) using the eigenvectors corresponding to the two smallest nonzero eigenvalues: \(\vec{u}^{(2)}\) (with \(\lambda_2 = 2\)) and \(\vec{u}^{(3)}\) (with \(\lambda_3 = 5\)). We skip \(\vec{u}^{(1)}\) since it corresponds to \(\lambda_1 = 0\).

The embedding of node \(i\) is \((u^{(2)}_i, u^{(3)}_i)^T\). For node 2 (the second entry of each eigenvector):

\[(u^{(2)}_2, u^{(3)}_2)^T = \left(0, \frac{-2}{\sqrt 6}\right)^T \]
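This lookup can be checked with numpy; note that node 2 corresponds to index 1 under 0-based indexing:

```python
import numpy as np

# The two embedding eigenvectors from the problem.
u2 = np.array([1, 0, -1, 0, 0]) / np.sqrt(2)
u3 = np.array([0, -2, 0, 1, 1]) / np.sqrt(6)

# The embedding of node i is (u2 entry i, u3 entry i); node 2 is index 1.
node = 2
embedding = np.array([u2[node - 1], u3[node - 1]])
print(embedding)  # approximately [0, -0.816], i.e. (0, -2/sqrt(6))
```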

Problem #107

Tags: quiz-05, laplacian eigenmaps, eigenvectors, lecture-08

Suppose a graph \(G\) with nodes \(u_1, \ldots, u_5\) is embedded into \(\mathbb R^2\) using Laplacian Eigenmaps. The bottom three eigenvectors of the graph's Laplacian matrix and their corresponding eigenvalues are:

$$\begin{align*}\vec{u}^{(1)}&= \frac{1}{\sqrt 5}(1, 1, 1, 1, 1)^T \text{ with }\lambda_1 = 0 \\\vec{u}^{(2)}&= \frac{1}{\sqrt 2}(1, 0, 0, -1, 0)^T \text{ with }\lambda_2 = 3 \\\vec{u}^{(3)}&= \frac{1}{\sqrt 6}(1, -2, 0, 1, 0)^T \text{ with }\lambda_3 = 7 \\\end{align*}$$

What is the embedding of node 4?

Solution

\((-\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{6}})^T\).

In Laplacian Eigenmaps, we embed into \(\mathbb{R}^2\) using the eigenvectors corresponding to the two smallest nonzero eigenvalues: \(\vec{u}^{(2)}\) (with \(\lambda_2 = 3\)) and \(\vec{u}^{(3)}\) (with \(\lambda_3 = 7\)). We skip \(\vec{u}^{(1)}\) since it corresponds to \(\lambda_1 = 0\).

The embedding of node \(i\) is \((u^{(2)}_i, u^{(3)}_i)^T\). For node 4 (the fourth entry of each eigenvector):

\[(u^{(2)}_4, u^{(3)}_4)^T = \left(\frac{-1}{\sqrt 2}, \frac{1}{\sqrt 6}\right)^T \]