DSC 140B
Problems tagged with lecture-14


Problem #152

Tags: vanishing gradients, quiz-07, lecture-14, neural networks

Which of the following best describes the vanishing gradient problem that can occur while training deep neural networks using backpropagation?

Solution

Partial derivatives of parameters in layers far from the output become very small.

During backpropagation, the chain rule multiplies many factors together as the gradient propagates from the output back through each layer. If these factors are small (e.g., because of sigmoid activations whose derivatives are at most \(1/4\)), the product shrinks exponentially with depth. This means that parameters in early layers (far from the output) receive very small gradient updates and learn very slowly.
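To see the exponential shrinkage concretely, here is a small sketch (not part of the original problem) that evaluates the sigmoid derivative at its maximum, \(z = 0\), and raises it to the power of the network depth, modeling the best case for a chain of sigmoid factors:

```python
import math

def sigmoid_derivative(z):
    """Derivative of the sigmoid: sigma(z) * (1 - sigma(z))."""
    s = 1.0 / (1.0 + math.exp(-z))
    return s * (1.0 - s)

# Even at z = 0, where the sigmoid derivative is largest, it equals 1/4.
# One such factor per layer shrinks the gradient geometrically with depth.
best_case = sigmoid_derivative(0.0)   # 0.25
for depth in [1, 5, 10, 20]:
    print(depth, best_case ** depth)
```

At depth 20 the best-case product is already below \(10^{-12}\), so parameters in the earliest layers receive essentially no gradient signal.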

Problem #155

Tags: vanishing gradients, quiz-07, lecture-14, neural networks

True or False: one solution to the vanishing gradient problem is to increase the learning rate \(\eta\).

Solution

False.

Increasing the learning rate does not fix the vanishing gradient problem. Vanishing gradients arise because, during backpropagation, the chain rule multiplies many small factors together, causing gradients in early layers to shrink exponentially with depth. Increasing \(\eta\) scales all gradient updates equally, so the relative imbalance between layers remains: early-layer gradients are still exponentially smaller than later-layer gradients.

Actual solutions to the vanishing gradient problem include using ReLU activations (whose derivatives are \(0\) or \(1\), rather than always small), skip connections, and batch normalization.
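A quick illustrative sketch (an addition, not from the original solution) of why ReLU helps: an active ReLU unit contributes a derivative factor of exactly \(1\), so a chain of active units does not shrink the gradient at all.

```python
def relu_derivative(z):
    """Derivative of ReLU: 1 for z > 0, else 0."""
    return 1.0 if z > 0 else 0.0

# A chain of active ReLU units contributes factors of exactly 1,
# so the product stays at 1 regardless of depth.
depth = 20
product = 1.0
for _ in range(depth):
    product *= relu_derivative(0.5)   # an active unit
print(product)  # 1.0
```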

Problem #156

Tags: gradient descent, quiz-07, lecture-14

Recall that the gradient of the squared loss at a single data point \((\vec x, y)\) is:

\[\nabla_{\vec w} L = 2(\operatorname{Aug}(\vec x) \cdot\vec w - y) \operatorname{Aug}(\vec x) \]

Suppose you are running gradient descent to minimize the empirical risk with respect to the squared loss in order to train a function \(H(x) = w_0 + w_1 x\) on the following data set: \((x, y) \in\{(1, 3),\;(2, 3),\;(3, 6)\}\).

Suppose that the initial weight vector is \(\vec w = (0, 0)^T\) and that the learning rate \(\eta = 1/2\). What will be the weight vector after one iteration of gradient descent?

Solution

\((4, 9)^T\).

To perform gradient descent, we need to compute the gradient of the risk. The main thing to remember is that the gradient of the risk is the average of the gradient of the loss on each data point.

So to start this problem, calculate the gradient of the loss for each of the three points. Our formula for the gradient of the squared loss tells us to compute \(2(\operatorname{Aug}(\vec x) \cdot\vec w - y) \operatorname{Aug}(\vec x)\) for each point.

Now, the initial weight vector \(\vec w\) was conveniently chosen to be \(\vec 0\), meaning that \(\operatorname{Aug}(\vec x) \cdot\vec w = 0\) for all of our data points. Therefore, when we compute \(\operatorname{Aug}(\vec x) \cdot\vec w - y\), we get \(-y\) for every data point. The gradient of the loss at each of the three data points is:

Point \((1, 3)\): \(2(0 - 3)(1, 1)^T = (-6, -6)^T\)

Point \((2, 3)\): \(2(0 - 3)(1, 2)^T = (-6, -12)^T\)

Point \((3, 6)\): \(2(0 - 6)(1, 3)^T = (-12, -36)^T\)

This means that the gradient of the risk is the average of these three:

\[\frac{1}{3}\left(\begin{bmatrix}-6\\-6\end{bmatrix} + \begin{bmatrix}-6\\-12\end{bmatrix} + \begin{bmatrix}-12\\-36\end{bmatrix}\right)= \frac{1}{3}\begin{bmatrix}-24\\-54\end{bmatrix} = \begin{bmatrix}-8\\ -18 \end{bmatrix}\]

The gradient descent update rule says that \(\vec w^{(1)} = \vec w^{(0)} - \eta\nabla R(\vec w^{(0)})\). The learning rate \(\eta\) was given as \(1/2\), so we have:

\[\vec w^{(1)} = \begin{bmatrix}0\\ 0\end{bmatrix} - \frac{1}{2}\begin{bmatrix}-8\\ -18 \end{bmatrix} = \begin{bmatrix}4\\ 9 \end{bmatrix}\]
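As a sanity check, here is a short NumPy sketch (an addition, not part of the original solution) that reproduces this one iteration of gradient descent on the given data:

```python
import numpy as np

# Data from the problem: (x, y) pairs.
X = np.array([1.0, 2.0, 3.0])
y = np.array([3.0, 3.0, 6.0])

# Augment each x with a leading 1 so that H(x) = w0 + w1*x = Aug(x) . w.
A = np.column_stack([np.ones_like(X), X])

w = np.zeros(2)   # initial weight vector (0, 0)^T
eta = 0.5         # learning rate

# Gradient of the risk: average of 2*(Aug(x).w - y)*Aug(x) over all points.
grad = ((2 * (A @ w - y))[:, None] * A).mean(axis=0)

w = w - eta * grad
print(w)  # [4. 9.]
```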

Problem #157

Tags: gradient descent, quiz-07, lecture-14

Recall that the gradient of the squared loss at a single data point \((\vec x, y)\) is:

\[\nabla_{\vec w} L = 2(\operatorname{Aug}(\vec x) \cdot\vec w - y) \operatorname{Aug}(\vec x) \]

Suppose you are running stochastic gradient descent to minimize the empirical risk with respect to the squared loss in order to train a function \(H(x) = w_0 + w_1 x\) on the following data set: \((x, y) \in\{(1, 2),\;(3, 5),\;(2, 1),\;(4, 4),\;(5, 8)\}\).

Suppose that the initial weight vector is \(\vec w = (0, 0)^T\), the learning rate is \(\eta = 1/2\), and the mini-batch size is 2. If the first mini-batch consists of the 1st and 4th data points, what will be the weight vector after one iteration of stochastic gradient descent?

Solution

\((3, 9)^T\).

Stochastic gradient descent works like gradient descent, but instead of computing the gradient of the risk over the entire data set, we compute the gradient over a mini-batch --- a small, randomly chosen subset of the data. The gradient of the risk on the mini-batch is the average of the gradient of the loss on each point in the mini-batch.

The mini-batch consists of the 1st and 4th data points: \((1, 2)\) and \((4, 4)\). We compute the gradient of the squared loss at each of these points using \(2(\operatorname{Aug}(\vec x) \cdot\vec w - y) \operatorname{Aug}(\vec x)\).

Since \(\vec w = \vec 0\), we have \(\Aug(\vec x) \cdot\vec w = 0\) for both points.

Point \((1, 2)\): \(2(0 - 2)(1, 1)^T = (-4, -4)^T\)

Point \((4, 4)\): \(2(0 - 4)(1, 4)^T = (-8, -32)^T\)

The gradient of the risk on the mini-batch is the average:

\[\frac{1}{2}\left(\begin{bmatrix}-4\\-4\end{bmatrix} + \begin{bmatrix}-8\\-32\end{bmatrix}\right)= \frac{1}{2}\begin{bmatrix}-12\\-36\end{bmatrix} = \begin{bmatrix}-6\\ -18 \end{bmatrix}\]

Applying the update rule \(\vec w^{(1)} = \vec w^{(0)} - \eta\nabla R(\vec w^{(0)})\) with \(\eta = 1/2\):

\[\vec w^{(1)} = \begin{bmatrix}0\\ 0\end{bmatrix} - \frac{1}{2}\begin{bmatrix}-6\\ -18 \end{bmatrix} = \begin{bmatrix}3\\ 9 \end{bmatrix}\]
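As with the previous problem, a short NumPy sketch (an addition, not part of the original solution) confirms the mini-batch update:

```python
import numpy as np

# Full data set from the problem.
X = np.array([1.0, 3.0, 2.0, 4.0, 5.0])
y = np.array([2.0, 5.0, 1.0, 4.0, 8.0])

# Mini-batch: the 1st and 4th data points (indices 0 and 3).
batch = [0, 3]
A = np.column_stack([np.ones_like(X), X])[batch]
yb = y[batch]

w = np.zeros(2)   # initial weight vector (0, 0)^T
eta = 0.5         # learning rate

# Average the gradient of the loss over the mini-batch only.
grad = ((2 * (A @ w - yb))[:, None] * A).mean(axis=0)

w = w - eta * grad
print(w)  # [3. 9.]
```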

Problem #158

Tags: quiz-07, SGD, neural networks, gradient descent, lecture-14

True or False: in stochastic gradient descent, each update step moves in the direction of steepest descent of the empirical risk.

Solution

False.

Each update step moves in the direction of steepest descent of the risk computed on the mini-batch, not the full training set. The mini-batch gradient is only an approximation of the true gradient of the empirical risk. This is what makes each iteration of SGD fast --- computing the gradient over a small mini-batch of \(m\) points is much cheaper than computing it over all \(n\) points --- but it means that the update direction is noisy.

Problem #159

Tags: quiz-07, SGD, neural networks, gradient descent, lecture-14

True or False: stochastic gradient descent generally requires fewer iterations than gradient descent to converge.

Solution

False.

SGD typically requires more iterations than gradient descent to converge, because each update uses a noisy approximation of the true gradient. However, each iteration of SGD is much cheaper (\(O(md)\) vs. \(O(nd)\) for a mini-batch of size \(m \ll n\)), so SGD often converges faster in terms of total computation time.

Problem #160

Tags: gradient descent, quiz-07, lecture-14, neural networks

True or False: if two neural networks have the same architecture but different random initializations, they are guaranteed to converge to the same solution when trained with gradient descent on the same data.

Solution

False.

The loss surface of a neural network is non-convex, meaning it has many local minima. Different random initializations place the starting point at different locations on this surface, so gradient descent can follow different paths and converge to different local minima. This is one reason why initialization matters in practice.