DSC 140B
Problems tagged with autoencoders

Problem #177

Tags: lecture-16, autoencoders

An autoencoder is trained on data in \(\mathbb{R}^{8}\) with a bottleneck (hidden) layer of dimension 3.

Part 1)

What are the input and output dimensions of the encoder?

Solution

The encoder maps \(\mathbb{R}^8 \to\mathbb{R}^3\). Its input dimension is 8 and its output dimension is 3.

Part 2)

What are the input and output dimensions of the decoder?

Solution

The decoder maps \(\mathbb{R}^3 \to\mathbb{R}^8\). Its input dimension is 3 and its output dimension is 8.

Part 3)

What is the output dimension of the full autoencoder, \(H(\vec x) = \operatorname{decode}(\operatorname{encode}(\vec x))\)?

Solution

\(8\), the same as the input dimension.

The autoencoder maps \(\mathbb{R}^8 \to\mathbb{R}^8\). Its goal is to reconstruct the input, so the output must have the same dimensionality.
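The dimensions in Parts 1–3 can be checked with a minimal sketch, assuming (purely for illustration) randomly initialized linear encoder and decoder maps; the names `W_enc`, `W_dec`, `encode`, and `decode` are hypothetical, not from the problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear maps: encoder R^8 -> R^3, decoder R^3 -> R^8.
W_enc = rng.normal(size=(3, 8))
W_dec = rng.normal(size=(8, 3))

def encode(x):
    return W_enc @ x   # output lives in R^3 (the bottleneck)

def decode(z):
    return W_dec @ z   # output lives in R^8

def H(x):
    return decode(encode(x))   # full autoencoder: R^8 -> R^8

x = rng.normal(size=8)
print(encode(x).shape)  # (3,)
print(H(x).shape)       # (8,)
```

The shapes confirm the answers: the encoder outputs a 3-dimensional code, and the full autoencoder's output matches the 8-dimensional input.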

Part 4)

True or False: training this autoencoder is a supervised learning problem.

Solution

False.

Training an autoencoder is an unsupervised learning problem. The network is trained to reconstruct its own input; there are no separate labels. The loss function is the reconstruction error \(\sum_{i=1}^n \|\vec{x}^{(i)} - H(\vec{x}^{(i)})\|^2\).
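The reconstruction-error loss above can be sketched directly in code; `reconstruction_error` is a hypothetical helper name, and the autoencoder is passed in as an arbitrary function `H`:

```python
import numpy as np

def reconstruction_error(X, H):
    """Sum of squared reconstruction errors over the rows of X.

    Implements sum_i ||x_i - H(x_i)||^2, where each row of X is one
    data point and H is the full autoencoder (decode after encode).
    """
    return sum(np.sum((x - H(x)) ** 2) for x in X)

# Sanity check: a perfect reconstructor (the identity) has zero loss.
X = np.arange(12.0).reshape(4, 3)
print(reconstruction_error(X, lambda x: x))  # 0.0
```

Note that the "target" for each example is the example itself, which is exactly why no separate labels are needed.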

Problem #179

Tags: lecture-16, autoencoders

True or False: if an autoencoder has 10 input nodes, 10 hidden nodes, and 10 output nodes, the smallest reconstruction error it can possibly achieve is zero.

Solution

True.

Since the hidden layer has the same dimensionality as the input, the autoencoder is not forced to compress: it can learn the identity function (for instance, by setting the weights so each layer passes its input through unchanged), mapping each input to itself exactly. This results in zero reconstruction error.
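This can be demonstrated concretely with a minimal sketch, assuming linear activations so that identity weight matrices make each layer pass its input through unchanged (the setup and variable names are illustrative, not from the problem):

```python
import numpy as np

d = 10  # 10 input nodes, 10 hidden nodes, 10 output nodes

# Identity weights: with linear activations, each layer is a no-op.
W_enc = np.eye(d)
W_dec = np.eye(d)

def H(x):
    return W_dec @ (W_enc @ x)   # H(x) = x for every input

rng = np.random.default_rng(1)
X = rng.normal(size=(5, d))
err = sum(np.sum((x - H(x)) ** 2) for x in X)
print(err)  # 0.0
```

Because no layer is narrower than the input, nothing prevents the network from representing this identity map, so zero reconstruction error is achievable.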