{
|
||
"cells": [
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "9d313ffdcee5542c",
|
||
"metadata": {
|
||
"collapsed": false
|
||
},
|
||
"source": [
|
||
"### Problem Set 6: Introduction to PyTorch\n",
|
||
"\n",
|
||
"**Release Date:** 19 March 2024\n",
|
||
"\n",
|
||
"**Due Date:** 23:59, 06 April 2024\n",
|
||
"\n",
|
||
"In the real world, while fundamentals are welcomed and appreciated, implementing algorithms from scratch is time consuming, especially when it comes to Deep Learning (DL) models like neural networks with many layers. Backpropagating manually or by hand is often tedious and erroneous. Which is why, it is absolutely critical to learn **at least one** Machine Learning library, either to get jobs or build projects in this field. As such, in *Problem Set 6*, we will introduce you to **PyTorch**.\n",
|
||
"\n",
|
||
"<img src=\"imgs/img_logo.png\" width=\"600\">\n",
|
||
"\n",
|
||
"`PyTorch` is one of the largest DL libaries widely used around the globe. It offers a very Pythonic API to build layers and compose them together. In fact, data processing is also made easy using the multitude of tools and wrappers that are at your disposal – it is the complete workbench. Of course, there are other popular libraries such as `TensorFlow`, but they require you to understand how to understand \"computation graphs,\" and thus we feel are less accessible for beginners. Hence, we decided to use PyTorch for CS2109S. \n",
|
||
"\n",
|
||
"In *Problem Set 6*, we will attempt to help you learn the `PyTorch` API by having you build a simple deep neural network and training it locally on your system via backpropagation and stochastic gradient descent. Subsequently, you will also learn how to build data processing pipelines to prepare your data before ingestion into your model(s)."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "afedfcd3",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# RUN THIS CELL FIRST\n",
|
||
"import math\n",
|
||
"from collections import OrderedDict\n",
|
||
"\n",
|
||
"import matplotlib.pyplot as plt\n",
|
||
"import torch\n",
|
||
"import torch.nn as nn\n",
|
||
"import numpy as np\n",
|
||
"from numpy import allclose, isclose"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "a5482ebb",
|
||
"metadata": {},
|
||
"source": [
|
||
"# 1 Tensors in PyTorch\n",
|
||
"\n",
|
||
"### 1.1 Concept - What are Tensors?\n",
|
||
"\n",
|
||
"In Linear Algebra, you've learned about vectors – they are 1-dimensional (1D) serial arrays (like `[1, 2, 3, 4, 23, 18]`) containing a column (or row) of information. You've also learned about matrices – they are \"rectangles\" (i.e., 2D) that also capture elements.\n",
|
||
"\n",
|
||
"**Tensors** generalise the concept of matrices: they are $n$-dimensional arrays that contain or represent information. In *PyTorch*, everything is defined as a `tensor`. It's analogous to `np.array(...)` from *NumPy*. A `tensor` object in *PyTorch* looks like this:\n",
|
||
"\n",
|
||
"<img src=\"imgs/img_tensors.png\" width=\"600\">\n",
|
||
"\n",
|
||
"\n",
|
||
"---\n",
|
||
"The following are some mappings of useful functions between Numpy and Pytorch, in fact, they are so similar that there is a function `torch.from_numpy(ndarray)` which transforms a numpy array into a pytorch tensor! The main difference in the functions in the table below is that Numpy and Pytorch functions takes as input and gives as output numpy array or torch tensors respectively. PyTorch tensors also have additional functionality for GPU acceleration. Refer to this [website](https://pytorch-for-numpy-users.wkentaro.com/) for more information.\n",
|
||
"\n",
|
||
"<img src=\"imgs/img_numpy_pytorch.png\" width=\"600\">"
|
||
]
|
||
},
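{
"cell_type": "code",
"execution_count": null,
"id": "numpy-interop-sketch",
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch of the NumPy <-> PyTorch interoperability described above;\n",
"# the array values here are arbitrary examples.\n",
"arr = np.array([1.0, 2.0, 3.0])\n",
"\n",
"t = torch.from_numpy(arr)   # NumPy array -> PyTorch tensor\n",
"back = t.numpy()            # PyTorch tensor -> NumPy array\n",
"\n",
"print(type(arr), type(t), type(back))\n",
"print(np.sum(arr), torch.sum(t).item())  # analogous reduction functions"
]
},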
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "bcc457e7",
|
||
"metadata": {},
|
||
"source": [
|
||
"### 1.1.1 Demo - Tensor functions\n",
|
||
"\n",
|
||
"Notice that tensors have a `.grad` attribute. This is used for automatic gradient computation. \n",
|
||
"To create tensors, you can use the `torch.tensor(...)` constructor: \n",
|
||
"\n",
|
||
"A 0-dimensional tensor: `torch.tensor(5.0)` \n",
|
||
"A 1-dimensional tensor: `torch.tensor([1.0, 2.0, 3.0])` \n",
|
||
"A 2-dimensional tensor: `torch.tensor([[.4, .3], [.1, .2]])` \n",
|
||
"\n",
|
||
"If automatic gradient computation is required, then the equivalent constructors will be: \n",
|
||
"`torch.tensor(5.0, requires_grad=True)` \n",
|
||
"`torch.tensor([1.0, 2.0, 3.0], requires_grad=True)` \n",
|
||
"`torch.tensor([[.4, .3], [.1, .2]], requires_grad=True)` \n",
|
||
"\n",
|
||
"We can call detach() on these tensors to stop them from being traced for gradient computation, returning us the tensors without requires_grad=True.\n",
|
||
"\n",
|
||
"We can call item() on our tensors to return the value of our tensor as a standard python number:\n",
|
||
"\n",
|
||
"`>>> torch.tensor([1.0]).item()\n",
|
||
"1.0`\n",
|
||
"\n",
|
||
"The following code block shows how we can make use of all these functions introduced."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"outputs": [],
|
||
"source": [
|
||
"# Create a tensor with requires_grad set to True\n",
|
||
"x = torch.tensor([2.0], requires_grad=True)\n",
|
||
"\n",
|
||
"# Compute the gradient of a simple expression using backward\n",
|
||
"y = x**2 + 2 * x\n",
|
||
"y.backward()\n",
|
||
"\n",
|
||
"# Print the derivative value of y i.e dy/dx = 2x + 2 = 6.0.\n",
|
||
"print(\"Gradient of y with respect to x:\", x.grad)\n",
|
||
"\n",
|
||
"# Detach the gradient of x\n",
|
||
"x = x.detach()\n",
|
||
"\n",
|
||
"# Print the gradient of x after detachment\n",
|
||
"print(\"Gradient of x after detachment:\", x.grad)\n",
|
||
"\n",
|
||
"# Extract the scalar value of a tensor as a Python number\n",
|
||
"x_value = x.item()\n",
|
||
"print(\"Value of x as a Python number:\", x_value)"
|
||
],
|
||
"metadata": {},
|
||
"id": "a937e9bf",
|
||
"execution_count": null
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"source": [
|
||
"### 1.1.2 Demo - Working with Tensors\n",
|
||
"\n",
|
||
"Here, we use `torch.linspace` to create a `torch.tensor`. In PyTorch, and Machine Learning in general, tensors form the basis of all operations.\n",
|
||
"\n",
|
||
"We then make use of the built-in *PyTorch* function `torch.sin` to create the corresponding y-values of a sine function, and plot the points using *Matplotlib*."
|
||
],
|
||
"metadata": {},
|
||
"id": "3a9b1300"
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "be36fbc0",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# This is a demonstration: You just need to run this cell without editing.\n",
|
||
"\n",
|
||
"x = torch.linspace(-math.pi, math.pi, 1000) # Task 1.1: What is torch.linspace?\n",
|
||
"y_true = torch.sin(x)\n",
|
||
"\n",
|
||
"plt.plot(x, y_true, linestyle='solid', label='sin(x)')\n",
|
||
"plt.axis('equal')\n",
|
||
"plt.title('Original function to fit')\n",
|
||
"plt.legend()\n",
|
||
"plt.show()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "23acad3b",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Run this cell to explore what the FIRST 10 VALUES of x has been assigned to.\n",
|
||
"# By default, each cell will always print the output of the last expression in the cell\n",
|
||
"# You can explore what x is by modifying the expression e.g. x.max(), x.shape\n",
|
||
"x.shape\n",
|
||
"x.min() == -math.pi"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "ece5d5e2",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Task 1.1 - What is `torch.linspace`?\n",
|
||
"\n",
|
||
"From the example above, answer the following questions:\n",
|
||
"\n",
|
||
"1. What does `x = torch.linspace(-math.pi, math.pi, 1000)` do? \n",
|
||
"2. How many values are stored in `x`? \n",
|
||
"3. What are the minimum and maximum values in `x`? "
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "ba5928f9",
|
||
"metadata": {},
|
||
"source": [
|
||
"### 1.2.1 Demo - Using Tensors for linear regression\n",
|
||
"\n",
|
||
"For this example, we fit a **degree 3 polynomial** to the sine function, using a learning rate of 1e-6 and 5000 iterations."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "828fdba4",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# This is a demonstration: You just need to run this cell without editing.\n",
|
||
"\n",
|
||
"# Set learning rate\n",
|
||
"learning_rate = 1e-6\n",
|
||
"\n",
|
||
"# Initialize weights to 0\n",
|
||
"a = torch.tensor(0.)\n",
|
||
"b = torch.tensor(0.)\n",
|
||
"c = torch.tensor(0.)\n",
|
||
"d = torch.tensor(0.)\n",
|
||
"\n",
|
||
"print('iter', 'loss', '\\n----', '----', sep='\\t')\n",
|
||
"for t in range(1, 5001): # 5000 iterations\n",
|
||
" # Forward pass: compute predicted y\n",
|
||
" y_pred = a + b * x + c * x**2 + d * x**3\n",
|
||
"\n",
|
||
" # Compute MSE loss\n",
|
||
" loss = torch.mean(torch.square(y_pred - y_true))\n",
|
||
" if t % 1000 == 0:\n",
|
||
" print(t, loss.item(), sep='\\t')\n",
|
||
"\n",
|
||
" # Backpropagation\n",
|
||
" grad_y_pred = 2.0 * (y_pred - y_true) / y_pred.shape[0]\n",
|
||
" \n",
|
||
" # Compute gradients of a, b, c, d with respect to loss\n",
|
||
" grad_a = grad_y_pred.sum()\n",
|
||
" grad_b = (grad_y_pred * x).sum()\n",
|
||
" grad_c = (grad_y_pred * x ** 2).sum()\n",
|
||
" grad_d = (grad_y_pred * x ** 3).sum()\n",
|
||
"\n",
|
||
" # Update weights using gradient descent\n",
|
||
" a -= learning_rate * grad_a\n",
|
||
" b -= learning_rate * grad_b\n",
|
||
" c -= learning_rate * grad_c\n",
|
||
" d -= learning_rate * grad_d\n",
|
||
"\n",
|
||
"# print fitted polynomial\n",
|
||
"equation = f'{a:.5f} + {b:.5f} x + {c:.5f} x^2 + {d:.5f} x^3'\n",
|
||
"\n",
|
||
"y_pred = a + b * x + c * x**2 + d * x**3\n",
|
||
"plt.plot(x, y_true, linestyle='solid', label='sin(x)')\n",
|
||
"plt.plot(x, y_pred, linestyle='dashed', label=f'{equation}')\n",
|
||
"plt.axis('equal')\n",
|
||
"plt.title('3rd degree poly fitted to sine (MSE loss)')\n",
|
||
"plt.legend()\n",
|
||
"plt.show()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "76952906",
|
||
"metadata": {},
|
||
"source": [
|
||
"### 1.2.2 Demo - Using autograd to automatically compute gradients\n",
|
||
"\n",
|
||
"In the previous example, we explicitly computed the gradient for Mean Squared Error (MSE): \n",
|
||
"`grad_y_pred = 2.0 * (y_pred - y_true) / y_pred.shape[0]`\n",
|
||
"\n",
|
||
"In the next example, we will use PyTorch's autograd functionality to help us compute the gradient for **Mean Absolute Error (MAE)**. \n",
|
||
"In order to compute the gradients, we will use the `.backward()` method of *PyTorch* tensors.\n",
|
||
"\n",
|
||
"Once again, we fit a **degree 3 polynomial** to the sine function, using a learning rate of `1e-6` and `5000` iterations. \n",
|
||
"This time, we will use MAE instead of MSE."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "d2861c55",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# This is a demonstration: You just need to run this cell without editing.\n",
|
||
"\n",
|
||
"# Set learning rate\n",
|
||
"learning_rate = 1e-6\n",
|
||
"\n",
|
||
"# Initialize weights to 0\n",
|
||
"a = torch.tensor(0., requires_grad=True)\n",
|
||
"b = torch.tensor(0., requires_grad=True)\n",
|
||
"c = torch.tensor(0., requires_grad=True)\n",
|
||
"d = torch.tensor(0., requires_grad=True)\n",
|
||
"\n",
|
||
"print('iter', 'loss', '\\n----', '----', sep='\\t')\n",
|
||
"for t in range(1, 5001):\n",
|
||
" # Forward pass: compute predicted y\n",
|
||
" y_pred = a + b * x + c * x ** 2 + d * x ** 3\n",
|
||
" if t == 1: print(y_pred.shape, y_true.shape)\n",
|
||
"\n",
|
||
" # Compute MAE loss\n",
|
||
" if t % 1000 == 0:\n",
|
||
" print(t, loss.item(), sep='\\t')\n",
|
||
"\n",
|
||
" # Automatically compute gradients\n",
|
||
" loss.backward()\n",
|
||
"\n",
|
||
" # Update weights using gradient descent\n",
|
||
" with torch.no_grad():\n",
|
||
" a -= learning_rate * a.grad\n",
|
||
" b -= learning_rate * b.grad\n",
|
||
" c -= learning_rate * c.grad\n",
|
||
" d -= learning_rate * d.grad\n",
|
||
" a.grad.zero_() # reset gradients !important\n",
|
||
" b.grad.zero_() # reset gradients !important\n",
|
||
" c.grad.zero_() # reset gradients !important\n",
|
||
" d.grad.zero_() # reset gradients !important\n",
|
||
" # What happens if you don't reset the gradients?\n",
|
||
"\n",
|
||
"# print fitted polynomial\n",
|
||
"equation = f'{a:.5f} + {b:.5f} x + {c:.5f} x^2 + {d:.5f} x^3'\n",
|
||
"\n",
|
||
"y_pred = a + b * x + c * x ** 2 + d * x ** 3\n",
|
||
"plt.plot(x, y_true, linestyle='solid', label='sin(x)')\n",
|
||
"plt.plot(x, y_pred.detach().numpy(), linestyle='dashed', label=f'{equation}')\n",
|
||
"plt.axis('equal')\n",
|
||
"plt.title('3rd degree poly fitted to sine (MAE loss)')\n",
|
||
"plt.legend()\n",
|
||
"plt.show()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "ca266605",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Task 1.2 - Polyfit model\n",
|
||
"\n",
|
||
"We have demonstrated how to fit a degree-3 polynomial to a set of `x` and `y` points (following the sine curve), using two different types of loss functions (MSE and MAE). \n",
|
||
"\n",
|
||
"Now, your task is to write a function `polyfit` that takes in some arbitrary set of points. You are only allowed to use **ONE** loop for the backpropagation and weights update. You are **NOT** allowed to use a loop to raise the features to their respective powers.\n",
|
||
"1. `x`, corresponding x-values, \n",
|
||
"2. `y`, corresponding true y-values, \n",
|
||
"3. `loss_fn` to compute the loss, given the true `y` and predicted `y`, \n",
|
||
"4. `n` representing the $n$-degree polynomial, and \n",
|
||
"5. `lr` learning rate, and \n",
|
||
"6. `n_iter` for the number of times to iterate. \n",
|
||
"\n",
|
||
"Return the 1D tensor containing the coefficients of the $n$-degree polynomial , after fitting the model. \n",
|
||
"The coefficients should be arranged in ascending powers of $x$.\n",
|
||
"\n",
|
||
"For example,\n",
|
||
"```\n",
|
||
">>> y = torch.sine(x)\n",
|
||
">>> mse = lambda y_true, y_pred: torch.mean(torch.square(y_pred - y_true))\n",
|
||
">>> mae = lambda y_true, y_pred: torch.mean(torch.abs(y_pred - y_true))\n",
|
||
"\n",
|
||
">>> polyfit(x, y, mse, 3, 1e-3, 5000)\n",
|
||
"tensor([-4.2270e-09, 8.5167e-01, 1.2131e-08, -9.2587e-02], requires_grad=True))\n",
|
||
"\n",
|
||
">>> polyfit(x, y, mae, 3, 1e-3, 5000)\n",
|
||
"tensor([-9.6776e-07, 8.7905e-01, -2.4784e-06, -9.8377e-02], requires_grad=True))\n",
|
||
"```\n",
|
||
"\n",
|
||
"*Note: For this regression problem, initialize your weights to 0.0.*"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "c1f9a796",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def polyfit(x, y, loss_fn, n, lr, n_iter):\n",
|
||
" \"\"\"\n",
|
||
" Parameters\n",
|
||
" ----------\n",
|
||
" x : A tensor of shape (1, n)\n",
|
||
" y : A tensor of shape (1, n)\n",
|
||
" loss_fn : Function to measure loss\n",
|
||
" n : The nth-degree polynomial\n",
|
||
" lr : Learning rate\n",
|
||
" n_iter : The number of iterations of gradient descent\n",
|
||
" \n",
|
||
" Returns\n",
|
||
" -------\n",
|
||
" Near-optimal coefficients of the nth-degree polynomial as a tensor of shape (1, n+1) after `n_iter` epochs.\n",
|
||
" \"\"\"\n",
|
||
" weights = torch.zeros(n+1, requires_grad=True)\n",
|
||
" pows = torch.arange(n+1).float()\n",
|
||
" X = x.unsqueeze(1) ** pows\n",
|
||
" for _ in range(n_iter):\n",
|
||
" # Forward Pass\n",
|
||
" y_pred = torch.matmul(X, weights)\n",
|
||
" # Compute Loss\n",
|
||
" loss = loss_fn(y, y_pred)\n",
|
||
" # Compute Gradients\n",
|
||
" loss.backward()\n",
|
||
" # Update Weights\n",
|
||
" with torch.no_grad():\n",
|
||
" weights -= lr * weights.grad\n",
|
||
" weights.grad.zero_()\n",
|
||
" return weights\n",
|
||
"\n",
|
||
"x = torch.linspace(-math.pi, math.pi, 1000)\n",
|
||
"\n",
|
||
"# Original true values\n",
|
||
"y = torch.sin(x)\n",
|
||
"plt.plot(x, y, linestyle='solid', label='sin(x)')\n",
|
||
"\n",
|
||
"# MSE\n",
|
||
"mse = lambda y_true, y_pred: torch.mean(torch.square(y_pred - y_true))\n",
|
||
"a, b, c, d = polyfit(x, y, mse, 3, 1e-3, 5000)\n",
|
||
"y_pred_mse = a + b * x + c * x ** 2 + d * x ** 3\n",
|
||
"plt.plot(x, y_pred_mse.detach().numpy(), linestyle='dashed', label=f'mse')\n",
|
||
"\n",
|
||
"# MAE\n",
|
||
"mae = lambda y_true, y_pred: torch.mean(torch.abs(y_pred - y_true))\n",
|
||
"a, b, c, d = polyfit(x, y, mae, 3, 1e-3, 5000)\n",
|
||
"y_pred_mae = a + b * x + c * x ** 2 + d * x ** 3\n",
|
||
"plt.plot(x, y_pred_mae.detach().numpy(), linestyle='dashed', label=f'mae')\n",
|
||
"\n",
|
||
"plt.axis('equal')\n",
|
||
"plt.title('Comparison of different fits')\n",
|
||
"plt.legend()\n",
|
||
"plt.show()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "e60cfabe",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"x = torch.linspace(-math.pi, math.pi, 10)\n",
|
||
"y = torch.sin(x)\n",
|
||
"\n",
|
||
"def mse(y_true, y_pred):\n",
|
||
" assert y_true.shape == y_pred.shape, f\"Your ground truth and predicted values need to have the same shape {y_true.shape} vs {y_pred.shape}\"\n",
|
||
" return torch.mean(torch.square(y_pred - y_true))\n",
|
||
"def mae(y_true, y_pred):\n",
|
||
" assert y_true.shape == y_pred.shape, f\"Your ground truth and predicted values need to have the same shape {y_true.shape} vs {y_pred.shape}\"\n",
|
||
" return torch.mean(torch.abs(y_pred - y_true))\n",
|
||
"\n",
|
||
"test1 = polyfit(x, x, mse, 1, 1e-1, 100).tolist()\n",
|
||
"test2 = polyfit(x, x**2, mse, 2, 1e-2, 2000).tolist()\n",
|
||
"test3 = polyfit(x, y, mse, 3, 1e-3, 5000).tolist()\n",
|
||
"test4 = polyfit(x, y, mae, 3, 1e-3, 5000).tolist()\n",
|
||
"\n",
|
||
"assert allclose(test1, [0.0, 1.0], atol=1e-6)\n",
|
||
"assert allclose(test2, [0.0, 0.0, 1.0], atol=1e-5)\n",
|
||
"assert allclose(test3, [0.0, 0.81909, 0.0, -0.08469], atol=1e-3)\n",
|
||
"assert allclose(test4, [0.0, 0.83506, 0.0, -0.08974], atol=1e-3)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "825a4e0b",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Task 1.3 - Observations on different model configurations\n",
|
||
"\n",
|
||
"Run `polyfit` on these model configurations and explain your observations for <b>ALL</b> four configurations. Refer to the learning rate and degree of the polynomial when making observations regarding how well the model converges if at all.\n",
|
||
"\n",
|
||
"1. `polyfit(x, y, mse, 3, 1e-6, 5000)`\n",
|
||
"2. `polyfit(x, y, mse, 3, 1e6, 5000)`\n",
|
||
"3. `polyfit(x, y, mse, 1, 1e-3, 5000)`\n",
|
||
"4. `polyfit(x, y, mse, 6, 1e-3, 5000)`"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "4c2554da",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# You may use this cell to run your observations\n",
|
||
"x = torch.linspace(-math.pi, math.pi, 1000)\n",
|
||
"# Original true values\n",
|
||
"y = torch.sin(x)\n",
|
||
"plt.plot(x, y, linestyle='solid', label='sin(x)')\n",
|
||
"print(polyfit(x, y, mse, 6, 1e-5, 5000))\n",
|
||
"\n",
|
||
"# mse = lambda y_true, y_pred: torch.mean(torch.square(y_pred - y_true))\n",
|
||
"\n",
|
||
"# a, b, c, d = polyfit(x, y, mse, 3, 1e-3, 5000)\n",
|
||
"# y_pred_mse = a + b * x + c * x ** 2 + d * x ** 3\n",
|
||
"# plt.plot(x, y_pred_mse.detach().numpy(), linestyle='dashed', label=f'3,1e-3')\n",
|
||
"\n",
|
||
"# a, b, c, d = polyfit(x, y, mse, 3, 1e-6, 5000)\n",
|
||
"# y_pred_mse = a + b * x + c * x ** 2 + d * x ** 3\n",
|
||
"# plt.plot(x, y_pred_mse.detach().numpy(), linestyle='dashed', label=f'3, 1e-6')\n",
|
||
"\n",
|
||
"# # a, b, c, d =polyfit(x, y, mse, 3, 1e6, 5000)\n",
|
||
"# # y_pred_mse = a + b * x + c * x ** 2 + d * x ** 3\n",
|
||
"# # plt.plot(x, y_pred_mse.detach().numpy(), linestyle='dashed', label=f'3, 1e6')\n",
|
||
"\n",
|
||
"# a, b = polyfit(x, y, mse, 1, 1e-3, 5000)\n",
|
||
"# y_pred_mse = a + b * x\n",
|
||
"# plt.plot(x, y_pred_mse.detach().numpy(), linestyle='dashed', label=f'1, 1e-3')\n",
|
||
"\n",
|
||
"# a,b,c,d,e,f,g = polyfit(x, y, mse, 6, 1e-5, 5000)\n",
|
||
"# y_pred_mse = a + b * x + c * x ** 2 + d * x ** 3 + e * x ** 4 + f * x ** 5 + g * x ** 6\n",
|
||
"# plt.plot(x, y_pred_mse.detach().numpy(), linestyle='dashed', label=f'6, 1e-5')\n",
|
||
"\n",
|
||
"\n",
|
||
"# plt.axis('equal')\n",
|
||
"# plt.title('Comparison of different fits')\n",
|
||
"# plt.legend()\n",
|
||
"# plt.show()\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "4e2fb6c5",
|
||
"metadata": {},
|
||
"source": [
|
||
"---\n",
|
||
"# 2 Computing gradients for arbitrary graphs\n",
|
||
"\n",
|
||
"Recall the neural network for `y = |x-1|` from the lecture. We are going to implement forward propagation as mentioned during lecture. This forward pass is the act of feeding data into our input layer, which will then be passed to and processed by the hidden layers according to the different activation functions specific to each perceptron. After passing through all the hidden layers, our neural network will generate an output, $\\hat{y}$, that is hopefully meaningful to our problem at hand.\n",
|
||
"\n",
|
||
"<img src=\"imgs/img_toy_nn.jpg\" width=\"800\">\n",
|
||
"\n",
|
||
"### Task 2.1 - Forward pass\n",
|
||
"\n",
|
||
"In this task, you are required implement the function `forward_pass` that takes in 4 arguments: \n",
|
||
"1. `x`, the input values (not including bias)\n",
|
||
"2. `w0`, (2x2) weights of the hidden layer\n",
|
||
"3. `w1`, (3x1) weights of the output layer\n",
|
||
"4. `activation_fn`, the activation function of the hidden layer.\n",
|
||
"\n",
|
||
"*Note: As in the lecture, there will be no activation for the output layer (i.e. the activation function of the output layer is the identity function `lambda x: x`)*"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "4d97ca45",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# This is the same as unsqueeze(1)\n",
|
||
"x = torch.linspace(-10, 10, 1000).reshape(-1, 1)\n",
|
||
"y = torch.abs(x-1)\n",
|
||
"\n",
|
||
"def forward_pass(x, w0, w1, activation_fn):\n",
|
||
" n = x.shape[0]\n",
|
||
" x = torch.cat((torch.ones(n, 1), x), 1)\n",
|
||
" # Perform a forward pass\n",
|
||
" a = torch.matmul(x, w0)\n",
|
||
" h = activation_fn(a)\n",
|
||
" h = torch.cat((torch.ones(n, 1), h), 1)\n",
|
||
" y_pred = torch.matmul(h, w1)\n",
|
||
" return y_pred\n",
|
||
"\n",
|
||
"# Exact weights\n",
|
||
"w0 = torch.tensor([[-1., 1.], [1., -1.]], requires_grad=True)\n",
|
||
"w1 = torch.tensor([[0.], [1.], [1.]], requires_grad=True)\n",
|
||
"\n",
|
||
"# Performing a forward pass on exact solution for weights will give us the correct y values\n",
|
||
"x_sample = torch.linspace(-2, 2, 5).reshape(-1, 1)\n",
|
||
"forward_pass(x_sample, w0, w1, torch.relu) # tensor([[3.], [2.], [1.], [0.], [1.]])"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "8a3184ab",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"w0 = torch.tensor([[-1., 1.], [1., -1.]], requires_grad=True)\n",
|
||
"w1 = torch.tensor([[0.], [1.], [1.]], requires_grad=True)\n",
|
||
"\n",
|
||
"output0 = forward_pass(torch.linspace(0,1,50).reshape(-1, 1), w0, w1, torch.relu)\n",
|
||
"x_sample = torch.linspace(-2, 2, 5).reshape(-1, 1)\n",
|
||
"test1 = forward_pass(x_sample, w0, w1, torch.relu).tolist()\n",
|
||
"output1 = [[3.], [2.], [1.], [0.], [1.]]\n",
|
||
"\n",
|
||
"assert output0.shape == torch.Size([50, 1])\n",
|
||
"assert test1 == output1"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "9c8033e1",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Task 2.2 - Backward propagation\n",
|
||
"\n",
|
||
"In this task, you will start with random weights for `w0` and `w1`, and iteratively perform forward passes and backward propagation multiple times to converge on a solution.\n",
|
||
"\n",
|
||
"Submit your values of `w0`, `w1`, and `loss` value onto Coursemology. Your `loss` value should be less than 1."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "d79c3395",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# torch.manual_seed(1) # Set seed to some fixed value\n",
|
||
"\n",
|
||
"w0 = torch.randn(2, 2, requires_grad=True)\n",
|
||
"w1 = torch.randn(3, 1, requires_grad=True)\n",
|
||
"\n",
|
||
"learning_rate = 1e-4\n",
|
||
"print('iter', 'loss', '\\n----', '----', sep='\\t')\n",
|
||
"for t in range(1, 100001):\n",
|
||
" # Forward pass: compute predicted y\n",
|
||
" y_pred = forward_pass(x, w0, w1, torch.relu)\n",
|
||
"\n",
|
||
" loss = torch.mean(torch.square(y - y_pred))\n",
|
||
" loss.backward()\n",
|
||
"\n",
|
||
" if t % 1000 == 0:\n",
|
||
" print(t, loss.item(), sep='\\t')\n",
|
||
"\n",
|
||
" with torch.no_grad():\n",
|
||
" w0 -= learning_rate * w0.grad\n",
|
||
" w1 -= learning_rate * w1.grad\n",
|
||
" w0.grad.zero_() # reset gradients !important\n",
|
||
" w1.grad.zero_()\n",
|
||
"\n",
|
||
"print(\"--- w0 ---\", w0, sep='\\n')\n",
|
||
"print(\"--- w1 ---\", w1, sep='\\n')\n",
|
||
"print(\"--- w1 ---\", loss, sep='\\n')\n",
|
||
"y_pred = forward_pass(x, w0, w1, torch.relu)\n",
|
||
"plt.plot(x, y, linestyle='solid', label='|x-1|')\n",
|
||
"plt.plot(x, y_pred.detach().numpy(), linestyle='dashed', label='perceptron')\n",
|
||
"plt.axis('equal')\n",
|
||
"plt.title('Fit NN on abs function')\n",
|
||
"plt.legend()\n",
|
||
"plt.show()\n",
|
||
"\n",
|
||
"# Task 5: Submit the values of `w0`, `w1`, and `loss` values after fitting\n",
|
||
"# Note: An acceptable loss value should be less than 1.0\n",
|
||
"# You should try adjusting the random seed, learning rate, or \n",
|
||
"# number of iterations to improve your model.\n",
|
||
"\n",
|
||
"w0 = [[0.0, 0.0], [0.0, 0.0]] # to be computed\n",
|
||
"w1 = [[0.0], [0.0], [0.0]] # to be computed\n",
|
||
"loss = 0.0 # to be computed"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "c4bfdc7d",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"w0 = torch.tensor(w0)\n",
|
||
"w1 = torch.tensor(w1)\n",
|
||
"\n",
|
||
"x = torch.linspace(-10, 10, 1000).reshape(-1, 1)\n",
|
||
"y = torch.abs(x-1)\n",
|
||
"\n",
|
||
"#IMPORTANT: Your forward pass above have to be correctly implemented\n",
|
||
"y_pred = forward_pass(x, w0, w1, torch.relu)\n",
|
||
"computed_mse_loss = torch.mean(torch.square(y - y_pred)).item()\n",
|
||
"\n",
|
||
"assert loss < 1\n",
|
||
"assert isclose(computed_mse_loss, loss, atol=1e-5, rtol=1e-2)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "413cd863",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Task 2.3 - Different random seeds\n",
|
||
"\n",
|
||
"Try to fit the model on different initial random weight values by adjusting the random seed. \n",
|
||
"<br/>\n",
|
||
"What is the impact of a random seed? How should we compare different neural network models given your observation to ensure fairness?\n",
|
||
"\n",
|
||
"Submit your observations and conclusion on Coursemology.\n",
|
||
"\n",
|
||
"---"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "c98f725f",
|
||
"metadata": {},
|
||
"source": [
|
||
"# 3 Neural Networks (using PyTorch layers)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "8c0f772a",
|
||
"metadata": {},
|
||
"source": [
|
||
"### 3.1.1 Demo - nn.Module\n",
|
||
"\n",
|
||
"The `nn.Module` class is an interface that houses two main methods: `__init__`, where we instantiate our layers and activation functions, and `forward`, that performs the forward pass.\n",
|
||
"\n",
|
||
"To create our own neural network, we will inherit from the nn.Module parent class and call `super().__init__()` from within our constructor to create our module. Next, we will implement the `forward` function within our class so we can call it from our module to perform the forward pass. \n",
|
||
"\n",
|
||
"In this example, we define a custom LinearLayer class that inherits from nn.Module. The __init__ method initializes the weight and bias parameters as nn.Parameter objects, which are special types of tensors that require gradients to be computed during the backward pass.\n",
|
||
"\n",
|
||
"The forward method defines the forward pass of the linear layer. It takes a tensor x as input and computes the matrix multiplication of x and self.weight using the torch.matmul function, and then adds self.bias.\n",
|
||
"\n",
|
||
"We also created our own activation function which uses `torch.sin` by inheriting from nn.Module.\n",
|
||
"\n",
|
||
"Finally, in our Model, we can combine our own LinearLayers together with our SineActivation to process our input data using the forward function. In later sections, you will see how we can train our models."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "36fd1dd3",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Define a linear layer using nn.Module\n",
|
||
"class LinearLayer(nn.Module):\n",
|
||
" def __init__(self, input_dim, output_dim):\n",
|
||
" super().__init__()\n",
|
||
" self.weight = nn.Parameter(torch.randn(input_dim, output_dim))\n",
|
||
" self.bias = nn.Parameter(torch.randn(output_dim))\n",
|
||
"\n",
|
||
" def forward(self, x):\n",
|
||
" return torch.matmul(x, self.weight) + self.bias\n",
|
||
" \n",
|
||
"class SineActivation(nn.Module):\n",
|
||
" def __init__(self):\n",
|
||
" super().__init__()\n",
|
||
"\n",
|
||
" def forward(self, x):\n",
|
||
" return torch.sin(x)\n",
|
||
"\n",
|
||
"class Model(nn.Module):\n",
|
||
" def __init__(self, input_size, hidden_size, num_classes):\n",
|
||
" super(Model, self).__init__()\n",
|
||
" self.l1 = LinearLayer(input_size, hidden_size)\n",
|
||
" self.act = SineActivation()\n",
|
||
" self.l2 = LinearLayer(hidden_size, num_classes)\n",
|
||
"\n",
|
||
" def forward(self, x):\n",
|
||
" x = self.l1(x)\n",
|
||
" x = self.act(x)\n",
|
||
" x = self.l2(x)\n",
|
||
" return x\n",
|
||
" \n",
|
||
"input_size = 1\n",
|
||
"hidden_size = 1\n",
|
||
"num_classes = 1\n",
|
||
"\n",
|
||
"model = Model(input_size, hidden_size, num_classes)\n",
|
||
"\n",
|
||
"x = torch.tensor([[1.0]])\n",
|
||
"output = model(x)\n",
|
||
"print(\"Original value: \", x)\n",
|
||
"print(\"Value after being processed by Model: \", output)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "c6adbc1d",
|
||
"metadata": {},
|
||
"source": [
|
||
"_Extra: We can also define a `backward` function to perform backpropagation which will not be required in this problem set._\n",
|
||
"\n",
|
||
"In this trivial example, the Squared module takes an input x and returns x**2. The backward method calculates the gradient of the output with respect to the input, based on the gradients of the output grad_output.\n",
|
||
"\n",
|
||
"We can define the backward function for functions that are not fully differentiable that we still wish to use in our neural network."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "3e1044e1",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"class Squared(nn.Module):\n",
|
||
" def forward(self, x):\n",
|
||
" self.x = x\n",
|
||
" return x**2\n",
|
||
"\n",
|
||
" def backward(self, grad_output):\n",
|
||
" grad_input = 2 * self.x * grad_output\n",
|
||
" return grad_input"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "023c28dd",
|
||
"metadata": {},
|
||
"source": [
|
||
"### 3.1.2 Demo - Activation Functions\n",
|
||
"\n",
|
||
"Pytorch also provides built-in activation functions. To help you understand more about activation functions, we have included some examples of activation functions introduced in the lecture, namely Sigmoid, Tanh, and ReLu. \n",
|
||
"\n",
|
||
"<img src=\"imgs/img_activation_fns.png\" width=\"200\"> \n",
|
||
"\n",
|
||
"Activation functions introduces non-linearity into the output of a neuron, allowing the NN to learn non-linear functions. Without non-linearity, our entire network will effectively become a linear model with only one layer, preventing us from modelling complex representations based on our inputs.\n",
|
||
"\n",
|
||
"Sigmoid, Tanh and ReLU are three examples of such activation functions introduced during lecture and the code block below shows how they map input to output values.\n",
|
||
"\n",
|
||
"The choice of activation function for the hidden layers and the output layer depends on the problem you're trying to solve.\n",
|
||
"\n",
|
||
"#### For the hidden layers, there are several commonly used activation functions:\n",
|
||
"\n",
|
||
"ReLU (Rectified Linear Unit): ReLU is a popular activation function that is widely used in deep learning models. It maps non-positive inputs to 0 and positive inputs to their original value. It is mainly used in hidden layers because it is fast to compute, has sparse activations, and helps to mitigate the vanishing gradient problem, where the gradients can become very small and cause the model to learn slowly.\n",
|
||
"\n",
|
||
"Tanh (Hyperbolic Tangent): Tanh is a activation function that maps input values to the range [-1, 1]. It is similar to Sigmoid, but instead of producing output values in the range [0, 1], it produces output values in the range [-1, 1]. Tanh is useful for solving problems where you want the activations to be centered around zero, such as in recurrent neural networks.\n",
|
||
"\n",
|
||
"Sigmoid: Sigmoid maps its input values to the range [0, 1]. It is less commonly used in hidden layers because it has a relatively slow convergence rate and can introduce saturation, where the output values become very small or very large, which can make it difficult for the gradients to flow through the model.\n",
|
||
"\n",
|
||
"#### For the output layer, the choice of activation function depends on the problem you're trying to solve. Here are some common choices:\n",
|
||
"\n",
|
||
"Sigmoid: The Sigmoid activation function maps input values to the range [0, 1]. It is commonly used for binary classification problems where the network produces a probability of one of two classes. In this case, the Sigmoid activation maps the output to a probability distribution over the two classes.\n",
|
||
"\n",
|
||
"Softmax: The Softmax activation function is a generalization of the Sigmoid activation that maps input values to a probability distribution over multiple classes. It is commonly used for multiclass classification problems. The Softmax activation function is used to convert the raw scores produced by the network into a probability distribution over the classes.\n",
|
||
"\n",
|
||
"Linear: For regression problems, the linear activation function is often used because it just maps the input values to the output values without any change.\n",
|
||
"\n",
|
||
"In summary, ReLU is a common choice for hidden layers, and the choice of activation function for the output layer depends on the problem you're trying to solve (binary classification, multiclass classification, or regression).\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"_Extra (Vanishing Gradient Problem):_\n",
|
||
"\n",
|
||
"_Below is an image of the derivatives of the Sigmoid, Tanh and ReLU function. We can see that the derivatives for both Sigmoid and Tanh tend to zero when the inputs are largely positive or negative, while derivative for ReLU is zero only when the inputs are non-positive. In our neural network, gradients are calculated through backpropagation using chain rule and the derivatives of each layer are multiplied down the network. The gradient is more likely to decrease exponentially as we propagate down to the initial layers if we use Sigmoid and Tanh as compared to ReLU, leading to the vanishing gradient problem._\n",
|
||
"\n",
|
||
"<img src=\"imgs/img_activation_fns_der.png\" width=\"500\"> \n",
|
||
"\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "1ea9f8e6",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"x_sample = torch.linspace(-2, 2, 100)\n",
|
||
"sigmoid_output = nn.Sigmoid()(x_sample).detach().numpy()\n",
|
||
"tanh_output = nn.Tanh()(x_sample).detach().numpy()\n",
|
||
"relu_output = nn.ReLU()(x_sample).detach().numpy()\n",
|
||
"\n",
|
||
"f = plt.figure()\n",
|
||
"f.set_figwidth(6)\n",
|
||
"f.set_figheight(6)\n",
|
||
"plt.xlabel('x - axis')\n",
|
||
"plt.ylabel('y - axis')\n",
|
||
"plt.title(\"Input: 100 x-values between -1 to 1 \\n\\n Output: Corresponding y-values after passed through each activation function\\n\", fontsize=16)\n",
|
||
"plt.axvline(x=0, color='r', linestyle='dashed')\n",
|
||
"plt.axhline(y=0, color='r', linestyle='dashed')\n",
|
||
"plt.plot(x_sample, sigmoid_output)\n",
|
||
"plt.plot(x_sample, tanh_output)\n",
|
||
"plt.plot(x_sample, relu_output)\n",
|
||
"plt.legend([\"\",\"\",\"Sigmoid Output\", \"Tanh Output\", \"ReLU Output\"])\n",
|
||
"plt.show()"
|
||
]
|
||
},
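{
"cell_type": "code",
"execution_count": null,
"id": "activation-derivative-sketch",
"metadata": {},
"outputs": [],
"source": [
"# A minimal autograd sketch of the derivatives discussed in the vanishing-gradient\n",
"# note above; the input range and sample count are arbitrary choices.\n",
"xs = torch.linspace(-5, 5, 100, requires_grad=True)\n",
"\n",
"for fn, name in [(torch.sigmoid, 'Sigmoid'), (torch.tanh, 'Tanh'), (torch.relu, 'ReLU')]:\n",
"    fn(xs).sum().backward()  # fills xs.grad with the elementwise derivative\n",
"    plt.plot(xs.detach().numpy(), xs.grad.numpy(), label=f'{name} derivative')\n",
"    xs.grad.zero_()          # reset before the next activation\n",
"\n",
"plt.legend()\n",
"plt.title('Derivatives of common activation functions')\n",
"plt.show()"
]
},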
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "971aac32",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Task 3.1 - Forward pass\n",
|
||
"\n",
|
||
"In part 2, you manually created the Linear layers and explicitly specified weights and biases for the forward pass to connect every input neuron to every output neuron which will be extremely tedious for larger networks. \n",
|
||
"\n",
|
||
"In this task, you will be using `nn.Linear(in_dimensions, out_dimensions)` provided by pytorch which abstracts all these details away. `nn.Linear` represents a fully connected layer with bias automatically included. We can also choose to remove the bias column by simply calling `nn.Linear(in_dimensions, out_dimensions, bias=False)` instead.\n",
|
||
"\n",
|
||
"We inherit from PyTorch's `nn.Module` class to build the model from the previous task `y = |x-1|` from the lecture. \n",
|
||
"\n",
|
||
"<img src=\"imgs/img_toy_nn.jpg\" width=\"400\"> \n",
|
||
"\n",
|
||
"Pytorch is widely used in machine learning due to the ease of being able to combine many different types of layers and activation functions to create neural networks. This task should allow you to appreciate how easily we can build neural networks using PyTorch. \n",
|
||
"\n",
|
||
"The model has been built for you in `__init__`. You need to implement the `forward` method, making use of the layers `self.l1`, `self.l2`, and the activation function `self.relu`. You need to combine the linear layers AND the activation function in the forward pass function!\n",
|
||
"\n",
|
||
"_Extra: PyTorch has many other layers implemented for various model architectures. \n",
|
||
"You can read more in the glossary as well as in the docs: https://pytorch.org/docs/stable/nn.html \n",
|
||
"For now, we will only be using fully connected `nn.Linear` layers._"
|
||
]
|
||
},
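{
"cell_type": "code",
"execution_count": null,
"id": "nn-linear-shape-sketch",
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch of the parameters nn.Linear creates; the sizes here are the\n",
"# ones used in the toy network above.\n",
"layer = nn.Linear(1, 2)                # weight shape (2, 1), bias shape (2,)\n",
"no_bias = nn.Linear(2, 1, bias=False)  # no bias parameter at all\n",
"\n",
"print(layer.weight.shape, layer.bias.shape)\n",
"print([name for name, _ in no_bias.named_parameters()])  # only 'weight'"
]
},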
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "81145ccc",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"class MyFirstNeuralNet(nn.Module):\n",
|
||
" def __init__(self): # set the arguments you'd need\n",
|
||
" super().__init__()\n",
|
||
" self.l1 = nn.Linear(1, 2) # bias included by default\n",
|
||
" self.l2 = nn.Linear(2, 1) # bias included by default\n",
|
||
" self.relu = nn.ReLU()\n",
|
||
" \n",
|
||
" # Task 3.1: Forward pass\n",
|
||
" def forward(self, x):\n",
|
||
" '''\n",
|
||
" Forward pass to process input through two linear layers and ReLU activation function.\n",
|
||
"\n",
|
||
" Parameters\n",
|
||
" ----------\n",
|
||
" x : A tensor of of shape (n, 1) where n is the number of training instances\n",
|
||
"\n",
|
||
" Returns\n",
|
||
" -------\n",
|
||
" Tensor of shape (n, 1)\n",
|
||
" '''\n",
|
||
" x = self.l1(x)\n",
|
||
" x = self.relu(x)\n",
|
||
" x = self.l2(x)\n",
|
||
" return x\n",
|
||
" \n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "1040ee57",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"x_sample = torch.linspace(-2, 2, 5).reshape(-1, 1)\n",
|
||
"\n",
|
||
"model = MyFirstNeuralNet()\n",
|
||
"\n",
|
||
"state_dict = OrderedDict([\n",
|
||
" ('l1.weight', torch.tensor([[1.],[-1.]])),\n",
|
||
" ('l1.bias', torch.tensor([-1., 1.])),\n",
|
||
" ('l2.weight', torch.tensor([[1., 1.]])),\n",
|
||
" ('l2.bias', torch.tensor([0.]))\n",
|
||
"])\n",
|
||
"\n",
|
||
"model.load_state_dict(state_dict)\n",
|
||
"\n",
|
||
"student1 = model.forward(x_sample).detach().numpy()\n",
|
||
"output1 = [[3.], [2.], [1.], [0.], [1.]]\n",
|
||
"\n",
|
||
"assert allclose(student1, output1, atol=1e-5)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "b897a701",
|
||
"metadata": {},
|
||
"source": [
|
||
"### 3.1.3 Demo - Optimisers in PyTorch\n",
|
||
"\n",
|
||
"Optimizers in PyTorch are used to update the parameters of a model during training. They do this by computing the gradients of the model's parameters with respect to the loss function, and then using these gradients to update the parameters in a way that minimizes the loss. \n",
|
||
"\n",
|
||
"In the following code example, we will simply demo a few basic functionalities of optimisers. Only in 3.1.4 Demo will you see an actual optimizer at work to train a Neural Net.\n",
|
||
"\n",
|
||
"We first create a tensor x with requires_grad set to True. Next, we define our loss function to be the simple equation y = x ** 2 + 2 * x. Next, we define an optimiser (in this case, Stochastic Gradient Descent, SGD) and pass it our tensor x as a parameter to optimise. After updating the gradient stored in x using `backward()`, we will call the `step()` function to let the optimiser update x. We will then set the gradient of our tensor x back to zero using `zero_grad()`.\n",
|
||
"\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "7c2a8f0e",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"x = torch.tensor([1.0], requires_grad=True)\n",
|
||
"\n",
|
||
"#Loss function\n",
|
||
"y = x ** 2 + 2 * x\n",
|
||
"\n",
|
||
"# Define an optimizer, pass it our tensor x to update\n",
|
||
"optimiser = torch.optim.SGD([x], lr=0.1)\n",
|
||
"\n",
|
||
"# Perform backpropagation\n",
|
||
"y.backward()\n",
|
||
"\n",
|
||
"print(\"Value of x before it is updated by optimiser: \", x)\n",
|
||
"print(\"Gradient stored in x after backpropagation: \", x.grad)\n",
|
||
"\n",
|
||
"# Call the step function on the optimizer to update weight\n",
|
||
"optimiser.step()\n",
|
||
"\n",
|
||
"#Weight update, x = x - lr * x.grad = 1.0 - 0.1 * 4.0 = 0.60\n",
|
||
"print(\"Value of x after it is updated by optimiser: \", x)\n",
|
||
"\n",
|
||
"# Set gradient of weight to zero\n",
|
||
"optimiser.zero_grad()\n",
|
||
"print(\"Gradient stored in x after zero_grad is called: \", x.grad)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "65c27fd8",
|
||
"metadata": {},
|
||
"source": [
|
||
"### 3.1.4 Demo - Training Your First Neural Net\n",
|
||
"\n",
|
||
"Now, let's make use of an optimiser to train our neural network in Task 3.1!\n",
|
||
"\n",
|
||
"Take note, if you make changes to your model (e.g. fix any bugs in your forward pass), then you will have to re-run your previous cell to update the model definition.\n",
|
||
"\n",
|
||
"In the example below, we are applying what we have learnt in the above section about optimisers to train our neural network.\n",
|
||
"\n",
|
||
"We will using `torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0)` as the optimiser. This SGD optimiser will implement stochastic gradient descent for us. As mentioned previously, `optimiser.zero_grad()` will set all the gradients to zero to prevent accumulation of all the previous old gradients we have calculated using backpropagation. `optimiser.step()` causes our optimiser to update the model weights based on the gradients of our parameters.\n",
|
||
"\n",
|
||
"We can see clearly from our example below that we are calling `optimiser.zero_grad()` at the start of the loop so we can clear the gradient from the previous iteration of backpropagation. Then after we compute the loss in the current iteration using our loss function and model predictions, y_pred, we will call `loss.backward()` to let pytorch carry out the backpropagation for us. After backpropagation, gradients for each of our parameters will be computed for us to update our model weights using `optimiser.step()`."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "587d9e4d",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"torch.manual_seed(6) # Set seed to some fixed value\n",
|
||
"\n",
|
||
"epochs = 10000\n",
|
||
"\n",
|
||
"model = MyFirstNeuralNet()\n",
|
||
"# the optimizer controls the learning rate\n",
|
||
"optimiser = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0)\n",
|
||
"loss_fn = nn.MSELoss()\n",
|
||
"\n",
|
||
"x = torch.linspace(-10, 10, 1000).reshape(-1, 1)\n",
|
||
"y = torch.abs(x-1)\n",
|
||
"\n",
|
||
"print('Epoch', 'Loss', '\\n-----', '----', sep='\\t')\n",
|
||
"for i in range(1, epochs+1):\n",
|
||
" # reset gradients to e\n",
|
||
" optimiser.zero_grad()\n",
|
||
" # get predictions\n",
|
||
" y_pred = model(x)\n",
|
||
" # compute loss\n",
|
||
" loss = loss_fn(y_pred, y)\n",
|
||
" # backpropagate\n",
|
||
" loss.backward()\n",
|
||
" # update the model weights\n",
|
||
" optimiser.step()\n",
|
||
"\n",
|
||
" if i % 1000 == 0:\n",
|
||
" print (f\"{i:5d}\", loss.item(), sep='\\t')\n",
|
||
"\n",
|
||
"y_pred = model(x)\n",
|
||
"plt.plot(x, y, linestyle='solid', label='|x-1|')\n",
|
||
"plt.plot(x, y_pred.detach().numpy(), linestyle='dashed', label='perceptron')\n",
|
||
"plt.axis('equal')\n",
|
||
"plt.title('Fit NN on y=|x-1| function')\n",
|
||
"plt.legend()\n",
|
||
"plt.show()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "6f845c8d",
|
||
"metadata": {},
|
||
"source": [
|
||
"### 3.2 Concept - Save and load models\n",
|
||
"\n",
|
||
"Your model weights are stored within the model itself. \n",
|
||
"You may save/load the model weights:\n",
|
||
"```\n",
|
||
"torch.save(model.state_dict(), \"path/to/model_state_dict\")\n",
|
||
"\n",
|
||
"model = MyFirstNeuralNet()\n",
|
||
"model.load_state_dict(torch.load(\"path/to/model_state_dict\"))\n",
|
||
"```\n",
|
||
"\n",
|
||
"Alternatively, you can save/load the entire model using\n",
|
||
"```\n",
|
||
"torch.save(model, \"path/to/model\")\n",
|
||
"\n",
|
||
"model = torch.load(\"path/to/model\")\n",
|
||
"```"
|
||
]
|
||
},
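{
"cell_type": "code",
"execution_count": null,
"id": "save-load-sketch",
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch of saving and reloading the model trained above; the file name\n",
"# 'my_first_nn.pt' is an arbitrary example path.\n",
"torch.save(model.state_dict(), 'my_first_nn.pt')\n",
"\n",
"restored = MyFirstNeuralNet()\n",
"restored.load_state_dict(torch.load('my_first_nn.pt'))\n",
"\n",
"# The restored model should produce the same predictions as the original\n",
"x_check = torch.linspace(-2, 2, 5).reshape(-1, 1)\n",
"print(torch.allclose(model(x_check), restored(x_check)))"
]
},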
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "ac548df6",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Task 3.2 - Model weights\n",
|
||
"\n",
|
||
"For this task, you will print out the trained model's `.state_dict()` and submit this to Coursemology.\n",
|
||
"\n",
|
||
"*Note: An acceptable loss value should be less than 1.0. If your loss is greater than 1, try re-running with a different random initialization, or adjust your model configuration.*"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "481bb0cc",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# To submit this output\n",
|
||
"print(\"--- Submit the OrderedDict below ---\")\n",
|
||
"print(model.state_dict())"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "d6e437cc",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def get_loss(model):\n",
|
||
" model.load_state_dict(state_dict)\n",
|
||
" x = torch.linspace(-10, 10, 1000).reshape(-1, 1)\n",
|
||
" y = torch.abs(x-1)\n",
|
||
" loss_fn = nn.MSELoss()\n",
|
||
" y_pred = model.forward(x)\n",
|
||
" return loss_fn(y_pred, y).item()\n",
|
||
"\n",
|
||
"assert model.load_state_dict(state_dict)\n",
|
||
"assert get_loss(model) < 1"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "9151f959",
|
||
"metadata": {},
|
||
"source": [
|
||
"### 3.3 Concept - Using NN to recognize handwritten digits\n",
|
||
"\n",
|
||
"In the final part of this problem set, we will be building a neural network to classify images to their respective digits. \n",
|
||
"\n",
|
||
"You will build and train a model on the classic **MNIST Handwritten Digits** dataset. Each grayscale image is a $28 \\times 28$ matrix/tensor that looks like so:\n",
|
||
"\n",
|
||
"<img src=\"https://upload.wikimedia.org/wikipedia/commons/2/27/MnistExamples.png\" width=\"500\" />\n",
|
||
"\n",
|
||
"MNIST is a classification problem and the task is to take in an input image and classify them into one of ten buckets: the digits from $0$ to $9$. "
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "eda667dc",
|
||
"metadata": {},
|
||
"source": [
|
||
"### 3.3 Demo - Loading an external dataset\n",
|
||
"\n",
|
||
"The cell below imports the MNIST dataset, which is already pre-split into train and test sets. \n",
|
||
"\n",
|
||
"The download takes approximately 63MB of space."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "2ce62735",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# DO NOT REMOVE THIS CELL – THIS DOWNLOADS THE MNIST DATASET\n",
|
||
"# RUN THIS CELL BEFORE YOU RUN THE REST OF THE CELLS BELOW\n",
|
||
"from torchvision import datasets\n",
|
||
"\n",
|
||
"# This downloads the MNIST datasets ~63MB\n",
|
||
"mnist_train = datasets.MNIST(\"./\", train=True, download=True)\n",
|
||
"mnist_test = datasets.MNIST(\"./\", train=False, download=True)\n",
|
||
"\n",
|
||
"x_train = mnist_train.data.reshape(-1, 784) / 255\n",
|
||
"y_train = mnist_train.targets\n",
|
||
" \n",
|
||
"x_test = mnist_test.data.reshape(-1, 784) / 255\n",
|
||
"y_test = mnist_test.targets"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "e092f6c4",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Task 3.3 - Define the model architechure and implement the forward pass\n",
|
||
"Create a 3-layer network in the `__init__` method of the model `DigitNet`. \n",
|
||
"These layers are all `Linear` layers and should correspond to the following the architecture:\n",
|
||
"\n",
|
||
"<img src=\"imgs/img_linear_nn.png\" width=\"600\">\n",
|
||
"\n",
|
||
"In our data, a given image $x$ has been flattened from a 28x28 image to a 784-length array.\n",
|
||
"\n",
|
||
"After initializing the layers, stitch them together in the `forward` method. Your network should look like so:\n",
|
||
"\n",
|
||
"$$x \\rightarrow \\text{Linear(512)} \\rightarrow \\text{ReLU} \\rightarrow \\text{Linear(128)} \\rightarrow \\text{ReLU} \\rightarrow \\text{Linear(10)} \\rightarrow \\text{Softmax} \\rightarrow \\hat{y}$$\n",
|
||
"\n",
|
||
"**Softmax Layer**: The final softmax activation is commonly used for classification tasks, as it will normalizes the results into a vector of values that follows a probability distribution whose total sums up to 1. The output values are between the range [0,1] which is nice because we are able to avoid binary classification and accommodate as many classes or dimensions in our neural network model.\n",
|
||
"\n",
|
||
"*Note: When using `torch.softmax(...)` on the final layer, ensure you are applying it on the correct dimension (as you did in NumPy via the `axis` argument in popular methods)*"
|
||
]
|
||
},
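{
"cell_type": "code",
"execution_count": null,
"id": "softmax-dim-sketch",
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch of the softmax `dim` argument mentioned in the note above;\n",
"# the logits are arbitrary example values for 2 samples and 3 classes.\n",
"logits = torch.tensor([[1.0, 2.0, 3.0], [0.5, 0.5, 0.5]])\n",
"\n",
"probs = torch.softmax(logits, dim=1)  # normalise across classes, one row per sample\n",
"print(probs)\n",
"print(probs.sum(dim=1))               # each row sums to 1, as a probability distribution should"
]
},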
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "596d04f8",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"class DigitNet(nn.Module):\n",
|
||
" def __init__(self, input_dimensions, num_classes): # set the arguments you'd need\n",
|
||
" super().__init__()\n",
|
||
" \"\"\"\n",
|
||
" YOUR CODE HERE\n",
|
||
" - DO NOT hardcode the input_dimensions, use the parameter in the function\n",
|
||
" - Your network should work for any input and output size \n",
|
||
" - Create the 3 layers (and a ReLU layer) using the torch.nn layers API\n",
|
||
" \"\"\"\n",
|
||
" self.l1 = nn.Linear(input_dimensions, 512)\n",
|
||
" self.l2 = nn.Linear(512, 128)\n",
|
||
" self.l3 = nn.Linear(128, num_classes)\n",
|
||
" self.relu = nn.ReLU()\n",
|
||
"\n",
|
||
" \n",
|
||
" def forward(self, x):\n",
|
||
" \"\"\"\n",
|
||
" Performs the forward pass for the network.\n",
|
||
" \n",
|
||
" Parameters\n",
|
||
" ----------\n",
|
||
" x : Input tensor (batch size is the entire dataset)\n",
|
||
"\n",
|
||
" Returns\n",
|
||
" -------\n",
|
||
" The output of the entire 3-layer model.\n",
|
||
" \"\"\"\n",
|
||
" x = self.l1(x)\n",
|
||
" x = self.relu(x)\n",
|
||
" x = self.l2(x)\n",
|
||
" x = self.relu(x)\n",
|
||
" x = self.l3(x)\n",
|
||
" return torch.softmax(x, dim=1)\n",
|
||
"\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "95c1a075",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"model = DigitNet(784, 10)\n",
|
||
"assert [layer.detach().numpy().shape for name, layer in model.named_parameters()] \\\n",
|
||
" == [(512, 784), (512,), (128, 512), (128,), (10, 128), (10,)]"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "d356b9ad",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Task 3.4 - Training Loop\n",
|
||
"\n",
|
||
"As demonstrated in Section 3.2, implement the function `train_model` that performs the following for every epoch/iteration:\n",
|
||
"\n",
|
||
"1. set the optimizer's gradients to zero\n",
|
||
"2. forward pass\n",
|
||
"3. calculate the loss\n",
|
||
"4. backpropagate using the loss\n",
|
||
"5. take an optimzer step to update weights\n",
|
||
"\n",
|
||
"This time, use the Adam optimiser to train the network.\n",
|
||
"<br/>\n",
|
||
"<br/>\n",
|
||
"Use Cross-Entropy Loss, since we are performing a classification.\n",
|
||
"<br/>\n",
|
||
"_(PyTorch Softmax normalize logits while CrossEntropyLoss accepts unnormalized logits and CrossEntropyLoss already applies LogSoftmax, however, we will use Softmax here as we want to showcase how Softmax can convert the raw scores produced by the network into a probability distribution over the classes)._\n",
|
||
"<br/>\n",
|
||
"<br/>\n",
|
||
"Train for 20 epochs. \n",
|
||
"\n",
|
||
"*Note: refer to the command glossary to find out how to instantiate optimisers, losses, and more*"
|
||
]
|
||
},
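{
"cell_type": "code",
"execution_count": null,
"id": "cross-entropy-sketch",
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch of the note above: nn.CrossEntropyLoss expects raw (unnormalised)\n",
"# logits and applies LogSoftmax + NLLLoss internally. The numbers below are arbitrary\n",
"# example logits for 2 samples and 3 classes.\n",
"logits = torch.tensor([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])\n",
"labels = torch.tensor([0, 1])\n",
"\n",
"ce = nn.CrossEntropyLoss()(logits, labels)\n",
"manual = nn.NLLLoss()(torch.log_softmax(logits, dim=1), labels)\n",
"print(ce.item(), manual.item())  # the two values should match"
]
},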
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "60ab3632",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def train_model(x_train, y_train, epochs=20):\n",
|
||
" \"\"\"\n",
|
||
" Trains the model for 20 epochs/iterations\n",
|
||
" \n",
|
||
" Parameters\n",
|
||
" ----------\n",
|
||
" x_train : A tensor of training features of shape (60000, 784)\n",
|
||
" y_train : A tensor of training labels of shape (60000, 1)\n",
|
||
" epochs : Number of epochs, default of 20\n",
|
||
" \n",
|
||
" Returns\n",
|
||
" -------\n",
|
||
" The final model \n",
|
||
" \"\"\"\n",
|
||
" model = DigitNet(784, 10)\n",
|
||
" optimiser = torch.optim.Adam(model.parameters())\n",
|
||
" loss_fn = nn.CrossEntropyLoss()\n",
|
||
"\n",
|
||
" for i in range(epochs):\n",
|
||
" optimiser.zero_grad()\n",
|
||
" y_pred = model(x_train)\n",
|
||
" \n",
|
||
" loss = loss_fn(y_pred, y_train)\n",
|
||
" loss.backward()\n",
|
||
" optimiser.step()\n",
|
||
" return model\n",
|
||
" \n",
|
||
"digit_model = train_model(x_train, y_train)"
|
||
]
|
||
},
|
||
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "a99b7049",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"x_train_new = torch.rand(5, 784, requires_grad=True)\n",
|
||
"y_train_new = ones = torch.ones(5, dtype=torch.uint8)\n",
|
||
"\n",
|
||
"assert type(train_model(x_train_new, y_train_new)) == DigitNet"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "01fdee35",
|
||
"metadata": {},
|
||
"source": [
|
||
"### 3.5 Demo - Explore your model\n",
|
||
"\n",
|
||
"Now that we have trained the model, let us run some predictions on the model."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "6f83aa93",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# This is a demonstration: You can use this cell for exploring your trained model\n",
|
||
"\n",
|
||
"idx = 190 # try on some index\n",
|
||
"\n",
|
||
"scores = digit_model(x_test[idx:idx+1])\n",
|
||
"_, predictions = torch.max(scores, 1)\n",
|
||
"print(\"true label:\", y_test[idx].item())\n",
|
||
"print(\"pred label:\", predictions[0].item())\n",
|
||
"\n",
|
||
"plt.imshow(x_test[idx].numpy().reshape(28, 28), cmap='gray')\n",
|
||
"plt.axis(\"off\")\n",
|
||
"plt.show()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "fcc94586",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Task 3.5 - Evaluate the model\n",
|
||
"\n",
|
||
"Now that we have trained the model, we should evaluate it using our test set. \n",
|
||
"We will be using the accuracy (whether or not the model predicted the correct label) to measure the model performance. \n",
|
||
"\n",
|
||
"Since our model takes in a (n x 784) tensor and returns a (n x 10) tensor of probability scores for each of the 10 classes, we need to convert the probability scores into the actual predictions by taking the index of the maximum probability. "
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "a5684246",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def get_accuracy(scores, labels):\n",
|
||
" \"\"\"\n",
|
||
" Helper function that returns accuracy of model\n",
|
||
" \n",
|
||
" Parameters\n",
|
||
" ----------\n",
|
||
" scores : The raw softmax scores of the network\n",
|
||
" labels : The ground truth labels\n",
|
||
" \n",
|
||
" Returns\n",
|
||
" -------\n",
|
||
" Accuracy of the model. Return a number in range [0, 1].\n",
|
||
" 0 means 0% accuracy while 1 means 100% accuracy\n",
|
||
" \"\"\"\n",
|
||
" idxes = torch.argmax(scores, dim=1)\n",
|
||
" ints = (idxes == labels).float()\n",
|
||
" return torch.mean(ints)\n",
|
||
"\n",
|
||
"scores = digit_model(x_test) # n x 10 tensor\n",
|
||
"get_accuracy(scores, y_test)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "beafdef0",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"scores = torch.tensor([[0.4118, 0.6938, 0.9693, 0.6178, 0.3304, 0.5479, 0.4440, 0.7041, 0.5573, 0.6959],\n",
|
||
" [0.9849, 0.2924, 0.4823, 0.6150, 0.4967, 0.4521, 0.0575, 0.0687, 0.0501, 0.0108],\n",
|
||
" [0.0343, 0.1212, 0.0490, 0.0310, 0.7192, 0.8067, 0.8379, 0.7694, 0.6694, 0.7203],\n",
|
||
" [0.2235, 0.9502, 0.4655, 0.9314, 0.6533, 0.8914, 0.8988, 0.3955, 0.3546, 0.5752],\n",
|
||
" [0,0,0,0,0,0,0,0,0,1]])\n",
|
||
"y_true = torch.tensor([5, 3, 6, 4, 9])\n",
|
||
"acc_true = 0.4\n",
|
||
"assert isclose(get_accuracy(scores, y_true),acc_true) , \"Mismatch detected\"\n",
|
||
"print(\"passed\")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "9ce50c78",
|
||
"metadata": {},
|
||
"source": [
|
||
"# Submission\n",
|
||
"\n",
|
||
"Once you are done, please remember to submit your work to Coursemology, by copying the right snippets of code into the corresponding box that says \"Your answer\", and click \"Save\". After you save, you can make changes to your submission.\n",
|
||
"\n",
|
||
"Once you are satisfied with what you have uploaded, click \"Finalize submission\". **Note that once your submission is finalized, it is considered to be submitted for grading and cannot be changed.** If you need to undo this action, you will have to reach out to your assigned tutor for help. Please do not finalize your submission until you are sure that you want to submit your solutions for grading. \n",
|
||
"\n",
|
||
"### HAVE FUN AND ENJOY CODING!"
|
||
]
|
||
}
|
||
],
|
||
"metadata": {
|
||
"kernelspec": {
|
||
"name": "cs2109s",
|
||
"language": "python",
|
||
"display_name": "CS2109S"
|
||
},
|
||
"language_info": {
|
||
"codemirror_mode": {
|
||
"name": "ipython",
|
||
"version": 3
|
||
},
|
||
"file_extension": ".py",
|
||
"mimetype": "text/x-python",
|
||
"name": "python",
|
||
"nbconvert_exporter": "python",
|
||
"pygments_lexer": "ipython3",
|
||
"version": "3.9.9"
|
||
},
|
||
"vscode": {
|
||
"interpreter": {
|
||
"hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
|
||
}
|
||
}
|
||
},
|
||
"nbformat": 4,
|
||
"nbformat_minor": 5
|
||
}
|