{
|
|
"cells": [
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "7d017333",
|
|
"metadata": {},
|
|
"source": [
|
|
"# Final Assessment Scratch Pad"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "d3d00386",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Instructions"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "ea516aa7",
|
|
"metadata": {},
|
|
"source": [
|
|
"1. Please use only this Jupyter notebook to work on your model, and **do not use any extra files**. If you need to define helper classes or functions, feel free to do so in this notebook.\n",
|
|
"2. This template is intended to be general, but it may not cover every use case. The sections are given so that it will be easier for us to grade your submission. If your specific use case isn't addressed, **you may add new Markdown or code blocks to this notebook**. However, please **don't delete any existing blocks**.\n",
|
|
"3. If you don't think a particular section of this template is necessary for your work, **you may skip it**. Be sure to explain clearly why you decided to do so."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "022cb4cd",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Report"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "9c14a2d8",
|
|
"metadata": {},
|
|
"source": [
|
|
"**[TODO]**\n",
|
|
"\n",
|
|
"Please provide a summary of the ideas and steps that led you to your final model. Someone reading this summary should understand why you chose to approach the problem in a particular way and able to replicate your final model at a high level. Please ensure that your summary is detailed enough to provide an overview of your thought process and approach but also concise enough to be easily understandable. Also, please follow the guidelines given in the `main.ipynb`.\n",
|
|
"\n",
|
|
"This report should not be longer than **1-2 pages of A4 paper (up to around 1,000 words)**. Marks will be deducted if you do not follow instructions and you include too many words here. \n",
|
|
"\n",
|
|
"**[DELETE EVERYTHING FROM THE PREVIOUS TODO TO HERE BEFORE SUBMISSION]**\n",
|
|
"\n",
|
|
"##### Overview\n",
|
|
"**[TODO]**\n",
|
|
"\n",
|
|
"##### 1. Descriptive Analysis\n",
|
|
"**[TODO]**\n",
|
|
"\n",
|
|
"##### 2. Detection and Handling of Missing Values\n",
|
|
"**[TODO]**\n",
|
|
"\n",
|
|
"##### 3. Detection and Handling of Outliers\n",
|
|
"**[TODO]**\n",
|
|
"\n",
|
|
"##### 4. Detection and Handling of Class Imbalance \n",
|
|
"**[TODO]**\n",
|
|
"\n",
|
|
"##### 5. Understanding Relationship Between Variables\n",
|
|
"**[TODO]**\n",
|
|
"\n",
|
|
"##### 6. Data Visualization\n",
|
|
"**[TODO]** \n",
|
|
"##### 7. General Preprocessing\n",
|
|
"**[TODO]**\n",
|
|
" \n",
|
|
"##### 8. Feature Selection \n",
|
|
"**[TODO]**\n",
|
|
"\n",
|
|
"##### 9. Feature Engineering\n",
|
|
"**[TODO]**\n",
|
|
"\n",
|
|
"##### 10. Creating Models\n",
|
|
"**[TODO]**\n",
|
|
"\n",
|
|
"##### 11. Model Evaluation\n",
|
|
"**[TODO]**\n",
|
|
"\n",
|
|
"##### 12. Hyperparameters Search\n",
|
|
"**[TODO]**\n",
|
|
"\n",
|
|
"##### Conclusion\n",
|
|
"**[TODO]**"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "49dcaf29",
|
|
"metadata": {},
|
|
"source": [
|
|
"---"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "27103374",
|
|
"metadata": {},
|
|
"source": [
|
|
"# Workings (Not Graded)\n",
|
|
"\n",
|
|
"You will do your working below. Note that anything below this section will not be graded, but we might counter-check what you wrote in the report above with your workings to make sure that you actually did what you claimed to have done. "
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "0f4c6cd4",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Import Packages\n",
|
|
"\n",
|
|
"Here, we import some packages necessary to run this notebook. In addition, you may import other packages as well. Do note that when submitting your model, you may only use packages that are available in Coursemology (see `main.ipynb`)."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 1,
|
|
"id": "cded1ed6",
|
|
"metadata": {
|
|
"ExecuteTime": {
|
|
"end_time": "2024-04-28T02:23:08.720475Z",
|
|
"start_time": "2024-04-28T02:23:08.235724Z"
|
|
}
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"import pandas as pd\n",
|
|
"import os\n",
|
|
"import numpy as np"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "748c35d7",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Load Dataset\n",
|
|
"\n",
|
|
"The dataset `data.npy` consists of $N$ grayscale videos and their corresponding labels. Each video has a shape of (L, H, W). L represents the length of the video, which may vary between videos. H and W represent the height and width, which are consistent across all videos. \n",
|
|
"\n",
|
|
"A code snippet that loads the data is provided below."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "c09da291",
|
|
"metadata": {},
|
|
"source": [
|
|
"### Load Data"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 2,
|
|
"id": "6297e25a",
|
|
"metadata": {
|
|
"ExecuteTime": {
|
|
"end_time": "2024-04-28T02:23:10.018019Z",
|
|
"start_time": "2024-04-28T02:23:09.989783Z"
|
|
}
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Number of data sample: 2500\n",
|
|
"Shape of the first data sample: (10, 16, 16)\n",
|
|
"Shape of the third data sample: (8, 16, 16)\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"with open('data.npy', 'rb') as f:\n",
|
|
" data = np.load(f, allow_pickle=True).item()\n",
|
|
" X = data['data']\n",
|
|
" y = data['label']\n",
|
|
" \n",
|
|
"print('Number of data sample:', len(X))\n",
|
|
"print('Shape of the first data sample:', X[0].shape)\n",
|
|
"print('Shape of the third data sample:', X[2].shape)"
|
|
]
|
|
},
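{
"cell_type": "markdown",
"id": "3f1c9a2b",
"metadata": {},
"source": [
"The next cell is a small optional sketch (not part of the provided template): it summarises the data loaded above, assuming `X` (a list of `(L, 16, 16)` arrays) and `y` (the corresponding labels) from the snippet. It only prints video-length statistics, label counts, and how many videos contain NaN pixels."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c4d7e5a",
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: quick checks on the loaded data (assumes X and y from the cell above).\n",
"lengths = pd.Series([len(video) for video in X])\n",
"print('Video length statistics:')\n",
"print(lengths.describe())\n",
"\n",
"print('Label counts (NaN labels counted separately):')\n",
"print(pd.Series(y).value_counts(dropna=False))\n",
"\n",
"print('Videos containing at least one NaN pixel:', sum(np.isnan(video).any() for video in X))"
]
},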
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "cbe832b6",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Data Exploration & Preparation"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "2f6a464c",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 1. Descriptive Analysis"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "3b1f62dd",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": []
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "adb61967",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 2. Detection and Handling of Missing Values"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "4bb9cdfb",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": []
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "8adcb9cd",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 3. Detection and Handling of Outliers"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "ed1c17a1",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": []
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "d4916043",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 4. Detection and Handling of Class Imbalance"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "ad3ab20e",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": []
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "2552a795",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 5. Understanding Relationship Between Variables"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "29ddbbcf",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": []
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "757fb315",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 6. Data Visualization"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "93f82e42",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": []
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "2a7eebcf",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Data Preprocessing"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "ae3e3383",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 7. General Preprocessing"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "19174365",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": []
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "fb3aa527",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 8. Feature Selection"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "a85808bf",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": []
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "4921e8ca",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 9. Feature Engineering"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "dbcde626",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": []
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "fa676c3f",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Modeling & Evaluation"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "589b37e4",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 10. Creating models"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"outputs": [],
|
|
"source": [
|
|
"import torch\n",
|
|
"from torch import nn\n",
|
|
"class CNN(nn.Module):\n",
|
|
" def __init__(self, num_classes):\n",
|
|
" super(CNN, self).__init__()\n",
|
|
"\n",
|
|
" self.conv1 = nn.Conv2d(1,32,3,stride=1,padding=0)\n",
|
|
" self.conv2 = nn.Conv2d(32,64,3,stride=1,padding=0)\n",
|
|
" self.relu = nn.ReLU()\n",
|
|
" self.maxpool = nn.MaxPool2d(2)\n",
|
|
" self.fc1 = nn.Linear(256, 128) # Calculate input size based on output from conv2 and pooling\n",
|
|
" self.fc2 = nn.Linear(128, num_classes)\n",
|
|
" self.flatten = nn.Flatten()\n",
|
|
"\n",
|
|
" def forward(self, x):\n",
|
|
" x = self.conv1(x)\n",
|
|
" x = self.relu(x)\n",
|
|
" x = self.maxpool(x)\n",
|
|
" x = self.conv2(x)\n",
|
|
" x = self.relu(x)\n",
|
|
" x = self.maxpool(x)\n",
|
|
" x = self.flatten(x)\n",
|
|
" x = self.fc1(x)\n",
|
|
" x = self.relu(x)\n",
|
|
" x = self.fc2(x)\n",
|
|
" return x\n",
|
|
"\n",
|
|
"# video is a numpy array of shape (L, H, W)\n",
|
|
"def clean_batch(batch):\n",
|
|
" batch = np.array(batch)\n",
|
|
" print(batch.shape)\n",
|
|
" temp_x = batch.reshape(-1, 256)\n",
|
|
" np.nan_to_num(temp_x, copy=False)\n",
|
|
" col_mean = np.nanmean(temp_x, axis=0)\n",
|
|
" inds = np.where(np.isnan(temp_x))\n",
|
|
" temp_x[inds] = np.take(col_mean, inds[1])\n",
|
|
" temp_x = np.clip(temp_x, 1, 255)\n",
|
|
" batch = temp_x.reshape(-1, 1, 16,16)\n",
|
|
" return torch.tensor(batch, dtype=torch.float32)\n",
|
|
"def flatten_data(X, y):\n",
|
|
" not_nan_indices = np.argwhere(~np.isnan(np.array(y))).squeeze()\n",
|
|
" y = [y[i] for i in not_nan_indices]\n",
|
|
" X = [X[i] for i in not_nan_indices]\n",
|
|
" flattened_x = []\n",
|
|
" flattened_y = []\n",
|
|
" for idx, video in enumerate(X):\n",
|
|
" for frame in video:\n",
|
|
" flattened_x.append(frame)\n",
|
|
" flattened_y.append(y[idx])\n",
|
|
" flattened_x = clean_batch(flattened_x)\n",
|
|
" return flattened_x, torch.Tensor(np.array(flattened_y, dtype=np.int64)).long()\n",
|
|
"\n",
|
|
"class Model():\n",
|
|
" def __init__(self):\n",
|
|
" self.cnn = CNN(6)\n",
|
|
" def fit(self, X, y):\n",
|
|
" self.cnn.train()\n",
|
|
" X, y = flatten_data(X, y)\n",
|
|
" train_dataset = torch.utils.data.TensorDataset(X, y)\n",
|
|
" train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=320, shuffle=True)\n",
|
|
" criterion = nn.CrossEntropyLoss()\n",
|
|
" optimizer = torch.optim.Adam(self.cnn.parameters(), lr=0.001)\n",
|
|
" for epoch in range(70):\n",
|
|
" for idx, (inputs, labels) in enumerate(train_loader):\n",
|
|
" optimizer.zero_grad()\n",
|
|
" outputs = self.cnn(inputs)\n",
|
|
" loss = criterion(outputs, labels)\n",
|
|
" loss.backward()\n",
|
|
" optimizer.step()\n",
|
|
" print(f'Epoch {epoch}, Loss: {loss.item()}')\n",
|
|
" return self\n",
|
|
" def predict(self, X):\n",
|
|
" self.cnn.eval()\n",
|
|
" results = []\n",
|
|
" for idx, batch in enumerate(X):\n",
|
|
" batch = clean_batch(batch)\n",
|
|
" pred = self.cnn(batch)\n",
|
|
" result = torch.argmax(pred, axis=1)\n",
|
|
" results.append(torch.max(result))\n",
|
|
" return results"
|
|
],
|
|
"metadata": {
|
|
"collapsed": false,
|
|
"ExecuteTime": {
|
|
"end_time": "2024-04-28T04:15:35.410374Z",
|
|
"start_time": "2024-04-28T04:15:35.390070Z"
|
|
}
|
|
},
|
|
"id": "d8dffd7d",
|
|
"execution_count": 190
|
|
},
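{
"cell_type": "markdown",
"id": "6b2e4f9c",
"metadata": {},
"source": [
"`Model.predict` above collapses the per-frame predictions of a video into a single label by taking the largest predicted class index. The next cell is an optional sketch of an alternative aggregation (a majority vote over frames via `torch.mode`); it is not what `Model.predict` does. It only defines a helper that reuses `clean_batch` from the cell above; calling it requires a fitted `Model`, such as the `model` created in the evaluation section below."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9d5a1c3e",
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: majority-vote aggregation of per-frame predictions (alternative to the torch.max used above).\n",
"def predict_majority(model, X):\n",
"    model.cnn.eval()\n",
"    results = []\n",
"    with torch.no_grad():\n",
"        for video in X:\n",
"            frames = clean_batch(video)                           # (L, 1, 16, 16) float tensor\n",
"            frame_preds = torch.argmax(model.cnn(frames), dim=1)  # predicted class per frame\n",
"            results.append(torch.mode(frame_preds).values)        # most frequent class across frames\n",
"    return results"
]
},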
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "495bf3c0",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 11. Model Evaluation"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 191,
|
|
"id": "9245ab47",
|
|
"metadata": {
|
|
"ExecuteTime": {
|
|
"end_time": "2024-04-28T04:15:44.130692Z",
|
|
"start_time": "2024-04-28T04:15:37.604561Z"
|
|
}
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"(16186, 16, 16)\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"/nix/store/4mv9lb8b1vjx88y2i7px1r2s8p3xlr7d-python3-3.11.9-env/lib/python3.11/site-packages/numpy/core/fromnumeric.py:88: RuntimeWarning: overflow encountered in reduce\n",
|
|
" return ufunc.reduce(obj, axis, dtype, out, **passkwargs)\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Epoch 0, Loss: 1.2299271821975708\n",
|
|
"Epoch 1, Loss: 1.1530401706695557\n",
|
|
"Epoch 2, Loss: 1.0396554470062256\n"
|
|
]
|
|
},
|
|
{
|
|
"ename": "KeyboardInterrupt",
|
|
"evalue": "",
|
|
"output_type": "error",
|
|
"traceback": [
|
|
"\u001B[0;31m---------------------------------------------------------------------------\u001B[0m",
|
|
"\u001B[0;31mKeyboardInterrupt\u001B[0m Traceback (most recent call last)",
|
|
"Cell \u001B[0;32mIn[191], line 9\u001B[0m\n\u001B[1;32m 6\u001B[0m X_test \u001B[38;5;241m=\u001B[39m [X_test[i] \u001B[38;5;28;01mfor\u001B[39;00m i \u001B[38;5;129;01min\u001B[39;00m not_nan_indices]\n\u001B[1;32m 8\u001B[0m model \u001B[38;5;241m=\u001B[39m Model()\n\u001B[0;32m----> 9\u001B[0m \u001B[43mmodel\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mfit\u001B[49m\u001B[43m(\u001B[49m\u001B[43mX_train\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43my_train\u001B[49m\u001B[43m)\u001B[49m\n",
|
|
"Cell \u001B[0;32mIn[190], line 66\u001B[0m, in \u001B[0;36mModel.fit\u001B[0;34m(self, X, y)\u001B[0m\n\u001B[1;32m 64\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m idx, (inputs, labels) \u001B[38;5;129;01min\u001B[39;00m \u001B[38;5;28menumerate\u001B[39m(train_loader):\n\u001B[1;32m 65\u001B[0m optimizer\u001B[38;5;241m.\u001B[39mzero_grad()\n\u001B[0;32m---> 66\u001B[0m outputs \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mcnn\u001B[49m\u001B[43m(\u001B[49m\u001B[43minputs\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 67\u001B[0m loss \u001B[38;5;241m=\u001B[39m criterion(outputs, labels)\n\u001B[1;32m 68\u001B[0m loss\u001B[38;5;241m.\u001B[39mbackward()\n",
|
|
"File \u001B[0;32m/nix/store/4mv9lb8b1vjx88y2i7px1r2s8p3xlr7d-python3-3.11.9-env/lib/python3.11/site-packages/torch/nn/modules/module.py:1511\u001B[0m, in \u001B[0;36mModule._wrapped_call_impl\u001B[0;34m(self, *args, **kwargs)\u001B[0m\n\u001B[1;32m 1509\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_compiled_call_impl(\u001B[38;5;241m*\u001B[39margs, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs) \u001B[38;5;66;03m# type: ignore[misc]\u001B[39;00m\n\u001B[1;32m 1510\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[0;32m-> 1511\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_call_impl\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43margs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n",
|
|
"File \u001B[0;32m/nix/store/4mv9lb8b1vjx88y2i7px1r2s8p3xlr7d-python3-3.11.9-env/lib/python3.11/site-packages/torch/nn/modules/module.py:1520\u001B[0m, in \u001B[0;36mModule._call_impl\u001B[0;34m(self, *args, **kwargs)\u001B[0m\n\u001B[1;32m 1515\u001B[0m \u001B[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001B[39;00m\n\u001B[1;32m 1516\u001B[0m \u001B[38;5;66;03m# this function, and just call forward.\u001B[39;00m\n\u001B[1;32m 1517\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m (\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_backward_hooks \u001B[38;5;129;01mor\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_backward_pre_hooks \u001B[38;5;129;01mor\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_forward_hooks \u001B[38;5;129;01mor\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_forward_pre_hooks\n\u001B[1;32m 1518\u001B[0m \u001B[38;5;129;01mor\u001B[39;00m _global_backward_pre_hooks \u001B[38;5;129;01mor\u001B[39;00m _global_backward_hooks\n\u001B[1;32m 1519\u001B[0m \u001B[38;5;129;01mor\u001B[39;00m _global_forward_hooks \u001B[38;5;129;01mor\u001B[39;00m _global_forward_pre_hooks):\n\u001B[0;32m-> 1520\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[43mforward_call\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43margs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 1522\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m 1523\u001B[0m result \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m\n",
|
|
"Cell \u001B[0;32mIn[190], line 21\u001B[0m, in \u001B[0;36mCNN.forward\u001B[0;34m(self, x)\u001B[0m\n\u001B[1;32m 19\u001B[0m x \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mconv2(x)\n\u001B[1;32m 20\u001B[0m x \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mrelu(x)\n\u001B[0;32m---> 21\u001B[0m x \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mmaxpool\u001B[49m\u001B[43m(\u001B[49m\u001B[43mx\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 22\u001B[0m x \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mflatten(x)\n\u001B[1;32m 23\u001B[0m x \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mfc1(x)\n",
|
|
"File \u001B[0;32m/nix/store/4mv9lb8b1vjx88y2i7px1r2s8p3xlr7d-python3-3.11.9-env/lib/python3.11/site-packages/torch/nn/modules/module.py:1511\u001B[0m, in \u001B[0;36mModule._wrapped_call_impl\u001B[0;34m(self, *args, **kwargs)\u001B[0m\n\u001B[1;32m 1509\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_compiled_call_impl(\u001B[38;5;241m*\u001B[39margs, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs) \u001B[38;5;66;03m# type: ignore[misc]\u001B[39;00m\n\u001B[1;32m 1510\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[0;32m-> 1511\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_call_impl\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43margs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n",
|
|
"File \u001B[0;32m/nix/store/4mv9lb8b1vjx88y2i7px1r2s8p3xlr7d-python3-3.11.9-env/lib/python3.11/site-packages/torch/nn/modules/module.py:1520\u001B[0m, in \u001B[0;36mModule._call_impl\u001B[0;34m(self, *args, **kwargs)\u001B[0m\n\u001B[1;32m 1515\u001B[0m \u001B[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001B[39;00m\n\u001B[1;32m 1516\u001B[0m \u001B[38;5;66;03m# this function, and just call forward.\u001B[39;00m\n\u001B[1;32m 1517\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m (\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_backward_hooks \u001B[38;5;129;01mor\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_backward_pre_hooks \u001B[38;5;129;01mor\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_forward_hooks \u001B[38;5;129;01mor\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_forward_pre_hooks\n\u001B[1;32m 1518\u001B[0m \u001B[38;5;129;01mor\u001B[39;00m _global_backward_pre_hooks \u001B[38;5;129;01mor\u001B[39;00m _global_backward_hooks\n\u001B[1;32m 1519\u001B[0m \u001B[38;5;129;01mor\u001B[39;00m _global_forward_hooks \u001B[38;5;129;01mor\u001B[39;00m _global_forward_pre_hooks):\n\u001B[0;32m-> 1520\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[43mforward_call\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43margs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 1522\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m 1523\u001B[0m result \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m\n",
|
|
"File \u001B[0;32m/nix/store/4mv9lb8b1vjx88y2i7px1r2s8p3xlr7d-python3-3.11.9-env/lib/python3.11/site-packages/torch/nn/modules/pooling.py:164\u001B[0m, in \u001B[0;36mMaxPool2d.forward\u001B[0;34m(self, input)\u001B[0m\n\u001B[1;32m 163\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21mforward\u001B[39m(\u001B[38;5;28mself\u001B[39m, \u001B[38;5;28minput\u001B[39m: Tensor):\n\u001B[0;32m--> 164\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[43mF\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mmax_pool2d\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;28;43minput\u001B[39;49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mkernel_size\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mstride\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 165\u001B[0m \u001B[43m \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mpadding\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mdilation\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mceil_mode\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mceil_mode\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 166\u001B[0m \u001B[43m \u001B[49m\u001B[43mreturn_indices\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mreturn_indices\u001B[49m\u001B[43m)\u001B[49m\n",
|
|
"File \u001B[0;32m/nix/store/4mv9lb8b1vjx88y2i7px1r2s8p3xlr7d-python3-3.11.9-env/lib/python3.11/site-packages/torch/_jit_internal.py:499\u001B[0m, in \u001B[0;36mboolean_dispatch.<locals>.fn\u001B[0;34m(*args, **kwargs)\u001B[0m\n\u001B[1;32m 497\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m if_true(\u001B[38;5;241m*\u001B[39margs, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n\u001B[1;32m 498\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[0;32m--> 499\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[43mif_false\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43margs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n",
|
|
"File \u001B[0;32m/nix/store/4mv9lb8b1vjx88y2i7px1r2s8p3xlr7d-python3-3.11.9-env/lib/python3.11/site-packages/torch/nn/functional.py:796\u001B[0m, in \u001B[0;36m_max_pool2d\u001B[0;34m(input, kernel_size, stride, padding, dilation, ceil_mode, return_indices)\u001B[0m\n\u001B[1;32m 794\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m stride \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m:\n\u001B[1;32m 795\u001B[0m stride \u001B[38;5;241m=\u001B[39m torch\u001B[38;5;241m.\u001B[39mjit\u001B[38;5;241m.\u001B[39mannotate(List[\u001B[38;5;28mint\u001B[39m], [])\n\u001B[0;32m--> 796\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[43mtorch\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mmax_pool2d\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;28;43minput\u001B[39;49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mkernel_size\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mstride\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mpadding\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mdilation\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mceil_mode\u001B[49m\u001B[43m)\u001B[49m\n",
|
|
"\u001B[0;31mKeyboardInterrupt\u001B[0m: "
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"from sklearn.model_selection import train_test_split\n",
|
|
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)\n",
|
|
"\n",
|
|
"not_nan_indices = np.argwhere(~np.isnan(np.array(y_test))).squeeze()\n",
|
|
"y_test = [y_test[i] for i in not_nan_indices]\n",
|
|
"X_test = [X_test[i] for i in not_nan_indices]\n",
|
|
"\n",
|
|
"model = Model()\n",
|
|
"model.fit(X_train, y_train)\n",
|
|
"# predictions = model.predict(X_train)\n",
|
|
"# print(predictions[0])\n",
|
|
"# print(y_train[0])"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"(9, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(10, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(8, 16, 16)\n",
|
|
"(7, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"(9, 16, 16)\n",
|
|
"(6, 16, 16)\n",
|
|
"tensor([[-0.7872, 0.9621, -1.1426, -1.4650, -0.4242, -0.4840],\n",
|
|
" [-0.4881, 1.5047, -1.0677, -0.8765, -1.2379, -1.1973],\n",
|
|
" [-0.7944, 0.7633, -0.8908, -1.2021, -0.1398, -0.1900],\n",
|
|
" [-0.2103, 2.2481, -2.2957, -2.3898, 0.0291, -0.8567],\n",
|
|
" [ 0.4143, 2.8888, -1.1756, -0.8911, -1.5052, -1.1943],\n",
|
|
" [ 0.5421, 2.6672, -1.5965, -1.4607, -2.0967, -1.3973]],\n",
|
|
" grad_fn=<AddmmBackward0>)\n",
|
|
"tensor([1, 1, 1, 1, 1, 1])\n",
|
|
"F1 Score (macro): 0.36\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"from sklearn.metrics import f1_score\n",
|
|
"\n",
|
|
"y_pred = model.predict(X_test)\n",
|
|
"result = model.cnn(clean_batch(X_train[0]))\n",
|
|
"print(result)\n",
|
|
"print(torch.argmax(result, axis=1))\n",
|
|
"# print(y_train[0])\n",
|
|
"print(\"F1 Score (macro): {0:.2f}\".format(f1_score(y_test, y_pred, average='macro'))) # You may encounter errors, you are expected to figure out what's the issue.\n"
|
|
],
|
|
"metadata": {
|
|
"collapsed": false,
|
|
"ExecuteTime": {
|
|
"end_time": "2024-04-28T04:08:22.419351Z",
|
|
"start_time": "2024-04-28T04:08:22.241362Z"
|
|
}
|
|
},
|
|
"id": "dd595539230499dd",
|
|
"execution_count": 189
|
|
},
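{
"cell_type": "markdown",
"id": "4e8b2d7f",
"metadata": {},
"source": [
"The next cell is an optional sketch for digging into the F1 score above: it converts the tensor predictions to plain integers before scoring and prints a per-class breakdown with `classification_report`. It assumes `model`, `X_test` and `y_test` from the evaluation cell above."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2c7f8b1d",
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: per-class metrics, with predictions converted to plain ints before scoring.\n",
"from sklearn.metrics import classification_report\n",
"\n",
"y_pred_int = [int(p) for p in model.predict(X_test)]\n",
"y_true_int = [int(label) for label in y_test]\n",
"print(classification_report(y_true_int, y_pred_int, zero_division=0))"
]
},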
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "8aa31404",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 12. Hyperparameters Search"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"outputs": [],
|
|
"source": [],
|
|
"metadata": {
|
|
"collapsed": false
|
|
},
|
|
"id": "7728c49ea8a0bacf"
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "81addd51",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": []
|
|
}
|
|
],
|
|
"metadata": {
|
|
"kernelspec": {
|
|
"display_name": "Python 3 (ipykernel)",
|
|
"language": "python",
|
|
"name": "python3"
|
|
},
|
|
"language_info": {
|
|
"codemirror_mode": {
|
|
"name": "ipython",
|
|
"version": 3
|
|
},
|
|
"file_extension": ".py",
|
|
"mimetype": "text/x-python",
|
|
"name": "python",
|
|
"nbconvert_exporter": "python",
|
|
"pygments_lexer": "ipython3",
|
|
"version": "3.9.18"
|
|
}
|
|
},
|
|
"nbformat": 4,
|
|
"nbformat_minor": 5
|
|
}
|