{
"cells": [
{
"cell_type": "markdown",
"id": "f176387e",
"metadata": {},
"source": [
"# A short tutorial for the `prototorch.models` plugin"
]
},
{
"cell_type": "markdown",
"id": "08f641b4",
"metadata": {},
"source": [
"## Introduction"
]
},
{
"cell_type": "markdown",
"id": "d0d8096f",
"metadata": {},
"source": [
"This is a short tutorial for the [models](https://github.com/si-cim/prototorch_models) plugin of the [ProtoTorch](https://github.com/si-cim/prototorch) framework.\n",
"\n",
"[ProtoTorch](https://github.com/si-cim/prototorch) provides [torch.nn](https://pytorch.org/docs/stable/nn.html) modules and utilities to implement prototype-based models. However, it is up to the user to put these modules together into models and to handle the training of these models. Expert machine-learning practitioners and researchers sometimes prefer this level of control. However, it also leads to a lot of boilerplate code that is essentially the same across many projects, which is a common source of frustration. [PyTorch-Lightning](https://pytorch-lightning.readthedocs.io/en/latest/) is a framework that avoids much of this frustration by handling the boilerplate code for you, so you don't have to reinvent the wheel every time you need to implement a new model.\n",
"\n",
"With the [prototorch.models](https://github.com/si-cim/prototorch_models) plugin, we've gone one step further and pre-packaged commonly used prototype models like GMLVQ as [Lightning-Modules](https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.core.lightning.html?highlight=lightning%20module#pytorch_lightning.core.lightning.LightningModule). With only a few lines of code, it is now possible to build and train prototype models. It simply cannot get any easier than this."
]
},
{
"cell_type": "markdown",
"id": "7b57f991",
"metadata": {},
"source": [
"## Basics"
]
},
{
"cell_type": "markdown",
"id": "009efb2c",
"metadata": {},
"source": [
"First things first. When working with the models plugin, you'll probably need `torch`, `prototorch` and `pytorch_lightning`. So, we recommend that you import all three like so:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d8eb606b",
"metadata": {},
"outputs": [],
"source": [
"import prototorch as pt\n",
"import pytorch_lightning as pl\n",
"import torch"
]
},
{
"cell_type": "markdown",
"id": "d5daf6be",
"metadata": {},
"source": [
"### Building Models"
]
},
{
"cell_type": "markdown",
"id": "7ddc8d04",
"metadata": {},
"source": [
"Let's start by building a `GLVQ` model. It is one of the simplest models to build. The only requirements are a prototype distribution and an initializer."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "39cc97fc",
"metadata": {},
"outputs": [],
"source": [
"model = pt.models.GLVQ(\n",
" hparams=dict(distribution=[1, 1, 1]),\n",
" prototype_initializer=pt.components.Zeros(2),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "54dc20ec",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"GLVQ(\n",
" (proto_layer): LabeledComponents(components.shape: (3, 2))\n",
" (acc_metric): Accuracy()\n",
")\n"
]
}
],
"source": [
"print(model)"
]
},
{
"cell_type": "markdown",
"id": "3927cfea",
"metadata": {},
"source": [
"The key `distribution` in the `hparams` argument describes the prototype distribution. If it is a Python [list](https://docs.python.org/3/tutorial/datastructures.html), it is assumed that there are as many entries in this list as there are classes, and the number at each location of this list describes the number of prototypes to be used for that particular class. So, `[1, 1, 1]` implies that we have three classes with one prototype per class. If it is a Python [tuple](https://docs.python.org/3/tutorial/datastructures.html), a shorthand of `(num_classes, prototypes_per_class)` is assumed. If it is a Python [dictionary](https://docs.python.org/3/tutorial/datastructures.html), the key-value pairs describe the class label and the number of prototypes for that class respectively. So, `{0: 2, 1: 2, 2: 2}` implies that we have three classes with labels `{0, 1, 2}`, each equipped with two prototypes. If, however, the dictionary contains the keys `\"num_classes\"` and `\"prototypes_per_class\"`, they are parsed to use their values as one might expect.\n",
"\n",
"The `prototype_initializer` argument describes how the prototypes are meant to be initialized. This argument has to be an instantiated object of some kind of [ComponentInitializer](https://github.com/si-cim/prototorch/blob/dev/prototorch/components/initializers.py#L27). If this is a [DimensionAwareInitializer](https://github.com/si-cim/prototorch/blob/dev/prototorch/components/initializers.py), it only requires a dimension argument that describes the vector dimension of the prototypes. So, `pt.components.Zeros(2)` creates 2D vector prototypes, all initialized to zeros.\n",
"\n",
"It is also possible to use a [ClassAwareInitializer](https://github.com/si-cim/prototorch/blob/dev/prototorch/components/initializers.py). However, this type of initializer requires data for instantiation.\n",
"\n",
"For a full list of available models, please check the [prototorch_models documentation](https://prototorch-models.readthedocs.io/en/latest/)."
]
},
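{
"cell_type": "markdown",
"id": "dist-spec-sketch",
"metadata": {},
"source": [
"The semantics above can be sketched in plain Python. This is only an illustrative expansion of the `distribution` spec into per-class prototype counts; the helper name `expand_distribution` is made up for this sketch and is not the library's actual parser:\n",
"\n",
"```python\n",
"def expand_distribution(dist):\n",
"    # Illustrative only: expand a distribution spec into {class_label: num_prototypes}.\n",
"    if isinstance(dist, list):\n",
"        return {label: n for label, n in enumerate(dist)}\n",
"    if isinstance(dist, tuple):\n",
"        num_classes, per_class = dist\n",
"        return {label: per_class for label in range(num_classes)}\n",
"    if isinstance(dist, dict):\n",
"        if \"num_classes\" in dist and \"prototypes_per_class\" in dist:\n",
"            return {label: dist[\"prototypes_per_class\"]\n",
"                    for label in range(dist[\"num_classes\"])}\n",
"        return dict(dist)\n",
"    raise TypeError(\"Unsupported distribution spec\")\n",
"\n",
"assert expand_distribution([1, 1, 1]) == {0: 1, 1: 1, 2: 1}\n",
"assert expand_distribution((3, 2)) == {0: 2, 1: 2, 2: 2}\n",
"```"
]
},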
{
"cell_type": "markdown",
"id": "b17c1476",
"metadata": {},
"source": [
"### Data"
]
},
{
"cell_type": "markdown",
"id": "b5d6d28e",
"metadata": {},
"source": [
"The preferred way of working with data in `torch` is to use the [Dataset and DataLoader API](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html). There are a few pre-packaged datasets available under `prototorch.datasets`. See [here](https://prototorch.readthedocs.io/en/latest/api.html#module-prototorch.datasets) for a full list of available datasets."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "9a104e40",
"metadata": {},
"outputs": [],
"source": [
"train_ds = pt.datasets.Iris(dims=[0, 2])"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ebe9036c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"prototorch.datasets.iris.Iris"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"type(train_ds)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "40fc6e22",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"((150, 2), (150,))"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"train_ds.data.shape, train_ds.targets.shape"
]
},
{
"cell_type": "markdown",
"id": "413a1d4e",
"metadata": {},
"source": [
"Once we have such a dataset, we could wrap it in a `DataLoader` to load the data in batches, and possibly apply some transformations on the fly."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "cc8cbc5d",
"metadata": {},
"outputs": [],
"source": [
"train_loader = torch.utils.data.DataLoader(train_ds, batch_size=2)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "0788db2f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"torch.utils.data.dataloader.DataLoader"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"type(train_loader)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "b0aa9ef5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"x_batch=tensor([[5.1000, 1.4000],\n",
" [4.9000, 1.4000]]), y_batch=tensor([0., 0.])\n"
]
}
],
"source": [
"x_batch, y_batch = next(iter(train_loader))\n",
"print(f\"{x_batch=}, {y_batch=}\")"
]
},
{
"cell_type": "markdown",
"id": "d8c63bd8",
"metadata": {},
"source": [
"This perhaps seems like a lot of work for a small dataset that fits completely in memory. However, this comes in very handy when dealing with huge datasets that can only be processed in batches."
]
},
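{
"cell_type": "markdown",
"id": "batching-sketch",
"metadata": {},
"source": [
"Conceptually, all the `DataLoader` does here is iterate over the dataset in fixed-size chunks. A minimal plain-Python sketch of that batching behaviour (ignoring shuffling, tensor collation, and worker processes; the helper name `batches` is made up for this sketch):\n",
"\n",
"```python\n",
"def batches(samples, batch_size):\n",
"    # Yield successive fixed-size chunks; the last chunk may be smaller.\n",
"    for i in range(0, len(samples), batch_size):\n",
"        yield samples[i:i + batch_size]\n",
"\n",
"list(batches([1, 2, 3, 4, 5], batch_size=2))  # [[1, 2], [3, 4], [5]]\n",
"```"
]
},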
{
"cell_type": "markdown",
"id": "b4bb738f",
"metadata": {},
"source": [
"### Training"
]
},
{
"cell_type": "markdown",
"id": "8da4f8eb",
"metadata": {},
"source": [
"If you're familiar with other deep learning frameworks, you might perhaps expect a `.fit(...)` or `.train(...)` method on the model. However, in PyTorch-Lightning, this is done slightly differently. We first create a trainer and then pass both the model and the `DataLoader` to `trainer.fit(...)`. So, the API is more functional in style than object-oriented."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "952d90de",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"GPU available: False, used: False\n",
"TPU available: False, using: 0 TPU cores\n"
]
}
],
"source": [
"trainer = pl.Trainer(max_epochs=2, weights_summary=None)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "8937b061",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/blackfly/pyenvs/pt/lib/python3.9/site-packages/pytorch_lightning/utilities/distributed.py:69: UserWarning: you defined a validation_step but have no val_dataloader. Skipping val loop\n",
" warnings.warn(*args, **kwargs)\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Validation sanity check: 0it [00:00, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/blackfly/pyenvs/pt/lib/python3.9/site-packages/pytorch_lightning/utilities/distributed.py:69: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 6 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.\n",
" warnings.warn(*args, **kwargs)\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "45ecc3d497a847c7a81b980c6e047d19",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Training: 0it [00:00, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"trainer.fit(model, train_loader)"
]
},
{
"cell_type": "markdown",
"id": "915860fe",
"metadata": {},
"source": [
"### From data to a trained model - a very minimal example"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "6ce12fc8",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"GPU available: False, used: False\n",
"TPU available: False, using: 0 TPU cores\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Validation sanity check: 0it [00:00, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "a651cde7ef1e4543a146ce81fb11d62c",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Training: 0it [00:00, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"train_ds = pt.datasets.Iris(dims=[0, 2])\n",
"train_loader = torch.utils.data.DataLoader(train_ds, batch_size=32)\n",
"\n",
"model = pt.models.GLVQ(\n",
" dict(distribution=(3, 2), lr=0.1),\n",
" prototype_initializer=pt.components.SMI(train_ds),\n",
")\n",
"\n",
"trainer = pl.Trainer(max_epochs=50, weights_summary=None)\n",
"trainer.fit(model, train_loader)"
]
},
{
"cell_type": "markdown",
"id": "e8094c0b",
"metadata": {},
"source": [
"## Advanced"
]
},
{
"cell_type": "markdown",
"id": "6d691b30",
"metadata": {},
"source": [
"### Initializing prototypes with a subset of a dataset (along with transformations)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "71a028da",
"metadata": {},
"outputs": [],
"source": [
"import prototorch as pt\n",
"import pytorch_lightning as pl\n",
"import torch\n",
"from torchvision import transforms\n",
"from torchvision.datasets import MNIST"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "37528377",
"metadata": {},
"outputs": [],
"source": [
"from matplotlib import pyplot as plt"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "7626a902",
"metadata": {},
"outputs": [],
"source": [
"train_ds = MNIST(\n",
" \"~/datasets\",\n",
" train=True,\n",
" download=True,\n",
" transform=transforms.Compose([\n",
" transforms.RandomHorizontalFlip(p=1.0),\n",
" transforms.RandomVerticalFlip(p=1.0),\n",
" transforms.ToTensor(),\n",
" ]),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "de9ed93c",
"metadata": {},
"outputs": [],
"source": [
"s = int(0.05 * len(train_ds))\n",
"init_ds, rest_ds = torch.utils.data.random_split(train_ds, [s, len(train_ds) - s])"
]
},
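{
"cell_type": "markdown",
"id": "split-arithmetic",
"metadata": {},
"source": [
"For MNIST's 60,000 training images, the split sizes work out as follows. Note that `random_split` requires the given lengths to sum exactly to the dataset size:\n",
"\n",
"```python\n",
"n = 60000          # len(train_ds) for the MNIST training set\n",
"s = int(0.05 * n)  # 5% reserved for prototype initialization\n",
"assert (s, n - s) == (3000, 57000)\n",
"```"
]
},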
{
"cell_type": "code",
"execution_count": 17,
"id": "400b9ba0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<torch.utils.data.dataset.Subset at 0x7fd9c9c5b8e0>"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"init_ds"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "0574a071",
"metadata": {},
"outputs": [],
"source": [
"model = pt.models.ImageGLVQ(\n",
" dict(distribution=(10, 5)),\n",
" prototype_initializer=pt.components.SMI(init_ds),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "5fc34157",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7fd9c8173a00>"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
},
{
"data": {
"text/plain": [
"<Figure size 432x288 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"plt.imshow(model.get_prototype_grid(num_columns=10))"
]
},
{
"cell_type": "markdown",
"id": "e75ba9e0",
"metadata": {},
"source": [
"## FAQs"
]
},
{
"cell_type": "markdown",
"id": "bffea4a1",
"metadata": {},
"source": [
"### How do I retrieve the prototypes and their respective labels from the model?\n",
"\n",
"For prototype models, the prototypes can be retrieved (as a `torch.Tensor`) via `model.prototypes`. If required, you can convert them to a NumPy array by calling `.numpy()` on the tensor.\n",
"\n",
"```python\n",
">>> model.prototypes.numpy()\n",
"```\n",
"\n",
"Similarly, the labels of the prototypes can be retrieved via `model.prototype_labels`.\n",
"\n",
"```python\n",
">>> model.prototype_labels\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "ecf33e0a",
"metadata": {},
"source": [
"### How do I make inferences/predictions/recall with my trained model?\n",
"\n",
"The models under [prototorch.models](https://github.com/si-cim/prototorch_models) provide a `.predict(x)` method for making predictions, which returns the predicted class labels. It is essential that the input to this method is a `torch.Tensor` and not a NumPy array. Model instances are also callable, so you could simply write `model(x)` as if `model` were a function; note, however, that this returns a (pseudo-)probability distribution over the classes rather than labels.\n",
"\n",
"#### Example\n",
"\n",
"```python\n",
">>> y_pred = model.predict(torch.Tensor(x_train)) # returns class labels\n",
"```\n",
"or, simply\n",
"```python\n",
">>> y_pred = model(torch.Tensor(x_train)) # returns probabilities\n",
"```"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}