
ProtoTorch Models

Pre-packaged prototype-based machine learning models using ProtoTorch and PyTorch-Lightning.

Installation

To install this plugin, simply run the following command:

pip install prototorch_models

The plugin should then be available for use in your Python environment as prototorch.models.

Note: Installing the models plugin should automatically install a suitable version of ProtoTorch.
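
Once installed, the models can be trained like any other PyTorch-Lightning module. The following is a minimal sketch, assuming the Iris toy dataset under pt.datasets and a GLVQ class that accepts a plain hparams dictionary and a prototype initializer; exact argument names may differ between versions, so refer to the scripts in the examples directory for up-to-date usage:

import torch
import prototorch as pt
import pytorch_lightning as pl

# Toy dataset shipped with ProtoTorch (first and third Iris feature).
train_ds = pt.datasets.Iris(dims=[0, 2])
train_loader = torch.utils.data.DataLoader(train_ds, batch_size=64)

# Hyperparameters: one prototype per class and a small learning rate.
# The exact keys are an assumption and may vary between plugin versions.
hparams = dict(distribution={"num_classes": 3, "prototypes_per_class": 1}, lr=0.01)

# Build a GLVQ model from the plugin and train it with PyTorch-Lightning.
model = pt.models.GLVQ(hparams, prototype_initializer=pt.components.SMI(train_ds))
trainer = pl.Trainer(max_epochs=50)
trainer.fit(model, train_loader)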

Available models

LVQ Family

  • Learning Vector Quantization 1 (LVQ1)
  • Generalized Learning Vector Quantization (GLVQ)
  • Generalized Relevance Learning Vector Quantization (GRLVQ)
  • Generalized Matrix Learning Vector Quantization (GMLVQ)
  • Localized and Generalized Matrix Learning Vector Quantization (LGMLVQ)
  • Limited-Rank Matrix Learning Vector Quantization (LiRaMLVQ)
  • Learning Vector Quantization Multi-Layer Network (LVQMLN)
  • Siamese GLVQ
  • Cross-Entropy Learning Vector Quantization (CELVQ)
  • Robust Soft Learning Vector Quantization (RSLVQ)

Other

  • k-Nearest Neighbors (KNN)
  • Neural Gas (NG)
  • Growing Neural Gas (GNG)

Work in Progress

  • Classification-By-Components Network (CBC)
  • Learning Vector Quantization 2.1 (LVQ2.1)

Planned models

  • Median-LVQ
  • Generalized Tangent Learning Vector Quantization (GTLVQ)
  • Probabilistic Learning Vector Quantization (PLVQ)
  • Self-Incremental Learning Vector Quantization (SILVQ)

Development setup

It is recommended that you use a virtual environment for development. If you do not use conda, the easiest way to work with virtual environments is by using virtualenvwrapper. Once you've installed it with pip install virtualenvwrapper, you can do the following:

export WORKON_HOME=~/pyenvs
mkdir -p $WORKON_HOME
source /usr/local/bin/virtualenvwrapper.sh  # location may vary
mkvirtualenv pt

Once you have a virtual environment set up, you can install the models plugin with:

workon pt
git clone git@github.com:si-cim/prototorch_models.git
cd prototorch_models
git checkout dev
pip install -e .[all]  # escape the brackets as .\[all\] or quote '.[all]' if you are using zsh or macOS

Note: Please avoid installing Tensorflow in this environment.

To assist in the development process, you may also find it useful to install yapf, isort and autoflake. You can install them easily with pip.
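
For example, all three can be pulled in with a single command:

pip install yapf isort autoflake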

FAQ

How do I update the plugin?

If you have already cloned and installed prototorch and the prototorch_models plugin in editable mode (with the -e flag) via pip, all you have to do is navigate to those folders in your terminal and run git pull to update.
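
For example, assuming both repositories were cloned side by side into the same parent folder:

cd prototorch && git pull
cd ../prototorch_models && git pull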