Add references to the documentation.

commit 66e3e51a52 (parent 0c1f7a4772)
@@ -47,6 +47,7 @@ extensions = [
    "sphinx.ext.viewcode",
    "sphinx_rtd_theme",
    "sphinxcontrib.katex",
+   "sphinxcontrib.bibtex",
]

# https://nbsphinx.readthedocs.io/en/0.8.5/custom-css.html#For-All-Pages
@@ -202,3 +203,7 @@ intersphinx_mapping = {

epub_cover = ()
version = release
+
+# -- Options for Bibliography -------------------------------------------
+bibtex_bibfiles = ['refs.bib']
+bibtex_reference_style = 'author_year'
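
With this configuration the RST sources can cite entries from ``refs.bib``
inline and render them in a list, as the documentation changes below do::

    Original LVQ models introduced by :cite:t:`kohonen1989`.

    .. bibliography::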
@@ -14,7 +14,7 @@ Unsupervised Methods

Classical Learning Vector Quantization
-----------------------------------------
-Original LVQ models by Kohonen.
+Original LVQ models introduced by :cite:t:`kohonen1989`.
These heuristic algorithms do not use gradient descent.
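
As a sketch of the heuristic (the textbook LVQ1 rule, stated here for
orientation, not code from this commit): the winning prototype :math:`w` is
pulled toward a sample :math:`x` of its own class and pushed away otherwise,

.. math::

   w \leftarrow w \pm \alpha (x - w),

with ``+`` when the labels of :math:`x` and :math:`w` agree and ``-`` when
they differ.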

.. autoclass:: prototorch.models.lvq.LVQ1
@@ -22,7 +22,7 @@ These heuristic algorithms do not use gradient descent.
.. autoclass:: prototorch.models.lvq.LVQ21
   :members:

-It is also possible to use the GLVQ structure as shown in [Sato&Yamada].
+It is also possible to use the GLVQ structure as shown by :cite:t:`sato1996` in chapter 4.
This allows the use of gradient descent methods.

.. autoclass:: prototorch.models.glvq.GLVQ1
@@ -33,14 +33,15 @@ This allows the use of gradient descent methods.
Generalized Learning Vector Quantization
-----------------------------------------

+:cite:t:`sato1996` presented an LVQ variant with a cost function called GLVQ.
This allows the use of gradient descent methods.
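
For orientation, the GLVQ cost is built from the relative distance difference
(the standard formulation from :cite:t:`sato1996`; :math:`f` is a
monotonically increasing squashing function):

.. math::

   \mu(x) = \frac{d_{1}(x) - d_{2}(x)}{d_{1}(x) + d_{2}(x)}, \qquad
   E = \sum_{x} f\bigl(\mu(x)\bigr),

where :math:`d_{1}` is the distance to the closest prototype with the same
label as :math:`x`, and :math:`d_{2}` the distance to the closest prototype
with a different label. Since :math:`\mu` is differentiable in the
prototypes, gradient descent applies.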

.. autoclass:: prototorch.models.glvq.GLVQ
   :members:

.. autoclass:: prototorch.models.glvq.ImageGLVQ
   :members:
-
-.. autoclass:: prototorch.models.glvq.SiameseGLVQ
-   :members:
+The cost function of GLVQ can be extended by a learnable dissimilarity.
+These learnable dissimilarities assign relevances to each data dimension during the learning phase.
+For example, GRLVQ :cite:p:`hammer2002` and GMLVQ :cite:p:`schneider2009`.
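
As a sketch, these relevance-based dissimilarities take the form

.. math::

   d_{\lambda}(x, w) = \sum_{i} \lambda_{i} (x_{i} - w_{i})^{2}
   \quad \text{(GRLVQ)}, \qquad
   d_{\Omega}(x, w) = (x - w)^{\top} \Omega^{\top} \Omega \, (x - w)
   \quad \text{(GMLVQ)},

with learnable relevances :math:`\lambda_{i} \ge 0` and a learnable matrix
:math:`\Omega`, both trained alongside the prototypes.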

.. autoclass:: prototorch.models.glvq.GRLVQ
   :members:
@@ -48,11 +49,44 @@ Generalized Learning Vector Quantization
.. autoclass:: prototorch.models.glvq.GMLVQ
   :members:

+The dissimilarity from GMLVQ can be interpreted as a projection into another data space.
+Applying this projection only to the data results in LVQMLN.
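
In this reading, GMLVQ measures :math:`\lVert \Omega x - \Omega w \rVert^{2}`
with samples and prototypes both mapped by :math:`\Omega`; applying the
projection only to the data yields a dissimilarity of the form
:math:`\lVert \Omega x - w \rVert^{2}`, so the prototypes of LVQMLN live
directly in the projected space.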
+
+.. autoclass:: prototorch.models.glvq.LVQMLN
+   :members:
+
+The projection idea from GMLVQ can be extended to an arbitrary transformation with learnable parameters.
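
A minimal sketch of the idea (illustrative only, not the SiameseGLVQ API):
any learnable map with shared weights can play the role of the
transformation, applied to samples and prototypes alike before the distance
is taken.

.. code-block:: python

    import torch

    # Hypothetical backbone: a learnable transformation phi with shared
    # weights, applied to both samples and prototypes ("siamese" setup).
    phi = torch.nn.Sequential(
        torch.nn.Linear(100, 32),
        torch.nn.ReLU(),
        torch.nn.Linear(32, 2),
    )

    x = torch.randn(8, 100)  # batch of samples
    w = torch.randn(5, 100)  # five prototypes in the input space
    d = torch.cdist(phi(x), phi(w))  # (8, 5) dissimilarities in the embedding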
+
+.. autoclass:: prototorch.models.glvq.SiameseGLVQ
+   :members:
+
+Probabilistic Models
+--------------------------------------------
+
+Probabilistic variants assume that the prototypes generate a probability distribution over the classes.
+For a test sample they return a distribution instead of a class assignment.
+
+The following two algorithms were presented by :cite:t:`seo2003`.
+Every prototype is the center of a Gaussian distribution of its class, generating a mixture model.
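
A sketch of the resulting class-conditional density (assuming, for
illustration, equal mixture weights and a shared isotropic width
:math:`\sigma`):

.. math::

   p(x \mid c) \propto \sum_{j : c(w_{j}) = c}
   \exp\left( -\frac{\lVert x - w_{j} \rVert^{2}}{2\sigma^{2}} \right),

summing over all prototypes :math:`w_{j}` that carry the class label
:math:`c`.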
+
+.. autoclass:: prototorch.models.probabilistic.LikelihoodRatioLVQ
+   :members:
+
+.. autoclass:: prototorch.models.probabilistic.RSLVQ
+   :members:
+
+Missing:
+
+- PLVQ
+
Classification by Component
------------------------------------------
+--------------------------------------------

+The Classification by Component (CBC) architecture was introduced by :cite:t:`saralajew2019`.
+In a CBC architecture, there is no class assigned to the prototypes.
+Instead, the dissimilarities are used in a reasoning process that favours or rejects a class by a learnable degree.
+The output of a CBC network is a probability distribution over all classes.
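
In simplified form (leaving out the indefinite reasoning of the full model),
the class probability combines the detection probability :math:`d_{k}(x)` of
each component with learnable positive and negative reasoning weights:

.. math::

   p(c \mid x) \propto \sum_{k} d_{k}(x)\, r^{+}_{k,c}
   + \bigl(1 - d_{k}(x)\bigr)\, r^{-}_{k,c}.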

.. autoclass:: prototorch.models.cbc.CBC
   :members:

@@ -62,6 +96,15 @@ Classification by Component
Visualization
========================================

+Visualization is very specific to its application.
+PrototorchModels provides visualizations for two-dimensional data and image data.
+
+The visualizations can be shown in a separate window and inside a TensorBoard.
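
A hedged usage sketch (the callback name and data objects are assumptions
here, not the verified API): the visualizers are meant to plug into a
PyTorch Lightning training loop as callbacks.

.. code-block:: python

    import pytorch_lightning as pl
    from prototorch.models.vis import VisGLVQ2D  # class name assumed

    vis = VisGLVQ2D(data=train_ds)  # redraws 2D data and prototypes each epoch
    trainer = pl.Trainer(callbacks=[vis])
    trainer.fit(model, train_loader)  # model, train_ds, train_loader assumed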

.. automodule:: prototorch.models.vis
   :members:
   :undoc-members:

+Bibliography
+========================================
+.. bibliography::

docs/source/refs.bib (new file, 62 lines)
@@ -0,0 +1,62 @@
+@article{sato1996,
+  title = {Generalized learning vector quantization},
+  author = {Sato, Atsushi and Yamada, Keiji},
+  journal = {Advances in neural information processing systems},
+  pages = {423--429},
+  year = {1996},
+  publisher = {Morgan Kaufmann Publishers},
+  url = {http://papers.nips.cc/paper/1113-generalized-learning-vector-quantization.pdf},
+}
+
+@book{kohonen1989,
+  doi = {10.1007/978-3-642-88163-3},
+  year = {1989},
+  publisher = {Springer Berlin Heidelberg},
+  author = {Teuvo Kohonen},
+  title = {Self-Organization and Associative Memory},
+}
+
+@inproceedings{saralajew2019,
+  author = {Saralajew, Sascha and Holdijk, Lars and Rees, Maike and Asan, Ebubekir and Villmann, Thomas},
+  booktitle = {Advances in Neural Information Processing Systems},
+  title = {Classification-by-Components: Probabilistic Modeling of Reasoning over a Set of Components},
+  url = {https://proceedings.neurips.cc/paper/2019/file/dca5672ff3444c7e997aa9a2c4eb2094-Paper.pdf},
+  volume = {32},
+  year = {2019},
+}
+
+@article{seo2003,
+  author = {Seo, Sambu and Obermayer, Klaus},
+  title = {Soft Learning Vector Quantization},
+  journal = {Neural Computation},
+  volume = {15},
+  number = {7},
+  pages = {1589--1604},
+  year = {2003},
+  month = {07},
+  doi = {10.1162/089976603321891819},
+}
+
+@article{hammer2002,
+  title = {Generalized relevance learning vector quantization},
+  journal = {Neural Networks},
+  volume = {15},
+  number = {8},
+  pages = {1059--1068},
+  year = {2002},
+  doi = {10.1016/S0893-6080(02)00079-5},
+  author = {Barbara Hammer and Thomas Villmann},
+}
+
+@article{schneider2009,
+  author = {Schneider, Petra and Biehl, Michael and Hammer, Barbara},
+  title = {Adaptive Relevance Matrices in Learning Vector Quantization},
+  journal = {Neural Computation},
+  volume = {21},
+  number = {12},
+  pages = {3532--3561},
+  year = {2009},
+  month = {12},
+  doi = {10.1162/neco.2009.11-08-908},
+}

@@ -90,8 +90,6 @@ def robust_soft_loss(probabilities, target, prototype_labels):

class LikelihoodRatioLVQ(GLVQ):
    """Learning Vector Quantization based on Likelihood Ratios
-
-    Based on "Soft Learning Vector Quantization" from Sambu Seo and Klaus Obermayer (2003).
    """
    def __init__(self, hparams, **kwargs):
        super().__init__(hparams, **kwargs)

@@ -128,8 +126,6 @@ class LikelihoodRatioLVQ(GLVQ):

class RSLVQ(GLVQ):
    """Learning Vector Quantization based on Likelihood Ratios
-
-    Based on "Soft Learning Vector Quantization" from Sambu Seo and Klaus Obermayer (2003).
    """
    def __init__(self, hparams, **kwargs):
        super().__init__(hparams, **kwargs)
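
# Editorial sketch (after Seo & Obermayer, 2003), not code from this commit:
# both classes optimize a likelihood-ratio objective of the form
#     L(x, y) = log p(x, y) - log p(x),
# i.e. the log-probability that x is generated by the prototypes of its own
# class y, relative to the full mixture over all prototypes.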