* chore: update pre-commit versions
* ci: remove old configurations
* ci: copy workflow from prototorch
* ci: run precommit for all files
* ci: add examples CPU test
* ci(test): failing example test
* ci: fix workflow definition
* ci(test): repeat failing example test
* ci: fix workflow definition
* ci(test): repeat failing example test II
* ci: fix test command
* ci: cleanup example test
* ci: remove travis badge
The early stopping callback does not work as expected and crashes at the end of
`max_epochs` with the following error:
```
~/miniconda3/envs/py38/lib/python3.8/site-packages/pytorch_lightning/trainer/callback_hook.py in on_train_end(self)
155 """Called when the train ends."""
156 for callback in self.callbacks:
--> 157 callback.on_train_end(self, self.lightning_module)
158
159 def on_pretrain_routine_start(self) -> None:
~/work/repos/prototorch_models/prototorch/models/callbacks.py in on_train_end(self, trainer, pl_module)
18 def on_train_end(self, trainer, pl_module):
19 # instead, do it at the end of training loop
---> 20 self._run_early_stopping_check(trainer, pl_module)
21
22
TypeError: _run_early_stopping_check() takes 2 positional arguments but 3 were given
```
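The TypeError indicates that, in the installed PyTorch Lightning version, `_run_early_stopping_check` accepts only the trainer (two positional arguments including `self`), while the override still passes `pl_module` as well. Below is a minimal sketch of a corrected callback; the class name is hypothetical and the sketch assumes a Lightning version with the trainer-only signature:

```python
from pytorch_lightning.callbacks import EarlyStopping


class StopOnTrainEnd(EarlyStopping):
    """Defer the early-stopping check to the end of the training loop.

    Hypothetical class name; a sketch assuming a PyTorch Lightning
    version in which ``_run_early_stopping_check`` takes only the trainer.
    """

    def on_validation_end(self, trainer, pl_module):
        # Skip the default per-validation check; it is deferred below.
        pass

    def on_train_end(self, trainer, pl_module):
        # Passing ``pl_module`` here raises the TypeError shown above,
        # because the parent method no longer accepts it.
        self._run_early_stopping_check(trainer)
```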
Passing the component initializer as an hparam slows down the script considerably.
The API has now been changed to pass it as a kwarg to the models instead.
The example scripts have been updated to reflect this change.
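A rough sketch of the new calling convention follows; the exact model class, initializer, and keyword names used here (`GLVQ`, `prototype_initializer`, `StratifiedMeanInitializer`) are assumptions for illustration, not a verbatim copy of the new API:

```python
import prototorch as pt
from torch.utils.data import DataLoader

# Hypothetical dataset and initializer, for illustration only.
train_ds = pt.datasets.Iris(dims=[0, 2])
train_loader = DataLoader(train_ds, batch_size=64)

hparams = dict(distribution=(3, 2), lr=0.01)

# Old style (slow): the component initializer was stored inside hparams.
# model = pt.models.GLVQ(dict(**hparams, prototype_initializer=initializer))

# New style: the initializer is passed as a separate keyword argument.
model = pt.models.GLVQ(
    hparams,
    prototype_initializer=pt.components.StratifiedMeanInitializer(train_loader),
)
```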
ImageGMLVQ and an example script `gmlvq_mnist.py` that uses it have also been
added.