This week we worked on a few more cleanups for our mixin and finally merged it into develop.
We also wrote a cookbook for converters and added a factory for them.

We decided to apply our mixin to another iterative model as an additional example.
For this we chose NewtonSVM, a LinearMachine. The class needed some updating, and along the way we discovered a bug in IterativeMachine where end_training() was not called in case of premature stopping; this was fixed in this PR.

The NewtonSVM code is old: it uses a lot of raw pointers for memory allocation instead of SGVector and SGMatrix, and has many places that should use linalg instead of hand-written for loops.
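To give a flavor of the cleanup involved, here is a small sketch (a simplified, made-up function, not the actual NewtonSVM code) of replacing a raw buffer and a manual loop with SGVector and linalg:

```cpp
#include <shogun/lib/SGVector.h>
#include <shogun/mathematics/linalg/LinalgNamespace.h>

using namespace shogun;

// Simplified illustration: computes out = a + alpha * b without raw
// allocations or manual loops. (SGVector copies are shallow, so passing
// by value is cheap.)
SGVector<float64_t> scaled_sum(
    SGVector<float64_t> a, SGVector<float64_t> b, float64_t alpha)
{
    // The old style would be roughly:
    //   float64_t* out = SG_MALLOC(float64_t, a.vlen);
    //   for (index_t i = 0; i < a.vlen; ++i)
    //       out[i] = a[i] + alpha * b[i];
    // The new style delegates allocation and arithmetic:
    SGVector<float64_t> out(a.vlen);
    linalg::add(a, b, out, 1.0, alpha); // out = 1.0 * a + alpha * b
    return out;
}
```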

As a start, we ported it to IterativeMachine as is. This requires making everything that is shared across iterations a data member of the subclass and initializing it properly to avoid memory leaks, and then implementing the init_model(), iteration(), and end_training() methods.
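The pattern looks roughly like the following self-contained sketch (the class and member names are hypothetical, not the actual Shogun API). It also shows why the end_training() bug mattered: the finalization step must run even when training stops early.

```cpp
#include <iostream>

// Minimal sketch of the IterativeMachine pattern: state shared across
// iterations lives in data members, and training is split into
// init_model / iteration / end_training.
class IterativeMachineSketch
{
public:
    virtual ~IterativeMachineSketch() = default;

    void train(int max_iterations)
    {
        init_model();
        for (m_iter = 0; m_iter < max_iterations && !m_converged; ++m_iter)
            iteration();
        // The bug fixed this week: end_training() must also run when the
        // loop stops prematurely (e.g. on convergence), not only when
        // max_iterations is exhausted.
        end_training();
    }

protected:
    virtual void init_model() = 0;   // allocate/initialize shared state
    virtual void iteration() = 0;    // one training step
    virtual void end_training() = 0; // finalize the model

    int m_iter = 0;
    bool m_converged = false;
};

class NewtonSVMSketch : public IterativeMachineSketch
{
protected:
    void init_model() override { m_weights = 0.0; }
    void iteration() override
    {
        m_weights += 1.0; // placeholder for a Newton step
        if (m_weights >= 3.0)
            m_converged = true; // premature stop still reaches end_training()
    }
    void end_training() override { std::cout << "w = " << m_weights << "\n"; }

private:
    double m_weights = 0.0; // shared across iterations, so a data member
};

int main()
{
    NewtonSVMSketch machine;
    machine.train(100);
}
```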

We worked on the benchmark for “put” this week. To implement various test cases, we added a std::function member that can be given a lambda implementing each update variant. The benchmark shows that there is negligible overhead when updating members with put instead of direct assignment. This will be used together with ParameterObserver in subsequent weeks.
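The std::function trick looks roughly like this minimal, self-contained sketch (UpdateBenchmark and its members are hypothetical names; the real benchmark exercises shogun objects and put()):

```cpp
#include <chrono>
#include <cstddef>
#include <functional>
#include <iostream>

// Hypothetical harness: each test case is injected as a lambda, so a single
// timing loop can cover every update variant.
struct UpdateBenchmark
{
    std::function<void()> update; // the update variant under test

    double ns_per_iteration(std::size_t iterations) const
    {
        auto start = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < iterations; ++i)
            update();
        auto stop = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::nano>(stop - start).count()
            / iterations;
    }
};

int main()
{
    volatile int member = 0; // volatile so the store is not optimized away
    // Baseline case: direct assignment. The put() case would instead
    // capture a shogun object and call obj->put(...) in the lambda.
    UpdateBenchmark direct{[&]() { member = 42; }};
    std::cout << "direct: " << direct.ns_per_iteration(1000000) << " ns/iter\n";
}
```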

Another example for IterativeMachine is Least Angle Regression (LARS). To port it to the IterativeMachine framework we will need to make iteration() templated. Currently most classes in shogun do not deal with feature-type dispatching at all, and the classes that do, like LDA and LARS, each duplicate the dispatch code. We tried a few approaches and will be using mixins to solve this problem, roughly as sketched below.
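Here is a minimal sketch of the idea (all names are hypothetical, not shogun's actual classes): a mixin owns the feature-type dispatch and forwards to a templated iteration_impl&lt;T&gt;() in the derived class, so LDA/LARS-style machines no longer repeat that logic themselves.

```cpp
#include <cstdio>

// Hypothetical feature primitive types to dispatch on.
enum class PType { F32, F64 };

// CRTP mixin: maps the runtime feature type to a compile-time template
// parameter exactly once, instead of in every machine.
template <typename Derived>
struct DispatchMixin
{
    void iteration(PType ptype)
    {
        switch (ptype)
        {
        case PType::F32:
            static_cast<Derived*>(this)->template iteration_impl<float>();
            break;
        case PType::F64:
            static_cast<Derived*>(this)->template iteration_impl<double>();
            break;
        }
    }
};

// A LARS-like machine only provides the templated iteration body.
struct MyLARS : DispatchMixin<MyLARS>
{
    template <typename T>
    void iteration_impl()
    {
        std::printf("iteration with %zu-byte floats\n", sizeof(T));
    }
};

int main()
{
    MyLARS lars;
    lars.iteration(PType::F32);
    lars.iteration(PType::F64);
}
```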


Contributions

NewtonSVM
Iterative Machine
Benchmark for put
Converter factory