*Exact Incremental Learning* - One or more examples can be exactly incremented into the current SVM solution, resulting in a classifier that is valid for the entire training set seen up to that point

*Regularization Parameter Perturbation* - The current SVM valid for a given set of regularization parameters C can be exactly perturbed to the SVM valid for a new set of regularization parameters C'

*Kernel Parameter Perturbation* - The current SVM valid for a given kernel parameter sigma can be exactly perturbed to the SVM valid for a new kernel parameter sigma'

*Exact and Approximate Leave-One-Out (LOO) Error Estimation* - The exact LOO error estimate can be computed efficiently by exactly unlearning one example at a time and testing the resulting classifier on the held-out example. An efficient LOO approximation is also implemented that predicts the change in each example's margin from the margin sensitivity. This approximation is known in the literature as the span bound, introduced by Vapnik and Chapelle.
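To make the LOO procedure concrete, here is a minimal sketch of what the exact LOO estimate computes. It is not the package's exact unlearning algorithm: instead of removing an example from the solution in place, this sketch simply retrains a linear SVM from scratch on each fold, using a simple Pegasos-style subgradient trainer (all function names here are illustrative, not part of the package).

```python
import random

def train_svm(data, labels, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM with a Pegasos-style subgradient method.

    Stand-in for the package's solver: the incremental code would
    unlearn a single example exactly rather than retrain from scratch.
    """
    rng = random.Random(seed)
    dim = len(data[0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        order = list(range(len(data)))
        rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)
            margin = labels[i] * sum(wj * xj for wj, xj in zip(w, data[i]))
            if margin < 1:
                # Margin violation: shrink w and step toward the example.
                w = [(1 - eta * lam) * wj + eta * labels[i] * xj
                     for wj, xj in zip(w, data[i])]
            else:
                # No violation: apply only the regularization shrinkage.
                w = [(1 - eta * lam) * wj for wj in w]
    return w

def loo_error(data, labels):
    """Exact leave-one-out error: hold out each example, train on the
    rest, and test the classifier on the held-out example."""
    mistakes = 0
    for i in range(len(data)):
        tr_x = data[:i] + data[i + 1:]
        tr_y = labels[:i] + labels[i + 1:]
        w = train_svm(tr_x, tr_y)
        score = sum(wj * xj for wj, xj in zip(w, data[i]))
        pred = 1 if score >= 0 else -1
        if pred != labels[i]:
            mistakes += 1
    return mistakes / len(data)
```

Retraining makes each fold cost a full optimization; the point of exact decremental unlearning is to obtain the same LOO estimate while paying only an incremental update per fold.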

Incremental SVM Learning

This code is designed for training SVMs to solve binary classification problems. It can typically handle training sets of up to roughly 10,000 examples. This constraint arises primarily because the method computes and updates the inverse of the kernel matrix for the support vectors.
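The reason maintaining this inverse is affordable incrementally is the standard block-matrix (Schur complement) identity: when a new support vector borders the kernel matrix with one extra row and column, the new inverse can be built from the old one in O(n^2) rather than re-inverting in O(n^3). A minimal sketch (the function names are illustrative, not the package's API):

```python
def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def expand_inverse(Kinv, k_new, k_self):
    """Given Kinv = K^{-1} for an n x n symmetric kernel matrix K,
    return the inverse of the (n+1) x (n+1) matrix formed by bordering
    K with the column k_new and diagonal entry k_self.

    Uses the block-matrix (Schur complement) identity:
      beta  = K^{-1} k_new
      gamma = k_self - k_new^T beta
    """
    n = len(Kinv)
    beta = mat_vec(Kinv, k_new)
    gamma = k_self - sum(ki * bi for ki, bi in zip(k_new, beta))
    # Top-left block: K^{-1} + beta beta^T / gamma
    out = [[Kinv[i][j] + beta[i] * beta[j] / gamma for j in range(n)]
           for i in range(n)]
    # Border column and row: -beta / gamma; corner: 1 / gamma
    for i in range(n):
        out[i].append(-beta[i] / gamma)
    out.append([-beta[j] / gamma for j in range(n)] + [1.0 / gamma])
    return out
```

The quadratic per-update cost is what makes incremental learning fast, but storing the full inverse for all support vectors is also what caps the practical problem size near 10,000 examples.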

The primary benefits of this code are listed above.

Download: IncrementalSVM.zip

Copyright © Chris Diehl 2008-2019