Contents:
- Changes sfa-tk.V2.8
- Changes sfa-tk.V2.7
- Changes sfa-tk.V2.6
- Literature
Changes sfa-tk.V2.8

Author: Wolfgang Konen, wolfgang.konen@fh-koeln.de
Created: Nov 2011
Last modified: Nov 2011
This chapter is a summary of changes in sfa-tk.V2.8 as compared to sfa-tk.V2.7:
- sfa_classify.m: added a call to the automatic parametric bootstrap (sfaPBootstrap.m)
- sfaPBootstrap.m: issues a warning if the training set is too small and, in that case, performs training set augmentation
- add_noisy_copies.m: parametric bootstrap, i.e. training set augmentation with noisy copies (see the sketch after this list)
- gaussClassifier.m: regularize too-small diagonal elements with opts.CL.epsD
- gaussClassifier.m: added an optional Nearest Neighbour classifier (opts.CL.algo='nearneig')
- sfa_step2.m: the default for epsC is now 0 (instead of 1e-7) if opts.epsC is not present
- sfa_step2.m: in section 'SVD_SFA' the line [W1,D1] = eig(C) is replaced with [tmp,D1,W1] = svd(C); this gives better numeric stability in the degenerate case and avoids negative eigenvalues (see the sketch at the end of this chapter)
- vprintf.m: display output only if opts.verbose is above a certain level
- All parameters related to the classifiers (Gauss or Nearest Neighbour) are now concentrated in the struct opts.CL: opts.CL.aligned, opts.CL.algo, opts.CL.epsD. If opts.CL is missing, sfa_classify.m will generate a default object.
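The parametric bootstrap of add_noisy_copies.m can be sketched as follows. This is a minimal illustration only; the function name, signature, and noise model shown here are assumptions, not the actual sfa-tk interface:

    % Minimal sketch of a parametric bootstrap: augment a small training
    % set by appending noisy copies of each pattern. Function name and
    % signature are illustrative, not the actual add_noisy_copies.m.
    function [Xaug, Yaug] = noisyCopiesSketch(X, Y, ncopies, sigfrac)
      % X: N x d matrix of training patterns, Y: N x 1 class labels
      sd = std(X, 0, 1);                   % per-dimension std. deviation
      Xaug = X;  Yaug = Y;
      for k = 1:ncopies
        noise = randn(size(X)) .* repmat(sigfrac*sd, size(X,1), 1);
        Xaug = [Xaug; X + noise];          % noisy copy of all patterns
        Yaug = [Yaug; Y];                  % class labels stay the same
      end
    end

Here sigfrac controls the noise level relative to the observed spread of the training data.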
The new features in this version are the automatic parametric bootstrap, the regularization of the Gaussian classifier, and the optional Nearest Neighbour classifier. Together they should make the default SFA classification algorithm fairly robust for marginal training data.
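A usage sketch for the new classifier options (the numeric value chosen for epsD below is purely illustrative):

    % Sketch: configure all classifier-related options in opts.CL before
    % calling sfa_classify.m. The value of epsD is an assumption.
    opts.CL.algo    = 'nearneig';  % optional Nearest Neighbour classifier
    opts.CL.aligned = 1;           % zero off-diagonal covariance elements
    opts.CL.epsD    = 1e-6;        % floor for too-small diagonal elements
    % If opts.CL is missing, sfa_classify.m generates a default object.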
For a description of the demo files and demo datasets now included in the distribution see demo\AAREADME.htm.
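The effect of the eig-to-svd replacement in sfa_step2.m (see the change list above) can be reproduced with a small, self-contained example: for a symmetric positive semi-definite covariance matrix, svd returns non-negative singular values in descending order, whereas eig may return tiny negative eigenvalues when C is numerically degenerate.

    % Illustration of [W1,D1]=eig(C) vs. [tmp,D1,W1]=svd(C) on a
    % rank-deficient covariance matrix (the degenerate case).
    X = randn(100,3);
    X(:,3) = X(:,1) + X(:,2);       % third column linearly dependent
    C = cov(X);                     % singular covariance matrix
    [W1,D1]     = eig(C);           % may yield tiny negative eigenvalues
    [tmp,D2,W2] = svd(C);           % non-negative, sorted singular values
    disp(diag(D1)');                % compare the two spectra
    disp(diag(D2)');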
Changes sfa-tk.V2.7

Author: Wolfgang Konen, wolfgang.konen@fh-koeln.de
Created: Jan 2011
Last modified: Jan 2011
This chapter is a summary of changes in sfa-tk.V2.7 as compared to sfa-tk.V2.6: What is new here is that sfaClassModel.m can store everything relevant for the classifier model (e.g. a model built from training data), so that at a later point in time new test data can be classified without access to the training data and without the need to run SFA again.
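The idea can be sketched as follows; the struct fields and file name below are purely illustrative and not the actual contents stored by sfaClassModel.m:

    % Sketch of model persistence: bundle everything the classifier needs
    % into one struct, save it, and classify new data later without the
    % training set. Field and file names are illustrative only.
    model.sfaStruct  = struct();    % stand-in for the trained SFA object
    model.classifier = struct();    % stand-in for the trained classifier
    save('sfaModel.mat', 'model');  % persist to disk
    % --- later, possibly in another Matlab session ---
    S = load('sfaModel.mat');       % restore the model
    % apply S.model to new test data: no training data, no new SFA run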
Scripts and other files for experiments have been moved to sfa-tk/experim (this directory may not be included in the distribution).
For a description of the demo files and demo datasets now included in the distribution see demo\AAREADME.htm.
Changes sfa-tk.V2.6

Author V2.6: Wolfgang Konen, wolfgang.konen@fh-koeln.de
Created: Nov 2009
Last modified: Nov 2009
(Author V1.0: Pietro Berkes, berkes@brandeis.edu)
For an in-depth description of the new algorithm SVD_SFA see [Konen09b].
This chapter is a summary of changes in sfa-tk.V2.6 as compared to sfa-tk.V1.0.1 [Berkes03]:
- lcov_pca2.m: same interface as lcov_pca.m, but better numerical stability in the case where the covariance matrix object becomes singular.
- sfa2_create.m: new parameter opts, which allows binding additional parameter settings such as opts.dbg (default 0) and opts.epsC (default 1e-7) to the SFA2 object.
- sfa_step.m: new parameter method, which is handed over to sfa2_step.m. Added default settings for method and dbg, depending on step.
- sfa2_step.m:
  - In step 'sfa': method='GEN_EIG' is the original code from [Berkes03]; method='SVD_SFA' is the new method from [Konen09b], which uses lcov_pca2.m along the lines of [WisSej02]. If dbg>0, both branches are executed and their results are compared with sfa2_dbg.m. If method='SVD_SFA', the matrices DSF and SF are reduced to those dimensions with eigenvalue ≠ 0 in D.
  - Added a check and warning w.r.t. the ill-conditioning of matrix B for method='GEN_EIG'. Always print rank(B).
  - In step 'expansion': method='TIMESERIES' is the original Berkes code, where the goal is to minimize the time-difference signal. method='CLASSIF' is new and serves classification purposes, along the lines of [Berkes05]: each data chunk is assumed to be a set of patterns of the same class, and the goal is to minimize the pairwise pattern differences for all pairs existing in this chunk of data. (That is, if the data of a certain class label are presented in several chunks, then only the pairs within each chunk are formed. It is therefore recommended to present each class label in one chunk, if possible; see the sketch after this list.)
  - In step 'preprocessing -> expansion': new branch for SFA_STRUCTS{hdl}.pp_type='PCA2': similar to 'PCA', but uses lcov_pca2.m instead of lcov_pca.m for whitening the input data and throws away those dimensions with eigenvalue close to zero.
- sfa2_dbg.m: called from sfa2_step.m if opts.dbg>0 and method='SVD_SFA'; only needed for debugging purposes.
  - Several checks are performed on the results from 'GEN_EIG' and 'SVD_SFA'. See the richly documented source code for further details.
  - The part with OLD_VERSION==1 is only for debugging comparisons; it contains the old, no longer recommended version of 'SVD_SFA'. Note that in the case OLD_VERSION==1 the call to step 'sfa' in sfa_step.m (or sfa2_step.m) has to be

        sfa_step(hdl, x, 'sfa', method);

    instead of

        sfa_step(hdl, [], 'sfa', method);

    and the data in x have to contain all training data in one chunk.
- sfa2.m:
  - Additional check with matrix Cslow, which prints a warning if some of the slow (training) signals have variance ≠ 1.
  - Added parameter method (='GEN_EIG' or 'SVD_SFA') to the input of sfa2.
  - Added parameter pp_type (='PCA', 'PCA2', or 'PCAVAR') to the input of sfa2.
- gaussCreate.m: create a Gaussian classifier object.
- gaussClassifier.m: train or apply a Gaussian classifier HDL, which has been created before with gaussCreate.m. Remark: if the data entered during training are not normally distributed, the inversion of the full covariance matrix can become numerically difficult. It is then recommended to call gaussClassifier.m with aligned=1, which zeros the off-diagonal elements of the covariance matrix (and aligns the Gaussian ellipsoid with the coordinate axes). This is always numerically robust.
- gauss_clearall.m: clear all Gaussian classifier objects.
- drive1.m: demo for driving-force experiments [Konen09a, Wis03c]. An additional parameter noiseperc allows injecting noise into the time series.
- class_demo1.m: demo for classification experiments acc. to [Berkes05] on synthetic data.
- class_demo2.m: demo for classification experiments acc. to [Berkes05] on real data (UCI repository or gesture data).
- class_CVdemo2.m: demo for classification experiments acc. to [Berkes05] with cross-validation on real data (UCI repository or gesture data) [Koch10a].
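The pieces above combine into the classification workflow used in the demo files. The following condensed sketch shows the intended call sequence; argument lists are simplified and partly assumed, so the demo files remain the authoritative reference:

    % Condensed sketch of an SFA classification run (cf. class_demo1.m).
    % Function signatures are partly assumed for illustration.
    nclasses = 3;
    X = cell(1, nclasses);
    for c = 1:nclasses
      X{c} = randn(50, 10) + c;               % dummy patterns of class c
    end
    opts.dbg = 0;
    hdl = sfa2_create(6, 27, opts);           % SFA2 object (dims assumed)
    for c = 1:nclasses                        % one chunk per class label,
      sfa_step(hdl, X{c}, 'preprocessing');   % as recommended above
    end
    for c = 1:nclasses
      sfa_step(hdl, X{c}, 'expansion', 'CLASSIF');  % within-chunk pairs
    end
    sfa_step(hdl, [], 'sfa', 'SVD_SFA');      % numerically stable branch
    y = sfa_execute(hdl, X{1});               % slow signals for some data
    % finally, train/apply the Gaussian classifier on the slow signals,
    % with aligned=1 for numerical robustness (see gaussClassifier.m)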
Literature

- [WisSej02] Wiskott, L. and Sejnowski, T.J. (2002). Slow Feature Analysis: Unsupervised Learning of Invariances. Neural Computation, 14(4):715-770.
- [Wis03c] Wiskott, L. (2003). Estimating driving forces of nonstationary time series with slow feature analysis. arXiv.org e-Print archive, http://arxiv.org/abs/cond-mat/0312317, December 2003.
- [Berkes05] Berkes, P. (2005). Pattern recognition with Slow Feature Analysis. Cognitive Sciences EPrint Archive (CogPrints) 4104, http://cogprints.org/4104/.
- [Berkes03] Berkes, P. (2003). SFA-TK: Slow Feature Analysis Toolkit for Matlab (v.1.0.1). http://itb.biologie.hu-berlin.de/~berkes/software/sfa-tk/sfa-tk.shtml or http://people.brandeis.edu/~berkes/software/sfa-tk/index.html.
- [Konen09a] Konen, W. and Koch, P. (2009). How slow is slow? SFA detects signals that are slower than the driving force. arXiv.org e-Print archive, http://arxiv.org/abs/0911.4397v1, November 2009.
- [Konen09b] Konen, W. (2009). On the numeric stability of the SFA implementation sfa-tk. arXiv.org e-Print archive, http://arxiv.org/abs/0912.1064, December 2009.
- [Koch10a] Koch, P., Konen, W. and Hein, K. (2010). Gesture Recognition on Few Training Data using Slow Feature Analysis and Parametric Bootstrap. In P. Sobrevilla (ed.), Proc. IEEE World Congress on Computational Intelligence (WCCI), Barcelona, July 2010.
- [Konen11a] Konen, W. (2011). Der SFA-Algorithmus für Klassifikation [The SFA algorithm for classification; in German]. CIOP Technical Report 08/2011, Cologne University of Applied Sciences.
- [Konen11b] Konen, W. (2011). SFA classification with few training data: Improvements with parametric bootstrap. CIOP Technical Report 09/2011, Cologne University of Applied Sciences.