Special issue on new challenges in neural computation 2012
This special issue on New Challenges in Neural Computation accompanies the workshop NC, which took place for the third time in 2012 alongside the prestigious DAGM'12 conference in Graz, Austria. The workshop was jointly organized by the GI Fachgruppe Neural Networks of the German Computer Science Society and the German Neural Networks Society, and it centered on exemplary challenges and novel developments in neural systems, spanning a broad spectrum from applications in robotics to theoretical investigations of challenges such as data visualization and missing value imputation. Of the 14 submissions presented at the workshop, this special issue includes five extended contributions centered on novel research in neural systems and machine learning.

The contribution Explorative Learning of Inverse Models: A Theoretical Perspective by Rolf and Steil deals with learning inverse functions and the role of redundancy in this setting. Interestingly, exact results can be obtained when the analysis is restricted to linear models; as the article also investigates, these findings can, to some extent, be compared empirically with non-linear domains. The presentation accompanying the short version of this contribution received the workshop's best presentation award.

The approach Classification in High-dimensional Spectral Data: Accuracy vs. Interpretability vs. Model Size by Backhaus and Seiffert likewise investigates, to some extent, the redundancy that occurs when classifying high-dimensional data sets from hyperspectral imaging or spectroscopy. More generally, the contribution investigates multiple criteria of classification which extend the mere classification error by the aspects of model interpretability and model sparseness, yielding quantitative measures in parts as well as a comparison of popular classifiers on benchmark data. In this way, the contribution constitutes one of the first attempts to scientifically evaluate important 'soft' properties of models such as interpretability.

One instance of data where the dimensionality is taken to the limit of infinity is functional data, i.e. instances of measurements which are, in theory, continuous. In this context, regularizing assumptions can help to avoid the curse of dimensionality, relying, for example, on smoothness conditions of the signals. Such background knowledge can be modeled by specific metric adaptation schemes which take local functional characteristics into account, as demonstrated in the article Lateral Enhancement in Adaptative Metric Learning for Functional Data presented by Villmann et al. in this volume.

Missing value imputation constitutes the topic of the contribution Mixture of Gaussians for distance estimation with missing data by Eirola et al. It is based on the observation that many algorithms rely on pairwise distances rather than on the missing values themselves, so a technique is proposed which estimates these distances directly instead of imputing the values. Interestingly, here too the dedicated treatment of high-dimensional data shows promising results.
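As a minimal illustration of this idea (a generic sketch under standard assumptions, not necessarily the article's exact estimator): if a missing coordinate $d$ of two samples is modeled by independent random variables $X_d$ and $Y_d$ whose means and variances are derived, e.g., from a fitted Gaussian mixture, then the expected squared difference decomposes as

\[ \mathbb{E}\big[(X_d - Y_d)^2\big] = \big(\mathbb{E}[X_d] - \mathbb{E}[Y_d]\big)^2 + \operatorname{Var}(X_d) + \operatorname{Var}(Y_d), \]

so pairwise squared Euclidean distances can be estimated by summing such terms over the coordinates, without first imputing point values.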
One specific technique for dissimilarity learning is investigated in the contribution Learning vector quantization for (dis-)similarities by Hammer et al. Here, learning vector quantization (LVQ) for similarity or dissimilarity data by means of kernelization or relationalization is considered, based on several existing proposals in the literature. The contribution manages to explain these diverse proposals as instantiations of two different gradient optimization schemes of cost-function-based LVQ techniques, accompanied by a characterization of the theoretical properties of the techniques as concerns convergence and learnability, as well as by experimental comparisons on a benchmark suite.

Altogether, the contributions of this special issue shed light on interesting novel research topics in the area of neural computation.