In recent years, machine learning (ML) has been increasingly used to solve complex tasks across disciplines, ranging from Data Mining to Information Retrieval and Natural Language Processing (NLP). These tasks often require processing structured input; for example, the ability to extract salient features from syntactic/semantic structures is critical to many NLP systems. Mapping such structured data into explicit feature vectors for ML algorithms demands considerable expertise, intuition, and deep knowledge of the target linguistic phenomena. Kernel Methods (KM) are powerful ML tools (see, e.g., (Shawe-Taylor and Cristianini, 2004)) that can alleviate this data representation problem: they replace feature-based similarities with similarity functions, i.e., kernels, defined directly between training/test instances, e.g., syntactic trees, so that explicit feature vectors are no longer needed. Additionally, kernel engineering, i.e., the composition or adaptation of several prototype kernels, facilitates the design of effective similarities for new tasks, e.g., (Moschitti, 2004; Moschitti, 2008).
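As a minimal sketch of these ideas (not the convolution tree kernels of the cited papers, which recursively count shared subtree fragments), the toy kernel below compares two syntactic trees by the productions they share, and "kernel engineering" is illustrated by linearly combining it with a lexical kernel over the leaf words. The tuple tree encoding and the weight `alpha` are illustrative assumptions, not from the original text.

```python
from collections import Counter

def productions(tree):
    """Collect grammar productions (parent label -> tuple of child labels)
    from a tree encoded as nested tuples, e.g. ('S', ('NP', ...), ('VP', ...)).
    String children are terminal words."""
    label, *children = tree
    if not children:
        return Counter()
    counts = Counter([(label, tuple(c[0] if isinstance(c, tuple) else c
                                    for c in children))])
    for c in children:
        if isinstance(c, tuple):
            counts += productions(c)
    return counts

def leaves(tree):
    """Collect the terminal words of the tree with multiplicity."""
    _, *children = tree
    out = Counter()
    for c in children:
        out += leaves(c) if isinstance(c, tuple) else Counter([c])
    return out

def production_kernel(t1, t2):
    """Dot product of production-count vectors, computed directly on the
    trees: no hand-designed explicit feature vector is ever materialized."""
    p1, p2 = productions(t1), productions(t2)
    return sum(p1[r] * p2[r] for r in p1)

def lexical_kernel(t1, t2):
    """Bag-of-words kernel over the leaf terminals."""
    l1, l2 = leaves(t1), leaves(t2)
    return sum(l1[w] * l2[w] for w in l1)

def combined_kernel(t1, t2, alpha=0.5):
    """Kernel engineering: a convex combination of two prototype kernels
    is again a valid kernel (alpha is a hypothetical tuning weight)."""
    return alpha * production_kernel(t1, t2) + (1 - alpha) * lexical_kernel(t1, t2)

t1 = ('S', ('NP', ('D', 'the'), ('N', 'dog')), ('VP', ('V', 'barks')))
t2 = ('S', ('NP', ('D', 'the'), ('N', 'cat')), ('VP', ('V', 'sleeps')))
print(production_kernel(t1, t2))   # shared productions: S->NP VP, NP->D N, D->the, VP->V
print(combined_kernel(t1, t2))
```

Such a kernel can be plugged into any kernelized learner (e.g., an SVM with a precomputed Gram matrix), which is what makes kernel engineering attractive: new similarities are obtained by composing existing ones rather than by re-engineering feature vectors.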
[1] John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[2] Alessandro Moschitti. A Study on Convolution Kernels for Shallow Semantic Parsing. ACL, 2004.
[3] Alessandro Moschitti et al. Efficient Linearization of Tree Kernel Functions. CoNLL, 2009.
[4] Roberto Basili et al. Structured Lexical Similarity via Convolution Kernels on Dependency Trees. EMNLP, 2011.
[5] Alessandro Moschitti. Kernel methods, syntax and semantics for relational text categorization. CIKM, 2008.
[6] Alessandro Moschitti et al. Fast Support Vector Machines for Structural Kernels. ECML/PKDD, 2011.