Hierarchical Machine Learning - A Learning Methodology Inspired by Human Intelligence

One of the basic characteristics of human problem solving, including learning, is the ability to conceptualize the world at different granularities and to move easily from one abstraction level to another, i.e., to deal with problems hierarchically [1]. Computers, in contrast, generally solve problems at a single abstraction level. This is one of the reasons that human beings are superior to computers in problem solving and learning. To endow computers with this human ability, several mathematical models have been presented, such as fuzzy set and rough set theories [2, 3]. Based on these models, problem solving and machine learning can be handled in worlds of different grain sizes. We proposed a quotient-space-based model [4, 5] that can also deal with problems hierarchically. In this model, the world is represented by a semi-lattice composed of a set of quotient spaces; each of them represents the world at a certain grain size and is denoted by a triple (X, F, f), where X is a domain, F is the structure of X, and f is the attribute of X.

In this talk, we discuss hierarchical machine learning based on the proposed model. From the quotient-space point of view, supervised learning (classification) can be regarded as finding a mapping from a low-level feature space to a high-level conceptual space, i.e., from a fine space to its quotient space (a coarse space). Since there is a large semantic gap between low-level feature spaces and conceptual spaces, finding this mapping is difficult and inefficient: it generally requires a large number of training samples and a huge computational cost. To reduce the computational complexity of machine learning, characteristics of human learning are adopted. In human learning, people always use a multi-level strategy, including multi-level classifiers and multi-level features, rather than a single level, i.e., they learn in spaces of different grain sizes.
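The fine-to-coarse relation above can be made concrete with a minimal sketch: a quotient space arises from a partition of the domain X, and the "mapping to the quotient space" is just the natural projection sending each element to its equivalence class. The toy domain and partition below are illustrative assumptions, not data from the talk.

```python
# A minimal sketch of a quotient space, assuming a toy domain X and a
# hand-chosen partition (both are illustrative, not from the talk).

def quotient_map(partition):
    """Build the natural projection p: X -> X/~ from a partition of X."""
    return {x: block for block in partition for x in block}

# Fine space X: individual elements; the partition groups them into
# equivalence classes, which are the points of the coarser space X/~.
X = {"a1", "a2", "b1", "b2", "c1"}
partition = [frozenset({"a1", "a2"}), frozenset({"b1", "b2"}), frozenset({"c1"})]

p = quotient_map(partition)
assert set(p) == X            # p is defined on all of X
assert p["a1"] == p["a2"]     # elements of one class map to the same point
coarse_space = set(p.values())
print(len(coarse_space))      # -> 3: X/~ has one point per equivalence class
```

A classifier in this view is such a projection learned from data: it collapses many fine-grained feature vectors onto one point (concept) of the coarse space.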
We call this kind of machine learning hierarchical learning; it is a powerful strategy for improving machine learning. Taking image retrieval as an example, we show how to apply the hierarchical learning strategy to this field. Given a query (an image) from a user, the aim of image retrieval is to find a set of similar images in a collection of images. This is a typical classification problem and can be regarded as supervised learning. The first problem is how to represent an image so that similar images can be found from the collection both precisely and completely.

So far in image retrieval, an image has been represented in several forms of different grain sizes. The finest representation of an image is an n×n matrix, each of whose elements represents a pixel. With this representation, retrieval precision is high but robustness (recall) is low: because it preserves the precise details of an image, it is sensitive to noise. Therefore, the pixel-based representation is rarely used in image retrieval. The most commonly used representation is the coarsest one, the so-called global visual features [6]. Here, an image is represented by a visual feature (a vector) such as color moments, color correlograms, wavelet transforms, or Gabor transforms. In the coarsest representations, most of the details of an image are lost, so retrieval precision decreases but robustness (recall) increases. The coarsest representations are suitable for seeking a class of similar images due to their robustness; hence, global visual features have been widely used for image retrieval. To overcome the low precision introduced by the coarsest representations (global features), middle-grain representations of an image, such as the region-based representation [7], were presented recently.
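As a concrete illustration of the simplest global visual feature named above, the sketch below computes first- and second-order color moments (per-channel mean and standard deviation) of an image. The nested-list "image" of RGB triples is a toy assumption; a real retrieval system would use NumPy or OpenCV arrays.

```python
# Hedged sketch: color moments as a global visual feature.
# The toy image format (rows of (R, G, B) tuples) is an assumption.
import math

def color_moments(image):
    """Return [mean_R, mean_G, mean_B, std_R, std_G, std_B]."""
    pixels = [px for row in image for px in row]
    n = len(pixels)
    means = [sum(px[c] for px in pixels) / n for c in range(3)]
    stds = [math.sqrt(sum((px[c] - means[c]) ** 2 for px in pixels) / n)
            for c in range(3)]
    return means + stds  # one coarse feature vector for the whole image

image = [[(255, 0, 0), (255, 0, 0)],
         [(0, 0, 255), (0, 0, 255)]]
print(color_moments(image)[:3])  # channel means -> [127.5, 0.0, 127.5]
```

Note how coarse this representation is: any spatial rearrangement of the same pixels yields the identical feature vector, which is exactly why it is robust but imprecise.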
In this representation, an image is partitioned into several consistent regions, and each region is represented by a visual feature (a vector) extracted from it; the whole image is then represented by a set of feature vectors. Since the region-based representation retains more detail of an image than the global one, retrieval precision increases but robustness decreases. The quality of image retrieval, including both precision and recall, can therefore be improved by using multi-level features. One strategy for hierarchical learning is to integrate features of different grain sizes, including the global, the region-based, and the pixel-based features.

One of the main goals of hierarchical learning is to reduce computational complexity. Based on the proposed model, we know that the learning cost can be reduced by using a set of multi-level classifiers; this set of classifiers composes a hierarchical learning framework. A set of experimental results on hand-written Chinese character recognition and image retrieval is given to verify the advantage of the approach. Hierarchical learning, inspired by human learning, is one methodology for improving the performance of machine learning.
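The multi-level classifier idea above can be sketched as a coarse-to-fine cascade: a cheap distance on coarse (global) features prunes the collection, and a costlier distance on finer features ranks only the survivors, so the expensive comparison runs on far fewer images. All names, distances, and thresholds below are illustrative assumptions, with scalar "features" standing in for real feature vectors.

```python
# Hedged sketch of coarse-to-fine hierarchical retrieval.
# coarse_dist is cheap (run on everything); fine_dist is expensive
# (run only on the pruned candidate set).

def hierarchical_retrieve(query, collection, coarse_dist, fine_dist,
                          threshold, k):
    # Level 1 (coarse space): prune with the cheap global-feature distance.
    candidates = [x for x in collection if coarse_dist(query, x) <= threshold]
    # Level 2 (fine space): rank survivors with the expensive distance.
    candidates.sort(key=lambda x: fine_dist(query, x))
    return candidates[:k]

# Toy 1-D features: the coarse distance compares rounded values,
# the fine distance compares exact values.
coarse = lambda a, b: abs(round(a) - round(b))
fine = lambda a, b: abs(a - b)

collection = [0.9, 1.1, 1.4, 3.0, 5.2]
print(hierarchical_retrieve(1.0, collection, coarse, fine, threshold=0, k=2))
# -> [0.9, 1.1]: 3.0 and 5.2 never reach the fine-grained comparison
```

The complexity saving is the point: if the coarse filter keeps a fraction p of N images, the fine stage costs O(pN) instead of O(N) comparisons.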

[1] Lotfi A. Zadeh, et al. Fuzzy Sets. Inf. Control, 1996.

[2] Bo Zhang, et al. Theory and Applications of Problem Solving, 1992.

[3] Bo Zhang, et al. The Quotient Space Theory of Problem Solving. Fundam. Informaticae, 2003.

[4] Bo Zhang, et al. An efficient and effective region-based image retrieval framework. IEEE Transactions on Image Processing, 2004.

[5] Jing Huang, et al. Image indexing using color correlograms. Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1997.

[6] Z. Pawlak. Rough Sets: Theoretical Aspects of Reasoning about Data, 1991.