A Review on Conventional Machine Learning vs Deep Learning

Nowadays, deep learning has become a prominent and rapidly growing research area in computer vision. Deep learning allows computational models composed of multiple processing layers to learn representations of data directly from its raw form, which conventional machine learning cannot do without hand-crafted feature engineering. These methods have markedly improved accuracy in domains such as speech recognition, face recognition, object detection, and biomedical applications. Among deep neural networks (DNNs), convolutional neural networks (CNNs) give excellent results on images and video, while recurrent neural networks (RNNs) perform better on sequential data such as text and speech.
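As a minimal illustration (not taken from the review itself), the core operation a CNN layer applies to raw pixels is a small sliding convolution; the kernel weights below are hand-set for a vertical edge, standing in for weights a network would learn by gradient descent:

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as in most CNN layers)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            # Multiply the kernel against the current image patch and sum.
            s = sum(image[r + i][c + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(s)
        out.append(row)
    return out

# A toy 4x4 "image" containing a vertical edge between columns 1 and 2.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]

# Hand-crafted vertical-edge detector; a CNN would learn such weights from data.
edge_kernel = [[1, -1],
               [1, -1]]

response = conv2d(image, edge_kernel)
# The response is strongest (magnitude 2) exactly along the edge column.
```

Stacking many such learned filters, interleaved with nonlinearities and pooling, is what lets a CNN build hierarchical features from raw pixels rather than relying on manually designed descriptors.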
