Baptiste Barreau | Damien Challet | Laurent Carlier
[1] Geoffrey E. Hinton, et al. Adaptive Mixtures of Local Experts, 1991, Neural Computation.
[2] James Bennett, et al. The Netflix Prize, 2007.
[3] Kaiming He, et al. Focal Loss for Dense Object Detection, 2017, IEEE International Conference on Computer Vision (ICCV).
[4] Tie-Yan Liu, et al. LightGBM: A Highly Efficient Gradient Boosting Decision Tree, 2017, NIPS.
[5] Peter I. Frazier, et al. A Tutorial on Bayesian Optimization, 2018, ArXiv.
[6] Nitesh V. Chawla, et al. SMOTE: Synthetic Minority Over-sampling Technique, 2002, J. Artif. Intell. Res.
[7] Justin A. Sirignano, et al. Universal features of price formation in financial markets: perspectives from deep learning, 2018, Machine Learning and AI in Finance.
[8] Xin Yao, et al. Simultaneous training of negatively correlated neural networks in an ensemble, 1999, IEEE Trans. Syst. Man Cybern. Part B.
[9] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[10] Alexandre Arenas, et al. Mapping individual behavior in financial markets: synchronization and anticipation, 2019, EPJ Data Science.
[11] Y. Nesterov. A method for solving the convex programming problem with convergence rate O(1/k^2), 1983.
[12] Geoffrey E. Hinton, et al. Lookahead Optimizer: k steps forward, 1 step back, 2019, NeurIPS.
[13] Joseph N. Wilson, et al. Twenty Years of Mixture of Experts, 2012, IEEE Transactions on Neural Networks and Learning Systems.
[14] Leland McInnes, et al. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction, 2018, ArXiv.
[15] Juho Kanniainen, et al. Multilayer Aggregation with Statistical Validation: Application to Investor Networks, 2017, Scientific Reports.
[16] Geoffrey E. Hinton, et al. On the importance of initialization and momentum in deep learning, 2013, ICML.
[17] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[18] Timothy Dozat. Incorporating Nesterov Momentum into Adam, 2016.
[19] Mark Goadrich, et al. The relationship between Precision-Recall and ROC curves, 2006, ICML.
[20] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[21] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[22] Leo Breiman. Random Forests, 2001, Machine Learning.
[23] Damien Challet, et al. Statistically validated lead-lag networks and inventory prediction in the foreign exchange market, 2016, Adv. Complex Syst.
[24] Yoshua Bengio, et al. Random Search for Hyper-Parameter Optimization, 2012, J. Mach. Learn. Res.
[25] Geoffrey E. Hinton, et al. Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer, 2017, ICLR.
[26] Rosario N. Mantegna, et al. Long-term ecology of investors in a financial market, 2018, Palgrave Communications.
[27] Geoffrey E. Hinton, et al. Visualizing Data using t-SNE, 2008.