Self-Attentive, Multi-Context One-Class Classification for Unsupervised Anomaly Detection on Text
Lukas Ruff | Yury Zemlyanskiy | Robert A. Vandermeulen | Thomas Schnake | Marius Kloft
[1] Sanjeev Arora, et al. A Simple but Tough-to-Beat Baseline for Sentence Embeddings, 2017, ICLR.
[2] Charu C. Aggarwal, et al. Outlier Detection with Autoencoder Ensembles, 2017, SDM.
[3] Iryna Gurevych, et al. Concatenated Power Mean Word Embeddings as Universal Cross-Lingual Sentence Representations, 2018, ArXiv:1803.01400.
[4] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[5] Yoshua Bengio, et al. A Neural Probabilistic Language Model, 2003, J. Mach. Learn. Res.
[6] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[7] Jason Weston, et al. A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning, 2008, ICML.
[8] Kevin Gimpel, et al. Towards Universal Paraphrastic Sentence Embeddings, 2015, ICLR.
[9] Tomas Mikolov, et al. Enriching Word Vectors with Subword Information, 2016, TACL.
[10] Qiang Zhou, et al. CSE: Conceptual Sentence Embeddings Based on Attention Model, 2016, ACL.
[11] Takehisa Yairi, et al. Anomaly Detection Using Autoencoders with Nonlinear Dimensionality Reduction, 2014, MLSDA'14.
[12] Malik Yousef, et al. One-Class Document Classification via Neural Networks, 2007, Neurocomputing.
[13] Marius Kloft, et al. Image Anomaly Detection with Generative Adversarial Networks, 2018, ECML/PKDD.
[14] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[15] Quoc V. Le, et al. Distributed Representations of Sentences and Documents, 2014, ICML.
[16] Alexander Binder, et al. Unmasking Clever Hans Predictors and Assessing What Machines Really Learn, 2019, Nature Communications.
[17] Robert P. W. Duin, et al. Support Vector Data Description, 2004, Machine Learning.
[18] Alexander Binder, et al. Deep One-Class Classification, 2018, ICML.
[19] Jaideep Srivastava, et al. Contextual Anomaly Detection in Text Data, 2012, Algorithms.
[20] Charu C. Aggarwal, et al. Outlier Analysis, 2013, Springer New York.
[21] Nicu Sebe, et al. Learning Deep Representations of Appearance and Motion for Anomalous Event Detection, 2015, BMVC.
[22] Thomas G. Dietterich, et al. Deep Anomaly Detection with Outlier Exposure, 2018, ICLR.
[23] Sergei Vassilvitskii, et al. k-means++: The Advantages of Careful Seeding, 2007, SODA '07.
[24] Ran El-Yaniv, et al. Deep Anomaly Detection Using Geometric Transformations, 2018, NeurIPS.
[25] M. M. Moya, et al. One-Class Classifier Networks for Target Recognition Applications, 1993.
[26] Sanjay Chawla, et al. Deep Learning for Anomaly Detection: A Survey, 2019, ArXiv.
[27] Franck Dernoncourt, et al. Sequential Short-Text Classification with Recurrent and Convolutional Neural Networks, 2016, NAACL.
[28] Yoon Kim, et al. Convolutional Neural Networks for Sentence Classification, 2014, EMNLP.
[29] Malik Yousef, et al. One-Class SVMs for Document Classification, 2002, J. Mach. Learn. Res.
[30] Bowen Zhou, et al. A Structured Self-Attentive Sentence Embedding, 2017, ICLR.
[31] Sriraam Natarajan, et al. Anomaly Detection in Text: The Value of Domain Knowledge, 2015, FLAIRS.
[32] Tomas Mikolov, et al. Bag of Tricks for Efficient Text Classification, 2016, EACL.
[33] Charu C. Aggarwal, et al. Outlier Detection for Text Data, 2017, SDM.
[34] Nhien-An Le-Khac, et al. Collective Anomaly Detection Based on Long Short-Term Memory Recurrent Neural Networks, 2016, FDSE.
[35] Christopher Potts, et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, 2013, EMNLP.
[36] Lovekesh Vig, et al. LSTM-based Encoder-Decoder for Multi-sensor Anomaly Detection, 2016, ArXiv.
[37] Christopher Leckie, et al. High-Dimensional and Large-Scale Anomaly Detection Using a Linear One-Class SVM with Deep Learning, 2016, Pattern Recognit.
[38] Philip S. Yu, et al. Partially Supervised Classification of Text Documents, 2002, ICML.
[39] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[40] Varun Chandola, et al. Anomaly Detection: A Survey, 2009, CSUR.
[41] David Guthrie. Unsupervised Detection of Anomalous Text, 2008.
[42] Éric Gaussier, et al. Deep k-Means: Jointly Clustering with k-Means and Learning Representations, 2018, Pattern Recognit. Lett.
[43] Bernhard Schölkopf, et al. Estimating the Support of a High-Dimensional Distribution, 2001, Neural Computation.
[44] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[45] Yoshua Bengio, et al. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, 2014, ArXiv.
[46] Sanja Fidler, et al. Skip-Thought Vectors, 2015, NIPS.
[47] Felix Hill, et al. Learning Distributed Representations of Sentences from Unlabelled Data, 2016, NAACL.
[48] Lovekesh Vig, et al. Long Short Term Memory Networks for Anomaly Detection in Time Series, 2015, ESANN.
[49] Georg Langs, et al. Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery, 2017, IPMI.
[50] Wiebke Wagner, et al. Steven Bird, Ewan Klein and Edward Loper: Natural Language Processing with Python, Analyzing Text with the Natural Language Toolkit, 2010, Lang. Resour. Evaluation.
[51] David A. Clifton, et al. A Review of Novelty Detection, 2014, Signal Process.