The impact of class imbalance techniques on crashing fault residence prediction models
[1] Zhou Xu, et al. Effort-Aware Just-in-Time Bug Prediction for Mobile Apps Via Cross-Triplet Deep Feature Embedding, 2022, IEEE Transactions on Reliability.
[2] Zhou Xu, et al. A comprehensive investigation of the impact of feature selection techniques on crashing fault residence prediction models, 2021, Inf. Softw. Technol.
[3] Zhou Xu, et al. Predicting Crash Fault Residence via Simplified Deep Forest Based on A Reduced Feature Set, 2021, 2021 IEEE/ACM 29th International Conference on Program Comprehension (ICPC).
[4] Tao Zhang, et al. Simplified Deep Forest Model based Just-In-Time Defect Prediction for Android Mobile Apps, 2020, 2020 IEEE 20th International Conference on Software Quality, Reliability and Security (QRS).
[5] Xiaohong Zhang, et al. Imbalanced metric learning for crashing fault residence prediction, 2020, J. Syst. Softw.
[6] Xin Wang, et al. Detecting and Explaining Self-Admitted Technical Debts with Attention-based Neural Networks, 2020, 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE).
[7] Qingkai Shi, et al. Functional code clone detection with syntax and semantics fusion learning, 2020, ISSTA.
[8] Kay Chen Tan, et al. Understanding the Automated Parameter Optimization on Transfer Learning for Cross-Project Defect Prediction: An Empirical Study, 2020, 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE).
[9] David Lo, et al. Chaff from the Wheat: Characterizing and Determining Valid Bug Reports, 2020, IEEE Transactions on Software Engineering.
[10] Qinbao Song, et al. A Comprehensive Investigation of the Role of Imbalanced Learning for Software Defect Prediction, 2019, IEEE Transactions on Software Engineering.
[11] Xiapu Luo, et al. LDFR: Learning deep feature representation for software defect prediction, 2019, J. Syst. Softw.
[12] Jin Liu, et al. Identifying Crashing Fault Residence Based on Cross Project Model, 2019, 2019 IEEE 30th International Symposium on Software Reliability Engineering (ISSRE).
[13] Tie-Yan Liu, et al. Self-paced Ensemble for Highly Imbalanced Massive Data Classification, 2019, 2020 IEEE 36th International Conference on Data Engineering (ICDE).
[14] Mozhan Soltani, et al. A benchmark-based evaluation of search-based crash reproduction, 2019, Empirical Software Engineering.
[15] J. Grundy, et al. Neural Network-based Detection of Self-Admitted Technical Debt, 2019, ACM Transactions on Software Engineering and Methodology.
[16] Leandro L. Minku, et al. Class Imbalance Evolution and Verification Latency in Just-in-Time Software Defect Prediction, 2019, 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE).
[17] Gemma Catolino, et al. Cross-Project Just-in-Time Bug Prediction for Mobile Apps: An Empirical Assessment, 2019, 2019 IEEE/ACM 6th International Conference on Mobile Software Engineering and Systems (MOBILESoft).
[18] Hongyu Zhang, et al. Does the fault reside in a stack trace? Assisting crash localization by predicting crashing fault residence, 2019, J. Syst. Softw.
[19] Shi Ying, et al. EH-Recommender: Recommending Exception Handling Strategies Based on Program Context, 2018, 2018 23rd International Conference on Engineering of Complex Computer Systems (ICECCS).
[20] Akito Monden, et al. On the relative value of data resampling approaches for software defect prediction, 2018, Empirical Software Engineering.
[21] Ming Wen, et al. ChangeLocator: locate crash-inducing changes based on crash reports, 2018, 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE).
[22] Ahmed E. Hassan, et al. The Impact of Class Rebalancing Techniques on the Performance and Interpretation of Defect Prediction Models, 2018, IEEE Transactions on Software Engineering.
[23] Steffen Herbold, et al. Comments on ScottKnottESD in Response to “An Empirical Comparison of Model Validation Techniques for Defect Prediction Models”, 2017, IEEE Transactions on Software Engineering.
[24] Gemma Catolino, et al. Just-In-Time Bug Prediction in Mobile Applications: The Domain Matters!, 2017, 2017 IEEE/ACM 4th International Conference on Mobile Software Engineering and Systems (MOBILESoft).
[25] A. Panichella, et al. A guided genetic algorithm for automated crash reproduction, 2017, ICSE 2017.
[26] Tim Menzies, et al. Is "Better Data" Better Than "Better Data Miners"?, 2017, 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE).
[27] A. Hamou-Lhadj, et al. A bug reproduction approach based on directed model checking and crash traces, 2017, J. Softw. Evol. Process.
[28] A. Hassan, et al. Studying just-in-time defect prediction using cross-project models, 2016, Empirical Software Engineering.
[29] Renaud Pawlak, et al. SPOON: A library for implementing analyses and transformations of Java source code, 2016, Softw. Pract. Exp.
[30] Luís Torgo, et al. A Survey of Predictive Modeling on Imbalanced Domains, 2016, ACM Comput. Surv.
[31] Yuan Yu, et al. TensorFlow: A system for large-scale machine learning, 2016, OSDI.
[32] Martin Monperrus, et al. Crash reproduction via test case mutation: let existing test cases help, 2015, ESEC/SIGSOFT FSE.
[33] Baowen Xu, et al. Heterogeneous cross-company defect prediction by unified metric representation and CCA-based transfer learning, 2015, ESEC/SIGSOFT FSE.
[34] Sashank Dara, et al. Online Defect Prediction for Imbalanced Data, 2015, 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering.
[35] Ning Chen, et al. STAR: Stack Trace Based Automatic Crash Reproduction via Symbolic Execution, 2015, IEEE Transactions on Software Engineering.
[36] Andrian Marcus, et al. On the Use of Stack Traces to Improve Text Retrieval-Based Bug Localization, 2014, 2014 IEEE International Conference on Software Maintenance and Evolution.
[37] Lu Zhang, et al. Boosting Bug-Report-Oriented Fault Localization with Segmentation and Stack-Trace Analysis, 2014, 2014 IEEE International Conference on Software Maintenance and Evolution.
[38] Rongxin Wu, et al. CrashLocator: locating crashing faults based on crash stacks, 2014, ISSTA 2014.
[39] Tony R. Martinez, et al. An instance level analysis of data complexity, 2014, Machine Learning.
[40] Liang Gong, et al. Locating Crashing Faults based on Crash Stack Traces, 2014, ArXiv.
[41] Audris Mockus, et al. A large-scale empirical study of just-in-time quality assurance, 2013, IEEE Transactions on Software Engineering.
[42] Sinno Jialin Pan, et al. Transfer defect learning, 2013, 2013 35th International Conference on Software Engineering (ICSE).
[43] Xin Yao, et al. Using Class Imbalance Learning for Software Defect Prediction, 2013, IEEE Transactions on Reliability.
[44] Gilles Louppe, et al. Ensembles on Random Patches, 2012, ECML/PKDD.
[45] Chih-Jen Lin, et al. Dual coordinate descent methods for logistic regression and maximum entropy models, 2011, Machine Learning.
[46] Foutse Khomh, et al. Classifying field crash reports for fixing bugs: A case study of Mozilla Firefox, 2011, 2011 27th IEEE International Conference on Software Maintenance (ICSM).
[47] Rahul Premraj, et al. Do stack traces help developers fix bugs?, 2010, 2010 7th IEEE Working Conference on Mining Software Repositories (MSR 2010).
[48] Hien M. Nguyen, et al. Borderline over-sampling for imbalanced data classification, 2009, Int. J. Knowl. Eng. Soft Data Paradigms.
[49] Xin Yao, et al. Diversity analysis on imbalanced data sets by using ensemble models, 2009, 2009 IEEE Symposium on Computational Intelligence and Data Mining.
[50] Haibo He, et al. ADASYN: Adaptive synthetic sampling approach for imbalanced learning, 2008, 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence).
[51] Friedrich Leisch, et al. A toolbox for K-centroids cluster analysis, 2006.
[52] Hui Han, et al. Borderline-SMOTE: A New Over-Sampling Method in Imbalanced Data Sets Learning, 2005, ICIC.
[53] Gustavo E. A. P. A. Batista, et al. A study of the behavior of several methods for balancing machine learning training data, 2004, SKDD.
[54] Nitesh V. Chawla, et al. SMOTEBoost: Improving Prediction of the Minority Class in Boosting, 2003, PKDD.
[55] Leo Breiman, et al. Random Forests, 2001, Machine Learning.
[56] Jorma Laurikkala, et al. Improving Identification of Difficult Small Classes by Balancing Class Distribution, 2001, AIME.
[57] Paul A. Viola, et al. Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade, 2001, NIPS.
[58] Salvatore J. Stolfo, et al. AdaCost: Misclassification Cost-Sensitive Boosting, 1999, ICML.
[59] Johannes Fürnkranz. Separate-and-Conquer Rule Learning, 1999, Artificial Intelligence Review.
[60] John Shawe-Taylor, et al. Optimizing Classifers for Imbalanced Training Sets, 1998, NIPS.
[61] Tin Kam Ho, et al. The Random Subspace Method for Constructing Decision Forests, 1998, IEEE Trans. Pattern Anal. Mach. Intell.
[62] Yoav Freund, et al. A decision-theoretic generalization of on-line learning and an application to boosting, 1997, EuroCOLT.
[63] David W. Opitz, et al. An Empirical Evaluation of Bagging and Boosting, 1997, AAAI/IAAI.
[64] Geoffrey E. Hinton. Connectionist Learning Procedures, 1989, Artif. Intell.
[65] Dennis L. Wilson, et al. Asymptotic Properties of Nearest Neighbor Rules Using Edited Data, 1972, IEEE Trans. Syst. Man Cybern.
[66] Peter E. Hart, et al. The condensed nearest neighbor rule (Corresp.), 1968, IEEE Trans. Inf. Theory.
[67] Seetha Hari, et al. Learning From Imbalanced Data, 2019, Advances in Computer and Electrical Engineering.
[68] Zhenchang Xing, et al. Neural Network-based Detection of Self-Admitted Technical Debt: From Performance to Explainability, 2019, ACM Trans. Softw. Eng. Methodol.
[69] Shane McIntosh, et al. An Empirical Comparison of Model Validation Techniques for Defect Prediction Models, 2017, IEEE Transactions on Software Engineering.
[70] Wei-Yin Loh, et al. Classification and regression trees, 2011, WIREs Data Mining Knowl. Discov.
[71] Taghi M. Khoshgoftaar, et al. RUSBoost: A Hybrid Approach to Alleviating Class Imbalance, 2010, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans.
[72] Zhi-Hua Zhou, et al. Exploratory Undersampling for Class-Imbalance Learning, 2009, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics).
[73] Chao Chen, et al. Using Random Forest to Learn Imbalanced Data, 2004.
[74] Leo Breiman, et al. Bagging Predictors, 1996, Machine Learning.
[75] Ana L. C. Bazzan, et al. Balancing Training Data for Automated Annotation of Keywords: a Case Study, 2003, WOB.
[76] Nitesh V. Chawla, et al. SMOTE: Synthetic Minority Over-sampling Technique, 2002, J. Artif. Intell. Res.
[77] John C. Platt, et al. Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods, 1999.
[78] Stan Matwin, et al. Addressing the Curse of Imbalanced Training Sets: One-Sided Selection, 1997, ICML.
[79] S. Yitzhaki, et al. A note on the calculation and interpretation of the Gini index, 1984.
[80] I. Tomek. An Experiment with the Edited Nearest-Neighbor Rule, 1976.
[81] I. Tomek, et al. Two Modifications of CNN, 1976.
[82] Peter E. Hart, et al. Nearest neighbor pattern classification, 1967, IEEE Trans. Inf. Theory.