CodeEditor: Learning to Edit Source Code with Pre-trained Models
[1] Ge Li, et al. Towards Enhancing In-Context Learning for Code Generation, 2023, ArXiv.
[2] Ge Li, et al. SkCoder: A Sketch-based Approach for Automatic Code Generation, 2023, ArXiv.
[3] A. Eghbali, et al. CrystalBLEU: Precisely and Efficiently Measuring the Similarity of Code, 2022, ASE.
[4] C. Tantithamthavorn, et al. AutoTransform: Automated Code Transformation to Support Modern Code Review Process, 2022, ICSE.
[5] Gabriele Bavota, et al. Using Pre-Trained Models to Boost Code Review Automation, 2022, ICSE.
[6] Baishakhi Ray, et al. CODIT: Code Editing With Tree-Based Neural Models, 2018, IEEE Transactions on Software Engineering.
[7] Yelong Shen, et al. CodeRetriever: Unimodal and Bimodal Contrastive Learning, 2022, ArXiv.
[8] Zhi Jin, et al. EditSum: A Retrieve-and-Edit Framework for Source Code Summarization, 2021, ASE.
[9] Yue Wang, et al. CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation, 2021, EMNLP.
[10] Baishakhi Ray, et al. On Multi-Modal Learning of Editing Source Code, 2021, ASE.
[11] Kai-Wei Chang, et al. Unified Pre-training for Program Understanding and Generation, 2021, NAACL.
[12] Neel Sundaresan, et al. CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation, 2021, NeurIPS Datasets and Benchmarks.
[13] G. Bavota, et al. Studying the Usage of Text-To-Text Transfer Transformer to Support Code-Related Tasks, 2021, ICSE.
[14] Gabriele Bavota, et al. Towards Automating Code Review Activities, 2021, ICSE.
[15] Furu Wei, et al. Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting, 2021, EMNLP.
[16] Ming Zhou, et al. GraphCodeBERT: Pre-training Code Representations with Data Flow, 2020, ICLR.
[17] Fang Liu, et al. Multi-task Learning based Pre-trained Language Model for Code Completion, 2020, ASE.
[18] Quoc V. Le, et al. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators, 2020, ICLR.
[19] Ting Liu, et al. CodeBERT: A Pre-Trained Model for Programming and Natural Languages, 2020, Findings of EMNLP.
[20] Lili Mou, et al. TreeGen: A Tree-Based Transformer Architecture for Code Generation, 2019, AAAI.
[21] Omer Levy, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, 2019, ACL.
[22] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, Journal of Machine Learning Research.
[23] Marc Brockschmidt, et al. CodeSearchNet Challenge: Evaluating the State of Semantic Code Search, 2019, ArXiv.
[24] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[25] Andrew McCallum, et al. Energy and Policy Considerations for Deep Learning in NLP, 2019, ACL.
[26] Gabriele Bavota, et al. On Learning Meaningful Code Changes Via Neural Machine Translation, 2019, ICSE.
[27] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[28] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[29] Graham Neubig, et al. TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation, 2018, EMNLP.
[30] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[31] Danny Dig, et al. API code recommendation using statistical learning from fine-grained changes, 2016, FSE.
[32] Rico Sennrich, et al. Neural Machine Translation of Rare Words with Subword Units, 2015, ACL.
[33] Jeffrey C. Carver, et al. Impact of Peer Code Review on Peer Impression Formation: A Survey, 2013, ESEM.
[34] Miryung Kim, et al. Detecting and characterizing semantic inconsistencies in ported code, 2013, ASE.
[35] Hridesh Rajan, et al. A study of repetitiveness of code changes in software evolution, 2013, ASE.
[36] Salim Roukos, et al. Bleu: a Method for Automatic Evaluation of Machine Translation, 2002, ACL.