Interactive Code Generation via Test-Driven User-Intent Formalization

Pre-trained large language models (LLMs) such as OpenAI Codex have shown immense potential in automating significant aspects of coding by producing natural code from informal natural language (NL) intent. However, the generated code carries no correctness guarantees with respect to satisfying the user's intent; indeed, it is hard even to define a notion of correctness, since natural language can be ambiguous and lacks formal semantics. In this paper, we take a first step towards addressing this problem by proposing the workflow of test-driven user-intent formalization (TDUIF), which leverages lightweight user feedback to jointly (a) formalize the user intent as tests (a partial specification), and (b) generate code that meets the formalized user intent. To perform a scalable, large-scale automated evaluation of the algorithms without requiring a user in the loop, we describe how to simulate user interaction with high fidelity using a reference solution. We also describe and implement alternative implementations of several algorithmic components (including mutating and ranking a set of tests) that can be composed into efficient solutions to the TDUIF problem. We have developed a system, TiCoder, that implements several solutions to TDUIF, and we compare their relative effectiveness on the MBPP academic code-generation benchmark. Our results with the OpenAI Codex LLM on MBPP are promising: first, our best algorithm improves the pass@1 code-generation accuracy metric from 48.39% to 70.49% with a single user query, and to 85.48% with up to 5 user queries; second, we can generate a non-trivial functional unit test consistent with the user intent within an average of 1.69 user queries for 90.40% of the examples in this dataset.
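The interaction workflow described above can be illustrated with a minimal sketch. This is not TiCoder's actual implementation: the function names (`simulate_user`, `tduif_loop`) are hypothetical, and the LLM outputs are stubbed with fixed candidate lists. It only shows the core idea of the simulated-user evaluation, in which a generated test is "approved" exactly when the hidden reference solution passes it, and approved tests prune the set of candidate code suggestions.

```python
def simulate_user(test, solution):
    """Simulated user: approve a candidate test iff the reference
    solution passes it (a crashing test counts as rejected)."""
    try:
        return bool(test(solution))
    except Exception:
        return False

def tduif_loop(candidate_codes, candidate_tests, reference, max_queries=5):
    """Query the (simulated) user about up to max_queries candidate tests;
    keep only code candidates consistent with every approved test."""
    approved = []
    surviving = list(candidate_codes)
    for test in candidate_tests[:max_queries]:
        if simulate_user(test, reference):          # user says: "yes, that
            approved.append(test)                   # test matches my intent"
            surviving = [c for c in surviving if simulate_user(test, c)]
    return surviving, approved

# Toy example: the user's (informal) intent is absolute value.
reference = abs                                     # hidden reference solution
candidate_codes = [
    lambda x: x,                                    # wrong: identity
    lambda x: x if x >= 0 else -x,                  # correct
]
candidate_tests = [
    lambda f: f(-3) == 3,                           # discriminating test
    lambda f: f(2) == 2,                            # non-discriminating test
]

surviving, approved = tduif_loop(candidate_codes, candidate_tests, reference)
print(len(surviving), len(approved))                # prints: 1 2
```

Here the first approved test eliminates the identity function, so only the correct candidate survives; both tests are consistent with the reference and hence approved. Ranking the candidate tests so that discriminating tests are asked about first is what keeps the number of user queries low.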
