Representing Partial Programs with Blended Abstract Semantics

Synthesizing programs from examples requires searching over a vast, combinatorial space of possible programs. A key challenge in this search is representing the behavior of a partially written program before it can be executed, so the synthesizer can judge whether the program is on the right track and predict where to search next. We introduce a general technique for representing partially written programs in a program synthesis engine. We take inspiration from abstract interpretation, in which an approximate execution model is used to determine whether an unfinished program can eventually satisfy a goal specification. Here, we learn the approximate execution model, implementing it as a modular neural network. By constructing compositional program representations that implicitly encode the interpretation semantics of the underlying programming language, we can represent partial programs using a flexible blend of concrete execution state and learned neural representations, falling back on the learned approximate semantics wherever concrete semantics are unknown (in the unfinished parts of the program). We show that these hybrid neuro-symbolic representations enable execution-guided synthesizers to use more powerful language constructs, such as loops and higher-order functions, and to synthesize programs more accurately for a given search budget than purely neural approaches in several domains.
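To make the blending concrete, here is a minimal sketch of how such an evaluator might work, assuming a toy integer language. All names here (`Hole`, `Node`, `blended_eval`, the `add`/`neg` operators, the state width `D`) are illustrative assumptions, not the authors' implementation: a node is executed concretely whenever all of its children evaluate to concrete values, and a learned per-operator module approximates the semantics otherwise.

```python
import torch
import torch.nn as nn

D = 64  # width of the learned abstract state (an assumption for this sketch)

class Hole:
    """An unwritten part of the program: no concrete semantics yet."""

class Node:
    def __init__(self, op, children):
        self.op = op              # operator name, e.g. "add" or "neg"
        self.children = children  # sub-expressions: Nodes, Holes, or literals

# Concrete semantics of a toy integer language.
CONCRETE = {
    "add": lambda a, b: a + b,
    "neg": lambda a: -a,
}

# Learned approximate semantics: one small module per language operator.
modules = nn.ModuleDict({
    "add": nn.Linear(2 * D, D),
    "neg": nn.Linear(D, D),
})

hole_state = nn.Parameter(torch.randn(D))  # learned representation of a hole

def embed(value):
    # Lift a concrete value into the abstract-state space (assumed encoder).
    return torch.tanh(torch.full((D,), float(value)))

def blended_eval(expr):
    """Return a concrete value when possible, else a neural abstract state."""
    if isinstance(expr, Hole):
        return hole_state
    if not isinstance(expr, Node):
        return expr  # a literal evaluates to itself
    results = [blended_eval(c) for c in expr.children]
    if all(not torch.is_tensor(r) for r in results):
        # Every child is concrete: run the real interpreter.
        return CONCRETE[expr.op](*results)
    # Some child is abstract: lift the rest and apply the learned module.
    states = [r if torch.is_tensor(r) else embed(r) for r in results]
    return torch.relu(modules[expr.op](torch.cat(states)))

# A partial program with a hole still yields a usable representation:
state = blended_eval(Node("add", [3, Node("neg", [Hole()])]))
```

In this sketch, operators above a hole never execute concretely, yet the evaluator still produces a compositional D-dimensional state that a search policy or value network could condition on when deciding where to expand the partial program next.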
