A novel syntax-aware automatic graphics code generation with attention-based deep neural network

Abstract Recent advances in deep learning have made it possible to automatically translate a graphical user interface (GUI) screenshot into code using an encoder-decoder framework. This framework generally uses a deep convolutional neural network (CNN) to extract image features, which a code generator based on a recurrent neural network (RNN) then translates into hundreds of code tokens. However, implementing this framework raises two challenges: how to make full use of the information contained in the GUI and the domain-specific language (DSL) code, and how to ensure that the generated DSL code conforms to syntax rules. To fully leverage the information in the GUI and the DSL code, we first propose HGui2Code, a model that integrates visual-attention-enabled GUI features (extracted by a CNN) with DSL-attention-enabled semantic features (extracted by an LSTM). We also propose SGui2Code, a novel model that uses an ON-LSTM network to generate syntactically correct DSL code. HGui2Code attends mainly to semantic information, while SGui2Code focuses on grammar rules. Extensive experimental results show that our models outperform state-of-the-art methods on the web dataset, with HGui2Code yielding 5.5% higher accuracy and SGui2Code yielding 1.5%. Although our models do not produce a large improvement on the iOS and Android datasets, the DSL code they generate closely matches the layout of the components in the corresponding GUIs.
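The dual-attention fusion described above (visual attention over CNN feature maps combined with semantic attention over LSTM-encoded DSL tokens) can be sketched as follows. This is a minimal illustration assuming simple dot-product attention and arbitrary feature dimensions; the paper's actual attention formulation, network sizes, and fusion mechanism may differ.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys):
    """Dot-product attention: weight each key by its similarity to the query
    and return the weighted sum (context vector) plus the weights."""
    scores = keys @ query          # (n,) similarity scores
    weights = softmax(scores)      # (n,) attention distribution
    context = weights @ keys       # (d,) context vector
    return context, weights

d = 8                                           # hypothetical feature dimension
rng = np.random.default_rng(0)
visual_feats = rng.standard_normal((49, d))     # e.g. a 7x7 CNN feature map, flattened
dsl_feats = rng.standard_normal((5, d))         # LSTM hidden states over past DSL tokens
h_t = rng.standard_normal(d)                    # decoder state at generation step t

v_ctx, v_w = attend(h_t, visual_feats)          # visual attention over GUI features
s_ctx, s_w = attend(h_t, dsl_feats)             # semantic attention over DSL features
fused = np.concatenate([v_ctx, s_ctx])          # fused context fed to the token predictor
```

Each attention distribution sums to one, so the fused vector mixes one context from the GUI side and one from the DSL side at every decoding step.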
