Exploring the GLIDE model for Human Action Effect Prediction

We address the following action-effect prediction task: given an image depicting an initial state of the world and an action expressed in text, predict an image depicting the state of the world after the action is performed. The prediction should preserve the scene context of the input image. We explore the use of the recently proposed GLIDE model for this task. GLIDE is a generative neural network that can synthesize (inpaint) masked regions of an image, conditioned on a short piece of text. Our idea is to mask out the region of the input image where the effect of the action is expected to occur; GLIDE is then used to inpaint the masked region conditioned on the given action. In this way, the resulting image retains the background context of the input image, updated to show the effect of the action. We present qualitative results from experiments on the EPIC dataset of egocentric videos labelled with actions.
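The pipeline amounts to a few lines of glue code. The following is a minimal sketch, not the authors' implementation: it assumes the effect region is available as a bounding box (for instance from a hand-object interaction detector), and `glide_inpaint` is a hypothetical stand-in for a pretrained GLIDE inpainting model such as the one released in openai/glide-text2im.

```python
# Sketch of the mask-and-inpaint idea described in the abstract.
# Assumptions: the effect region is given as a pixel bounding box, and
# `glide_inpaint(image, mask, prompt)` wraps a pretrained GLIDE
# inpainting model (hypothetical helper, passed in by the caller).

import numpy as np
from PIL import Image


def build_effect_mask(size, box):
    """Binary mask: 1 = keep pixel, 0 = region GLIDE must inpaint.

    `size` is (width, height) as returned by PIL; `box` is
    (left, top, right, bottom) in pixel coordinates.
    """
    w, h = size
    mask = np.ones((h, w), dtype=np.uint8)
    left, top, right, bottom = box
    mask[top:bottom, left:right] = 0
    return mask


def predict_effect(frame_path, effect_box, action_text, glide_inpaint):
    """Mask the expected effect region, then ask GLIDE to redraw it.

    `glide_inpaint` is assumed to return an image in which the mask==0
    region has been synthesized conditioned on `action_text`, while
    mask==1 pixels are copied unchanged from the input frame.
    """
    frame = Image.open(frame_path).convert("RGB")
    mask = build_effect_mask(frame.size, effect_box)
    # The text condition is the action itself, e.g. "cut the tomato".
    return glide_inpaint(frame, mask, action_text)
```

Because GLIDE only redraws the masked region, every pixel outside the box is carried over verbatim from the input frame, which is what keeps the predicted image in the same scene context as the input.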
