James Pustejovsky | Nikhil Krishnaswamy | R. Pito Salas | Katherine Krajovic | Nathaniel J. Dimick
[1] Elizabeth Boyle, et al. Mixed Reality Deictic Gesture for Multi-Modal Robot Communication, 2019, 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[2] Nikhil Krishnaswamy, et al. Monte Carlo Simulation Generation Through Operationalization of Spatial Primitives, 2017.
[3] João Manuel R. S. Tavares, et al. A new approach for merging edge line segments, 1995.
[4] Yi Li, et al. Hand gesture recognition using Kinect, 2012, 2012 IEEE International Conference on Computer Science and Automation Engineering.
[5] Hadas Kress-Gazit, et al. Robots That Use Language, 2020, Annu. Rev. Control. Robotics Auton. Syst..
[6] James Pustejovsky, et al. User-Aware Shared Perception for Embodied Agents, 2019, 2019 IEEE International Conference on Humanized Computing and Communication (HCC).
[7] James Pustejovsky, et al. Situational Grounding within Multimodal Simulations, 2019, ArXiv.
[8] James Pustejovsky, et al. VoxSim: A Visual Platform for Modeling Motion Language, 2016, COLING.
[9] Matthias Scheutz, et al. Developing a Corpus of Indirect Speech Act Schemas, 2020, LREC.
[10] Luke S. Zettlemoyer, et al. Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions, 2014, AAAI.
[11] James Pustejovsky, et al. VoxML: A Visualization Modeling Language, 2016, LREC.
[12] Elpida S. Tzafestas, et al. The Blackboard Architecture in Knowledge-Based Robotic Systems, 1991.
[13] Mark T. Keane, et al. Conditionals: a theory of meaning, pragmatics, and inference, 2002, Psychological Review.