Learning Prototypical Functions for Physical Artifacts

Humans create things for a reason. Ancient people created spears for hunting, knives for cutting meat, pots for preparing food, and so on. The prototypical function of a physical artifact is a kind of commonsense knowledge that we rely on to understand natural language. For example, if someone says “She borrowed the book,” you would assume that she intends to read it, and if someone asks “Can I use your knife?” you would assume that they need to cut something. In this paper, we introduce a new NLP task: learning the prototypical functions of human-made physical objects. We use frames from FrameNet to represent a set of common functions for objects, and we describe a manually annotated data set of physical objects labeled with their prototypical function. We also present experimental results for this task, including BERT-based models that use language model predictions from masked patterns as well as artifact sense definitions from WordNet and frame definitions from FrameNet.
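
As a rough illustration of the masked-pattern idea mentioned above, the sketch below queries a pretrained BERT model with a fill-in-the-blank prompt about an artifact and inspects the top predictions. The prompt wording, the artifact list, and the use of the Hugging Face fill-mask pipeline are illustrative assumptions, not the paper's exact setup.

# Minimal sketch (assumptions, not the authors' code): probe BERT with a masked
# pattern to surface candidate functions of a physical artifact.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for artifact in ["knife", "spear", "pot"]:
    # Hypothetical pattern; the paper's actual patterns may differ.
    pattern = f"A {artifact} is used for [MASK]."
    predictions = fill_mask(pattern, top_k=5)
    print(artifact, [p["token_str"] for p in predictions])

In a setup like this, the predicted filler words would still need to be mapped onto FrameNet frames to serve as function labels; the sketch only shows how masked patterns can elicit candidate uses from a language model.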
