Show simple item record

dc.contributor.author: Benotti, Luciana
dc.contributor.author: Lau, Tessa
dc.contributor.author: Villalba, Martin
dc.date.available: 2018-01-30T18:10:21Z
dc.date.issued: 2014-10
dc.identifier.citation: Benotti, Luciana; Lau, Tessa; Villalba, Martin; Interpreting Natural Language Instructions Using Language, Vision, and Behavior; Association for Computing Machinery; ACM Transactions on Interactive Intelligent Systems; 4; 3; 10-2014
dc.identifier.issn: 2160-6455
dc.identifier.uri: http://hdl.handle.net/11336/35034
dc.description.abstract: We define the problem of automatic instruction interpretation as follows. Given a natural language instruction, can we automatically predict what an instruction follower, such as a robot, should do in the environment to follow that instruction? Previous approaches to automatic instruction interpretation have required either extensive domain-dependent rule writing or extensive manually annotated corpora. This article presents a novel approach that leverages a large amount of unannotated, easy-to-collect data from humans interacting in a game-like environment. Our approach uses an automatic annotation phase based on artificial intelligence planning, for which two different annotation strategies are compared: one based on behavioral information and the other based on visibility information. The resulting annotations are used as training data for different automatic classifiers. This algorithm is based on the intuition that the problem of interpreting a situated instruction can be cast as a classification problem of choosing among the actions that are possible in the situation. Classification is done by combining language, vision, and behavior information. Our empirical analysis shows that machine learning classifiers achieve 77% accuracy on this task on available English corpora and 74% on similar German corpora. Finally, the inclusion of human feedback in the interpretation process is shown to boost performance to 92% for the English corpus and 90% for the German corpus.
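
The abstract frames situated instruction interpretation as a classification problem: among the actions possible in the current situation, pick the one best supported by language, vision, and behavior cues. The sketch below is a minimal illustration of that framing only, not the authors' system; the Situation fields, feature names, and toy training pairs are assumptions, and scikit-learn's DictVectorizer and LogisticRegression stand in for the classifiers compared in the article.

```python
# Minimal sketch (not the authors' code): interpreting a situated instruction
# = choosing, among the actions possible in the current situation, the one a
# trained classifier scores highest, combining language, vision, and behavior
# features. All names, features, and training pairs are illustrative.
from dataclasses import dataclass
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression


@dataclass
class Situation:
    visible_objects: set     # vision: what the follower can currently see
    last_action: str         # behavior: what the follower just did
    possible_actions: list   # actions executable in this situation


def features(instruction, action, sit):
    """Combine language, vision, and behavior cues into one feature dict."""
    feats = {"word=" + w: 1.0 for w in instruction.lower().split()}
    feats["action=" + action] = 1.0
    target = action.split(":")[-1]
    feats["target_visible=" + str(target in sit.visible_objects)] = 1.0
    feats["prev_action=" + sit.last_action] = 1.0
    return feats


# Training pairs would come from the automatic, planning-based annotation
# phase described in the abstract; a toy example stands in for it here.
sit0 = Situation({"red_button", "door"}, "enter_room",
                 ["push:red_button", "open:door"])
train = [
    ("press the red button", "push:red_button", sit0, 1),  # correct action
    ("press the red button", "open:door", sit0, 0),        # distractor action
]

vec = DictVectorizer()
X = vec.fit_transform([features(i, a, s) for i, a, s, _ in train])
y = [label for _, _, _, label in train]
clf = LogisticRegression().fit(X, y)


def interpret(instruction, sit):
    """Return the possible action scored most likely correct by the classifier."""
    def score(action):
        x = vec.transform([features(instruction, action, sit)])
        return clf.predict_proba(x)[0, 1]
    return max(sit.possible_actions, key=score)


print(interpret("press the red button", sit0))  # should prefer push:red_button
```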
dc.format: application/pdf
dc.language.iso: eng
dc.publisher: Association for Computing Machinery
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
dc.subject: Natural Language Interpretation
dc.subject: Multi-Modal Understanding
dc.subject: Action Recognition
dc.subject: Situated Virtual Agent
dc.subject.classification: Computer Science
dc.subject.classification: Computer and Information Sciences
dc.subject.classification: NATURAL AND EXACT SCIENCES
dc.title: Interpreting Natural Language Instructions Using Language, Vision, and Behavior
dc.type: info:eu-repo/semantics/article
dc.type: info:ar-repo/semantics/artículo
dc.type: info:eu-repo/semantics/publishedVersion
dc.date.updated: 2018-01-29T19:43:37Z
dc.journal.volume: 4
dc.journal.number: 3
dc.journal.pais: United States
dc.description.fil: Fil: Benotti, Luciana. Universidad Nacional de Córdoba. Facultad de Matemática, Astronomía y Física. Sección Ciencias de la Computación; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina
dc.description.fil: Fil: Lau, Tessa. Savioke; United States
dc.description.fil: Fil: Villalba, Martin. Universitat Potsdam; Germany. Universidad Nacional de Córdoba; Argentina
dc.journal.title: ACM Transactions on Interactive Intelligent Systems
dc.relation.alternativeid: info:eu-repo/semantics/altIdentifier/url/http://dl.acm.org/citation.cfm?id=2629632
dc.relation.alternativeid: info:eu-repo/semantics/altIdentifier/doi/http://dx.doi.org/10.1145/2629632