9:00 | Opening
9:05 | Opening invited talk: Semantic Parsing: Past, Present, and Future (Raymond Mooney). I first review the long history of semantic parsing from its roots in logic, to manually developed parsers, and finally to machine learning approaches. I then survey recent work, focusing on the issues of reducing supervision, scaling to broader coverage, and grounding in perception and action. Finally, I discuss what I believe is an important issue for future research: integrating distributional and logical approaches to semantics.
9:50 | Invited talk: Can a Machine Translate Without Knowing What Translation Is? (Kevin Knight). This talk will address the possibility of statistical semantics-based machine translation. What components are necessary? How can they be wired together? How can we best capture (describe) the required linguistic transformations? What data is needed to train the components?
10:20 | Exceptional submission talk: Low-dimensional Embeddings of Logic (Tim Rocktäschel, Matko Bošnjak, Sameer Singh and Sebastian Riedel). Many machine reading approaches, from shallow information extraction to deep semantic parsing, map natural language to symbolic representations of meaning. Representations such as first-order logic capture the richness of natural language and support complex reasoning, but often fail in practice due to their reliance on logical background knowledge and the difficulty of scaling up inference. In contrast, low-dimensional embeddings (i.e. distributional representations) are efficient and enable generalization, but it is unclear how reasoning with embeddings could support the full power of symbolic representations such as first-order logic. In this proof-of-concept paper we address this by learning embeddings that simulate the behavior of first-order logic.
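As a toy illustration of what "embeddings that simulate first-order logic" can mean (the relations, facts, and training scheme below are invented for this sketch, not taken from the paper's implementation): relations and entity pairs get low-dimensional vectors, observed facts are scored by a dot product, and an implication rule is injected as a soft ordering constraint on scores.

```python
# Minimal sketch: embeddings whose scores respect professorAt(x,y) => employeeAt(x,y).
# Hypothetical data and a hand-rolled SGD loop, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

R = {r: rng.normal(scale=0.1, size=DIM) for r in ["professorAt", "employeeAt"]}
P = {p: rng.normal(scale=0.1, size=DIM)
     for p in [("ada", "mit"), ("alan", "cam"), ("grace", "yale")]}

# Observed facts (relation, entity pair), as if extracted from text.
facts = [("professorAt", ("ada", "mit")), ("professorAt", ("alan", "cam"))]
# Background rule: professorAt(x, y) => employeeAt(x, y).
rules = [("professorAt", "employeeAt")]

def score(rel, pair):
    return float(R[rel] @ P[pair])

lr = 0.1
for _ in range(200):
    # Raise the probability of observed facts (gradient of -log sigmoid).
    for rel, pair in facts:
        g = 1.0 - 1.0 / (1.0 + np.exp(-score(rel, pair)))
        R[rel], P[pair] = R[rel] + lr * g * P[pair], P[pair] + lr * g * R[rel]
    # Enforce the rule: the head's score must not fall below the body's.
    for body, head in rules:
        for pair in P:
            if score(body, pair) > score(head, pair):
                R[head] += lr * P[pair]
                R[body] -= lr * P[pair]

# The rule generalizes in embedding space: employeeAt scores high
# wherever professorAt was observed, without an explicit theorem prover.
for pair in P:
    print(pair, round(score("professorAt", pair), 2), round(score("employeeAt", pair), 2))
```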
10:30 | Coffee break
11:00 | Exceptional submission talk: Combining Formal and Distributional Models of Temporal and Intensional Semantics (Mike Lewis and Mark Steedman). We outline a vision for computational semantics in which formal compositional semantics is combined with a powerful, structured lexical semantics derived from distributional statistics. We consider how existing work (Lewis and Steedman, 2013) could be extended with a much richer lexical semantics using recent techniques for modelling processes (Scaria et al., 2013), for example learning that visiting events start with arriving and end with leaving. We show how to closely integrate this information with theories of formal semantics, allowing complex compositional inferences such as is visiting -> has arrived in but will leave, which requires interpreting both the function and content words. This will allow machine reading systems to understand not just what has happened, but when.
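To make the "visiting starts with arriving and ends with leaving" inference concrete, here is a minimal sketch with a hand-written process lexicon standing in for the distributionally learned one the abstract envisions (the lexicon entries and function are invented for illustration):

```python
# Illustrative only: each process records the sub-event that initiates it
# (as a past participle) and the one that terminates it (as a base form).
PROCESSES = {
    "visiting": {"initial": "arrived", "final": "leave"},
    "reading":  {"initial": "opened the book", "final": "finish the book"},
}

def entailments(progressive_verb: str) -> list[str]:
    """Entailments of 'X is <verb>ing': mid-process, the initial
    sub-event lies in the past and the final one in the future."""
    proc = PROCESSES.get(progressive_verb)
    if proc is None:
        return []
    return [f"has {proc['initial']}", f"will {proc['final']}"]

print(entailments("visiting"))  # ['has arrived', 'will leave']
```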
11:10 | Exceptional submission talk: Cooking with Semantics (Jonathan Malmaud, Earl Wagner, Nancy Chang and Kevin Murphy). We are interested in the automatic interpretation of how-to instructions, such as cooking recipes, into semantic representations that can facilitate sophisticated question answering. Recent work has shown impressive results on semantic parsing of instructions with minimal supervision, but such techniques cannot handle much of the situated and ambiguous language used in instructions found on the web. In this paper, we suggest how to extend such methods using a model of pragmatics, based on a rich representation of world state.
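A toy sketch of the world-state idea (hypothetical and far simpler than the paper's model): recipe steps routinely elide their arguments, and an evolving state representation can supply the missing referents.

```python
# Illustrative only: "Mix." says neither what to mix nor where;
# the tracked world state pragmatically resolves the elided argument.
state = {"bowl": []}

def interpret(step: str) -> None:
    words = step.lower().rstrip(".").split()
    if words[0] == "add":
        # Explicit object: "Add the eggs." -> eggs go into the bowl.
        state["bowl"].append(words[-1])
    elif words[0] == "mix":
        # Elided object: resolved to the current bowl contents.
        print("mix ->", state["bowl"])

interpret("Add the flour.")
interpret("Add the eggs.")
interpret("Mix.")  # mix -> ['flour', 'eggs']
```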
11:20 | Poster session
12:30 | Lunch break
2:10 | Invited talk: Semantic Parsing for Cancer Panomics (Hoifung Poon). Advances in sequencing technology have made available a plethora of panomics data for cancer research, yet the search for disease genes and drug targets remains a formidable challenge. Biological knowledge such as pathways can play an important role by constraining the search space and boosting the signal-to-noise ratio. The majority of this knowledge resides in text (e.g., journal publications), which has been undergoing its own exponential growth, making it mandatory to develop machine reading methods for automatic knowledge extraction. In this talk, I will formulate the machine reading task for pathway extraction as a semantic parsing problem, review the state of the art and open challenges, and present our Literome project and our latest attack on the problem using grounded unsupervised semantic parsing.
2:50 | Invited talk: Robust Semantics for Semantic Parsers (Mark Steedman). Semantic parsing, that is, the automatic induction of parsers that build meaning representations by being trained on paired sentences and meanings, the way a child learns their native language, is potentially a very powerful mechanism for natural language processing. For example, if we had the same access to the universal conceptual language as the child, we could in principle use it to train parser-generators for interlingual machine translation, possibly with greater efficiency than synchronous grammars trained on parallel text. The fact that we do not have access to such meaning representations is merely symptomatic of a much more serious shortage of training datasets for semantic parser induction. In fact, there are almost no large datasets of this kind. As a result, there has been a search for proxies for string-meaning datasets, such as question-answer pairs and large knowledge structures such as Freebase. This talk will examine a number of distributional and non-distributional semantic representations that can be used to mediate semantic parsing more directly at the scale of Freebase queries and Textual Entailment tasks. It draws on joint work with Mike Lewis, Siva Reddy, and Mirella Lapata.
3:30 | Coffee break
4:00 | Invited talk: Asking for Help Using Inverse Semantics (Stefanie Tellex). Robots inevitably fail, often without the ability to recover autonomously. We demonstrate an approach for enabling a robot to recover from failures by communicating its need for specific help to a human partner using natural language. Our approach automatically detects failures, then generates targeted spoken-language requests for help such as "Please give me the white table leg that is on the black table." Once the human partner has repaired the failure condition, the system resumes full autonomy. We present a novel inverse semantics algorithm for generating effective help requests. In contrast to forward semantic models that interpret natural language in terms of robot actions and perception, our inverse semantics algorithm generates requests by emulating the human's ability to interpret a request, using the Generalized Grounding Graph framework. To assess the effectiveness of our approach, we present a corpus-based online evaluation as well as an end-to-end user study, demonstrating that our approach increases the effectiveness of human interventions compared to static requests for help.
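A minimal sketch of the inverse-semantics idea with a stand-in listener model (the scene, candidate requests, and uniform-choice listener are invented; the actual system grounds language with the Generalized Grounding Graph framework): the robot scores each candidate request by the probability that a simulated human grounds it to the needed object, and speaks the best one.

```python
# Toy scene: candidate objects the human partner could hand over.
objects = [
    {"id": "leg1", "color": "white", "type": "table leg", "on": "black table"},
    {"id": "leg2", "color": "white", "type": "table leg", "on": "floor"},
    {"id": "leg3", "color": "black", "type": "table leg", "on": "black table"},
]
needed = "leg1"

# Candidate requests, each reduced to the properties it asserts.
requests = {
    "Please give me the table leg":
        {"type": "table leg"},
    "Please give me the white table leg":
        {"type": "table leg", "color": "white"},
    "Please give me the white table leg that is on the black table":
        {"type": "table leg", "color": "white", "on": "black table"},
}

def p_correct(constraints, target):
    """Forward (listener) model: the human picks uniformly among objects
    matching the request; return the chance that pick is the target."""
    matches = [o for o in objects
               if all(o.get(k) == v for k, v in constraints.items())]
    return 1.0 / len(matches) if any(o["id"] == target for o in matches) else 0.0

# Inverse semantics: choose the request maximizing the listener's success.
best = max(requests, key=lambda r: p_correct(requests[r], needed))
print(best)  # the fully specified request, which grounds uniquely
```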
4:40 | Invited talk: Computing with Natural Language (Percy Liang). Today, the tremendous growth of semi-structured data far outpaces our ability to ask interesting deep questions of it. Moreover, much of the advanced data processing and analysis is limited to the privileged few who have programming expertise. In this talk, I will entertain the possibility of using natural language as a universal and agile interface for querying, and more generally computing with, data. Specifically, I will discuss two recent projects: (i) learning to map natural language questions into database queries on Freebase; and (ii) learning to map natural language queries onto XPath expressions that extract entities from web pages. In both projects, I will discuss the statistical and computational challenges of learning from weakly supervised data.
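To make the target representation in project (ii) concrete, here is a hand-written example pairing a question with an XPath expression that extracts the answer entities from a page (using the third-party lxml library; the page, query, and expression are invented for illustration, and the talk is about learning such mappings from weak supervision rather than writing them by hand):

```python
# Hypothetical page and query; the XPath plays the role of the learned
# "program" that computes the answer from the page.
from lxml import html

page = html.fromstring("""
<ul class="faculty">
  <li><span class="name">Ada Lovelace</span> <span class="title">Professor</span></li>
  <li><span class="name">Alan Turing</span> <span class="title">Lecturer</span></li>
</ul>
""")

query = "Who are the professors listed on this page?"
xpath = '//li[span[@class="title"]="Professor"]/span[@class="name"]/text()'

print(query, "->", page.xpath(xpath))  # -> ['Ada Lovelace']
```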
5:20 | Closing