# Fall 2017 CS6741: Structured Prediction for NLP

Time: Monday and Wednesday, 10:10-11:25am
Room: Gates 416 / TBA
Instructor: Yoav Artzi, yoav@cs (office hours by appointment)
Class listing: CS6741
CMS, CMT (paper reviewing), AOI (topic voting)

All students are asked to bring their laptops to class. We will use laptops to broadcast content. If you cannot bring a laptop, you will need to share a laptop with another student (this should work well for pairs).

In this course, we will study various topics in NLP. We will focus on research results and switch topics every few meetings. In general, most topic discussions will include a technical overview, data analysis, a classical result, and recent results. Results will be discussed through research papers, with a class discussion and paper reviews; we will use CMT to read and review papers. Each meeting will start with a 10-15 minute presentation followed by a discussion. In addition, we will dedicate relatively more time to a deep dive into a single focus topic. This semester the focus topic will be reinforcement learning for NLP. The focused sequence will include, in addition to paper reviews and discussions, implementation and analysis of core algorithms.

We will use All Our Ideas to vote on topics. We will select topics based on votes and the general agenda of the course. There is no guarantee that the top-voted topic will be the next to be discussed (but it is very likely). Please vote a lot. The more you vote, the better the ranking is. The password for the website will be given during the first lecture.

## Possible Topics (not exhaustive)

• Tagging (e.g., part-of-speech, named-entities)
• Dependency parsing
• Constituency parsing
• Semantic parsing (i.e., compositional semantics)
• Discourse parsing
• Language modeling
• Machine translation
• Semantic role labeling
• Textual entailment
• Sentiment analysis
• Co-reference resolution (including Winograd Schemas)
• Word embeddings and distributional semantics
• Lexical semantics
• Vision+language (e.g., VQA, caption generation)
• Information extraction
• Time and event extraction
• Math word problems
• Chat bots

## Schedule

Aug 23 Introduction

## Procedurals

Project: There are two project options: research and survey. If you choose the research option, you will do a research project (which can be your own research, if relevant). The survey option requires writing a survey paper on a selected area in NLP. Both options include: (a) a proposal presentation, (b) a final presentation, and (c) a paper submission.

Auditing: Auditing is allowed with instructor permission, and requires attending all classes, submitting reviews, presenting papers, and participating in the discussion. Auditing does not require completing the project or doing any of the project related presentations. The goal is to allow interested students to join while creating a lively and productive discussion group.

Grading: The grade will be based on paper reviews, participation, the project, and an intro questionnaire.

## Paper Reviewing Guidelines

Each paper review will require a short summary of the paper and the actual review. Some questions you may use to guide your review are (many others are valid too):

• Did you like the paper? Did you find it interesting? Be honest!
• What are the most important things you learned from the paper? Why are they important?
• Do the lessons learned generalize beyond the specific task? Do they promote our understanding of language? Do they contribute towards building an important system or application?
• Is the experimental setup satisfying? Any experiments missing? Any obvious or important baseline missing? Is the ablation analysis sufficient?
• If a theoretical analysis is included, do you find it satisfying? If none is included, is it missing?
• Is the problem/approach well motivated?
• Are you convinced by the results? Why?
• Is the writing clear? Is the paper well structured?

Since this is not a real conference review, please also write what you learned from this paper and why, in your opinion, it was a good choice for reading (or why it was a bad choice). Reviews are due at 5pm the day before class.

## Paper Presentation and Discussion Guidelines

Each meeting, if readings are discussed, one student will present the papers for 10-15 minutes. The presentation can use slides or can be purely verbal. If data was assigned to the topic, use data examples to illustrate your points. We will then go around the room, and each student will contribute to the discussion.

### Data Analysis Guidelines

Pick at least 2-3 examples to discuss during your presentation in class. Examples should be prepared for display on screen; we will share your screen as necessary. Pick examples that illustrate various aspects of the paper and task. The questions you should think about include (but are not limited to):

• Why is this task difficult?
• What are the hard cases?
• What are the easy cases?
• Can you think of a simple baseline? How well will it perform?
• Why are the models discussed in class and readings appropriate?
• Do these models make assumptions that hurt performance? How much do these assumptions hurt?
• Is there an upper bound on performance?
• What about the assumptions built into the annotation scheme? Any of them arbitrary?
• If you see an example that is particularly fascinating, why is that?

## References

• Recommended: Michael Collins, Notes on Statistical NLP (on Michael's website)
• Recommended: D. Jurafsky & James H. Martin, Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics and Speech Recognition, Prentice Hall, Second Edition, 2009. (J&M)
• Recommended: Y. Goldberg, Neural Network Methods in Natural Language Processing, 2017.
• Optional: C.D. Manning & H. Schuetze, Foundations of Statistical Natural Language Processing, Cambridge: MIT Press, 1999. (M&S) (available online, free within the Cornell network)

• Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, 2016.
• Emily Bender, Linguistic Fundamentals for Natural Language Processing, Morgan & Claypool, 2013.
• Noah Smith, Linguistic Structure Prediction, Morgan & Claypool, 2011. (available online)

## Short and Incomplete List of NLP Pointers

### NLP Conferences and Journals

The main publication venues are ACL, NAACL, EMNLP, TACL, EACL, CoNLL, and CL. All the papers from these venues can be found in the ACL Anthology. In addition, NLP publications often appear in ML and AI conferences, including ICML, NIPS, ICLR, AAAI, and IJCAI. A calendar of NLP events is available here, and ACL-sponsored events are listed here.

### Corpora and Other Data

#### Tagging

##### Part-of-speech Tags

Both parsing corpora below (PTB and UD) contain POS tags. Each parse tree contains POS tags for all leaf nodes. You can view a sample of the PTB in NLTK:

```python
>>> import nltk
>>> print(' '.join('/'.join(pair) for pair in nltk.corpus.treebank.tagged_sents()[0]))
Pierre/NNP Vinken/NNP ,/, 61/CD years/NNS old/JJ ,/, will/MD join/VB the/DT board/NN as/IN a/DT nonexecutive/JJ director/NN Nov./NNP 29/CD ./.
>>> print(' '.join('/'.join(pair) for pair in nltk.corpus.treebank.tagged_sents(tagset='universal')[0]))
Pierre/NOUN Vinken/NOUN ,/. 61/NUM years/NOUN old/ADJ ,/. will/VERB join/VERB the/DET board/NOUN as/ADP a/DET nonexecutive/ADJ director/NOUN Nov./NOUN 29/NUM ./.
```

The universal tag set is described here. The PTB tag set is described here.
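When thinking about baselines for tagging (one of the discussion questions below), a classic starting point is to assign each word its most frequent tag from the training data. A minimal sketch over a toy corpus (in practice you would train on `nltk.corpus.treebank.tagged_sents()`; the toy sentences here are made up):

```python
from collections import Counter, defaultdict

# Toy training corpus of (word, tag) pairs; substitute real treebank data.
train = [
    [("the", "DT"), ("board", "NN"), ("will", "MD"), ("join", "VB")],
    [("the", "DT"), ("director", "NN"), ("will", "MD"), ("vote", "VB")],
]

# Count tags per word, then keep the most frequent tag for each word.
counts = defaultdict(Counter)
for sent in train:
    for word, tag in sent:
        counts[word][tag] += 1
most_frequent = {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag(sentence, default="NN"):
    """Tag each token with its most frequent training tag (default for unknowns)."""
    return [(w, most_frequent.get(w, default)) for w in sentence]

print(tag(["the", "board", "will", "vote", "tomorrow"]))
# [('the', 'DT'), ('board', 'NN'), ('will', 'MD'), ('vote', 'VB'), ('tomorrow', 'NN')]
```

On the full PTB, this baseline is surprisingly strong for frequent words; the interesting errors come from ambiguous and unknown words.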

##### Named Entity Recognition Data

The CoNLL 2002 shared task is available in NLTK:

```python
>>> import nltk
>>> len(nltk.corpus.conll2002.iob_sents())
35651
>>> len(nltk.corpus.conll2002.iob_words())
678377
>>> print(' '.join(w + '/' + iob for w, pos, iob in nltk.corpus.conll2002.iob_sents()[0]))
Sao/B-LOC Paulo/I-LOC (/O Brasil/B-LOC )/O ,/O 23/O may/O (/O EFECOM/B-ORG )/O ./O
```

CoNLL 2002 is annotated with the IOB annotation scheme and multiple entity types.
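In the IOB scheme, each token is tagged as beginning (B-), inside (I-), or outside (O) an entity, with the entity type appended to B- and I- tags. A minimal sketch (not from the course materials) of decoding a tag sequence into entity spans, using the tags from the example sentence above:

```python
def iob_to_spans(tags):
    """Convert a list of IOB tags to (start, end_exclusive, type) entity spans."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        # Close an open span on O, on a new B-, or on a type change.
        if start is not None and (tag == "O" or tag.startswith("B-")
                                  or tag[2:] != etype):
            spans.append((start, i, etype))
            start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    if start is not None:  # span running to the end of the sentence
        spans.append((start, len(tags), etype))
    return spans

# Tags for "Sao Paulo ( Brasil ) , 23 may ( EFECOM ) ."
tags = ["B-LOC", "I-LOC", "O", "B-LOC", "O", "O",
        "O", "O", "O", "B-ORG", "O", "O"]
print(iob_to_spans(tags))  # [(0, 2, 'LOC'), (3, 4, 'LOC'), (9, 10, 'ORG')]
```

This strict variant ignores I- tags that appear without a preceding B-; other decoding conventions treat such tags as starting a new span.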

##### NYT Recipe Data

This is another example of tagging. The task is explained here, and the data release is described here.

#### Dependency Parsing

The Universal Dependencies (UD) project is publicly available online. The website includes statistics for all annotated languages. You can easily download v1.3 from here. UD files follow the simple CoNLL-U format.
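CoNLL-U is a tab-separated format: one token per line with ten columns (ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC), comment lines starting with #, and sentences separated by blank lines. A minimal parsing sketch (the two-token example sentence is made up for illustration):

```python
def parse_conllu(text):
    """Parse CoNLL-U text into a list of sentences, each a list of token dicts."""
    fields = ["id", "form", "lemma", "upos", "xpos",
              "feats", "head", "deprel", "deps", "misc"]
    sentences, current = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line:  # blank line ends the current sentence
            if current:
                sentences.append(current)
                current = []
        elif not line.startswith("#"):  # skip comment lines
            current.append(dict(zip(fields, line.split("\t"))))
    if current:
        sentences.append(current)
    return sentences

# A made-up two-token example in CoNLL-U layout.
sample = ("# text = Dogs bark\n"
          "1\tDogs\tdog\tNOUN\tNNS\t_\t2\tnsubj\t_\t_\n"
          "2\tbark\tbark\tVERB\tVBP\t_\t0\troot\t_\t_\n")
sents = parse_conllu(sample)
print(sents[0][0]["form"], sents[0][0]["upos"])  # Dogs NOUN
```

The HEAD column gives the index of each token's syntactic head (0 for the root), so the dependency tree can be read directly off the parsed rows.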

#### Constituency Parsing

The Penn Treebank is available from the LDC. You will find tgrep useful for quickly searching the corpus for patterns. NLTK can also be used to load parse trees. A few more browsers are available online.

#### Machine Translation

The WMT shared task from 2016 is a good source for newswire bi-text.

#### Textual Entailment

TE has been studied extensively for more than a decade now. Recently, SNLI has been receiving significant attention.

#### Semantic Parsing

We will look at three data sets commonly used for semantic parsing:

1. GeoQuery: A natural language interface to a small US geography database. The original data is available here, and the original query language is described here. The data with lambda calculus logical forms is available here.
2. ATIS: A natural language interface for a flights database. The data is available from the LDC.
3. Navi: Instructional language for robot navigation. The original data is described here, but we recommend using the data here.