Python Natural Language Processing Study Notes (59): Exercises

6.10   Exercises

  1. ☼ Read up on one of the language technologies mentioned in this section, such as word sense disambiguation, semantic role labeling, question answering, machine translation, named entity detection. Find out what type and quantity of annotated data is required for developing such systems. Why do you think a large amount of data is required?
  2. ☼ Using any of the three classifiers described in this chapter, and any features you can think of, build the best name gender classifier you can. Begin by splitting the Names Corpus into three subsets: 500 words for the test set, 500 words for the dev-test set, and the remaining 6900 words for the training set. Then, starting with the example name gender classifier, make incremental improvements. Use the dev-test set to check your progress. Once you are satisfied with your classifier, check its final performance on the test set. How does the performance on the test set compare to the performance on the dev-test set? Is this what you'd expect?
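A minimal sketch of the setup for this exercise, assuming the last-letter gender_features extractor from the chapter as the starting point; the feature function is the part to improve while watching the dev-test score:

import random
import nltk
from nltk.corpus import names

# Label the names and shuffle so the test/dev-test/train split is not ordered by gender.
labeled_names = ([(name, 'male') for name in names.words('male.txt')] +
                 [(name, 'female') for name in names.words('female.txt')])
random.shuffle(labeled_names)

def gender_features(name):
    # Starting point from the chapter; add features here and re-check dev-test accuracy.
    return {'last_letter': name[-1].lower()}

featuresets = [(gender_features(n), gender) for (n, gender) in labeled_names]
train_set, devtest_set, test_set = featuresets[1000:], featuresets[500:1000], featuresets[:500]

classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, devtest_set))  # check progress here
print(nltk.classify.accuracy(classifier, test_set))     # look at this only once, at the end

As in the chapter, error analysis on the dev-test names (comparing predicted and actual genders) is usually the quickest way to find new features.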
  3. ☼ The Senseval 2 Corpus contains data intended to train word-sense disambiguation classifiers. It contains data for four words: hard, interest, line, and serve. Choose one of these four words, and load the corresponding data:

 

>>> from nltk.corpus import senseval
>>> instances = senseval.instances('hard.pos')
>>> size = int(len(instances) * 0.1)
>>> train_set, test_set = instances[size:], instances[:size]

     Using this dataset, build a classifier that predicts the correct sense tag for a given instance. See the corpus HOWTO at http://www.nltk.org/howto for information on using the instance objects returned by the Senseval 2 Corpus.
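A minimal sketch of one possible classifier, assuming a window of surrounding words as features and the first listed sense of each instance as its label; both choices are simplifications to refine:

import nltk
from nltk.corpus import senseval

instances = senseval.instances('hard.pos')
size = int(len(instances) * 0.1)

def sense_features(inst):
    # Words in a small window around the target occurrence of "hard".
    features = {}
    for offset in (-2, -1, 1, 2):
        pos = inst.position + offset
        if 0 <= pos < len(inst.context):
            token = inst.context[pos]
            # Handle both (word, tag) tuples and plain string tokens, just in case.
            word = token[0] if isinstance(token, tuple) else token
            features['word(%d)' % offset] = word.lower()
    return features

featuresets = [(sense_features(inst), inst.senses[0]) for inst in instances]
train_set, test_set = featuresets[size:], featuresets[:size]
classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))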
  4. ☼ Using the movie review document classifier discussed in this chapter, generate a list of the 30 features that the classifier finds to be most informative. Can you explain why these particular features are informative? Do you find any of them surprising?
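A sketch that rebuilds the chapter's movie-review classifier (presence of the 2,000 most frequent words as contains(word) features) and then asks it for its most informative features; show_most_informative_features is a method of NLTK's Naive Bayes classifier:

import random
import nltk
from nltk.corpus import movie_reviews

documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]
random.shuffle(documents)

all_words = nltk.FreqDist(w.lower() for w in movie_reviews.words())
word_features = [w for (w, _) in all_words.most_common(2000)]

def document_features(document):
    document_words = set(document)
    return {'contains(%s)' % word: (word in document_words) for word in word_features}

featuresets = [(document_features(d), c) for (d, c) in documents]
train_set, test_set = featuresets[100:], featuresets[:100]
classifier = nltk.NaiveBayesClassifier.train(train_set)

# Print the 30 features whose likelihood ratios between the labels are largest.
classifier.show_most_informative_features(30)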
  5. ☼ Select one of the classification tasks described in this chapter, such as name gender detection, document classification, part-of-speech tagging, or dialog act classification. Using the same training and test data, and the same feature extractor, build three classifiers for the task: a decision tree, a naive Bayes classifier, and a Maximum Entropy classifier. Compare the performance of the three classifiers on your selected task. How do you think that your results might be different if you used a different feature extractor?
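A minimal sketch, using the name gender task and the single last-letter feature so that all three classifiers see identical data; MaxentClassifier can be slow to train, so the iteration count is capped here:

import random
import nltk
from nltk.corpus import names

labeled_names = ([(n, 'male') for n in names.words('male.txt')] +
                 [(n, 'female') for n in names.words('female.txt')])
random.shuffle(labeled_names)
featuresets = [({'last_letter': n[-1].lower()}, g) for (n, g) in labeled_names]
train_set, devtest_set = featuresets[1000:], featuresets[:1000]

models = {
    'decision tree': nltk.DecisionTreeClassifier.train(train_set),
    'naive Bayes':   nltk.NaiveBayesClassifier.train(train_set),
    'maxent':        nltk.MaxentClassifier.train(train_set, max_iter=10),
}
for name, model in models.items():
    print(name, nltk.classify.accuracy(model, devtest_set))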
  6. ☼ The synonyms strong and powerful pattern differently (try combining them with chip and sales). What features are relevant in this distinction? Build a classifier that predicts when each word should be used.
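One way to see the difference in patterning is to count which words follow each adjective in a corpus; a small exploration sketch over the Brown Corpus (the corpus choice is an assumption), whose counts can then suggest features such as the following noun:

from collections import Counter
import nltk
from nltk.corpus import brown

# Count the words that immediately follow 'strong' and 'powerful' in the Brown Corpus.
following = {'strong': Counter(), 'powerful': Counter()}
for sent in brown.sents():
    for w1, w2 in nltk.bigrams(sent):
        if w1.lower() in following:
            following[w1.lower()][w2.lower()] += 1

print(following['strong'].most_common(15))
print(following['powerful'].most_common(15))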
  7. The dialog act classifier assigns labels to individual posts, without considering the context in which the post is found. However, dialog acts are highly dependent on context, and some sequences of dialog acts are much more likely than others. For example, a ynQuestion dialog act is much more likely to be answered by a yanswer than by a greeting. Make use of this fact to build a consecutive classifier for labeling dialog acts. Be sure to consider what features might be useful. See the code for the consecutive classifier for part-of-speech tags in Example 6.7 to get some ideas.
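A rough sketch of the idea, adding the label of the previous post as a feature on top of the chapter's word features; note that, unlike Example 6.7, this simplification trains and evaluates with gold previous labels rather than the classifier's own predictions, and it ignores conversation boundaries in the NPS Chat data:

import nltk

posts = nltk.corpus.nps_chat.xml_posts()[:10000]

def dialogue_act_features(post_text, prev_label):
    # Word features from the chapter, plus the previous post's dialogue act as context.
    features = {'prev-label': prev_label}
    for word in nltk.word_tokenize(post_text):
        features['contains(%s)' % word.lower()] = True
    return features

featuresets = []
prev_label = '<START>'
for post in posts:
    featuresets.append((dialogue_act_features(post.text, prev_label), post.get('class')))
    prev_label = post.get('class')

size = int(len(featuresets) * 0.1)
train_set, test_set = featuresets[size:], featuresets[:size]
classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))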
  8. Word features can be very useful for performing document classification, since the words that appear in a document give a strong indication about what its semantic content is. However, many words occur very infrequently, and some of the most informative words in a document may never have occurred in our training data. One solution is to make use of a lexicon, which describes how different words relate to one another. Using the WordNet lexicon, augment the movie review document classifier presented in this chapter to use features that generalize the words that appear in a document, making it more likely that they will match words found in the training data.
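One possible approach, sketched under the assumption that the chapter's contains(word) features are kept and that each word is generalized to a hypernym of its most frequent WordNet sense; the generalize helper below is a hypothetical name:

import nltk
from nltk.corpus import wordnet as wn

def generalize(word):
    # Map a word to a hypernym of its first (most frequent) sense, if it has one,
    # so that rarely-seen words can share a feature with related training words.
    synsets = wn.synsets(word)
    if synsets and synsets[0].hypernyms():
        return synsets[0].hypernyms()[0].name()
    return None

def document_features(document, word_features):
    document_words = set(w.lower() for w in document)
    features = {'contains(%s)' % w: (w in document_words) for w in word_features}
    for word in document_words:
        hyper = generalize(word)
        if hyper is not None:
            features['hypernym(%s)' % hyper] = True
    return features

Here word_features is the list of frequent words from the chapter's original classifier; the rest of the training procedure stays the same.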
  9. The PP Attachment Corpus is a corpus describing prepositional phrase attachment decisions. Each instance in the corpus is encoded as a PPAttachment object:

 

>>> from nltk.corpus import ppattach
>>> ppattach.attachments('training')
[PPAttachment(sent='0', verb='join', noun1='board',
              prep='as', noun2='director', attachment='V'),
 PPAttachment(sent='1', verb='is', noun1='chairman',
              prep='of', noun2='N.V.', attachment='N'),
 ...]
>>> inst = ppattach.attachments('training')[1]
>>> (inst.noun1, inst.prep, inst.noun2)
('chairman', 'of', 'N.V.')

     Select only the instances where inst.attachment is N:

 

>>> nattach = [inst for inst in ppattach.attachments('training')
...            if inst.attachment == 'N']

     Using this sub-corpus, build a classifier that attempts to predict which preposition is used to connect a given pair of nouns. For example, given the pair of nouns "team" and "researchers," the classifier should predict the preposition "of". See the corpus HOWTO at http://www.nltk.org/howto for more information on using the PP attachment corpus.
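A minimal sketch, assuming the two nouns themselves as the only features and a simple held-out slice of the N-attachment instances for evaluation:

import nltk
from nltk.corpus import ppattach

nattach = [inst for inst in ppattach.attachments('training')
           if inst.attachment == 'N']

def prep_features(inst):
    # Just the noun pair for now; suffixes, the verb, etc. are natural extensions.
    return {'noun1': inst.noun1.lower(), 'noun2': inst.noun2.lower()}

featuresets = [(prep_features(inst), inst.prep) for inst in nattach]
size = int(len(featuresets) * 0.1)
train_set, test_set = featuresets[size:], featuresets[:size]

classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))
print(classifier.classify({'noun1': 'team', 'noun2': 'researchers'}))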
  10. Suppose you wanted to automatically generate a prose description of a scene, and already had a word to uniquely describe each entity, such as the jar, and simply wanted to decide whether to use in or on in relating various items, e.g. the book is in the cupboard vs. the book is on the shelf. Explore this issue by looking at corpus data, writing programs as needed. Consider the following examples:

(13)
  a. in the car versus on the train
  b. in town versus on campus
  c. in the picture versus on the screen
  d. in Macbeth versus on Letterman
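One way to start is to count, for each noun of interest, how often it is preceded by in versus on; a rough sketch over the Brown Corpus (the corpus choice and the simple determiner-skipping are assumptions):

from collections import Counter
from nltk.corpus import brown

targets = ['car', 'train', 'town', 'campus', 'picture', 'screen', 'cupboard', 'shelf']
counts = {noun: Counter() for noun in targets}

for sent in brown.sents():
    words = [w.lower() for w in sent]
    for i, w in enumerate(words):
        if w in counts and i > 0:
            prev = words[i - 1]
            # Skip a single determiner, so 'in the car' still counts for 'car'.
            if prev in ('the', 'a', 'an') and i > 1:
                prev = words[i - 2]
            if prev in ('in', 'on'):
                counts[w][prev] += 1

for noun in targets:
    print(noun, dict(counts[noun]))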
