Overview - Question Answering 2001
The TREC question answering track is an effort to bring the benefits of large-scale evaluation to bear on the question answering problem. In its third year, the track continued to focus on retrieving small snippets of text that contain an answer to a question. However, several new conditions were added to increase the realism and difficulty of the task. In the main task, questions were no longer guaranteed to have an answer in the collection; systems returned a response of 'NIL' to indicate their belief that no answer was present. In the new list task, systems assembled a set of instances as the response for a question, requiring the ability to distinguish among instances found in multiple documents. Another new task, the context task, required systems to track discourse objects through a series of questions.
Track coordinator(s):
- E. Voorhees, National Institute of Standards and Technology (NIST)
Tasks:
main
: Main Task
list
: List Task
context
: Context Task
Track Web Page: https://trec.nist.gov/data/qamain.html