Text REtrieval Conference (TREC) 1999
Ad Hoc
The ad hoc retrieval task investigates the performance of systems that search a static set of documents using new questions (called topics in TREC). The task is similar to how a researcher might use a library: the collection is known, but the questions likely to be asked are not. NIST provides participants with approximately 2 gigabytes of documents and a set of 50 natural-language topic statements. The participants produce a set of queries from the topic statements and run those queries against the documents. The output of this run is the official test result for the ad hoc task. Participants return the best 1000 documents retrieved for each topic to NIST for evaluation.
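Submitted runs use the standard TREC result format: one line per retrieved document, giving the topic number, the literal string Q0, the document identifier, the rank, the similarity score, and a run tag. A minimal sketch of producing such a file, where `search` is a hypothetical retrieval function returning (doc_id, score) pairs sorted by decreasing score:

```python
# Write the best 1000 documents per topic in TREC run format:
#   topic-id Q0 doc-id rank score run-tag
# `search` is a hypothetical stand-in for any retrieval engine.

def write_trec_run(topics, search, run_tag, path):
    with open(path, "w") as out:
        for topic_id, query in topics:
            for rank, (doc_id, score) in enumerate(search(query)[:1000], start=1):
                out.write(f"{topic_id} Q0 {doc_id} {rank} {score:.4f} {run_tag}\n")
```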
Track coordinator(s):
- E. Voorhees, National Institute of Standards and Technology (NIST)
- D. Harman, National Institute of Standards and Technology (NIST)
Track Web Page: https://trec.nist.gov/data/test_coll.html
Filtering
The TREC-8 filtering track measures the ability of systems to build persistent user profiles that successfully separate relevant from non-relevant documents. It consists of three major subtasks: adaptive filtering, batch filtering, and routing. In adaptive filtering, the system begins with only a topic statement and must learn a better profile from online feedback. Batch filtering and routing are more traditional machine-learning tasks in which the system begins with a large sample of evaluated training documents.
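As an illustration of the adaptive-filtering loop (not any particular track system), the sketch below starts a term-weight profile from the topic statement alone, retrieves documents that score above a threshold, and nudges the profile toward or away from each retrieved document according to its relevance judgment; the tokenizer, threshold, and update weight are all assumptions:

```python
from collections import Counter

def tokens(text):
    return text.lower().split()

def adaptive_filter(topic, stream, judge, threshold=1.0, alpha=0.5):
    # The profile begins from the topic statement alone.
    profile = Counter(tokens(topic))
    for doc in stream:
        doc_counts = Counter(tokens(doc))
        score = sum(profile[t] * c for t, c in doc_counts.items())
        if score >= threshold:
            # Only retrieved documents receive a judgment, mirroring
            # the online feedback of the adaptive filtering subtask.
            sign = alpha if judge(doc) else -alpha
            for t in doc_counts:
                profile[t] += sign
            yield doc, score
```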
Track coordinator(s):
- D. Hull, Xerox Research Centre Europe
- S. Robertson, Microsoft Research
Track Web Page: https://trec.nist.gov/data/filtering.html
Large Web
The TREC-8 Web Track defined ad hoc retrieval tasks over the 100 gigabyte VLC2 collection (Large Web Task) and a selected 2 gigabyte subset known as WT2g (Small Web Task).
Track coordinator(s):
- M. Braschler, Eurospider Information Technology AG
- C. Peters, Istituto Elaborazione Informazione (CNR)
- P. Schäuble, Eurospider Information Technology AG
Track Web Page: https://trec.nist.gov/data/t8.web.html
Query
The Query Track in TREC-8 is a bit different from all the other tracks: it is a cooperative effort among the participating groups to investigate the issue of query variability.
Track coordinator(s):
- C. Buckley, SabIR Research, Inc.
- J. Walz, SabIR Research, Inc.
Question Answering
The TREC-8 Question Answering track was the first large-scale evaluation of domain-independent question answering systems. The most accurate systems found a correct response for more than two thirds of the questions. Relatively simple bag-of-words approaches were adequate for finding answers when responses could be as long as a paragraph (250 bytes), but more sophisticated processing was necessary for more direct responses (50 bytes).
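Because the 250- and 50-byte limits constrain only the length of the returned string, a common baseline was to return a fixed-size window of text around a candidate answer found in a retrieved passage. A hypothetical sketch of that windowing step (the function and its parameters are illustrative, not a track specification):

```python
def answer_window(passage, answer_start, answer_len, limit=50):
    # Center a window of at most `limit` bytes on the candidate answer,
    # clamping the window to the passage boundaries.
    text = passage.encode("utf-8")
    center = answer_start + answer_len // 2
    start = max(0, min(center - limit // 2, len(text) - limit))
    return text[start:start + limit].decode("utf-8", errors="ignore")
```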
Track coordinator(s):
- E. Voorhees, National Institute of Standards and Technology (NIST)
- D. Tice, National Institute of Standards and Technology (NIST)
Track Web Page: https://trec.nist.gov/data/qamain.html
Spoken Document Retrieval
Spoken document retrieval (SDR) involves the search and retrieval of excerpts from spoken audio recordings using a combination of automatic speech recognition and information retrieval technologies. The TREC SDR Track has provided an infrastructure for the development and evaluation of SDR technology and a common forum for the exchange of knowledge between the speech recognition and information retrieval research communities. The track can be declared a success: it has provided demonstrable proof that this technology can be applied successfully to realistic audio collections using existing components, and that it can be evaluated objectively.
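A toy illustration of the two-stage pipeline (every name here is an assumption: `recognize` stands in for an ASR system returning time-stamped word hypotheses, and the retrieval step is plain term overlap over fixed-length transcript windows):

```python
from collections import Counter

def excerpts(words, size=30, step=15):
    # Slice a time-stamped transcript, given as (word, start, end)
    # tuples, into overlapping fixed-length excerpts.
    for i in range(0, max(1, len(words) - size + 1), step):
        chunk = words[i:i + size]
        if chunk:
            yield chunk[0][1], chunk[-1][2], [w.lower() for w, _, _ in chunk]

def search_audio(recordings, recognize, query, k=10):
    # Score each excerpt by term overlap with the query and return
    # the k best (score, recording, start-time, end-time) hits.
    q = Counter(query.lower().split())
    hits = []
    for rec_id, audio in recordings:
        for start, end, terms in excerpts(recognize(audio)):
            counts = Counter(terms)
            hits.append((sum(min(c, counts[t]) for t, c in q.items()),
                         rec_id, start, end))
    hits.sort(reverse=True)
    return hits[:k]
```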
Track coordinator(s):
- J. Garofolo, National Institute of Standards and Technology (NIST)
- C. Auzanne, National Institute of Standards and Technology (NIST)
- E. Voorhees, National Institute of Standards and Technology (NIST)
Track Web Page: https://trec.nist.gov/data/sdr.html
Cross-Language
A cross-language retrieval track was offered for the third time at TREC-8. The main task was the same as in the previous year: groups used queries written in a single language to retrieve documents from a multilingual pool written in many different languages. Compared to the usual definition of cross-language information retrieval, in which a system works with a single language pair, retrieving documents in a language L1 using queries in a language L2, this is a more comprehensive task, and we feel one that more closely meets the demands of real-world applications.
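One widely used strategy for such a task is dictionary-based query translation: translate the query terms into each document language, search the corresponding sub-collection, and merge the results. The sketch below is illustrative only; the bilingual dictionaries and per-language `search` functions are placeholders, and merging by raw score is a naive choice that real systems refine by normalizing scores across sub-collections:

```python
def translate(query, dictionary):
    # Substitute each query term with its dictionary translations,
    # falling back to the untranslated term when none are known.
    terms = []
    for term in query.lower().split():
        terms.extend(dictionary.get(term, [term]))
    return " ".join(terms)

def multilingual_search(query, collections, k=1000):
    # `collections` maps a language code to a (dictionary, search) pair.
    merged = []
    for lang, (dictionary, search) in collections.items():
        for doc_id, score in search(translate(query, dictionary)):
            merged.append((score, lang, doc_id))
    merged.sort(reverse=True)
    return merged[:k]
```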
Track coordinator(s):
- M. Braschler, Eurospider Information Technology AG
- P. Schäuble, Eurospider Information Technology AG
- C. Peters, Istituto Elaborazione Informazione (CNR)
GIRT
Track coordinator(s):
- M. Braschler, Eurospider Information Technology AG
- P. Schäuble, Eurospider Information Technology AG
- C. Peters, Istituto Elaborazione Informazione (CNR)
Interactive
For TREC-8, the high-level goal of the Interactive Track remained the investigation of searching as an interactive task by examining the process as well as the outcome.
Track coordinator(s):
- W. Hersh, Oregon Health Sciences University
- P. Over, National Institute of Standards and Technology (NIST)
Track Web Page: https://trec.nist.gov/data/t8i/t8i.html