Text REtrieval Conference (TREC) 2002
Cross-Language
Overview | Proceedings | Results | Runs | Participants
Nine teams participated in the TREC-2002 cross-language information retrieval track, which focused on retrieving Arabic language documents based on 50 topics that were originally prepared in English. Arabic translations of the topic descriptions were also made available to facilitate monolingual Arabic runs. This was the second year in which a large Arabic document collection was available. Three new teams joined the evaluation, and the cross-language aspect of the evaluation received more attention this year than in TREC-2001. A set of standard linguistic resources was made available to facilitate cross-system comparisons, and their use as a contrastive condition was encouraged. The pattern of unique contributions to the relevance pools was more typical of previous TREC evaluations than the TREC-2001 results for the same document collection had been, with no run uniquely contributing more than 6% of the known relevant documents.
Track coordinator(s):
- D.W. Oard, University of Maryland, College Park
- F.C. Gey, University of California, Berkeley
Web
Overview | Proceedings | Data | Results | Runs | Participants
The TREC-2002 Web Track moved away from non-Web relevance ranking and towards Web-specific tasks on a 1.25 million page crawl of the .GOV domain. The topic distillation task involved finding pages that were not only relevant but also had characteristics making them desirable inclusions in a distilled list of key pages. The named page finding task was a variant of last year's home page finding task: the goal was again to find a particular page, but this year the page need not be a home page.
Track coordinator(s):
- N. Craswell, CSIRO
- D. Hawking, CSIRO
Track Web Page: https://trec.nist.gov/data/t11.web.html
Question Answering
Overview | Proceedings | Data | Runs | Participants
The TREC question answering track is an effort to bring the benefits of large-scale evaluation to bear on the question answering problem. The track contained two tasks in TREC 2002, the main task and the list task. Both tasks required that the answer strings returned by the systems consist of nothing more or less than an answer, in contrast to the text snippets containing an answer that were allowed in previous years. A new evaluation measure in the main task, the confidence-weighted score, tested a system's ability to recognize when it had found a correct answer.
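For concreteness, the confidence-weighted score rewards systems that place the answers they are most confident about first: with a run's answers sorted by decreasing confidence, it averages, over each rank i, the fraction of correct answers among the first i. The sketch below is an illustrative reading of that measure in Python, not NIST's official scoring code.

```python
def confidence_weighted_score(judgments):
    """Confidence-weighted score for a run.

    `judgments` holds one boolean per question, ordered by the system's
    decreasing confidence, marking whether that answer was judged correct.
    The score is the mean over ranks i of (#correct in the first i) / i.
    """
    total, correct_so_far = 0.0, 0
    for i, correct in enumerate(judgments, start=1):
        if correct:
            correct_so_far += 1
        total += correct_so_far / i
    return total / len(judgments)

# Example: three answers ranked by confidence, first two correct, last wrong.
print(confidence_weighted_score([True, True, False]))  # (1/1 + 2/2 + 2/3) / 3 ≈ 0.889
```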
Track coordinator(s):
- E.M. Voorhees, National Institute of Standards and Technology (NIST)
Track Web Page: https://trec.nist.gov/data/qamain.html
Filtering
Overview | Proceedings | Data | Runs | Participants
Given a topic description and some example relevant documents, the task is to build a filtering profile that will select the most relevant documents from an incoming stream. The TREC 2002 filtering track continued to stress adaptive filtering, although the batch filtering and routing tasks were also available.
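As a minimal sketch of what an adaptive filtering loop involves (the bag-of-words profile, Rocchio-style update, and threshold rule below are illustrative assumptions, not the track's required method), a profile can be revised using the relevance judgments received for each document it selects:

```python
from collections import Counter

class AdaptiveFilter:
    """Toy adaptive filtering profile: bag-of-words weights plus a score threshold.

    Illustrative only; real TREC filtering systems used richer models, and
    feedback is available only for documents the profile actually selects.
    """

    def __init__(self, topic_terms, threshold=1.0):
        self.weights = Counter(topic_terms)  # initial profile from the topic statement
        self.threshold = threshold

    def score(self, doc_terms):
        return sum(self.weights[t] for t in doc_terms)

    def select(self, doc_terms):
        return self.score(doc_terms) >= self.threshold

    def feedback(self, doc_terms, relevant):
        # Rocchio-style update: move term weights toward relevant documents
        # and away from non-relevant ones, then nudge the selection threshold.
        delta = 0.1 if relevant else -0.1
        for term in set(doc_terms):
            self.weights[term] += delta
        self.threshold += -0.05 if relevant else 0.05

# Usage: scan an incoming stream; judgments arrive only for selected documents.
profile = AdaptiveFilter(["arabic", "retrieval"])
stream = [(["arabic", "news", "retrieval"], True), (["sports", "scores"], False)]
for doc_terms, judged_relevant in stream:
    if profile.select(doc_terms):
        profile.feedback(doc_terms, judged_relevant)
```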
Track coordinator(s):
- S. Robertson, Microsoft Research
- I. Soboroff, National Institute of Standards and Technology (NIST)
Track Web Page: https://trec.nist.gov/data/filtering/T11filter_guide.html
Novelty
Overview | Proceedings | Data | Results | Runs | Participants
The TREC 2002 novelty track is designed to investigate systems' abilities to locate relevant AND new information within the ranked set of documents retrieved in answer to a TREC topic. This track is new for TREC 2002 and should be regarded as an interesting (and hopefully fun) learning experience.
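To make "new information" concrete, a simple sentence-level baseline marks a relevant sentence as novel only when most of its words have not appeared in sentences already seen; the word-overlap measure and threshold below are illustrative assumptions, not the track's scoring definition.

```python
import re

def novel_sentences(relevant_sentences, overlap_threshold=0.7):
    """Toy novelty detector over a stream of relevant sentences.

    A sentence is kept as novel when the fraction of its words already seen
    in earlier sentences stays below `overlap_threshold`.
    """
    seen_words = set()
    novel = []
    for sentence in relevant_sentences:
        words = set(re.findall(r"\w+", sentence.lower()))
        overlap = len(words & seen_words) / max(len(words), 1)
        if overlap < overlap_threshold:
            novel.append(sentence)
        seen_words |= words
    return novel

# Example: the second sentence mostly repeats the first and is filtered out.
print(novel_sentences([
    "The dam project was approved in 1998.",
    "The dam project was approved in 1998 by the ministry.",
    "Construction costs later doubled.",
]))
```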
Track coordinator(s):
- D. Harman, National Institute of Standards and Technology (NIST)
Track Web Page: https://trec.nist.gov/data/t11_novelty/novelty_guidelines.html
Interactive
Overview | Proceedings | Data | Runs | Participants
The high-level goal of the Interactive Track is the investigation of searching as an interactive task by examining the process as well as the outcome.
Track coordinator(s):
- W. Hersh, Oregon Health and Science University
Track Web Page: https://trec.nist.gov/data/t11_interactive/t11i.html
Video
Overview | Proceedings | Runs | Participants
TREC-2002 saw the second running of the Video Track, the goal of which was to promote progress in content-based retrieval from digital video via open, metrics-based evaluation. The track used 73.3 hours of publicly available digital video (in MPEG-1/VCD format) downloaded by the participants directly from the Internet Archive (Prelinger Archives), with some from the Open Video Project. The material comprised advertising, educational, industrial, and amateur films produced between the 1930s and the 1970s by corporations, nonprofit organizations, trade associations, community and interest groups, educational institutions, and individuals. 17 teams representing 5 companies and 12 universities (4 from Asia, 9 from Europe, and 4 from the US) participated in one or more of the three tasks in the 2002 video track: shot boundary determination, feature extraction, and search (manual or interactive). Results were scored by NIST using manually created truth data for shot boundary determination and manual assessment of the feature extraction and search results.
Track coordinator(s):
- A.F. Smeaton, Dublin City University
- P. Over, National Institute of Standards and Technology (NIST)