Text REtrieval Conference (TREC) 2008

Blog

Overview | Proceedings | Data | Results | Runs | Participants

The Blog track explores information-seeking behaviour in the blogosphere. The track was introduced in 2006 with a single pilot search task, opinion finding. In TREC 2007, the track investigated two main tasks inspired by the analysis of a commercial blog-search query log: the opinion-finding task (i.e., “What do people think about X?”) and the blog distillation task (i.e., “Find me a blog with a principal, recurring interest in X.”).
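
Opinion finding is typically approached as a reranking step on top of ordinary topical retrieval: first retrieve posts about X, then promote posts that actually express an opinion on it. The sketch below illustrates that two-stage idea; the lexicon, field names, and interpolation weight are illustrative assumptions, not part of the track definition.

```python
# Hypothetical two-stage opinion-finding reranker. A topical relevance
# score (assumed to come from a first-pass retrieval system) is
# interpolated with a crude lexicon-based opinionatedness score.
# OPINION_LEXICON, the "relevance"/"text" fields, and alpha are all
# illustrative assumptions.

OPINION_LEXICON = {"love", "hate", "awful", "great", "agree", "disagree", "think"}

def opinion_score(text: str) -> float:
    """Fraction of tokens that are opinion-bearing words."""
    tokens = text.lower().split()
    return sum(t in OPINION_LEXICON for t in tokens) / len(tokens) if tokens else 0.0

def rerank(results: list[dict], alpha: float = 0.7) -> list[dict]:
    """Interpolate topical relevance with opinionatedness, then re-sort."""
    for doc in results:
        doc["score"] = alpha * doc["relevance"] + (1 - alpha) * opinion_score(doc["text"])
    return sorted(results, key=lambda d: d["score"], reverse=True)
```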

Track coordinator(s):

  • I. Ounis, University of Glasgow
  • C. Macdonald, University of Glasgow
  • I. Soboroff, National Institute of Standards and Technology (NIST)

Track Web Page: https://www.dcs.gla.ac.uk/wiki/TREC-BLOG


Million Query

Overview | Proceedings | Data | Runs | Participants

The Million Query (1MQ) track ran for the second time in TREC 2008. The track is designed to serve two purposes: first, it is an exploration of ad hoc retrieval over a large set of queries and a large collection of documents; second, it investigates questions of system evaluation, in particular whether it is better to evaluate using many shallow judgments or fewer, more thorough judgments.
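
The shallow-versus-thorough question can be made concrete with a small simulation: spend a fixed judging budget either on a few deeply judged topics or on many shallowly judged ones, and compare how stable the resulting mean-AP estimate is across repeated trials. The sketch below does this under deliberately toy assumptions (synthetic relevance that decays with rank, AP normalized by retrieved relevant documents); it does not reproduce the sampling-based estimators the track actually used.

```python
import random
import statistics

def average_precision(ranking: list[int]) -> float:
    """Simplified AP over a judged ranking (1 = relevant), normalized
    by the number of relevant documents retrieved."""
    hits, total = 0, 0.0
    for rank, rel in enumerate(ranking, start=1):
        if rel:
            hits += 1
            total += hits / rank
    return total / hits if hits else 0.0

def mean_ap(num_topics: int, depth: int) -> float:
    """Mean AP over synthetic topics judged to the given depth."""
    aps = []
    for _ in range(num_topics):
        # Toy relevance model: P(relevant at rank r) decays with r.
        ranking = [int(random.random() < 1.0 / (r + 2)) for r in range(depth)]
        aps.append(average_precision(ranking))
    return sum(aps) / len(aps)

# A fixed budget of 1,000 judgments, split two ways. Lower spread across
# repeated trials suggests a more stable evaluation.
deep = [mean_ap(num_topics=10, depth=100) for _ in range(200)]
shallow = [mean_ap(num_topics=100, depth=10) for _ in range(200)]
print("10 topics x 100 judgments, stdev of mean AP:", statistics.stdev(deep))
print("100 topics x 10 judgments, stdev of mean AP:", statistics.stdev(shallow))
```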

Track coordinator(s):

  • J. Allan, University of Massachusetts
  • J. A. Aslam, Northeastern University
  • V. Pavlu, Northeastern University
  • E. Kanoulas, Northeastern University
  • B. Carterette, University of Delaware

Track Web Page: https://web.archive.org/web/20090311232726/http://ciir.cs.umass.edu/research/million/


Enterprise

Overview | Proceedings | Data | Results | Runs | Participants

The goal of the Enterprise track is to conduct experiments with enterprise data that reflect the experiences of users in real organizations. This year, we continued with the CERC collection introduced in TREC 2007. Topics were developed in conjunction with CSIRO Enquiries, which fields email and telephone questions from the public about CSIRO research.

Track coordinator(s):

  • K. Balog, University of Amsterdam
  • I. Soboroff, National Institute of Standards and Technology (NIST)
  • P. Thomas, CSIRO
  • P. Bailey, Microsoft
  • N. Craswell, Microsoft
  • A. de Vries, CWI

Track Web Page: https://trec.nist.gov/data/enterprise.html


Legal

Overview | Proceedings | Data | Results | Runs | Participants

TREC 2008 was the third year of the Legal track, which focuses on the evaluation of search technology for the discovery of electronically stored information in litigation and regulatory settings. The track included three tasks: Ad Hoc (i.e., single-pass automatic search), Relevance Feedback (two-pass search in a controlled setting, with some relevant and nonrelevant documents manually marked after the first pass), and Interactive (in which real users could iteratively refine their queries and/or engage in multi-pass relevance feedback).

Track coordinator(s):

  • D. W. Oard, University of Maryland, College Park
  • B. Hedin, H5
  • S. Tomlinson, Open Text Corporation
  • J. R. Baron, National Archives and Records Administration

Track Web Page: http://trec-legal.umiacs.umd.edu/


Relevance Feedback

Overview | Proceedings | Data | Runs | Participants

There were three main goals for this track:

  1. Evaluate and compare the RF algorithm itself: all groups work with (for the most part) exactly the same relevance judgments, which makes it possible to compare both statistical and NLP-intensive uses of relevance information, i.e., what makes a document relevant (a classic statistical baseline is sketched after this list). Next year, the relevance evidence that groups can use will expand.
  2. Establish good baseline RF results for multiple amounts of relevance information.
  3. Try to establish, for these runs, the amount of improvement possible with more relevance information.
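
For reference, the classic statistical baseline for this setting is Rocchio feedback, which moves the query vector toward judged-relevant documents and away from judged-nonrelevant ones. Below is a minimal sketch over bag-of-words term weights; the representation and parameter values are common textbook defaults, not anything prescribed by the track.

```python
from collections import Counter

def rocchio(query: Counter, relevant: list[Counter], nonrelevant: list[Counter],
            alpha: float = 1.0, beta: float = 0.75, gamma: float = 0.15) -> Counter:
    """Classic Rocchio update: q' = alpha*q + beta*centroid(R) - gamma*centroid(N)."""
    updated = Counter({term: alpha * w for term, w in query.items()})
    for docs, weight in ((relevant, beta), (nonrelevant, -gamma)):
        if not docs:
            continue
        for doc in docs:
            for term, w in doc.items():
                updated[term] += weight * w / len(docs)
    # Negative weights are conventionally clipped to zero.
    return Counter({term: w for term, w in updated.items() if w > 0})

# Toy usage: term weights and vocabulary are made up for illustration.
q = Counter({"jaguar": 1.0})
rel = [Counter({"jaguar": 0.5, "car": 0.8, "engine": 0.6})]
nonrel = [Counter({"jaguar": 0.4, "cat": 0.9})]
print(rocchio(q, rel, nonrel).most_common(4))
```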

Track coordinator(s):

  • C. Buckley, Sabir Research
  • S. Robertson, Microsoft

Track Web Page: https://trec.nist.gov/data/relevance.feedback08.html