Text REtrieval Conference (TREC) 2010

Blog

Overview | Proceedings | Data | Results | Runs | Participants

The Blog track aims to investigate information-seeking behaviour in the blogosphere. The track was initiated in 2006 and has used an incremental approach, tackling several search tasks in order of their level of difficulty.

Track coordinator(s):

  • Iadh Ounis, University of Glasgow
  • Ian Soboroff, National Institute of Standards and Technology (NIST)

Track Web Page: https://www.dcs.gla.ac.uk/wiki/TREC-BLOG


Web

Overview | Proceedings | Data | Results | Runs | Participants

The TREC Web Track explores and evaluates Web retrieval technology over large collections of Web data. In its current incarnation, the Web Track has been active for two years. For TREC 2010, the track included three tasks: 1) an ad hoc retrieval task, 2) a diversity task, and 3) a spam task. As we did for TREC 2009, we based our experiments on the billion-page ClueWeb09 data set created by the Language Technologies Institute at Carnegie Mellon University.

Track coordinator(s):

  • Charles L.A. Clarke, University of Waterloo
  • Nick Craswell, Microsoft
  • Ian Soboroff, National Institute of Standards and Technology (NIST)
  • Gordon V. Cormack, University of Waterloo

Track Web Page: https://plg.uwaterloo.ca/~trecweb/2010.html


Chemical

Overview | Proceedings | Data | Results | Runs | Participants

The TREC Chemical IR Track is a domain-specific evaluation campaign working with documents that use a specialised lexicon, including chemical formulas and chemical names. The 2010 edition of the track also included supporting material beyond text: images and structure-information files. As in the previous year, there were two tasks: a patent-focused Prior Art (PA) task and a user-focused Technology Survey (TS) task. The data collection includes patent files as well as scientific articles, together with their attachments, if any. Topics and relevance judgments were created either automatically or manually.

Track coordinator(s):

  • Mihai Lupu, Information Retrieval Facility
  • John Tait, Information Retrieval Facility
  • Jimmy Huang, York University
  • Jianhan Zhu, University College London

Track Web Page: https://trec.nist.gov/data/chemical10.html


Relevance Feedback

Overview | Proceedings | Data | Runs | Participants

This is the third year of the TREC Relevance Feedback track. The first year concentrated on the relevance feedback algorithm itself: all participants were given the same sets of judged documents and used their own algorithms to retrieve a new set of documents. In the second year, the focus shifted to finding good sets of documents on which to base retrieval. Each participant submitted one or two sets of five documents per topic, and 3-5 other participants ran with those documents, yielding a system-independent score of how good those document sets were.
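
The feedback step described above — re-retrieving with a query informed by judged documents — is classically illustrated by Rocchio's formula, which moves the query vector toward judged-relevant documents and away from judged non-relevant ones. A minimal sketch follows (illustrative only; the toy term weights and the alpha/beta/gamma values are conventional textbook defaults, not settings used by track participants):

```python
# Minimal Rocchio relevance feedback sketch (illustrative, not a track baseline).
# Queries and documents are represented as dicts mapping term -> weight.

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query toward relevant docs and away from non-relevant ones."""
    new_q = {t: alpha * w for t, w in query.items()}
    for docs, coeff in ((relevant, beta), (nonrelevant, -gamma)):
        if not docs:
            continue
        for doc in docs:
            for t, w in doc.items():
                new_q[t] = new_q.get(t, 0.0) + coeff * w / len(docs)
    # Drop terms whose weight went non-positive.
    return {t: w for t, w in new_q.items() if w > 0}

# Toy example: disambiguating "jaguar" toward the car sense.
q = {"jaguar": 1.0}
rel = [{"jaguar": 1.0, "car": 0.8}, {"jaguar": 0.9, "engine": 0.6}]
nonrel = [{"jaguar": 1.0, "cat": 0.9}]
print(rocchio(q, rel, nonrel))
```

Terms seen only in non-relevant documents (here, "cat") end up with negative weight and are dropped, while terms from the relevant documents ("car", "engine") enter the expanded query.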

Track coordinator(s):

  • Chris Buckley, Sabir Research
  • Mark Smucker, University of Waterloo
  • Matt Lease, University of Texas at Austin

Track Web Page: https://trec.nist.gov/data/relevance.feedback10.html


Legal

Overview | Proceedings | Data | Runs | Participants

TREC 2010 was the fifth year of the Legal Track, which focuses on evaluation of search technology for discovery of electronically stored information in litigation and regulatory settings. The TREC 2010 Legal Track consisted of two distinct tasks: the Learning task, in which participants were required to estimate the probability of relevance for each document in a large collection, given a seed set of documents, each coded as responsive or non-responsive; and the Interactive task, in which participants were required to identify all relevant documents using a human-in-the-loop process.
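
The Learning task setup — rank a collection by estimated relevance given a small labelled seed set — can be sketched naively by scoring each document against the centroid of the responsive seeds. This is only an illustration of the task's shape, not any participant's method; the document strings and bag-of-words scoring below are invented for the example:

```python
# Naive sketch of the Learning task setup (illustrative only): score each
# unlabelled document by cosine similarity to the centroid of the
# responsive (relevant) seed documents, then rank by that score.
import math
from collections import Counter

def centroid(docs):
    """Mean bag-of-words vector of a list of document strings."""
    c = Counter()
    for d in docs:
        c.update(Counter(d.split()))
    return {t: n / len(docs) for t, n in c.items()}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

responsive_seeds = ["merger agreement draft", "agreement on merger terms"]
c = centroid(responsive_seeds)

collection = {"d1": "draft merger agreement attached", "d2": "lunch menu friday"}
ranked = sorted(collection,
                key=lambda d: cosine(c, Counter(collection[d].split())),
                reverse=True)
print(ranked)  # the seed-like document ranks first
```

A real system would turn such scores into calibrated probabilities of relevance, which is what the task actually asked participants to estimate.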

Track coordinator(s):

  • Gordon V. Cormack, University of Waterloo
  • Maura R. Grossman, Wachtell, Lipton, Rosen & Katz
  • Bruce Hedin, H5
  • Douglas W. Oard, University of Maryland, College Park

Track Web Page: http://trec-legal.umiacs.umd.edu/


Session

Overview | Proceedings | Data | Runs | Participants

Research in Information Retrieval has traditionally focused on serving the best results for a single query. In practice, however, users often enter queries in sessions of reformulations. The Session Track at TREC 2010 implements an initial experiment to evaluate the effectiveness of retrieval systems over single query reformulations.
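
A generic way to illustrate the evaluation question — how effective is the ranking a system returns for a reformulated query — is a standard normalised DCG computation over the graded relevance of the returned results. The track's actual session-based measures are defined in its overview paper, so this is only a sketch with made-up relevance grades:

```python
# Generic nDCG sketch (not the track's own session measure): score a ranked
# list of graded relevance judgments against the ideal ordering.
import math

def dcg(gains):
    """Discounted cumulative gain of a ranked list of relevance grades."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def ndcg(gains):
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal else 0.0

# Hypothetical grades of the top-4 results returned for a reformulated query.
print(round(ndcg([2, 0, 1, 0]), 3))  # → 0.95
```

A perfectly ordered list scores 1.0; swapping a highly relevant result down the ranking lowers the score, which is the behaviour an evaluation over reformulations needs to capture.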

Track coordinator(s):

  • Evangelos Kanoulas, University of Sheffield
  • Paul Clough, University of Sheffield
  • Ben Carterette, University of Delaware
  • Mark Sanderson, Royal Melbourne Institute of Technology (RMIT University)

Track Web Page: http://ir.cis.udel.edu/sessions


Entity

Overview | Proceedings | Data | Results | Runs | Participants

The overall goal of the track is to evaluate entity-oriented search tasks on the World Wide Web. Many user information needs concern entities (people, organizations, locations, products, ...); these needs are better answered by returning the specific entities themselves rather than just any type of document.

Track coordinator(s):

  • Krisztian Balog, Norwegian University of Science and Technology (NTNU)
  • Pavel Serdyukov, Yandex, Russia
  • Arjen P. de Vries, CWI

Track Web Page: https://web.archive.org/web/20110811014305/http://ilps.science.uva.nl/trec-entity/