Overview - Tasks 2017


Research in Information Retrieval has traditionally focused on serving the best results for a single query, ignoring the reasons (or the task) that might have motivated the user to submit that query. Often, search engines are used to complete complex tasks; achieving these tasks with current search engines requires users to issue multiple queries. For example, booking travel to a location such as London could require the user to submit various queries such as flights to London, hotels in London, points of interest around London, etc. Standard evaluation mechanisms focus on evaluating the quality of a retrieval system in terms of the topical relevance of the results retrieved, completely ignoring the fact that user satisfaction mainly depends on the usefulness of the system in helping the user complete the actual task that led the user to issue the query. The TREC Tasks Track is an attempt at devising mechanisms for evaluating the quality of retrieval systems in terms of (1) how well they can understand the underlying task that led the user to submit a query, and (2) how useful they are for helping users complete their tasks.

Track coordinator(s):

  • Evangelos Kanoulas, University of Amsterdam
  • Emine Yilmaz, University College London
  • Rishabh Mehrotra, University College London
  • Ben Carterette, University of Delaware
  • Nick Craswell and Peter Bailey, Microsoft

Tasks:

  • understanding: Task Understanding
  • completion: Task Completion
  • adhoc: Adhoc Retrieval

Track Web Page: http://www.cs.ucl.ac.uk/tasks-track-2017/