Overview - Tasks 2015

Research in Information Retrieval has traditionally focused on serving the best results for a single query, ignoring the reasons (or the task) that might have motivated the user to submit that query. Search engines are often used to complete complex tasks (information needs), and completing these tasks with current search engines requires users to issue multiple queries. For example, booking travel to a location such as London could require the user to submit various queries such as flights to London, hotels in London, points of interest around London, etc. Standard evaluation mechanisms focus on evaluating the quality of a retrieval system in terms of the relevance of the results retrieved, ignoring the fact that user satisfaction mainly depends on how useful the system is in helping the user complete the actual task that led the user to issue the query. The TREC 2015 Tasks Track is an attempt at devising mechanisms for evaluating the quality of retrieval systems in terms of (1) how well they can understand the underlying task that led the user to submit a query, and (2) how useful they are for helping users complete their tasks. In this overview, we first summarise the three categories of evaluation mechanisms used in the track and briefly describe the corpus, topics, and tasks that comprise the test collection. We then give an overview of the runs submitted to the Tasks Track and present evaluation results and analysis.
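To make the query-versus-task distinction concrete, the following is a minimal sketch in Python that groups the London queries from the example above under a single travel-booking task. The data structure is hypothetical and chosen purely for illustration; it is not the track's official topic or judgment format.

# Hypothetical illustration: one user task spanning several queries.
# The Task structure below is an assumption for clarity, not the
# official Tasks Track topic format.

from dataclasses import dataclass, field


@dataclass
class Task:
    """A complex information need that a user satisfies via multiple queries."""
    description: str
    queries: list[str] = field(default_factory=list)


# The travel-booking example from the overview: a single underlying task
# motivates several distinct queries issued to the search engine.
book_travel = Task(
    description="Book travel to London",
    queries=[
        "flights to London",
        "hotels in London",
        "points of interest around London",
    ],
)

# A query-level evaluation scores each query's results in isolation;
# a task-level evaluation (the focus of the Tasks Track) asks whether the
# system helped the user complete the book_travel task as a whole.
for q in book_travel.queries:
    print(f"{book_travel.description!r} -> query: {q!r}")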

Track coordinator(s):

  • Emine Yilmaz, University College London
  • Manisha Verma, University College London
  • Rishabh Mehrotra, University College London
  • Evangelos Kanoulas, University of Amsterdam
  • Ben Carterette, University of Delaware
  • Nick Craswell, Microsoft

Tasks:

  • understanding: Task Understanding
  • completion: Task Completion
  • web: Adhoc Retrieval

Track Web Page: http://www.cs.ucl.ac.uk/tasks-track-2015/