Text REtrieval Conference (TREC) 2017
Common Core
The primary goal of the TREC Common Core track is three-fold: (a) bring the information retrieval community back to a traditional ad-hoc search task; (b) attract a diverse set of participating runs and build a new test collection using more recently created documents; (c) establish a new test-collection construction methodology that avoids the pitfalls of depth-k pooling. A number of side goals are also set, including studying the shortcomings of test collections constructed in the past; experimenting with new ideas for constructing test collections; and expanding test collections through new participant tasks (ad-hoc/interactive), new relevance judgments (binary/multilevel), new pooling methods, new assessment resources (NIST/crowd-sourcing), and new retrieval systems contributing documents (manual/neural/strong baselines).
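For context, depth-k pooling, the conventional methodology whose pitfalls the track aims to avoid, judges only the union of the top k documents from every submitted run. A minimal sketch in Python follows, using hypothetical run data and a toy value of k; real TREC runs would be read from run files.

```python
# Minimal sketch of conventional depth-k pooling (the methodology whose
# pitfalls the track aims to avoid). Run names, document IDs, and k are
# hypothetical toy values.

def depth_k_pool(runs, k=10):
    """Union of the top-k documents from every submitted run.

    runs: dict mapping run name -> ranked list of document IDs.
    Returns the set of documents sent to assessors for judging.
    """
    pool = set()
    for ranking in runs.values():
        pool.update(ranking[:k])
    return pool

# Toy example with two hypothetical runs for a single topic.
runs = {
    "runA": ["d3", "d1", "d7", "d9"],
    "runB": ["d2", "d3", "d5", "d8"],
}
print(sorted(depth_k_pool(runs, k=2)))  # ['d1', 'd2', 'd3']
```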
Track coordinator(s):
- Evangelos Kanoulas, University of Amsterdam
- James Allan, University of Massachusetts
- Donna Harman, National Institute of Standards and Technology (NIST)
Track Web Page: https://trec-core.github.io/2017/
Precision Medicine
For many complex diseases, there is no “one size fits all” solution for patients with a particular diagnosis. The proper treatment for a patient depends upon genetic, environmental, and lifestyle factors. The ability to personalize treatment in a scientifically rigorous manner based on these factors is the hallmark of the emerging “precision medicine” paradigm. Nowhere is the potential impact of precision medicine felt more keenly than in cancer, where treatments that save the lives of some patients can prove ineffective or even deadly for others, based entirely on the particular genetic mutations in the patient’s tumor(s). Significant effort has therefore been devoted to deepening the scientific research surrounding precision medicine. This includes the Precision Medicine Initiative launched by former President Barack Obama in 2015, now known as the All of Us Research Program.
Track coordinator(s):
- Kirk Roberts, The University of Texas Health Science Center
- Dina Demner-Fushman, U.S. National Library of Medicine
- Ellen M. Voorhees, National Institute of Standards and Technology (NIST)
- William R. Hersh, Oregon Health & Science University
- Steven Bedrick, Oregon Health & Science University
- Alexander J. Lazar, The University of Texas MD Anderson Cancer Center
- Shubham Pant, The University of Texas MD Anderson Cancer Center
Track Web Page: https://www.trec-cds.org/
LiveQA
The task addresses the automatic answering of consumer health questions received by the U.S. National Library of Medicine. We provided both training question-answer pairs and test questions with reference answers. All questions were manually annotated with their main entities (foci) and question types. The medical task received eight runs from five participating teams. Different approaches were applied, including classical answer retrieval based on question analysis and similar-question retrieval. In particular, several deep learning approaches were tested, including attentional encoder-decoder networks, long short-term memory networks, and convolutional neural networks. The training datasets came from both the open domain and the medical domain.
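As an illustration of the similar-question retrieval approach mentioned above, the sketch below answers a new question by reusing the answer of the most similar previously answered question. The example questions and the TF-IDF cosine-similarity scoring are illustrative assumptions, not part of the track infrastructure.

```python
# Hedged sketch of similar-question retrieval: answer a new consumer
# health question by reusing the answer of the most similar previously
# answered question. The example questions are invented; scikit-learn's
# TF-IDF vectorizer is one of many reasonable choices.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

answered = {
    "What are the side effects of ibuprofen?": "Common side effects include ...",
    "How is type 2 diabetes treated?": "Treatment usually combines ...",
}

def retrieve_answer(new_question):
    questions = list(answered)
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(questions + [new_question])
    # Similarity of the new question (last row) to every answered question.
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    best = questions[sims.argmax()]
    return best, answered[best]

print(retrieve_answer("What problems can ibuprofen cause?"))
```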
Track coordinator(s):
- Asma Ben Abacha, U.S. National Library of Medicine
- Eugene Agichtein, Emory University
- Yuval Pinter, Georgia Institute of Technology
- Dina Demner-Fushman, U.S. National Library of Medicine
Track Web Page: https://web.archive.org/web/20170729204820/https://sites.google.com/site/trecliveqa2017/
Real-time Summarization
The TREC 2017 Real-Time Summarization (RTS) Track is the second iteration of a community effort to explore techniques, algorithms, and systems that automatically monitor streams of social media posts such as tweets on Twitter to address users’ prospective information needs. These needs are articulated as “interest profiles”, akin to topics in ad hoc retrieval. In real-time summarization, the goal is for a system to deliver interesting and novel content to users in a timely fashion. We refer to these messages generically as “updates”.
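The core behavior described above, deciding in real time whether an incoming post is both relevant to an interest profile and novel with respect to updates already pushed, can be sketched as follows. The keyword-overlap scoring, thresholds, and example posts are illustrative assumptions rather than anything prescribed by the track.

```python
# Minimal sketch of a real-time push decision: an incoming post is
# delivered only if it matches the interest profile strongly enough and
# is sufficiently novel relative to previously pushed updates.
# The overlap scores and thresholds are illustrative assumptions.

def jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

class ProfileMonitor:
    def __init__(self, profile, rel_threshold=0.2, novelty_threshold=0.6):
        self.profile = profile
        self.rel_threshold = rel_threshold
        self.novelty_threshold = novelty_threshold
        self.pushed = []  # updates already delivered to the user

    def consider(self, post):
        if jaccard(post, self.profile) < self.rel_threshold:
            return False  # not relevant enough to the interest profile
        if any(jaccard(post, old) > self.novelty_threshold for old in self.pushed):
            return False  # too similar to something already pushed
        self.pushed.append(post)
        return True

monitor = ProfileMonitor("wildfire evacuations in California")
print(monitor.consider("New wildfire evacuations ordered in California town"))  # pushed
print(monitor.consider("California town orders new wildfire evacuations"))      # redundant
```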
Track coordinator(s):
- Jimmy Lin, University of Waterloo
- Salman Mohammed, University of Waterloo
- Royal Sequiera, University of Waterloo
- Luchen Tan, University of Waterloo
- Nimesh Ghelani, University of Waterloo
- Mustafa Abualsaud, University of Waterloo
- Richard McCreadie, University of Glasgow
- Dmitrijs Milajevs, National Institute of Standards and Technology (NIST)
- Ellen Voorhees, National Institute of Standards and Technology (NIST)
Track Web Page: https://trecrts.github.io/
Complex Answer Retrieval
The report from the SWIRL 2012 workshop on frontiers, challenges, and opportunities for information retrieval [1] noted many important challenges. Among them, challenges such as conversational answer retrieval, sub-document retrieval, and answer aggregation share commonalities: we desire answers to complex needs, and we wish to find them in a single, well-presented source. Advancing the state of the art in this area is the goal of this TREC track.

Consider a user investigating a new and unfamiliar topic. This user would often be best served by a single summary, rather than being required to synthesize his or her own summary from multiple sources. This is especially the case in mobile environments with restricted interaction capabilities. While such settings have led to extensive work on finding the best short answer, the target in this track is the retrieval of comprehensive answers composed of multiple text fragments from multiple sources. Retrieving high-quality longer answers is challenging: it is not sufficient to choose a lower rank cutoff with the same techniques used for short answers. Instead, we need new approaches for finding relevant information in a complex answer space.

Many examples of manually created complex answers exist on the Web; famous examples are articles from how-stuff-works.com, travel guides, and fanzines. These are collections of articles, each of which constitutes a long answer to an information need represented by the title of the article. The fundamental task of collecting references, facts, and opinions into a single coherent summary has traditionally been a manual process. We envision that automated information retrieval systems can relieve users of a large amount of manual work through sub-document retrieval, consolidation, and organization. Ultimately, the goal is to retrieve synthesized information rather than documents.
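One very rough reading of "sub-document retrieval, consolidation and organization" is sketched below: retrieve candidate passages for each facet of a complex topic and lay them out under the corresponding headings. The topic outline, passages, and word-overlap scoring are hypothetical and far simpler than what the track envisions.

```python
# Very simplified sketch of assembling a complex answer: for each facet
# (section heading) of a topic, pick the best-matching passages from a
# corpus and lay them out under that heading. The outline, passages, and
# word-overlap scoring are hypothetical illustrations.

def score(passage, query):
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p)

def build_article(title, headings, passages, per_section=2):
    article = [title]
    for heading in headings:
        query = f"{title} {heading}"
        ranked = sorted(passages, key=lambda p: score(p, query), reverse=True)
        article.append(f"\n## {heading}")
        article.extend(ranked[:per_section])
    return "\n".join(article)

passages = [
    "Sea kayaking requires a spray skirt and a paddle float.",
    "Tides and offshore wind are the main hazards in sea kayaking.",
    "Touring kayaks are longer and track better than recreational boats.",
]
print(build_article("Sea kayaking", ["Equipment", "Hazards"], passages, per_section=1))
```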
Track coordinator(s):
- Laura Dietz, University of New Hampshire
- Manisha Verma, University College London
- Filip Radlinski, Google
- Nick Craswell, Microsoft
Track Web Page: https://trec-car.cs.unh.edu/
Tasks
Research in Information Retrieval has traditionally focused on serving the best results for a single query, ignoring the reasons (or the task) that might have motivated the user to submit that query. Oftentimes search engines are used to complete complex tasks; achieving these tasks with current search engines requires users to issue multiple queries. For example, booking travel to a location such as London could require the user to submit various queries such as flights to London, hotels in London, points of interest around London, etc. Standard evaluation mechanisms focus on the topical relevance of the retrieved results, completely ignoring the fact that user satisfaction mainly depends on how useful the system is in helping the user complete the actual task that led the user to issue the query. The TREC Tasks Track is an attempt to devise mechanisms for evaluating the quality of retrieval systems in terms of (1) how well they can understand the underlying task that led the user to submit a query, and (2) how useful they are for helping users complete their tasks.
Track coordinator(s):
- Evangelos Kanoulas, University of Amsterdam
- Emine Yilmaz, University College London
- Rishabh Mehrotra, University College London
- Ben Carterette, University of Delaware
- Nick Craswell and Peter Bailey, Microsoft
Track Web Page: http://www.cs.ucl.ac.uk/tasks-track-2017/
Dynamic Domain
The goal of the Dynamic Domain track is to promote research on dynamic, exploratory search within complex information domains, where the search process is usually interactive and the user’s information need is complex. The Dynamic Domain (DD) track has been held for the past three years. The track’s name has two parts. “Dynamic” means the search process may involve multiple iterations, and the participating system is expected to adapt its search algorithm based on relevance feedback. “Domain” means the search task focuses on special domains, where the user’s information need consists of multiple aspects, and the participating system is expected to help the user explore the domain through rich interaction. The task has received great attention, and the track was inspired by interested groups in government, including the DARPA MEMEX program.
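The iterative behavior described above, returning a small batch of documents, receiving relevance feedback, and adapting before the next iteration, can be sketched roughly as follows. The toy corpus, the simulated feedback function, and the term-boosting strategy are hypothetical simplifications, not the track's actual feedback interface.

```python
# Rough sketch of the iterative search loop described above: the system
# returns a small batch of documents, receives relevance feedback, and
# boosts query terms drawn from documents judged relevant before the
# next iteration. The corpus, feedback function, and term weighting are
# hypothetical simplifications.
from collections import Counter

def search(corpus, weights, k=2, seen=()):
    def score(doc):
        return sum(weights.get(t, 0) for t in doc.lower().split())
    ranked = [d for d in sorted(corpus, key=score, reverse=True) if d not in seen]
    return ranked[:k]

def feedback(docs, relevant_terms=("embargo", "smuggling")):
    # Stand-in for user/assessor feedback: a doc counts as relevant if it
    # mentions any of the (hypothetical) on-topic terms.
    return {d: int(any(t in d.lower() for t in relevant_terms)) for d in docs}

def dynamic_search(query, corpus, iterations=3):
    weights = Counter(query.lower().split())
    seen = set()
    for _ in range(iterations):
        batch = search(corpus, weights, seen=seen)
        if not batch:
            break
        for doc, rel in feedback(batch).items():
            seen.add(doc)
            if rel:  # adapt: boost terms from documents judged relevant
                weights.update(doc.lower().split())
    return seen

corpus = [
    "arms embargo violations reported at the port",
    "weapons smuggling network uncovered near the border",
    "local festival draws record crowds",
]
print(dynamic_search("arms embargo", corpus))
```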
Track coordinator(s):
- Grace Hui Yang, Georgetown University
- Zhiwen Tang, Georgetown University
- Ian Soboroff, National Institute of Standards and Technology (NIST)
Track Web Page: https://infosense.cs.georgetown.edu/trec_dd/index.html
OpenSearch
The OpenSearch track provides researchers the opportunity to have their retrieval approaches evaluated in a live setting with real users. In 2017 the track focused on the academic search domain, using the Social Science Open Access Repository (SSOAR) search engine.
Track coordinator(s):
- Rolf Jagerman, University of Amsterdam
- Maarten de Rijke, University of Amsterdam
- Krisztian Balog, University of Stavanger
- Philipp Schaer, TH Köln
- Johann Schaible, GESIS - Leibniz Institute for the Social Sciences
- Narges Tavakolpoursaleh, GESIS - Leibniz Institute for the Social Sciences
Track Web Page: https://web.archive.org/web/20170617095056/http://trec-open-search.org/