Proceedings - Interactive 2000

The TREC-9 Interactive Track Report

William R. Hersh, Paul Over

Abstract

The TREC Interactive Track has the goal of investigating interactive information retrieval by examining the process as well as the results. In TREC-9, six research groups ran a total of 12 interactive information retrieval (IR) system variants on a shared problem: a fact-finding task, eight questions, and newspaper/newswire documents from the TREC collections. This report summarizes the shared experimental framework, which for TREC-9 was designed to support analysis and comparison of system performance only within sites. The report refers the reader to separate discussions of the experiments performed by each participating group: their hypotheses, experimental systems, and results. The papers from each of the participating groups, along with the raw and evaluated results, are available via the TREC home page (trec.nist.gov).

Bibtex
@inproceedings{DBLP:conf/trec/HershO00,
    author = {William R. Hersh and Paul Over},
    editor = {Ellen M. Voorhees and Donna K. Harman},
    title = {The {TREC-9} Interactive Track Report},
    booktitle = {Proceedings of The Ninth Text REtrieval Conference, {TREC} 2000, Gaithersburg, Maryland, USA, November 13-16, 2000},
    series = {{NIST} Special Publication},
    volume = {500-249},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2000},
    url = {http://trec.nist.gov/pubs/trec9/papers/t9irep.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/HershO00.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Sheffield Interactive Experiment at TREC-9

Micheline Hancock-Beaulieu, Helene Fowkes, Hideo Joho

Abstract

The paper reports on the experiment conducted by the University of Sheffield in the Interactive Track of TREC-9, based on the Okapi probabilistic ranking system. A failure analysis of results was undertaken to correlate search outcomes with query characteristics. A detailed comparison of Sheffield results with the aggregate for the track reveals that the time element, topic type, and searcher characteristics and behaviour are interdependent success factors. An analysis of the ranking of documents retrieved by the Okapi system and deemed relevant by the assessors also revealed that more than 50% appeared in the top 10 and 80% in the top 30. However, the searchers did not necessarily view these documents, and over half of the items deemed relevant by the assessors and examined by the searchers were actually rejected.
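
The Okapi system's probabilistic ranking is based on the BM25 weighting scheme. As a point of reference for the ranking analysis above, here is a minimal sketch of BM25 scoring in Python; the parameter values (k1 = 1.2, b = 0.75) and the non-negative IDF variant are common defaults, not necessarily those used in the Sheffield runs.

import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freqs, num_docs, avg_doc_len,
               k1=1.2, b=0.75):
    """Score one document against a query with the Okapi BM25 formula."""
    tf = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for term in query_terms:
        if term not in tf:
            continue
        # Inverse document frequency; the +1.0 keeps the weight non-negative
        # (a common variant, assumed here rather than taken from the paper)
        n = doc_freqs.get(term, 0)
        idf = math.log((num_docs - n + 0.5) / (n + 0.5) + 1.0)
        # Term-frequency component with document-length normalization
        f = tf[term]
        score += idf * (f * (k1 + 1)) / (f + k1 * (1 - b + b * doc_len / avg_doc_len))
    return score

Documents are ranked by descending score; the Sheffield analysis concerns where assessor-judged relevant documents fall in such a ranking.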

Bibtex
@inproceedings{DBLP:conf/trec/Hancock-BeaulieuFJ00,
    author = {Micheline Hancock{-}Beaulieu and Helene Fowkes and Hideo Joho},
    editor = {Ellen M. Voorhees and Donna K. Harman},
    title = {Sheffield Interactive Experiment at {TREC-9}},
    booktitle = {Proceedings of The Ninth Text REtrieval Conference, {TREC} 2000, Gaithersburg, Maryland, USA, November 13-16, 2000},
    series = {{NIST} Special Publication},
    volume = {500-249},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2000},
    url = {http://trec.nist.gov/pubs/trec9/papers/sheffield.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/Hancock-BeaulieuFJ00.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Melbourne TREC-9 Experiments

Daryl J. D'Souza, Michael Fuller, James A. Thom, Phil Vines, Justin Zobel

Abstract

We report results for experiments conducted in Melbourne at CSIRO, RMIT, and The University of Melbourne for TREC-9. We present results for the interactive track, cross-lingual track, main web track, and the query track.

Bibtex
@inproceedings{DBLP:conf/trec/DSouzaFTVZ00,
    author = {Daryl J. D'Souza and Michael Fuller and James A. Thom and Phil Vines and Justin Zobel},
    editor = {Ellen M. Voorhees and Donna K. Harman},
    title = {Melbourne {TREC-9} Experiments},
    booktitle = {Proceedings of The Ninth Text REtrieval Conference, {TREC} 2000, Gaithersburg, Maryland, USA, November 13-16, 2000},
    series = {{NIST} Special Publication},
    volume = {500-249},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2000},
    url = {http://trec.nist.gov/pubs/trec9/papers/mds.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/DSouzaFTVZ00.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Question Answering, Relevance Feedback and Summarisation: TREC-9 Interactive Track Report

Neill Alexander, Craig Brown, Joemon M. Jose, Ian Ruthven, Anastasios Tombros

Abstract

In this paper we report on the effectiveness of query-biased summaries for a question-answering task. Our summarisation system presents searchers with short summaries of documents, composed of a series of highly matching sentences extracted from the documents. These summaries also serve as evidence for a query expansion algorithm, allowing us to test summaries as a source of evidence for both interactive and automatic query expansion.
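
Query-biased summarisation of the kind described can be approximated by scoring each sentence against the query and extracting the top-ranked sentences. The sketch below is an illustration under that assumption, not the authors' exact algorithm; scoring sentences by distinct query-term overlap is one simple choice among many.

import re

def query_biased_summary(document, query, num_sentences=3):
    """Extract the sentences that best match the query, in document order."""
    sentences = re.split(r'(?<=[.!?])\s+', document)
    query_terms = set(query.lower().split())
    # Score each sentence by how many distinct query terms it contains
    scored = []
    for pos, sent in enumerate(sentences):
        sent_terms = set(re.findall(r'\w+', sent.lower()))
        overlap = len(query_terms & sent_terms)
        if overlap:
            scored.append((overlap, pos, sent))
    # Keep the top-scoring sentences, then restore original document order
    top = sorted(scored, reverse=True)[:num_sentences]
    return ' '.join(sent for _, _, sent in sorted(top, key=lambda t: t[1]))

For the expansion use described in the abstract, candidate terms would then be drawn from the extracted sentences rather than from the full documents.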

Bibtex
@inproceedings{DBLP:conf/trec/AlexanderBJRT00,
    author = {Neill Alexander and Craig Brown and Joemon M. Jose and Ian Ruthven and Anastasios Tombros},
    editor = {Ellen M. Voorhees and Donna K. Harman},
    title = {Question Answering, Relevance Feedback and Summarisation: {TREC-9} Interactive Track Report},
    booktitle = {Proceedings of The Ninth Text REtrieval Conference, {TREC} 2000, Gaithersburg, Maryland, USA, November 13-16, 2000},
    series = {{NIST} Special Publication},
    volume = {500-249},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2000},
    url = {http://trec.nist.gov/pubs/trec9/papers/glasgow\_proceedings.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/AlexanderBJRT00.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Support for Question-Answering in Interactive Information Retrieval: Rutgers' TREC-9 Interactive Track Experience

Nicholas J. Belkin, Amymarie Keller, Diane Kelly, Jose Perez Carballo, C. Sikora, Ying Sun

Abstract

We compared two different interfaces to the InQuery IR system with respect to their support for the TREC-9 Interactive Track Question-Answering task. One interface presented search results as a ranked list of document titles (displayed ten at a time), with the text of one document (the first, or any selected one) displayed in a scrollable window. The other presented search results as a ranked series of scrollable windows containing the texts of the retrieved documents, displayed six at a time, with each document display beginning at the system-computed “best passage”. Our hypotheses were that the multiple-text, best-passage display would have an overall advantage for question answering; that the single-text, multiple-title display would have an advantage for list-oriented question types; and that the multiple-text, best-passage display would have an advantage for comparison-oriented question types. The two interfaces were compared on effectiveness, usability, and preference measures for sixteen subjects. Results were equivocal.
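
InQuery's actual best-passage computation is not specified in the abstract. A common approximation, sketched below, slides a fixed-size window over the document's terms and returns the window densest in query-term occurrences; the window size and the hit-count scoring function are assumptions for illustration only.

def best_passage(doc_terms, query_terms, window=100):
    """Return (start, end) of the fixed-size window densest in query terms."""
    qset = set(query_terms)
    hits = [1 if t in qset else 0 for t in doc_terms]
    # Running count of query-term hits in the current window
    count = sum(hits[:window])
    best_count, best_start = count, 0
    for start in range(1, max(1, len(doc_terms) - window + 1)):
        count += hits[start + window - 1] - hits[start - 1]
        if count > best_count:
            best_count, best_start = count, start
    return best_start, min(best_start + window, len(doc_terms))

A display like the one described would open each retrieved document scrolled to the returned offset rather than to its beginning.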

Bibtex
@inproceedings{DBLP:conf/trec/BelkinKKCSS00,
    author = {Nicholas J. Belkin and Amymarie Keller and Diane Kelly and Jose Perez Carballo and C. Sikora and Ying Sun},
    editor = {Ellen M. Voorhees and Donna K. Harman},
    title = {Support for Question-Answering in Interactive Information Retrieval: Rutgers' {TREC-9} Interactive Track Experience},
    booktitle = {Proceedings of The Ninth Text REtrieval Conference, {TREC} 2000, Gaithersburg, Maryland, USA, November 13-16, 2000},
    series = {{NIST} Special Publication},
    volume = {500-249},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2000},
    url = {http://trec.nist.gov/pubs/trec9/papers/rutgers-int.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/BelkinKKCSS00.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}