Proceedings - Query 2000

The TREC-9 Query Track

Chris Buckley

Abstract

The Query Track in TREC-9 is unlike the other tracks in TREC. The other tracks attempt to compare systems to determine the best approaches to the particular track problem, and this comparison is normally done over a given set of topics with a single query per topic. The Query Track, in contrast, compares multiple queries on a single topic to determine which queries perform best with which systems. There is no emphasis on system-to-system comparisons: none of the participating systems was even the most advanced system from its participating group. Instead, the goal is to understand how the statement (query) of the user's information need (topic) affects retrieval. [...]
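
In practice, the track's object of study is a grid of effectiveness scores indexed by topic, query variant, and system, from which one asks which variants work best with which systems. The Python sketch below shows one plausible way to tabulate such scores and pick the best variant per system; the system names, variant labels, and score values are illustrative assumptions, not the track's actual data or evaluation pipeline.

from collections import defaultdict

# scores[(system, variant)] = per-topic average-precision values.
# All systems, variants, and numbers here are made up for illustration.
scores = {
    ("sysA", "short"):   [0.21, 0.35, 0.18],
    ("sysA", "verbose"): [0.30, 0.41, 0.22],
    ("sysB", "short"):   [0.28, 0.33, 0.25],
    ("sysB", "verbose"): [0.19, 0.27, 0.20],
}

def mean(xs):
    return sum(xs) / len(xs)

# For each system, find the query variant with the highest mean score.
best = defaultdict(lambda: ("", 0.0))
for (system, variant), vals in scores.items():
    m = mean(vals)
    if m > best[system][1]:
        best[system] = (variant, m)

for system, (variant, m) in sorted(best.items()):
    print(f"{system}: best variant = {variant} (mean AP = {m:.2f})")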

Bibtex
@inproceedings{DBLP:conf/trec/Buckley00,
    author = {Chris Buckley},
    editor = {Ellen M. Voorhees and Donna K. Harman},
    title = {The {TREC-9} Query Track},
    booktitle = {Proceedings of The Ninth Text REtrieval Conference, {TREC} 2000, Gaithersburg, Maryland, USA, November 13-16, 2000},
    series = {{NIST} Special Publication},
    volume = {500-249},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2000},
    url = {http://trec.nist.gov/pubs/trec9/papers/query_track.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/Buckley00.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Hummingbird's Fulcrum SearchServer at TREC-9

Stephen Tomlinson, Tom Blackwell

Abstract

Hummingbird submitted ranked result sets for the Main Web Task (10GB of web data) and the Large Web Task (100GB) of the TREC-9 Web Track, and for Stage 2 of the TREC-9 Query Track (43 variations of 50 queries). SearchServer's Intuitive Searching produced the highest Precision@5 score (averaged over 50 web queries) of all Title-only runs submitted to the Main Web Task. SearchServer's approximate text searching and linguistic expansion each increased average precision for web queries by 5%. Enabling SearchServer's document length normalization increased average precision by 10-30% for web queries and by 100% for long queries. Squaring the importance of the inverse document frequency (relevance method 'V2:4') increased average precision in the Query Track by 5%. Blind query expansion decreased average precision for web queries by almost 15% when only highly relevant documents were counted; the same method was neutral when all relevant documents were counted equally.
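
The abstract's effectiveness numbers rest on two simple quantities: Precision@5 (the fraction of relevant documents among the top five retrieved) and a term weight in which the inverse-document-frequency component is squared. The Python sketch below illustrates both; the function names and the plain tf-idf baseline are assumptions for illustration, not SearchServer's actual 'V2:4' relevance formula.

import math

def precision_at_k(ranked_doc_ids, relevant_doc_ids, k=5):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = ranked_doc_ids[:k]
    return sum(1 for d in top_k if d in relevant_doc_ids) / k

def idf(term, doc_freq, num_docs):
    """Standard inverse document frequency."""
    return math.log(num_docs / doc_freq[term])

def term_weight(tf, term, doc_freq, num_docs, squared_idf=False):
    """tf-idf weight; squaring the idf further boosts rare, discriminative
    terms, mirroring the idea behind the squared-idf variant."""
    w = idf(term, doc_freq, num_docs)
    if squared_idf:
        w = w * w
    return tf * w

# Illustrative data: one query's ranking and its relevance judgments.
ranked = ["d7", "d2", "d9", "d4", "d1", "d3"]
relevant = {"d2", "d4", "d5"}
print(precision_at_k(ranked, relevant, k=5))   # 0.4

doc_freq = {"heron": 12, "the": 980}
print(term_weight(3, "heron", doc_freq, 1000))                    # plain tf-idf
print(term_weight(3, "heron", doc_freq, 1000, squared_idf=True))  # squared idf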

Bibtex
@inproceedings{DBLP:conf/trec/TomlinsonB00,
    author = {Stephen Tomlinson and Tom Blackwell},
    editor = {Ellen M. Voorhees and Donna K. Harman},
    title = {Hummingbird's Fulcrum SearchServer at {TREC-9}},
    booktitle = {Proceedings of The Ninth Text REtrieval Conference, {TREC} 2000, Gaithersburg, Maryland, USA, November 13-16, 2000},
    series = {{NIST} Special Publication},
    volume = {500-249},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2000},
    url = {http://trec.nist.gov/pubs/trec9/papers/humtrec9.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/TomlinsonB00.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Melbourne TREC-9 Experiments

Daryl J. D'Souza, Michael Fuller, James A. Thom, Phil Vines, Justin Zobel

Abstract

We report results for experiments conducted in Melbourne (at CSIRO, RMIT, and The University of Melbourne) for TREC-9. We present results for the interactive track, cross-lingual track, main web track, and the query track.

Bibtex
@inproceedings{DBLP:conf/trec/DSouzaFTVZ00,
    author = {Daryl J. D'Souza and Michael Fuller and James A. Thom and Phil Vines and Justin Zobel},
    editor = {Ellen M. Voorhees and Donna K. Harman},
    title = {Melbourne {TREC-9} Experiments},
    booktitle = {Proceedings of The Ninth Text REtrieval Conference, {TREC} 2000, Gaithersburg, Maryland, USA, November 13-16, 2000},
    series = {{NIST} Special Publication},
    volume = {500-249},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2000},
    url = {http://trec.nist.gov/pubs/trec9/papers/mds.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/DSouzaFTVZ00.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

SabIR Research at TREC-9

Chris Buckley, Janet A. Walz

Abstract

Sabir Research participated in TREC-9 in a somewhat lower-key fashion than usual. We participated only in the Main Web Task and the Query Track. Most of our interesting work and analysis was done in the Query Track and is reported in the TREC-9 Query Track Overview. Here we report very briefly on the Main Web Task; briefly, because there really isn't much interesting to say this year! [...]

Bibtex
@inproceedings{DBLP:conf/trec/BuckleyW00,
    author = {Chris Buckley and Janet A. Walz},
    editor = {Ellen M. Voorhees and Donna K. Harman},
    title = {SabIR Research at {TREC-9}},
    booktitle = {Proceedings of The Ninth Text REtrieval Conference, {TREC} 2000, Gaithersburg, Maryland, USA, November 13-16, 2000},
    series = {{NIST} Special Publication},
    volume = {500-249},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2000},
    url = {http://trec.nist.gov/pubs/trec9/papers/sabir.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/BuckleyW00.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

INQUERY and TREC-9

James Allan, Margaret E. Connell, W. Bruce Croft, Fangfang Feng, David Fisher, Xiaoyan Li

Abstract

This year the Center for Intelligent Information Retrieval (CIIR) at the University of Massachusetts participated in three of the tracks: the cross-language, question answering, and query tracks. We used approaches that were similar to those used in past years. In the next section, we describe some of the basic processing that was applied across most of the tracks. We then describe the details for each of the tracks and in some cases present some modest analysis of the effectiveness of our results.

Bibtex
@inproceedings{DBLP:conf/trec/AllanCCFFL00,
    author = {James Allan and Margaret E. Connell and W. Bruce Croft and Fangfang Feng and David Fisher and Xiaoyan Li},
    editor = {Ellen M. Voorhees and Donna K. Harman},
    title = {{INQUERY} and {TREC-9}},
    booktitle = {Proceedings of The Ninth Text REtrieval Conference, {TREC} 2000, Gaithersburg, Maryland, USA, November 13-16, 2000},
    series = {{NIST} Special Publication},
    volume = {500-249},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2000},
    url = {http://trec.nist.gov/pubs/trec9/papers/umass-trec9.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/AllanCCFFL00.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}