Proceedings - Interactive 2001
The TREC 2001 Interactive Track Report
William R. Hersh, Paul Over
Abstract
In the TREC 2001 Interactive Track, six research teams carried out observational studies that increased the realism of the searching by allowing the use of data and search systems/tools publicly accessible via the Internet. To the extent possible, searchers were allowed to choose tasks and the systems/tools for accomplishing those tasks. At the same time, the studies for TREC 2001 were designed to maximize the likelihood that groups would find in their observations the germ of a hypothesis they could test for TREC 2002. This suggested that there be restrictions - some across all sites, some only within a given site - to make it more likely that patterns would emerge. The restrictions were formalized in two sorts of guidelines: one set for all sites and another set that applied only within a site.
Bibtex
@inproceedings{DBLP:conf/trec/HershO01,
author = {William R. Hersh and Paul Over},
editor = {Ellen M. Voorhees and Donna K. Harman},
title = {The {TREC} 2001 Interactive Track Report},
booktitle = {Proceedings of The Tenth Text REtrieval Conference, {TREC} 2001, Gaithersburg, Maryland, USA, November 13-16, 2001},
series = {{NIST} Special Publication},
volume = {500-250},
publisher = {National Institute of Standards and Technology {(NIST)}},
year = {2001},
url = {http://trec.nist.gov/pubs/trec10/papers/t10ireport.pdf},
timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
biburl = {https://dblp.org/rec/conf/trec/HershO01.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
Observations of Searchers: OHSU TREC 2001 Interactive Track
William R. Hersh, Lynetta Sacherek, Daniel Olson
- Participant: OHSU
- Paper: http://trec.nist.gov/pubs/trec10/papers/Hersh.pdf
- Runs: ohsuI
Abstract
The goal of the TREC 2001 Interactive Track was to carry out observational experiments of Web-based searching in order to develop hypotheses for experiments in subsequent years. Each participating group was asked to undertake exploratory experiments based on a general protocol. For the OHSU Interactive Track experiments this year, we chose to perform a purely observational study, watching searchers carry out tasks on the Web. We found that users were able to complete almost all of the tasks within the time limits of the protocol. Future experimental studies aiming to discern differences among systems may need to provide more challenging tasks to detect such differences.
Bibtex
@inproceedings{DBLP:conf/trec/HershSO01,
author = {William R. Hersh and Lynetta Sacherek and Daniel Olson},
editor = {Ellen M. Voorhees and Donna K. Harman},
title = {Observations of Searchers: {OHSU} {TREC} 2001 Interactive Track},
booktitle = {Proceedings of The Tenth Text REtrieval Conference, {TREC} 2001, Gaithersburg, Maryland, USA, November 13-16, 2001},
series = {{NIST} Special Publication},
volume = {500-250},
publisher = {National Institute of Standards and Technology {(NIST)}},
year = {2001},
url = {http://trec.nist.gov/pubs/trec10/papers/Hersh.pdf},
timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
biburl = {https://dblp.org/rec/conf/trec/HershSO01.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
TREC10 Web and Interactive Tracks at CSIRO
Nick Craswell, David Hawking, Ross Wilkinson, Mingfang Wu
- Participant: CSIRO
- Paper: http://trec.nist.gov/pubs/trec10/papers/csiro-trec-2001.pdf
- Runs: csiroI
Abstract
For the 2001 round of TREC, the TED group of CSIRO participated and completed runs in two tracks: Web and Interactive. Our primary goals in the Web track were twofold: A) to confirm our earlier finding [1] that anchor text is useful in a homepage finding task, and B) to provide an interactive search-engine-style interface to searching the WT10g data. In addition, three title-only runs were submitted, comparing two different implementations of stemming to unstemmed processing of the raw query. None of these runs used pseudo relevance feedback. In the Interactive track, our investigation focused on whether there is any correlation between delivery (searching/presentation) mechanisms and searching tasks. Our experiment involved three delivery mechanisms and two types of searching tasks. The three delivery mechanisms were: a ranked list interface, a clustering interface, and an integrated interface with ranked list, clustering structure, and Expert Links. The two searching tasks were searching for an individual document and searching for a set of documents. Our results show that subjects usually used only one delivery mechanism regardless of the searching task. No delivery mechanism was found to be superior for any particular task; the only difference was the time taken to complete a search, which favored the ranked list interface.
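The anchor-text finding in A) lends itself to a small illustration. The sketch below is not the CSIRO system; it only shows, under assumed weights and with invented example pages, how terms from the anchor text of incoming links can be folded into a page's term representation so that a homepage query matches the page that other pages point to.

```python
# Minimal sketch (not the CSIRO system): scoring pages for homepage finding
# by combining terms from the page body with anchor text of incoming links.
# The pages, links, and the anchor weight of 3.0 are invented for illustration.
from collections import Counter, defaultdict


def tokenize(text):
    return text.lower().split()


def build_index(pages, links):
    """pages: {doc_id: body_text}; links: list of (src, dst, anchor_text)."""
    anchor_terms = defaultdict(list)
    for _src, dst, anchor in links:
        anchor_terms[dst].extend(tokenize(anchor))
    index = {}
    for doc_id, body in pages.items():
        counts = Counter(tokenize(body))
        # Weight anchor-text terms more heavily than body terms (assumed weight).
        for term in anchor_terms[doc_id]:
            counts[term] += 3.0
        index[doc_id] = counts
    return index


def score(index, query):
    q = tokenize(query)
    return sorted(
        ((sum(counts[t] for t in q), doc_id) for doc_id, counts in index.items()),
        reverse=True,
    )


if __name__ == "__main__":
    pages = {
        "d1": "welcome to the acme research laboratory",
        "d2": "publications list of the acme research laboratory",
    }
    links = [("d2", "d1", "ACME Research Lab homepage")]
    idx = build_index(pages, links)
    print(score(idx, "acme research lab"))  # anchor text boosts d1, the homepage
```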
Bibtex
@inproceedings{DBLP:conf/trec/CraswellHWW01,
author = {Nick Craswell and David Hawking and Ross Wilkinson and Mingfang Wu},
editor = {Ellen M. Voorhees and Donna K. Harman},
title = {{TREC10} Web and Interactive Tracks at {CSIRO}},
booktitle = {Proceedings of The Tenth Text REtrieval Conference, {TREC} 2001, Gaithersburg, Maryland, USA, November 13-16, 2001},
series = {{NIST} Special Publication},
volume = {500-250},
publisher = {National Institute of Standards and Technology {(NIST)}},
year = {2001},
url = {http://trec.nist.gov/pubs/trec10/papers/csiro-trec-2001.pdf},
timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
biburl = {https://dblp.org/rec/conf/trec/CraswellHWW01.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
Selecting Versus Describing: A Preliminary Analysis of the Efficacy of Categories in Exploring the Web
Elaine G. Toms, Richard W. Kopak, Joan C. Bartlett, Luanne Freund
- Participant: toronto
- Paper: http://trec.nist.gov/pubs/trec10/papers/toms-trec10.pdf
- Runs: torontoI
Abstract
This paper reports the findings of an exploratory study carried out as part of the Interactive Track at the 10th annual Text REtrieval Conference (TREC). Forty-eight non-expert participants each completed four Web search tasks drawn from four specified topic areas: shopping, medicine, travel, and research. Participants were given the choice of initiating each search with a query or by selecting a category from a pre-defined list. Participants were also asked to phrase a selected number of their search queries in the form of a complete statement or question. Results showed that the task domain had little effect on the search outcome. Exceptions to this were the problematic nature of the shopping tasks and the preference for query over category when the search task was general, i.e., when the semantics of the task did not map directly onto one of the available categories. Participants also showed a reluctance or inability to phrase search queries in the form of a complete statement or question. When keywords were used, queries were short, averaging around two terms per query statement.
Bibtex
@inproceedings{DBLP:conf/trec/TomsKBF01,
author = {Elaine G. Toms and Richard W. Kopak and Joan C. Bartlett and Luanne Freund},
editor = {Ellen M. Voorhees and Donna K. Harman},
title = {Selecting Versus Describing: {A} Preliminary Analysis of the Efficacy of Categories in Exploring the Web},
booktitle = {Proceedings of The Tenth Text REtrieval Conference, {TREC} 2001, Gaithersburg, Maryland, USA, November 13-16, 2001},
series = {{NIST} Special Publication},
volume = {500-250},
publisher = {National Institute of Standards and Technology {(NIST)}},
year = {2001},
url = {http://trec.nist.gov/pubs/trec10/papers/toms-trec10.pdf},
timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
biburl = {https://dblp.org/rec/conf/trec/TomsKBF01.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
Comparing Explicit and Implicit Feedback Techniques for Web Retrieval: TREC-10 Interactive Track Report
Ryen White, Joemon M. Jose, Ian Ruthven
- Participant: glasgow
- Paper: http://trec.nist.gov/pubs/trec10/papers/glasgow.pdf
- Runs: glasgowI
Abstract
In this paper we examine the extent to which implicit feedback (where the system attempts to estimate what the user may be interested in) can act as a substitute for explicit feedback (where searchers explicitly mark documents relevant). In this way, we attempt to side-step the problem of getting users to explicitly mark documents relevant by making predictions about relevance through analysing the user’s interaction with the system. Specifically, we hypothesised that implicit and explicit feedback were interchangeable as sources of relevance information for relevance feedback. By developing a system that utilised each type of feedback we were able to compare the two approaches in terms of search effectiveness.
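As a rough illustration of the idea, and not the system described in the paper, the sketch below treats dwell time above an arbitrarily chosen 30-second threshold as an implicit relevance judgement and uses the implicitly relevant documents for simple term-frequency query expansion; the session data, threshold, and term counts are all invented assumptions.

```python
# Illustrative sketch only (not the Glasgow system): interaction evidence
# (here just dwell time) stands in for explicit relevance judgements and
# feeds a simple term-frequency query expansion step.
from collections import Counter

DWELL_THRESHOLD_SECONDS = 30  # assumed cut-off for "implicitly relevant"


def implicit_relevant(interactions):
    """interactions: list of (doc_text, dwell_seconds) pairs."""
    return [doc for doc, dwell in interactions if dwell >= DWELL_THRESHOLD_SECONDS]


def expand_query(query, relevant_docs, n_terms=3):
    """Append the n_terms most frequent new terms from the relevant documents."""
    seen = set(query.lower().split())
    counts = Counter(
        t for doc in relevant_docs for t in doc.lower().split() if t not in seen
    )
    return query + " " + " ".join(t for t, _ in counts.most_common(n_terms))


if __name__ == "__main__":
    session = [
        ("hubble telescope achievements images of distant galaxies", 75),
        ("buy cheap telescopes online", 4),
    ]
    print(expand_query("hubble telescope", implicit_relevant(session)))
```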
Bibtex
@inproceedings{DBLP:conf/trec/WhiteJR01,
author = {Ryen White and Joemon M. Jose and Ian Ruthven},
editor = {Ellen M. Voorhees and Donna K. Harman},
title = {Comparing Explicit and Implicit Feedback Techniques for Web Retrieval: {TREC-10} Interactive Track Report},
booktitle = {Proceedings of The Tenth Text REtrieval Conference, {TREC} 2001, Gaithersburg, Maryland, USA, November 13-16, 2001},
series = {{NIST} Special Publication},
volume = {500-250},
publisher = {National Institute of Standards and Technology {(NIST)}},
year = {2001},
url = {http://trec.nist.gov/pubs/trec10/papers/glasgow.pdf},
timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
biburl = {https://dblp.org/rec/conf/trec/WhiteJR01.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
Rutgers' TREC 2001 Interactive Track Experience
Nicholas J. Belkin, Colleen Cool, J. Jeng, Amymarie Keller, Diane Kelly, Ja-Young Kim, Hyuk-Jin Lee, Muh-Chyun (Morris) Tang, Xiaojun Yuan
- Participant: rutgers-belkin
- Paper: http://trec.nist.gov/pubs/trec10/papers/rutgers-interactive-paper.pdf
- Runs: rutgersI
Abstract
Our focus this year was to investigate methods for increasing query length in interactive information searching in the Web context, and to see whether these methods led to changes in task performance and/or interaction. Thirty-four subjects each searched on four of the Interactive Track topics, in one of two conditions: a “box” query input mode and a “line” query input mode. One half of the subjects were instructed to enter their queries as complete sentences or questions; the other half as lists of words or phrases. The results were that queries entered as questions or statements were significantly longer than those entered as words or phrases (roughly twice as long); that there was no difference in query length between the box and line modes (except for medical topics, where keyword mode led to significantly more unique terms per search); and that longer queries led to better performance. Other results of note are that satisfaction with the search was negatively correlated with length of time searching and other measures of interaction effort, and that the “buying” topics were significantly more difficult than the other three types.
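The query-length comparison can be reproduced in outline with a few lines of analysis code. The sketch below is not the Rutgers analysis; it only shows how per-condition mean query length and mean unique terms per query, the measures discussed above, might be computed, using invented log records and condition names.

```python
# Small sketch (not the Rutgers analysis code): per-condition query statistics
# of the kind reported above, computed from (condition, query) log records.
from collections import defaultdict
from statistics import mean


def query_stats(log):
    """log: list of (condition, query_string) pairs; conditions are arbitrary labels."""
    by_condition = defaultdict(list)
    for condition, query in log:
        terms = query.lower().split()
        by_condition[condition].append((len(terms), len(set(terms))))
    return {
        cond: {
            "mean_length": mean(length for length, _ in rows),
            "mean_unique_terms": mean(unique for _, unique in rows),
        }
        for cond, rows in by_condition.items()
    }


if __name__ == "__main__":
    log = [
        ("question", "what are the side effects of the measles vaccine"),
        ("keywords", "measles vaccine side effects"),
    ]
    print(query_stats(log))
```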
Bibtex
@inproceedings{DBLP:conf/trec/BelkinCJKKKLTY01,
author = {Nicholas J. Belkin and Colleen Cool and J. Jeng and Amymarie Keller and Diane Kelly and Ja{-}Young Kim and Hyuk{-}Jin Lee and Muh{-}Chyun (Morris) Tang and Xiaojun Yuan},
editor = {Ellen M. Voorhees and Donna K. Harman},
title = {Rutgers' {TREC} 2001 Interactive Track Experience},
booktitle = {Proceedings of The Tenth Text REtrieval Conference, {TREC} 2001, Gaithersburg, Maryland, USA, November 13-16, 2001},
series = {{NIST} Special Publication},
volume = {500-250},
publisher = {National Institute of Standards and Technology {(NIST)}},
year = {2001},
url = {http://trec.nist.gov/pubs/trec10/papers/rutgers-interactive-paper.pdf},
timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
biburl = {https://dblp.org/rec/conf/trec/BelkinCJKKKLTY01.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}