Runs - Million Query 2007¶
exegyexact¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: exegyexact
- Participant: exegy.indeck
- Track: Million Query
- Year: 2007
- Submission: 6/18/2007
- Type: automatic
- Task: official
- Run description: The results were generated automatically using the Exegy TextMiner engine. The dataset is not indexed; the engine searches the data as it streams through. The engine looks for the queries exactly as they appear in the query file. Documents are ranked according to the number of occurrences of a particular query in a document.
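As a rough illustration of the streaming exact-match ranking described for this run (the data structures and I/O here are illustrative stand-ins, not Exegy's actual interface), a minimal sketch could look like:

```python
from collections import defaultdict

def rank_by_exact_matches(doc_stream, queries):
    """Count exact occurrences of each query string in each document as it
    streams by, without building an index, and rank documents per query by
    occurrence count (highest first)."""
    scores = defaultdict(list)               # query -> [(count, doc_id), ...]
    for doc_id, text in doc_stream:          # each document seen once, in stream order
        for q in queries:
            n = text.count(q)                # exact string match, as in the query file
            if n > 0:
                scores[q].append((n, doc_id))
    return {q: sorted(hits, reverse=True) for q, hits in scores.items()}
```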
ffind07c¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: ffind07c
- Participant: ualaska.newby
- Track: Million Query
- Year: 2007
- Submission: 6/17/2007
- Type: automatic
- Task: official
- Run description: I used the TREC Terabyte track qrels to choose a subset of GOV2 to search. This is a distributed/grid IR simulation.
ffind07d¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: ffind07d
- Participant: ualaska.newby
- Track: Million Query
- Year: 2007
- Submission: 6/19/2007
- Type: automatic
- Task: official
- Run description: This is a large-scale distributed/grid IR simulation. I used the qrels from the previous TREC Terabyte tracks to pick a subset of GOV2 to search.
hedge0¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: hedge0
- Participant: northeasteru.aslam
- Track: Million Query
- Year: 2007
- Submission: 6/19/2007
- Type: automatic
- Task: official
- Run description: We used several standard Lemur built-in systems (tfidf_bm25, tfidf_log, kl_abs, kl_dir, inquery, cos, okapi) and combined their output (metasearch) using the Hedge algorithm.
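A minimal sketch of Hedge-style metasearch over ranked lists, assuming rank-based expert scores and multiplicative weight updates from relevance feedback; the scoring and update details are illustrative, not necessarily the exact formulation used for this run:

```python
import math
from collections import defaultdict

def hedge_combine(ranked_lists, judged=None, beta=0.9):
    """Combine the ranked lists of several retrieval systems ("experts").

    ranked_lists: {system: [doc_id, ...]} ranked best-first
    judged:       optional {doc_id: 0/1} relevance feedback used to update
                  system weights multiplicatively (Hedge-style); with no
                  feedback this reduces to an equally weighted combination.
    """
    weights = {s: 1.0 for s in ranked_lists}
    if judged:
        for system, docs in ranked_lists.items():
            # loss = fraction of this system's top documents judged non-relevant
            top = docs[: len(judged)]
            loss = sum(1 - judged.get(d, 0) for d in top) / max(len(top), 1)
            weights[system] *= beta ** loss          # multiplicative weight update
    combined = defaultdict(float)
    for system, docs in ranked_lists.items():
        for rank, doc in enumerate(docs, start=1):
            combined[doc] += weights[system] / math.log2(rank + 1)  # rank-based score
    return sorted(combined, key=combined.get, reverse=True)
```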
hitir2007mq¶
Results | Participants | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: hitir2007mq
- Participant: heilongjiang-it.qi
- Track: Million Query
- Year: 2007
- Submission: 6/18/2007
- Type: automatic
- Task: official
- Run description: We are a new group in the IR community, from the Heilongjiang Institute of Technology, China. This is our first time participating in a TREC evaluation. Lemur is used in our run.
indriDM¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: indriDM
- Participant: umass.allan
- Track: Million Query
- Year: 2007
- Submission: 6/18/2007
- Type: automatic
- Task: official
- Run description: We use Don's dependence model and the terms in the topic title to generate queries, which are run with Indri.
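For reference, a sequential-dependence-style Indri query built from the title terms might look roughly like the sketch below; the 0.8/0.1/0.1 weights and the window size are the commonly cited defaults for this model, not confirmed settings for this run:

```python
def sequential_dependence_query(terms, w=(0.8, 0.1, 0.1)):
    """Build a Metzler/Croft-style sequential dependence Indri query from the
    topic-title terms: single terms, exact ordered bigrams (#1), and
    unordered windows of width 8 (#uw8)."""
    if len(terms) < 2:
        return "#combine( " + " ".join(terms) + " )"
    bigrams = " ".join(f"#1({a} {b})" for a, b in zip(terms, terms[1:]))
    windows = " ".join(f"#uw8({a} {b})" for a, b in zip(terms, terms[1:]))
    return (f"#weight( {w[0]} #combine( {' '.join(terms)} ) "
            f"{w[1]} #combine( {bigrams} ) "
            f"{w[2]} #combine( {windows} ) )")

# sequential_dependence_query(["pet", "therapy"]) ->
# '#weight( 0.8 #combine( pet therapy ) 0.1 #combine( #1(pet therapy) )
#           0.1 #combine( #uw8(pet therapy) ) )'
```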
indriDMCSC¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: indriDMCSC
- Participant: umass.allan
- Track: Million Query
- Year: 2007
- Submission: 6/16/2007
- Type: automatic
- Task: official
- Run description: We use all the terms in the topic title and Don's dependence model, but perform spell checking with the standard aspell (or ispell) tool on Unix, and then use Indri's wsyn operator to incorporate the spell-checking results into the queries.
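A rough sketch of folding aspell suggestions into an Indri query via the #wsyn operator; the aspell invocation, the 1.0/0.5 weighting, and the handling of suggestions are assumptions for illustration, not the run's actual configuration:

```python
import subprocess

def wsyn_term(term, weight_correction=0.5):
    """Wrap a title term and its top aspell suggestion (if any) in Indri's #wsyn
    operator, giving the original spelling full weight and the correction less."""
    # 'aspell -a' reads words on stdin; misspellings come back as
    # '& word n offset: sugg1, sugg2, ...'
    out = subprocess.run(["aspell", "-a"], input=term, text=True,
                         capture_output=True).stdout
    for line in out.splitlines():
        if line.startswith("&"):
            best = line.split(":", 1)[1].split(",")[0].strip()
            return f"#wsyn( 1.0 {term} {weight_correction} {best} )"
    return term  # no misspelling detected

def spell_checked_query(terms):
    return "#combine( " + " ".join(wsyn_term(t) for t in terms) + " )"
```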
indriQL¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: indriQL
- Participant: umass.allan
- Track: Million Query
- Year: 2007
- Submission: 6/16/2007
- Type: automatic
- Task: official
- Run description: We use all the terms in the topic title and directly use Indri's combine operator.
indriQLSC¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: indriQLSC
- Participant: umass.allan
- Track: Million Query
- Year: 2007
- Submission: 6/16/2007
- Type: automatic
- Task: official
- Run description: We use all the terms in the topic title, but perform spell checking with the standard aspell (or ispell) tool on Unix, and then use Indri's weight and combine operators to run the query.
JuruSynE¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: JuruSynE
- Participant: ibm.carmel
- Track: Million Query
- Year: 2007
- Submission: 6/16/2007
- Type: automatic
- Task: official
- Run description: Basic Juru run. Documents are scored according to their textual similarity to the query and their number of in-links. Queries are expanded with a short list of synonyms related to the .gov domain.
LucSpel0¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: LucSpel0
- Participant: ibm.carmel
- Track: Million Query
- Year: 2007
- Submission: 6/19/2007
- Type: automatic
- Task: official
- Run description: Lucene, doc text + anchor text, text queries with phrase and proximity elements, stopwords, query expansion by synonyms and spell correction (index based), modified doc length normalization, modified tf()
LucSyn0¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: LucSyn0
- Participant: ibm.carmel
- Track: Million Query
- Year: 2007
- Submission: 6/19/2007
- Type: automatic
- Task: official
- Run description: Lucene, doc text + anchor text, text queries with phrase and proximity elements, stopwords, synonyms query expansion, modified doc length normalization, modified tf().
LucSynEx¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: LucSynEx
- Participant: ibm.carmel
- Track: Million Query
- Year: 2007
- Submission: 6/18/2007
- Type: automatic
- Task: official
- Run description: Lucene, doc text + anchor text, text queries with phrase and proximity elements, stopwords, synonyms query expansion (expansions have greater impact in this run compared to LucSyn0), modified doc length normalization, modified tf().
rmitbase¶
Results | Participants | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: rmitbase
- Participant: rmitu.scholer
- Track: Million Query
- Year: 2007
- Submission: 6/19/2007
- Type: automatic
- Task: official
- Run description: Zettair Dirichlet smoothed language model run.
sabmq07a1¶
Results | Participants | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: sabmq07a1
- Participant: sabir.buckley
- Track: Million Query
- Year: 2007
- Submission: 6/18/2007
- Type: automatic
- Task: official
- Run description: Standard SMART ltu.Lnu run
sabmq07sam¶
Results | Participants | Input | Summary (tb-topics) | Summary (statMAP)
- Run ID: sabmq07sam
- Participant: sabir.buckley
- Track: Million Query
- Year: 2007
- Submission: 5/25/2007
- Type: automatic
- Task: trial
- Run description: Straight simple Lnu-ltu weighted vector run
UAmsT06tAnLM¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP)
- Run ID: UAmsT06tAnLM
- Participant: uamsterdam.deRijke
- Track: Million Query
- Year: 2007
- Submission: 5/23/2007
- Type: automatic
- Task: trial
- Run description: Anchor-texts index, using the Snowball stemming algorithm, standard multinomial language model with Jelinek-Mercer smoothing, lambda = .9
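For reference, the standard multinomial query-likelihood model with Jelinek-Mercer smoothing interpolates the document and collection language models as below; with this convention, lambda = .9 places most of the mass on one of the two components (which component lambda weights depends on the convention, which the run descriptions do not specify):

```latex
P(q \mid d) = \prod_{t \in q} \Bigl( \lambda \, P(t \mid d) + (1 - \lambda) \, P(t \mid C) \Bigr),
\qquad
P(t \mid d) = \frac{\mathrm{tf}(t, d)}{|d|}, \quad
P(t \mid C) = \frac{\mathrm{cf}(t)}{|C|}
```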
UAmsT06tAnVS¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP)
- Run ID: UAmsT06tAnVS
- Participant: uamsterdam.deRijke
- Track: Million Query
- Year: 2007
- Submission: 5/23/2007
- Type: automatic
- Task: trial
- Run description: Anchor-texts index, using the Snowball stemming algorithm, standard Lucene vector-space model
UAmsT06tTeLM¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP)
- Run ID: UAmsT06tTeLM
- Participant: uamsterdam.deRijke
- Track: Million Query
- Year: 2007
- Submission: 5/23/2007
- Type: automatic
- Task: trial
- Run description: Full-text index, using the Snowball stemming algorithm, standard multinomial language model with Jelinek-Mercer smoothing, lambda = .9
UAmsT06tTeVS¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP)
- Run ID: UAmsT06tTeVS
- Participant: uamsterdam.deRijke
- Track: Million Query
- Year: 2007
- Submission: 5/23/2007
- Type: automatic
- Task: trial
- Run description: Full-text index, using the Snowball stemming algorithm, standard Lucene vector-space model
UAmsT06tTiLM¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP)
- Run ID: UAmsT06tTiLM
- Participant: uamsterdam.deRijke
- Track: Million Query
- Year: 2007
- Submission: 5/23/2007
- Type: automatic
- Task: trial
- Run description: Title fields index, using the Snowball stemming algorithm, standard multinomial language model with Jelinek-Mercer smoothing, lambda = .9
UAmsT07MAnLM¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: UAmsT07MAnLM
- Participant: uamsterdam.deRijke
- Track: Million Query
- Year: 2007
- Submission: 6/19/2007
- Type: automatic
- Task: official
- Run description: Anchor-texts index, using the Snowball stemming algorithm, standard multinomial language model with Jelinek-Mercer smoothing, lambda = .9
UAmsT07MSm8L¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP)
- Run ID: UAmsT07MSm8L
- Participant: uamsterdam.deRijke
- Track: Million Query
- Year: 2007
- Submission: 7/16/2007
- Type: automatic
- Task: unpooled
- Run description: Weighted CombSUM of language model runs (lambda = .9) on the full-text index (relative weight 0.8), anchor-text index (relative weight 0.1), and titles index (relative weight 0.1), all using the Snowball stemming algorithm.
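A minimal sketch of the weighted CombSUM fusion described for this run and the two CombSUM runs below; the min-max score normalization is an assumption, since the descriptions do not say how scores from the three indexes are made comparable:

```python
from collections import defaultdict

def weighted_combsum(runs, weights):
    """Fuse per-index retrieval scores by a weighted sum (CombSUM).

    runs:    {index_name: {doc_id: score}}
    weights: {index_name: relative weight}, e.g. {"fulltext": 0.8,
              "anchor": 0.1, "title": 0.1}
    """
    fused = defaultdict(float)
    for name, scores in runs.items():
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        for doc, s in scores.items():
            fused[doc] += weights[name] * (s - lo) / span  # min-max normalize, then weight
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```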
UAmsT07MSum6¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: UAmsT07MSum6
- Participant: uamsterdam.deRijke
- Track: Million Query
- Year: 2007
- Submission: 6/19/2007
- Type: automatic
- Task: official
- Run description: Weighted CombSUM of vector-space model runs on the full-text index (relative weight 0.6), anchor-text index (relative weight 0.2), and titles index (relative weight 0.2), all using the Snowball stemming algorithm.
UAmsT07MSum8¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: UAmsT07MSum8
- Participant: uamsterdam.deRijke
- Track: Million Query
- Year: 2007
- Submission: 6/19/2007
- Type: automatic
- Task: official
- Run description: Weighted CombSUM of vector-space model runs on the full-text index (relative weight 0.8), anchor-text index (relative weight 0.1), and titles index (relative weight 0.1), all using the Snowball stemming algorithm.
UAmsT07MTeLM¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP)
- Run ID: UAmsT07MTeLM
- Participant: uamsterdam.deRijke
- Track: Million Query
- Year: 2007
- Submission: 7/16/2007
- Type: automatic
- Task: unpooled
- Run description: Full-text index, using the Snowball stemming algorithm, standard multinomial language model with Jelinek-Mercer smoothing, lambda = .9
UAmsT07MTeVS¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: UAmsT07MTeVS
- Participant: uamsterdam.deRijke
- Track: Million Query
- Year: 2007
- Submission: 6/19/2007
- Type: automatic
- Task: official
- Run description: Full-text index, using the Snowball stemming algorithm, standard Lucene vector-space model.
UAmsT07MTiLM¶
Results | Participants | Proceedings | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: UAmsT07MTiLM
- Participant: uamsterdam.deRijke
- Track: Million Query
- Year: 2007
- Submission: 6/18/2007
- Type: automatic
- Task: official
- Run description: Title fields index, using the Snowball stemming algorithm, standard multinomial language model with Jelinek-Mercer smoothing, lambda = .9
UiucMQbl¶
Results | Participants | Input | Summary (tb-topics) | Summary (statMAP)
- Run ID: UiucMQbl
- Participant: uc-zhai
- Track: Million Query
- Year: 2007
- Submission: 7/6/2007
- Type: automatic
- Task: unpooled
- Run description: Baseline run using the axiomatic retrieval approach.
UiucMQqe1¶
Results | Participants | Input | Summary (tb-topics) | Summary (statMAP)
- Run ID: UiucMQqe1
- Participant: uc-zhai
- Track: Million Query
- Year: 2007
- Submission: 7/9/2007
- Type: automatic
- Task: unpooled
- Run description: Axiomatic retrieval approach plus semantic query expansion. We used the top 100 snippets returned by Yahoo as resources for selecting expansion terms.
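The description leaves the term-selection criterion unspecified; a simple frequency-based sketch over the returned snippets (the snippet fetching and the stopword list are placeholders, not the group's actual method) could look like:

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "on", "is"}

def expansion_terms(query, snippets, k=10):
    """Pick expansion terms from web search snippets (e.g. the top 100 returned
    for the query): the k most frequent non-stopword terms not already in the query."""
    seen = set(query.lower().split()) | STOPWORDS
    counts = Counter(
        w for s in snippets for w in s.lower().split()
        if w.isalpha() and w not in seen
    )
    return [w for w, _ in counts.most_common(k)]
```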
UiucMQqe2¶
Results | Participants | Input | Summary (tb-topics) | Summary (statMAP)
- Run ID: UiucMQqe2
- Participant: uc-zhai
- Track: Million Query
- Year: 2007
- Submission: 7/9/2007
- Type: automatic
- Task: unpooled
- Run description: Axiomatic retrieval approach plus semantic query expansion. We used the top 100 snippets returned by Yahoo as resources for selecting expansion terms.
umelbexp¶
Results | Participants | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: umelbexp
- Participant: umelbourne.ngoc-ahn
- Track: Million Query
- Year: 2007
- Submission: 6/18/2007
- Type: automatic
- Task: official
- Run description: Submit query to public web search engine, retrieve snippet information for top 5 documents, add unique terms from snippets to query, run expanded query using same similarity metric as umelbstd
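A sketch of the expansion step described above, with the web search call left as a placeholder (`web_search` is a hypothetical function; only its role, returning snippets for the top results, is taken from the description):

```python
def expand_query(query, web_search, top_n=5):
    """Add the unique snippet terms of the top-n web results to the query;
    the expanded query is then run with the same similarity metric as umelbstd."""
    snippets = [hit["snippet"] for hit in web_search(query)[:top_n]]
    extra = {w.lower() for s in snippets for w in s.split()} - set(query.lower().split())
    return query + " " + " ".join(sorted(extra))
```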
umelbimp¶
Results | Participants | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: umelbimp
- Participant: umelbourne.ngoc-ahn
- Track: Million Query
- Year: 2007
- Submission: 6/19/2007
- Type: automatic
- Task: official
- Run description: standard impact-based ranking
umelbsim¶
Results | Participants | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: umelbsim
- Participant: umelbourne.ngoc-ahn
- Track: Million Query
- Year: 2007
- Submission: 6/19/2007
- Type: automatic
- Task: official
- Run description: merging of the language modelling and the impact runs
umelbstd¶
Results | Participants | Input | Summary (tb-topics) | Summary (statMAP) | Appendix
- Run ID: umelbstd
- Participant: umelbourne.ngoc-ahn
- Track: Million Query
- Year: 2007
- Submission: 6/18/2007
- Type: automatic
- Task: official
- Run description: Topic-only run using a similarity metric based on a language model with Dirichlet smoothing, as described by Zhai and Lafferty (2004).
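For reference, the Dirichlet-smoothed language model of Zhai and Lafferty (2004) estimates term probabilities as below; the run description does not give the value of the smoothing parameter mu:

```latex
P(t \mid d) = \frac{\mathrm{tf}(t, d) + \mu \, P(t \mid C)}{|d| + \mu},
\qquad
\mathrm{score}(q, d) = \sum_{t \in q} \log P(t \mid d)
```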