Runs - HARD 2005

ALLcs0807
- Run ID: ALLcs0807
- Participant: rutgers.belkin
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: none provided (the default submission-form placeholder text was left in place).
AS1cs0807
- Run ID: AS1cs0807
- Participant: rutgers.belkin
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Original query (OQ) + 0.1 × (all CF1-suggested terms).
BF3cs0807
- Run ID: BF3cs0807
- Participant: rutgers.belkin
- Track: HARD
- Year: 2005
- Submission: 8/9/2005
- Type: automatic
- Task: final
- Run description: RUTGBF3 OQ + Pseudo-RF terms with parameters pseudoDocCount = 20; feedbackTermCount = 100; feedbackPosCoeff = 0.3.
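The parameter names above suggest a standard pseudo-relevance-feedback loop: retrieve, mine expansion terms from the top-ranked documents, and down-weight them relative to the original query. A minimal sketch under those assumptions (not the Rutgers code; `search` and `tokenize` are hypothetical callables):

```python
from collections import Counter

def pseudo_rf_expand(original_query, search, tokenize,
                     pseudo_doc_count=20, feedback_term_count=100,
                     feedback_pos_coeff=0.3):
    """Expand a query with terms mined from the top-ranked (pseudo-relevant) docs."""
    top_docs = search(original_query, k=pseudo_doc_count)
    term_scores = Counter()
    for doc in top_docs:
        term_scores.update(tokenize(doc))  # raw tf over the feedback set
    expansion = [t for t, _ in term_scores.most_common(feedback_term_count)]
    # Original query terms keep weight 1.0; expansion terms get the smaller coefficient.
    weights = {t: 1.0 for t in tokenize(original_query)}
    for t in expansion:
        weights[t] = weights.get(t, 0.0) + feedback_pos_coeff
    return weights
```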
BLcs0807
- Run ID: BLcs0807
- Participant: rutgers.belkin
- Track: HARD
- Year: 2005
- Submission: 8/9/2005
- Type: automatic
- Task: final
- Run description: RUTGALL: original query (OQ) + all CF1 terms + all CF2 terms, with equal weight.
CASP1
- Run ID: CASP1
- Participant: cas.nlpr.jzhao
- Track: HARD
- Year: 2005
- Submission: 7/5/2005
- Task: clarification
- Run description: none provided (the default submission-form placeholder text was left in place).
CASP2
- Run ID: CASP2
- Participant: cas.nlpr.jzhao
- Track: HARD
- Year: 2005
- Submission: 7/5/2005
- Task: clarification
- Run description: none provided (the default submission-form placeholder text was left in place).
CASS1
- Run ID: CASS1
- Participant: cas.zhang
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Task: clarification
- Run description: none provided (the default submission-form placeholder text was left in place).
CASS2
- Run ID: CASS2
- Participant: cas.zhang
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Task: clarification
- Run description: none provided (the default submission-form placeholder text was left in place).
cassallfb
- Run ID: cassallfb
- Participant: cas.zhang
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: Combines the query expansion from the "cass2" clarification form with the cassself run.
cassallfb2
- Run ID: cassallfb2
- Participant: cas.zhang
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: manual
- Task: final
- Run description: Feedback based on the results of the "cassallfb" run.
cassallfb2re
- Run ID: cassallfb2re
- Participant: cas.zhang
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: manual
- Task: final
- Run description: Reranks the results of the "cassallfb2" run.
cassallre
- Run ID: cassallre
- Participant: cas.zhang
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: Reranks the results of the "cassallfb" run.
cassbase
- Run ID: cassbase
- Participant: cas.zhang
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: Baseline system; the query is modified by adding the top 40 words extracted from the top-ranked results.
cassbasere
- Run ID: cassbasere
- Participant: cas.zhang
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: Reranks the results of the cassbase run.
cassgoogle
- Run ID: cassgoogle
- Participant: cas.zhang
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: Extends the query by adding terms from Google retrieval results.
cassself
- Run ID: cassself
- Participant: cas.zhang
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: manual
- Task: final
- Run description: Extends the query by adding the relevant documents' titles and top keywords.
cassselfre
- Run ID: cassselfre
- Participant: cas.zhang
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: Reranks the results of the "cassself" run.
casstopdoc
- Run ID: casstopdoc
- Participant: cas.zhang
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: Extends the query by adding retrieval results from the TREC 2004 corpus.
DUBL1
- Run ID: DUBL1
- Participant: ucollege-dublin.toolan
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: Word-sense disambiguation. The assessor was asked to judge terms semantically related to the topic terms.
DUBLB
- Run ID: DUBLB
- Participant: ucollege-dublin.toolan
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: This is a baseline run. Topic titles were stopped and stemmed. BM25 was used for retrieval via the Lemur IR toolkit.
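For reference, BM25 (with the common defaults $k_1 \approx 1.2$, $b \approx 0.75$; this run's actual parameter settings are not reported) scores a document $d$ against a query $q$ as

$$\operatorname{score}(d,q) \;=\; \sum_{t \in q} \log\frac{N - n_t + 0.5}{n_t + 0.5} \cdot \frac{tf_{t,d}\,(k_1 + 1)}{tf_{t,d} + k_1\left(1 - b + b\,\frac{|d|}{\operatorname{avgdl}}\right)},$$

where $N$ is the collection size, $n_t$ the document frequency of $t$, and $\operatorname{avgdl}$ the average document length.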
DUBLF
- Run ID: DUBLF
- Participant: ucollege-dublin.toolan
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Documents re-ranked according to readability classification.
INDI1
- Run ID: INDI1
- Participant: indianau.yang
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: We also ask users to judge the relationships, such as 'AND', 'OR' and 'NOT', between search phrases.
INDI2
- Run ID: INDI2
- Participant: indianau.yang
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: This form asks the user to judge the top sentence extracted from each of the top 25 sentence clusters of the top 200 ranked retrieval results produced by our best fusion run.
LS1cs0807
- Run ID: LS1cs0807
- Participant: rutgers.belkin
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: res-struct-hard2005-title+desc-all-internet-.1-prf-5-5-.3.txt
MARY05C1
- Run ID: MARY05C1
- Participant: umaryland.oard
- Track: HARD
- Year: 2005
- Submission: 8/5/2005
- Type: manual
- Task: final
- Run description: Documents judged as relevant and possibly relevant by a librarian after reviewing clarification forms are at the top of the ranked list. The list is padded with InQuery retrieval results for a weighted sum query on title and description fields plus librarian's relevance feedback.
MARY05C2
- Run ID: MARY05C2
- Participant: umaryland.oard
- Track: HARD
- Year: 2005
- Submission: 8/5/2005
- Type: automatic
- Task: final
- Run description: InQuery retrieval results for a weighted sum query on title and description fields plus additional search terms from clarification forms
MARY05C3
- Run ID: MARY05C3
- Participant: umaryland.oard
- Track: HARD
- Year: 2005
- Submission: 8/5/2005
- Type: automatic
- Task: final
- Run description: InQuery retrieval results for a weighted sum query on title and description fields plus blind relevance feedback and additional search terms from clarification forms
MARY1
- Run ID: MARY1
- Participant: umaryland.oard
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Task: clarification
- Run description: A reference librarian generated questions based on hard-to-judge documents.
MARYB1
- Run ID: MARYB1
- Participant: umaryland.oard
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Type: manual
- Task: baseline
- Run description: Documents judged as relevant and possibly relevant by a librarian are at the top of the ranked list. The list is padded with InQuery retrieval results for a weighted sum query on title and description fields plus librarian's relevance feedback.
MARYB2
- Run ID: MARYB2
- Participant: umaryland.oard
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Type: manual
- Task: baseline
- Run description: Title and description terms combined with a librarian's relevance feedback
MARYB3
- Run ID: MARYB3
- Participant: umaryland.oard
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Type: automatic
- Task: baseline
- Run description: Title and description fields plus blind relevance feedback
MASS1
- Run ID: MASS1
- Participant: umass.allan
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: none provided (the default submission-form placeholder text was left in place).
MASS2
- Run ID: MASS2
- Participant: umass.allan
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: none provided (the default submission-form placeholder text was left in place).
MASSbaseDEE3
- Run ID: MASSbaseDEE3
- Participant: umass.allan
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: We used 10+ years of LDC news corpora in order to build an expanded query, where other techniques might use just the evaluation corpora. Previous relevance judgments were used for meta-parameter tuning. No relevance judgments were memorized for either the training topics or this year's evaluation topics.
MASSbaseDRM3
- Run ID: MASSbaseDRM3
- Participant: umass.allan
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: We used traditional pseudo-relevance feedback with relevance models.
MASSbaseTEE3
- Run ID: MASSbaseTEE3
- Participant: umass.allan
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: We used 10+ years of LDC news corpora in order to build an expanded query, where other techniques might use just the evaluation corpora. Previous relevance judgments were used for meta-parameter tuning. No relevance judgments were memorized for either the training topics or this year's evaluation topics.
MASSbaseTRM3
- Run ID: MASSbaseTRM3
- Participant: umass.allan
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: We used traditional pseudo-relevance feedback with relevance models.
MASSpsgRM3
- Run ID: MASSpsgRM3
- Participant: umass.allan
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Used passage judgements of a superset of AQUAINT to perform a language model-based retrieval.
MASSpsgRM3R
- Run ID: MASSpsgRM3R
- Participant: umass.allan
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Used passage judgements of a superset of AQUAINT to perform a language model-based retrieval. These scores were then regularized with judgments fixed at 1 for relevant documents and 0 for non-relevant documents.
MASStrmR
- Run ID: MASStrmR
- Participant: umass.allan
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Used term judgements to adjust expansion term weights.
MASStrmS
- Run ID: MASStrmS
- Participant: umass.allan
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Used term judgements to construct weighted, structured queries.
MEIJ1
- Run ID: MEIJ1
- Participant: meijiu.kakuta
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: We generated word-level and document-level clusters using the k-means clustering algorithm.
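As an illustration of the word-level clustering step, here is a minimal sketch that clusters vocabulary terms by their TF-IDF profiles with k-means, assuming scikit-learn (the Meiji group's actual toolchain and features are not specified):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def term_clusters(docs, n_clusters=6):
    """Cluster vocabulary terms by their TF-IDF profiles across documents."""
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(docs)                    # docs x terms
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X.T.toarray())  # rows = terms
    clusters = {}
    for term, label in zip(vec.get_feature_names_out(), km.labels_):
        clusters.setdefault(int(label), []).append(term)
    return clusters
```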
MEIJ2
- Run ID: MEIJ2
- Participant: meijiu.kakuta
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: We generated expansion terms using WordNet 2.0.
MeijiHilBL1
- Run ID: MeijiHilBL1
- Participant: meijiu.kakuta
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: Tf-idf score using Lucene 1.4.
MeijiHilBL2
- Run ID: MeijiHilBL2
- Participant: meijiu.kakuta
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: Tf-idf score using Lucene 1.4.
MeijiHilCWE1
- Run ID: MeijiHilCWE1
- Participant: meijiu.kakuta
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: We expanded the query with words extracted using positive cluster information.
MeijiHilCWE2
- Run ID: MeijiHilCWE2
- Participant: meijiu.kakuta
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: We expanded the query with words extracted using positive and negative cluster information.
MeijiHilMrg
- Run ID: MeijiHilMrg
- Participant: meijiu.kakuta
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Query expansion using the checked words and cluster information, plus words extracted from the clusters.
MeijiHilRC
- Run ID: MeijiHilRC
- Participant: meijiu.kakuta
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Query expansion using the checked cluster information.
MeijiHilRW
- Run ID: MeijiHilRW
- Participant: meijiu.kakuta
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Query expansion using the checked words.
MeijiHilRWC
- Run ID: MeijiHilRWC
- Participant: meijiu.kakuta
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Query expansion using the checked words and cluster information.
MeijiHilWN
- Run ID: MeijiHilWN
- Participant: meijiu.kakuta
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: We expanded the query using synonyms from WordNet's hierarchical structure.
NCAR1
- Run ID: NCAR1
- Participant: unc.kelly
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: none provided (the default submission-form placeholder text was left in place).
NCAR2
- Run ID: NCAR2
- Participant: unc.kelly
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: Next to each term, we showed an example sentence for context. However, users still made judgments about terms.
NCAR3
- Run ID: NCAR3
- Participant: unc.kelly
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: We displayed example sentences to users, but DID NOT ask users to judge the sentences. Instead we placed a textarea next to the sentences and asked users to input terms that they were interested in adding to their search statements.
NCARhard05B
- Run ID: NCARhard05B
- Participant: unc.kelly
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: We used Okapi BM25 for retrieval. We did not use pseudo-relevance feedback. We used the Lemur defaults for indexing.
NCARhard05F1
- Run ID: NCARhard05F1
- Participant: unc.kelly
- Track: HARD
- Year: 2005
- Submission: 7/27/2005
- Type: automatic
- Task: final
- Run description: Terms obtained from the CF were used for query expansion. New queries were re-submitted and new results were obtained.
NCARhard05F2
- Run ID: NCARhard05F2
- Participant: unc.kelly
- Track: HARD
- Year: 2005
- Submission: 7/27/2005
- Type: automatic
- Task: final
- Run description: Terms obtained from the CF were used for query expansion. New queries were re-submitted and new results were obtained.
NCARhard05F3
- Run ID: NCARhard05F3
- Participant: unc.kelly
- Track: HARD
- Year: 2005
- Submission: 7/27/2005
- Type: automatic
- Task: final
- Run description: Terms obtained from the CF were used for query expansion. New queries were re-submitted and new results were obtained.
NLPRB
- Run ID: NLPRB
- Participant: cas.nlpr.jzhao
- Track: HARD
- Year: 2005
- Submission: 7/5/2005
- Type: automatic
- Task: baseline
- Run description: This is our baseline run. We use BM25 with a slight improvement: a new feature-selection algorithm is used when selecting feedback terms, instead of Robertson selection.
NLPRCF1
- Run ID: NLPRCF1
- Participant: cas.nlpr.jzhao
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: Query expansion based on CASP1
NLPRCF1CF2
- Run ID: NLPRCF1CF2
- Participant: cas.nlpr.jzhao
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: none provided (the default submission-form placeholder text was left in place).
NLPRCF1S1
- Run ID: NLPRCF1S1
- Participant: cas.nlpr.jzhao
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: This run generates query expansion based on sentences of CASP1.
NLPRCF1S1CF2
- Run ID: NLPRCF1S1CF2
- Participant: cas.nlpr.jzhao
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: This run generates query expansion based on the relevant-document judgments of CASP1 and CASP2.
NLPRCF1S2
- Run ID: NLPRCF1S2
- Participant: cas.nlpr.jzhao
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: This run generates query expansion based on the relevant-document judgments of CASP1.
NLPRCF1S2CF2
- Run ID: NLPRCF1S2CF2
- Participant: cas.nlpr.jzhao
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: This run generates query expansion based on the relevant-document judgments of CASP1 and CASP2.
NLPRCF1W
- Run ID: NLPRCF1W
- Participant: cas.nlpr.jzhao
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: This run generates query expansion based on the relevant-word judgments of CASP1.
NLPRCF1WCF2
- Run ID: NLPRCF1WCF2
- Participant: cas.nlpr.jzhao
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: This run generates query expansion based on the relevant-word judgments of CASP1 and CASP2.
NLPRCF2
- Run ID: NLPRCF2
- Participant: cas.nlpr.jzhao
- Track: HARD
- Year: 2005
- Submission: 8/7/2005
- Type: automatic
- Task: final
- Run description: Query expansion based on CASP2
PITT1
- Run ID: PITT1
- Participant: upittsburgh.he
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: Uses an SOM (self-organizing map) graphical interface.
PITT2
- Run ID: PITT2
- Participant: upittsburgh.he
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: Uses SOM and clustering techniques to identify the top candidate clusters, and asks the annotator to select as many relevant clusters as possible by examining the term-based cluster representatives.
PITTBTD
- Run ID: PITTBTD
- Participant: upittsburgh.he
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: Query generated from the title and description, using Indri as the search engine with Indri's BRF (20 docs, 20 terms, 0.5 weight).
PITTBTDN225
- Run ID: PITTBTDN225
- Participant: upittsburgh.he
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: Query generated from the title, description, and phrases extracted from the narrative, using Indri as the search engine with Indri's BRF (20 docs, 20 terms, 0.5 weight).
PITTEC1BWWB
- Run ID: PITTEC1BWWB
- Participant: upittsburgh.he
- Track: HARD
- Year: 2005
- Submission: 8/5/2005
- Type: automatic
- Task: final
- Run description: Based on PITTBTD225, which uses Indri 1.0 and its BRF feature (20 docs, 20 terms, 0.5 weight), and clarification form PITT1. The selected terms are used as the expansion part of the query. The weights used to select terms for PITT1 are preserved in the expanded queries.
PITTEC2B225A
- Run ID: PITTEC2B225A
- Participant: upittsburgh.he
- Track: HARD
- Year: 2005
- Submission: 8/5/2005
- Type: automatic
- Task: final
- Run description: Based on PITTBTD, which uses Indri 1.0 and its BRF feature (20 docs, 20 terms, 0.5 weight), and clarification form PITT2. The selected terms are used as the expansion part of the query. The final run also uses Indri's BRF feature.
PITTEC2NOB1
- Run ID: PITTEC2NOB1
- Participant: upittsburgh.he
- Track: HARD
- Year: 2005
- Submission: 8/5/2005
- Type: automatic
- Task: final
- Run description: Based on PITTBTD225, which uses Indri 1.0 and its BRF feature (20 docs, 20 terms, 0.5 weight), and clarification form PITT2. The selected terms are used as the expansion part of the query.
PITTHDCOMB1
- Run ID: PITTHDCOMB1
- Participant: upittsburgh.he
- Track: HARD
- Year: 2005
- Submission: 8/5/2005
- Type: automatic
- Task: final
- Run description: Evidence combination using PITTBTD, PITTBTDN225, PITTEC2NOB1, and PITTEC1BWWB. Since the submission system would not accept two run IDs and CF IDs, here is the corrected information: (1) the baseline runs used in this run are PITTBTD and PITTBTDN225; (2) the clarification forms used in this run are PITT1 and PITT2.
RS1cs0807
- Run ID: RS1cs0807
- Participant: rutgers.belkin
- Track: HARD
- Year: 2005
- Submission: 8/9/2005
- Type: automatic
- Task: final
- Run description: OQ + 0.1 × (n randomly selected terms from the CF1-suggested terms, where n = the number of terms the user selected from that CF1).
RUTBE
- Run ID: RUTBE
- Participant: rutgers.belkin
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: DSW .3 Selected
RUTG1
- Run ID: RUTG1
- Participant: rutgers.belkin
- Track: HARD
- Year: 2005
- Submission: 7/8/2005
- Task: clarification
- Run description: This form solicits the assessor's spontaneous (unaided) generation of additional terms and descriptive text.
RUTG2
- Run ID: RUTG2
- Participant: rutgers.belkin
- Track: HARD
- Year: 2005
- Submission: 7/8/2005
- Task: clarification
- Run description: Extraction of specific terminology from the World Wide Web; generation of terms based on clarity (KL divergence between the PRF-retrieved set and a background model).
RUTGBLDR
- Run ID: RUTGBLDR
- Participant: rutgers.belkin
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: This run was done with Lemur, using BM25 and all other default parameters.
RUTIN
- Run ID: RUTIN
- Participant: rutgers.belkin
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: res-struct-hard2005-title+desc-all-internet-.1-prf-5-5-.3.txt
SAIC1
- Run ID: SAIC1
- Participant: saic.michel
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Task: clarification
- Run description: This is traditional document-level feedback. Eight documents from the retrieval results are selected for feedback; each is shown with its title and abstract, with key terms highlighted in the abstract.
SAIC2
- Run ID: SAIC2
- Participant: saic.michel
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Task: clarification
- Run description: This is also document-level feedback, but instead of showing a document's content, its neighborhood is shown to the user. Eight documents from the retrieval results are selected for feedback, and their neighborhoods' descriptions (including key terms and abstracts) are shown to the user.
SAICBASE1
- Run ID: SAICBASE1
- Participant: saic.michel
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Type: automatic
- Task: baseline
- Run description: This run uses 20 terms from the given topic and issues them as a vector query in Lucene (re-ranked with BM25), without blind feedback.
SAICBASE2
- Run ID: SAICBASE2
- Participant: saic.michel
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Type: automatic
- Task: baseline
- Run description: Extracts 20 terms from the topic and issues them as a vector query to Lucene (BM25 re-ranked), then uses the top 30 documents and top 40 terms to generate the blind-feedback result.
SAICFINAL1
- Run ID: SAICFINAL1
- Participant: saic.michel
- Track: HARD
- Year: 2005
- Submission: 8/5/2005
- Type: automatic
- Task: final
- Run description: Standard Rocchio-method feedback.
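The standard Rocchio update behind runs like this one is, in textbook form (the run's $\alpha$, $\beta$, $\gamma$ settings are not reported),

$$\vec{q}_{\text{new}} \;=\; \alpha\,\vec{q}_{\text{orig}} \;+\; \frac{\beta}{|D_r|}\sum_{\vec{d} \in D_r}\vec{d} \;-\; \frac{\gamma}{|D_{nr}|}\sum_{\vec{d} \in D_{nr}}\vec{d},$$

where $D_r$ and $D_{nr}$ are the relevant and non-relevant feedback documents. Presumably the "conservative" variants below use more cautious weight settings, though the descriptions do not say.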
SAICFINAL2
- Run ID: SAICFINAL2
- Participant: saic.michel
- Track: HARD
- Year: 2005
- Submission: 8/5/2005
- Type: automatic
- Task: final
- Run description: Standard Rocchio-method feedback.
SAICFINAL3
- Run ID: SAICFINAL3
- Participant: saic.michel
- Track: HARD
- Year: 2005
- Submission: 8/5/2005
- Type: automatic
- Task: final
- Run description: Conservative Rocchio-method feedback.
SAICFINAL4
- Run ID: SAICFINAL4
- Participant: saic.michel
- Track: HARD
- Year: 2005
- Submission: 8/5/2005
- Type: automatic
- Task: final
- Run description: Conservative Rocchio-method feedback.
SAICFINAL5
- Run ID: SAICFINAL5
- Participant: saic.michel
- Track: HARD
- Year: 2005
- Submission: 8/5/2005
- Type: automatic
- Task: final
- Run description: Standard Rocchio-method feedback, with CF1 and CF2 merged.
SAICFINAL6
- Run ID: SAICFINAL6
- Participant: saic.michel
- Track: HARD
- Year: 2005
- Submission: 8/5/2005
- Type: automatic
- Task: final
- Run description: Conservative Rocchio-method feedback, with CF1 and CF2 merged.
STRA1
- Run ID: STRA1
- Participant: ustrathclyde.baillie
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Type: automatic
- Task: baseline
- Run description: We used the standard settings for the Okapi retrieval model bundled with the Lemur framework, and used the topic titles to rank the top 1000 documents with Okapi.
STRA2
- Run ID: STRA2
- Participant: ustrathclyde.baillie
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Task: clarification
- Run description: We used a binary structure to display competing document summaries generated from general and discriminative query terms that were generated from a pseudo-relevance feedback phase. This was to determine if users preferred summaries that related to their topic familiarity.
STRA3
- Run ID: STRA3
- Participant: ustrathclyde.baillie
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Task: clarification
- Run description: We posed a number of questions to the user to assess their knowledge of the specific topic and of the wider domain the topic was drawn from.
STRAxmta
- Run ID: STRAxmta
- Participant: ustrathclyde.baillie
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: On the top 1000 documents ranked with the original query, query expansion was performed by adding "motivational terms" selected from the topic model. The topic model was formed from the top 10 documents.
STRAxmtg
- Run ID: STRAxmtg
- Participant: ustrathclyde.baillie
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: On the top 1000 documents ranked with the original query, query expansion was performed by adding "motivational terms" selected from the topic model. The topic model was formed from the top 10 documents.
STRAxprfb
- Run ID: STRAxprfb
- Participant: ustrathclyde.baillie
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Using the top N documents, pseudo-relevance feedback was performed.
STRAxqedt
- Run ID: STRAxqedt
- Participant: ustrathclyde.baillie
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: On the top 1000 documents ranked with the original query, query expansion was performed by adding "discriminative terms" selected from the topic model. The topic model was formed from the top 10 documents. The number of terms used for QE was based on user familiarity.
STRAxqert
- Run ID: STRAxqert
- Participant: ustrathclyde.baillie
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: On the top 1000 documents ranked with the original query, query expansion was performed by adding "representative terms" selected from the topic model. The topic model was formed from the top 10 documents. The number of terms used for QE was based on user familiarity.
STRAxreadA
- Run ID: STRAxreadA
- Participant: ustrathclyde.baillie
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Using the Flesch readability score, documents with a high readability score were given more weight and pushed up the final ranking.
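The Flesch reading-ease score referenced here is the standard formula (higher means easier to read):

$$\mathrm{FRE} \;=\; 206.835 \;-\; 1.015\,\frac{\text{total words}}{\text{total sentences}} \;-\; 84.6\,\frac{\text{total syllables}}{\text{total words}}.$$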
STRAxreadG
- Run ID: STRAxreadG
- Participant: ustrathclyde.baillie
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Using the feedback gathered from the clarification form, we defined four groups based on the users' interest in the topic. Depending on the user group, we gave more weight to documents considered easier to read according to the Flesch readability score; e.g., for a group less interested in the topic, easy-to-read documents were given more weight.
TWEN1
- Run ID: TWEN1
- Participant: utwente.rode
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: We generated subject/temporal profiles of the queries by analysing the top x retrieved documents, displayed the peaks of these profiles to the user, and asked for relevance judgments.
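A rough sketch of the temporal-profile idea, assuming each retrieved document carries a date (`get_year` and the simple peak rule are illustrative assumptions, not the Twente implementation):

```python
from collections import Counter

def temporal_profile(top_docs, get_year):
    """Histogram of dates over the top-x retrieved documents; peaks suggest
    time periods worth showing to the user for a relevance judgment."""
    hist = Counter(get_year(d) for d in top_docs)
    if not hist:
        return hist, []
    mean = sum(hist.values()) / len(hist)
    peaks = [year for year, count in hist.items() if count > 1.5 * mean]
    return hist, sorted(peaks)
```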
TWEN2
- Run ID: TWEN2
- Participant: utwente.rode
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: We generated subject/temporal profiles of the queries by analysing the top x retrieved documents, displayed the peaks of these profiles to the user, and asked for relevance judgments.
TWENall1
- Run ID: TWENall1
- Participant: utwente.rode
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Uses all information from the forms. Topic-specific and time-specific language models are used to rerank the results.
TWENall2
- Run ID: TWENall2
- Participant: utwente.rode
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Uses all information from the forms. Topic-specific and time-specific language models are used to rerank the results.
TWENbase1
- Run ID: TWENbase1
- Participant: utwente.rode
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: Retrieval model: language-modeling approach (linear-interpolation smoothing); stemming and stopword removal were used.
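Linear-interpolation (Jelinek–Mercer) smoothing mixes the document model with a collection model; $\lambda$ is a tuning parameter not reported for this run:

$$P(q \mid d) \;=\; \prod_{t \in q} \big(\lambda\,P(t \mid d) + (1 - \lambda)\,P(t \mid C)\big).$$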
TWENbase2
- Run ID: TWENbase2
- Participant: utwente.rode
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: Retrieval model: language-modeling approach (linear-interpolation smoothing); stemming and stopword removal were used.
TWENblind1
- Run ID: TWENblind1
- Participant: utwente.rode
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: The run uses the system-suggested options from the clarification forms only (somewhat like blind feedback).
TWENblind2
- Run ID: TWENblind2
- Participant: utwente.rode
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: The run uses the system-suggested options from the clarification forms only (somewhat like blind feedback).
TWENdiff1
- Run ID: TWENdiff1
- Participant: utwente.rode
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Uses only those user selections that differ from the system's suggestions. Topic-specific and time-specific language models are used to rerank the results.
TWENdiff2
- Run ID: TWENdiff2
- Participant: utwente.rode
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Uses only those user selections that differ from the system's suggestions. Topic-specific and time-specific language models are used to rerank the results.
UG1cs0807
- Run ID: UG1cs0807
- Participant: rutgers.belkin
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: RUTGUS1: OQ + 0.1 × (all user-selected terms from CF1).
UIUC05Hardb0
- Run ID: UIUC05Hardb0
- Participant: uiuc.zhai
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: We use the Dirichlet-prior method for retrieval. Preprocessing involves only stemming. Only the title fields are used to create the queries.
UIUC1
- Run ID: UIUC1
- Participant: uiuc.zhai
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: We generate 6 term clusters from the top 60 documents; 8 unique terms from each cluster are presented to the user for judgment.
UIUC2
- Run ID: UIUC2
- Participant: uiuc.zhai
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: We select 48 terms from the top 60 documents to present to the user for judgment.
UIUC3
- Run ID: UIUC3
- Participant: uiuc.zhai
- Track: HARD
- Year: 2005
- Submission: 7/8/2005
- Task: clarification
- Run description: We generated 3 term clusters from the top 60 documents and presented 18 terms from each cluster to the user for judgment.
UIUChCFB1
- Run ID: UIUChCFB1
- Participant: uiuc.zhai
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: We present 48 terms to the user for judgment. Then we obtain a language model based on whether a term is checked or not. Finally we interpolate it with the language model derived from pseudo-feedback documents.
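A minimal sketch of that interpolation step (the uniform model over checked terms and the mixing weight `alpha` are assumptions; the UIUC weighting scheme is not given):

```python
def lm_from_checked_terms(presented_terms, checked):
    """Uniform unigram model over the terms the user checked on the form."""
    kept = [t for t in presented_terms if t in checked]
    return {t: 1.0 / len(kept) for t in kept} if kept else {}

def interpolate(lm_a, lm_b, alpha=0.5):
    """P(t) = alpha * A(t) + (1 - alpha) * B(t) over the union vocabulary."""
    vocab = set(lm_a) | set(lm_b)
    return {t: alpha * lm_a.get(t, 0.0) + (1 - alpha) * lm_b.get(t, 0.0)
            for t in vocab}
```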
UIUChCFB3
- Run ID: UIUChCFB3
- Participant: uiuc.zhai
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: We extract 3 clusters of terms from the top 5 pseudo-feedback documents. From each cluster we present 16 terms to the user for judgment. Then we obtain a language model A based on whether a term is checked or not. We obtain another language model B by interpolating the 3 clusters of terms using the number of checked terms in each cluster as weights. Finally we interpolate A and B.
UIUChCFB6
- Run ID: UIUChCFB6
- Participant: uiuc.zhai
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: We extract 6 clusters of terms from the top 5 pseudo-feedback documents. From each cluster we present 8 terms to the user for judgment. Then we obtain a language model A based on whether a term is checked or not. We obtain another language model B by interpolating the 6 clusters of terms using the number of checked terms in each cluster as weights. Finally we interpolate A and B.
UIUChTFB1
- Run ID: UIUChTFB1
- Participant: uiuc.zhai
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: We present 48 terms to the user for judgment. Then we obtain a language model based on whether a term is checked or not.
UIUChTFB3
- Run ID: UIUChTFB3
- Participant: uiuc.zhai
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: We extract 3 clusters of terms from the top 5 documents. From each cluster we present 16 terms to the user for judgment. Then we obtain a language model based on whether a term is checked or not.
UIUChTFB6
- Run ID: UIUChTFB6
- Participant: uiuc.zhai
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: We extract 6 clusters of terms from the top 5 documents. From each cluster we present 8 terms to the user for judgment. Then we obtain a language model based on whether a term is checked or not.
US1cs0807
- Run ID: US1cs0807
- Participant: rutgers.belkin
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: RUTGUS1: OQ + 0.1 × (all user-selected terms from CF1).
UWAT1
- Run ID: UWAT1
- Participant: uwaterloo.vechtomova
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: none provided (the default submission-form placeholder text was left in place).
UWAT2
- Run ID: UWAT2
- Participant: uwaterloo.vechtomova
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: Asked the user to select a category from a list that best describes the topic.
UWAT3
- Run ID: UWAT3
- Participant: uwaterloo.vechtomova
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: Phrases selected from text.
UWATbaseT
- Run ID: UWATbaseT
- Participant: uwaterloo.vechtomova
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: All terms from the title were used. Documents were retrieved using Okapi.
UWATbaseTD
- Run ID: UWATbaseTD
- Participant: uwaterloo.vechtomova
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: All terms from the title and description were used. Documents were retrieved using Okapi.
UwatHARDexp1
- Run ID: UwatHARDexp1
- Participant: uwaterloo.vechtomova
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Query expansion with user-selected terms from the UWAT1 clarification form.
UwatHARDexp2
- Run ID: UwatHARDexp2
- Participant: uwaterloo.vechtomova
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Query expansion with user-selected terms from the UWAT2 clarification form.
UwatHARDexp3
- Run ID: UwatHARDexp3
- Participant: uwaterloo.vechtomova
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Query expansion with user-selected terms and phrases from the UWAT3 clarification form.
UWAThardLC1
- Run ID: UWAThardLC1
- Participant: uwaterloo.vechtomova
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Query expansion terms were selected using lexical chains.
wdf1t10q1
- Run ID: wdf1t10q1
- Participant: indianau.yang
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Type: automatic
- Task: baseline
- Run description: Optimized fusion run of the top 10 fusion runs, which combine the best runs within each query type.
wdf1t3qf2
- Run ID: wdf1t3qf2
- Participant: indianau.yang
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Type: automatic
- Task: baseline
- Run description: Optimized fusion run of the top 3 fusion runs.
wdoqdn1d2
- Run ID: wdoqdn1d2
- Participant: indianau.yang
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Type: automatic
- Task: baseline
- Run description: Combo stemmer, Okapi weights, query expansion with noun and definition terms.
wdoqsz1d2
- Run ID: wdoqsz1d2
- Participant: indianau.yang
- Track: HARD
- Year: 2005
- Submission: 7/6/2005
- Type: automatic
- Task: baseline
- Run description: Combo stemmer, Okapi weights, query expansion with noun, acronym, and definition terms.
wf1t10q1RCDX
- Run ID: wf1t10q1RCDX
- Participant: indianau.yang
- Track: HARD
- Year: 2005
- Submission: 8/9/2005
- Type: automatic
- Task: final
- Run description: Weighted-sum fusion of the 10 best runs from fusion-optimization iteration 3, with rank boosting of CF phrases and CF document IDs.
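A minimal sketch of weighted-sum fusion with a rank boost for clarification-form evidence (the max-score normalization and the `boost` value are assumptions, not the Indiana implementation):

```python
def weighted_sum_fusion(runs, weights, boost_docs=(), boost=0.2):
    """Fuse ranked lists (dicts of doc -> score) by a weighted sum of
    max-normalized scores, then boost documents named on the CF."""
    fused = {}
    for run, w in zip(runs, weights):
        max_s = max(run.values(), default=1.0) or 1.0
        for doc, score in run.items():
            fused[doc] = fused.get(doc, 0.0) + w * (score / max_s)
    for doc in boost_docs:  # CF document IDs get a flat bonus
        if doc in fused:
            fused[doc] += boost
    return sorted(fused, key=fused.get, reverse=True)
```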
wf1t10q1RODX
- Run ID: wf1t10q1RODX
- Participant: indianau.yang
- Track: HARD
- Year: 2005
- Submission: 8/9/2005
- Type: automatic
- Task: final
- Run description: Weighted-sum fusion of the 10 best runs from fusion-optimization iteration 3, with rank boosting of overlapping phrases and CF document IDs.
wf1t3qdRC10
- Run ID: wf1t3qdRC10
- Participant: indianau.yang
- Track: HARD
- Year: 2005
- Submission: 8/9/2005
- Type: automatic
- Task: final
- Run description: Weighted-sum fusion of the 3 best description runs from fusion-optimization iteration 2, with rank boosting by CF phrases.
wf1t3qdROD10
- Run ID: wf1t3qdROD10
- Participant: indianau.yang
- Track: HARD
- Year: 2005
- Submission: 8/9/2005
- Type: automatic
- Task: final
- Run description: Weighted-sum fusion of the 3 best description runs from fusion-optimization iteration 2, with rank boosting of overlapping phrases and CF document IDs.
wf2t3qs1RCX
- Run ID: wf2t3qs1RCX
- Participant: indianau.yang
- Track: HARD
- Year: 2005
- Submission: 8/9/2005
- Type: automatic
- Task: final
- Run description: Overlap-weighted-sum fusion of the 3 best title runs from fusion-optimization iteration 3, with rank boosting by CF phrases.
wf2t3qs1RODX
- Run ID: wf2t3qs1RODX
- Participant: indianau.yang
- Track: HARD
- Year: 2005
- Submission: 8/9/2005
- Type: automatic
- Task: final
- Run description: Weighted-sum fusion of the 3 best title runs from fusion-optimization iteration 3, with rank boosting of overlapping phrases and CF document IDs.
WS1cs0807
- Run ID: WS1cs0807
- Participant: rutgers.belkin
- Track: HARD
- Year: 2005
- Submission: 8/9/2005
- Type: automatic
- Task: final
- Run description: OQ + 0.1 × (all ten Web-suggested terms from CF1).
york05ha1
- Run ID: york05ha1
- Participant: yorku.huang
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Model: probability model; weighting: BM25; indexing: Okapi standard indexing plus the dual index; clarification-form results applied; feedback: yes (at paragraph level); co-training approach applied in the feedback phase.
york05ha2
- Run ID: york05ha2
- Participant: yorku.huang
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Model: probability model; weighting: BM25; indexing: Okapi standard indexing plus the dual index; clarification-form results applied; feedback: yes (at paragraph level); co-training approach applied in the feedback phase. Familiarity module applied, trained from the CF results.
york05ha3
- Run ID: york05ha3
- Participant: yorku.huang
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Model: probability model; weighting: BM25; indexing: Okapi standard indexing plus the dual index; feedback: yes (at paragraph level).
york05ha4
- Run ID: york05ha4
- Participant: yorku.huang
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Model: probability model; weighting: BM25; indexing: Okapi standard indexing plus the dual index; clarification-form results applied; feedback: yes (at paragraph level).
york05ha5
- Run ID: york05ha5
- Participant: yorku.huang
- Track: HARD
- Year: 2005
- Submission: 8/8/2005
- Type: automatic
- Task: final
- Run description: Model: probability model; weighting: BM25; indexing: Okapi standard indexing plus the dual index; clarification-form results applied; feedback: yes (at paragraph level); co-training approach applied in the feedback phase. Familiarity module (Function 2) applied, trained from the CF results.
york05hb1
- Run ID: york05hb1
- Participant: yorku.huang
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: Model: probability model; indexing: Okapi 2.31 standard indexing; feedback: no; weighting: BM25; search level: documents, without using the merge function.
york05hb2
- Run ID: york05hb2
- Participant: yorku.huang
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: Model: probability model; indexing: Okapi standard indexing plus the dual index; feedback: no; weighting: BM25; search level: paragraphs, with paragraph weights updated by a merge function for passage retrieval.
york05hb3
- Run ID: york05hb3
- Participant: yorku.huang
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Type: automatic
- Task: baseline
- Run description: Model: probability model; indexing: Okapi standard indexing plus the dual index; feedback: no; weighting: BM25; search level: documents, with document weights updated by a merge function for document retrieval.
YORK1
- Run ID: YORK1
- Participant: yorku.huang
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: CF forms are automatically generated from the base run "yorkhb1". No candidate terms are generated.
YORK2
- Run ID: YORK2
- Participant: yorku.huang
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: CF forms are automatically generated from the base run "yorkhb2". At most 5 candidate terms are selected for each paragraph according to their selective values; the selective value is calculated by multiplying the term's entropy over the top 30 retrieved paragraphs by the term's frequency in each paragraph.
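One plausible formalization of that selective value (the exact entropy definition is not given in the description): with $P_i(t) = tf_{t,p_i} / \sum_{j=1}^{30} tf_{t,p_j}$ over the top 30 retrieved paragraphs,

$$\mathrm{sel}(t, p) \;=\; \Big(-\sum_{i=1}^{30} P_i(t)\,\log P_i(t)\Big)\cdot tf_{t,p}.$$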
YORK3
- Run ID: YORK3
- Participant: yorku.huang
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: CF forms are automatically generated from the base run "yorkhb1". No candidate terms are generated.
YORK4
- Run ID: YORK4
- Participant: yorku.huang
- Track: HARD
- Year: 2005
- Submission: 7/7/2005
- Task: clarification
- Run description: CF forms are automatically generated from the base run "yorkhb2". At most 5 candidate terms are selected for each paragraph according to their selective values; the selective value is calculated by multiplying the term's entropy over the top 30 retrieved paragraphs by the term's frequency in each paragraph.