Runs - Relevance Feedback 2008

Brown.A1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: Brown.A1
  • Participant: Brown_University
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: A
  • MD5: 0e5f5cdcad9fedcd62693e1c27daffdb
  • Run description: Indri system (language model paradigm), same parameterization as the "indri05AdmfS" run from the Terabyte'05 track, except for a minor change: mu set to 1700 instead of 1500, based on the additional training data available here. Other parameters (unchanged): dependence model component weights (0.8, 0.1, 0.1), proximity component smoothing parameter (4000), PRF parameters (N=10, k=50, feedback model weight=0.5).
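The Dirichlet-smoothed query likelihood at the heart of this Indri configuration can be sketched as follows. This is an illustrative sketch, not Indri's actual implementation; the function name and toy inputs are assumptions, and query terms are assumed to occur somewhere in the collection.

```python
from collections import Counter
from math import log

def dirichlet_score(query, doc, collection, mu=1700):
    """Log query likelihood of `doc` under Dirichlet smoothing:
    log p(q|d) = sum_w log( (tf(w,d) + mu * p(w|C)) / (|d| + mu) ).
    Assumes every query term occurs at least once in `collection`."""
    tf = Counter(doc)          # term frequencies in the document
    cf = Counter(collection)   # term frequencies in the collection
    clen = len(collection)
    return sum(
        log((tf[w] + mu * cf[w] / clen) / (len(doc) + mu))
        for w in query
    )
```

A document that actually contains the query term scores higher than one that relies purely on the collection background, with mu controlling how much weight the background gets.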

Brown.A2

  • Run ID: Brown.A2
  • Participant: Brown_University
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: A
  • MD5: 3d8527ae4f122eb567cba1992a3e27cf
  • Run description: Indri system (language model paradigm), Dirichlet smoothing mu=1700.

Brown.B1

  • Run ID: Brown.B1
  • Participant: Brown_University
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: 57f55cfe1f4bab80ddd6d7adb6a90422
  • Run description: ML estimation of the relevance distribution from the feedback document; the 150 most likely terms comprise the feedback unigram model, mixed with the ML query model at a feedback model weight of 0.3. Run via Indri, mixed with a sequential dependence model using the same parameterization as the baseline: component weights (0.8, 0.1, 0.1), proximity component smoothing parameter 4000. PRF used with parameterization N=10, k=50, feedback model weight=0.25.
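The feedback-model construction referenced in this run (an ML unigram estimate truncated to the most likely terms, then interpolated with the query model) can be sketched roughly as below. The helper names are illustrative assumptions, not the group's actual code.

```python
from collections import Counter

def feedback_model(feedback_doc, k=150):
    """ML unigram distribution restricted to the k most likely terms,
    renormalized so the truncated model sums to 1."""
    counts = Counter(feedback_doc)
    top = counts.most_common(k)
    total = sum(c for _, c in top)
    return {w: c / total for w, c in top}

def mix(query_model, fb_model, fb_weight=0.3):
    """Interpolate: p(w) = (1 - fb_weight) * p(w|q) + fb_weight * p(w|fb)."""
    vocab = set(query_model) | set(fb_model)
    return {w: (1 - fb_weight) * query_model.get(w, 0.0)
               + fb_weight * fb_model.get(w, 0.0) for w in vocab}
```

The mixed distribution remains a proper probability distribution, so it can be plugged back into a query-likelihood ranker directly.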

Brown.B2

  • Run ID: Brown.B2
  • Participant: Brown_University
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: 3f053acd23ddf6664da7ddb6eb7ac0ea
  • Run description: ML estimation of the relevance distribution from the feedback document; the 250 most likely terms comprise the feedback unigram model, mixed with the ML query model at a feedback model weight of 0.3. Run via Indri. No sequential dependence model, no PRF.

Brown.C1

  • Run ID: Brown.C1
  • Participant: Brown_University
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: 94c5b4d89678d8ab74745e7a93b805e6
  • Run description: ML estimation of the relevance distribution from the positive feedback documents; negative feedback documents ignored. The 150 most likely terms comprise the feedback unigram model, mixed with the ML query model at a feedback model weight of 0.45. Run via Indri, mixed with a sequential dependence model using component weights (0.9, 0.05, 0.05); proximity component smoothing parameter 4000, same as the baseline. PRF used with parameterization N=10, k=50, feedback model weight=0.15.

Brown.C2

  • Run ID: Brown.C2
  • Participant: Brown_University
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: a909b6e8b5c928d8b205a07802331033
  • Run description: ML estimation of the relevance distribution from the positive feedback documents; negative feedback documents ignored. The 150 most likely terms comprise the feedback unigram model, mixed with the ML query model at a feedback model weight of 0.45. Run via Indri. No sequential dependence model, no PRF.

Brown.D1

  • Run ID: Brown.D1
  • Participant: Brown_University
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: D
  • MD5: 62ddd45b7ae81cbeaaf7056a3bd32632
  • Run description: Same model as used for Brown.C1. ML estimation of the relevance distribution from the positive feedback documents; negative feedback documents ignored. The 150 most likely terms comprise the feedback unigram model, mixed with the ML query model at a feedback model weight of 0.45. Run via Indri, mixed with a sequential dependence model using component weights (0.9, 0.05, 0.05); proximity component smoothing parameter 4000, same as the baseline. PRF used with parameterization N=10, k=50, feedback model weight=0.15.

Brown.D2

  • Run ID: Brown.D2
  • Participant: Brown_University
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: D
  • MD5: c4ee6737e271c8bdc9573d6144db9818
  • Run description: Same strategy as used for Brown.C2. ML estimation of the relevance distribution from the positive feedback documents; negative feedback documents ignored. The 150 most likely terms comprise the feedback unigram model, mixed with the ML query model at a feedback model weight of 0.45. Run via Indri. No sequential dependence model, no PRF.

Brown.E1

  • Run ID: Brown.E1
  • Participant: Brown_University
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: E
  • MD5: 839f96603e29b641df2c0138d6230cc0
  • Run description: ML estimation of the relevance distribution from the positive feedback documents; negative feedback documents ignored. The 250 most likely terms comprise the feedback unigram model, mixed with the ML query model at a feedback model weight of 0.8. The dependence model and PRF were not used. Run via Indri.

CMURF08.A1

  • Run ID: CMURF08.A1
  • Participant: CMU-LTI-DIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: A
  • MD5: c5305d05a2993b3d30cc9a4295df3e5c
  • Run description: Dependency model initial query; stopwords included in queries; Relevance Model PRF with the top 10 docs and top 50 terms from the initial result. Phrase smoothing mu=4000, term smoothing mu=1700; expanded query (weight 0.3) interpolated with the original dependency model query (weight 0.7).
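A Relevance Model (RM1-style) estimate of the kind referenced here weights each term's in-document probability by the document's query likelihood, then truncates to the top terms and renormalizes. The following is a minimal sketch under standard RM1 assumptions; the function name and toy inputs are not the group's actual code.

```python
from collections import Counter

def relevance_model(pseudo_docs, doc_scores, k=50):
    """RM1-style estimate: p(w|R) proportional to sum_d p(w|d) * p(q|d),
    truncated to the top k terms and renormalized."""
    rm = Counter()
    for doc, qlik in zip(pseudo_docs, doc_scores):
        tf = Counter(doc)
        for w, c in tf.items():
            rm[w] += (c / len(doc)) * qlik   # p(w|d) weighted by query likelihood
    top = rm.most_common(k)
    z = sum(v for _, v in top)
    return {w: v / z for w, v in top}
```

The resulting distribution would then be interpolated with the original query (here at weights 0.3 / 0.7) exactly as in simpler feedback models.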

CMURF08.B1

  • Run ID: CMURF08.B1
  • Participant: CMU-LTI-DIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: 6fb278aafc2e0018e4a7630067310da3
  • Run description: Dependency model initial query; stopwords excluded from queries; extended Relevance Model for scoring feedback terms, with weight 0.7 on the known relevant document and 0.3 on the pseudo-relevant top 10 docs (excluding the known relevant); top 50 terms expanded. Phrase smoothing mu=4000, term smoothing mu=1700; expanded query (weight 0.3) interpolated with the original dependency model query (weight 0.7).

CMURF08.B2

  • Run ID: CMURF08.B2
  • Participant: CMU-LTI-DIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: 4fe57ebf8c0fbc5c385ff8405e37ce6d
  • Run description: Dependency model initial query; stopwords included in queries; extended Relevance Model for scoring feedback terms, with weight 0.8 on the known relevant document and 0.2 on the pseudo-relevant top 10 docs (excluding the known relevant); top 50 terms expanded. Phrase smoothing mu=4000, term smoothing mu=1700; expanded query (weight 0.3) interpolated with the original dependency model query (weight 0.7).

CMURF08.C1

  • Run ID: CMURF08.C1
  • Participant: CMU-LTI-DIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: ffaaab8970670088f4256d411300595c
  • Run description: Dependency model initial query; stopwords excluded from queries; extended Relevance Model for scoring feedback terms, with weight 0.8 on the known relevant document and 0.2 on the pseudo-relevant top 10 docs (excluding the known relevant); top 50 terms expanded. Phrase smoothing mu=4000, term smoothing mu=1700; expanded query (weight 0.3) interpolated with the original dependency model query (weight 0.7).

CMURF08.C2

  • Run ID: CMURF08.C2
  • Participant: CMU-LTI-DIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: 24b3bd7d3533931608c61e2075950d24
  • Run description: Dependency model initial query; stopwords excluded from queries; extended Relevance Model for scoring feedback terms, with weight 0.8 on the known relevant document and 0.2 on the pseudo-relevant top 12 docs (excluding the known relevant); top 50 terms expanded. Phrase smoothing mu=4000, term smoothing mu=1700; expanded query (weight 0.3) interpolated with the original dependency model query (weight 0.7).

CMURF08.D1

  • Run ID: CMURF08.D1
  • Participant: CMU-LTI-DIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: D
  • MD5: ebccf077298e031e401dcfa8c129ed31
  • Run description: Dependency model initial query; stopwords excluded from queries; extended Relevance Model for scoring feedback terms, with weight 0.8 on the known relevant document and 0.2 on the pseudo-relevant top 10 docs (excluding the known relevant); top 50 terms expanded. Phrase smoothing mu=4000, term smoothing mu=1700; expanded query (weight 0.3) interpolated with the original dependency model query (weight 0.7).

CMURF08.D2

  • Run ID: CMURF08.D2
  • Participant: CMU-LTI-DIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: D
  • MD5: 92756bbce68c05ce50f7ecf86e8cdfc7
  • Run description: Dependency model initial query; stopwords excluded from queries; extended Relevance Model for scoring feedback terms, with weight 0.8 on the known relevant document and 0.2 on the pseudo-relevant top 12 docs (excluding the known relevant); top 50 terms expanded. Phrase smoothing mu=4000, term smoothing mu=1700; expanded query (weight 0.3) interpolated with the original dependency model query (weight 0.7).

CMURF08.E1

  • Run ID: CMURF08.E1
  • Participant: CMU-LTI-DIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: E
  • MD5: ee5fd11bbdf9da5cdeeec4e5e2b05d6a
  • Run description: Dependency model initial query; stopwords excluded from queries; extended Relevance Model for scoring feedback terms, with weight 0.8 on the known relevant document and 0.2 on the pseudo-relevant top 10 docs (excluding the known relevant); top 50 terms expanded. Phrase smoothing mu=4000, term smoothing mu=1700; expanded query (weight 0.3) interpolated with the original dependency model query (weight 0.7).

DUTIRRF08.A1

  • Run ID: DUTIRRF08.A1
  • Participant: DUTIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/26/2008
  • Task: A
  • MD5: 5bf4ead8357247f0bb2c30342f2e1915
  • Run description: Our baseline. We used the Indri retrieval system for our experiments. This run is only a simple query likelihood run; we did not make any changes to the topics, just used the original topics.

DUTIRRF08.B1

  • Run ID: DUTIRRF08.B1
  • Participant: DUTIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/26/2008
  • Task: B
  • MD5: fe554f249b15c9d234d14e0939c7f891
  • Run description: We use the local co-occurrence information in the relevant documents. The co-occurrence window is a sentence. The top 20 terms are selected to expand the original topics. The proportion of original topic terms to expansion terms is 0.8 to 0.2; the other runs use the same proportion.

DUTIRRF08.C1

  • Run ID: DUTIRRF08.C1
  • Participant: DUTIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/26/2008
  • Task: C
  • MD5: 399d8cad97874e4c6cf5f0091e389dbb
  • Run description: First, we use the Task B method to select 30 candidate terms. Because the non-relevant documents are available, we use a Rocchio formula to filter out the unhelpful terms.
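The Rocchio-style filtering mentioned here, keeping candidate terms whose relevant-document evidence outweighs their non-relevant-document evidence, might look roughly like the sketch below. The alpha/beta/gamma values are conventional Rocchio defaults, not values reported by the group, and the helper names are assumptions.

```python
def rocchio_weight(term, query_terms, rel_docs, nonrel_docs,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio-style weight for a candidate term: query evidence plus
    mean relative frequency in relevant docs, minus mean relative
    frequency in non-relevant docs."""
    q = alpha * (term in query_terms)
    r = beta * sum(d.count(term) / len(d) for d in rel_docs) / len(rel_docs)
    n = gamma * sum(d.count(term) / len(d) for d in nonrel_docs) / len(nonrel_docs)
    return q + r - n

def filter_terms(candidates, query_terms, rel_docs, nonrel_docs):
    """Keep only candidates with a positive Rocchio weight."""
    return [t for t in candidates
            if rocchio_weight(t, query_terms, rel_docs, nonrel_docs) > 0]
```

A term that appears mainly in non-relevant documents ends up with a non-positive weight and is dropped from the 30 candidates.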

DUTIRRF08.D1

  • Run ID: DUTIRRF08.D1
  • Participant: DUTIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/26/2008
  • Task: D
  • MD5: aba6d057d6d541dde93ffc2cd9d53536
  • Run description: The same method as Task C.

DUTIRRF08.E1

  • Run ID: DUTIRRF08.E1
  • Participant: DUTIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/26/2008
  • Task: E
  • MD5: 05f36183aa3b1cc5711b5ff2dafc1cf4
  • Run description: The same method as Task C, but with 70 expansion terms. This run examines the amount of improvement obtained with more relevance information.

FubRF08.A1

  • Run ID: FubRF08.A1
  • Participant: fub
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/27/2008
  • Task: A
  • MD5: c09d04b52b94d31f51c6e692dabd96c4
  • Run description: Baseline obtained using Terrier with the PL2 weighting model and parameter c=1.0.

FubRF08.A2

  • Run ID: FubRF08.A2
  • Participant: fub
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: A
  • MD5: 603409b2ad01de02ed39197b3f5adcba
  • Run description: We used Terrier with a modified query expansion model (Pl2 and DFR Bo1 ) For each topic the query is expanded using - not relevant documents (score 0) - highly relevant documents (score 2) The expansion terms are - 100 terms with a "positive weight" from the highly relevant (score 2) documents. The terms selected have the most divergent frequency from the frequency of the collection - 100 terms with a "negative weight" from the not relevant documents (score 0). The terms selected have the most divergent frequency from the frequency of the collection.
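The "most divergent frequency from the collection frequency" criterion corresponds to a divergence-from-randomness expansion score such as Terrier's Bo1 model. Below is a sketch assuming the standard Bo1 (Bose-Einstein) formula, not necessarily fub's exact modification; the function name is an assumption.

```python
from math import log2

def bo1_weight(tf_in_feedback, coll_freq, n_docs):
    """Bo1 divergence-from-randomness score for an expansion term:
    large when the term is frequent in the feedback documents but
    rare in the collection overall."""
    p_n = coll_freq / n_docs   # expected term frequency per document
    return tf_in_feedback * log2((1 + p_n) / p_n) + log2(1 + p_n)
```

Scoring terms this way over the score-2 documents yields the positive-weight list; scoring over the score-0 documents yields the candidates for the negative-weight list.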

FubRF08.B1

  • Run ID: FubRF08.B1
  • Participant: fub
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/27/2008
  • Task: B
  • MD5: 526508e9bc0708cc754c3d92e6fe37a6
  • Run description: We used Terrier with a modified query expansion model (PL2 and DFR Bo1). For each topic the query is expanded using the non-relevant documents (score 0) and the highly relevant documents (score 2). The expansion terms are: 100 terms with a "positive weight" from the highly relevant (score 2) documents, selected as those whose frequency diverges most from the collection frequency; and 30 terms with a "negative weight" from the non-relevant (score 0) documents, selected by the same criterion. The chosen negative terms must not appear in the list of positive-weight terms.

FubRF08.C1

  • Run ID: FubRF08.C1
  • Participant: fub
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/27/2008
  • Task: C
  • MD5: 363563746a094e3ba1b699b5bf0336a9
  • Run description: We used Terrier with a modified query expansion model (PL2 and DFR Bo1). For each topic the query is expanded using the non-relevant documents (score 0) and the highly relevant documents (score 2). The expansion terms are: 100 terms with a "positive weight" from the highly relevant (score 2) documents, selected as those whose frequency diverges most from the collection frequency; and 30 terms with a "negative weight" from the non-relevant (score 0) documents, selected by the same criterion. The chosen negative terms must not appear in the list of positive-weight terms.

FubRF08.C2

  • Run ID: FubRF08.C2
  • Participant: fub
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: 56b6aa96a74d2ccbbc13650a24c3ee21
  • Run description: We used Terrier with a modified query expansion model (PL2 and DFR Bo1). For each topic the query is expanded using the non-relevant documents (score 0) and the highly relevant documents (score 2). The expansion terms are: 100 terms with a "positive weight" from the highly relevant (score 2) documents, selected as those whose frequency diverges most from the collection frequency; and 100 terms with a "negative weight" from the non-relevant (score 0) documents, selected by the same criterion.

FubRF08.D1

  • Run ID: FubRF08.D1
  • Participant: fub
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/27/2008
  • Task: D
  • MD5: 5fe88a55af599e7b98393cf6699249ef
  • Run description: We used Terrier with a modified query expansion model (PL2 and DFR Bo1). For each topic the query is expanded using the non-relevant documents (score 0) and the highly relevant documents (score 2). The expansion terms are: 100 terms with a "positive weight" from the highly relevant (score 2) documents, selected as those whose frequency diverges most from the collection frequency; and 30 terms with a "negative weight" from the non-relevant (score 0) documents, selected by the same criterion. The chosen negative terms must not appear in the list of positive-weight terms.

FubRF08.D2

  • Run ID: FubRF08.D2
  • Participant: fub
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: D
  • MD5: 7ea22821ed15aa8eb24b9e0ef6217efc
  • Run description: We used Terrier with a modified query expansion model (PL2 and DFR Bo1). For each topic the query is expanded using the non-relevant documents (score 0) and the highly relevant documents (score 2). The expansion terms are: 100 terms with a "positive weight" from the highly relevant (score 2) documents, selected as those whose frequency diverges most from the collection frequency; and 100 terms with a "negative weight" from the non-relevant (score 0) documents, selected by the same criterion.

FubRF08.E1

  • Run ID: FubRF08.E1
  • Participant: fub
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/27/2008
  • Task: E
  • MD5: 41d933ca9a9c0386939d39306e4c3389
  • Run description: We used Terrier with a modified query expansion model (PL2 and DFR Bo1). For each topic the query is expanded using the non-relevant documents (score 0) and the highly relevant documents (score 2). The expansion terms are: 100 terms with a "positive weight" from the highly relevant (score 2) documents, selected as those whose frequency diverges most from the collection frequency; and 30 terms with a "negative weight" from the non-relevant (score 0) documents, selected by the same criterion. The chosen negative terms must not appear in the list of positive-weight terms.

FubRF08.E2

  • Run ID: FubRF08.E2
  • Participant: fub
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: E
  • MD5: f9e6f3b23338bbd97511c0b32303cc9c
  • Run description: We used Terrier with a modified query expansion model (PL2 and DFR Bo1). For each topic the query is expanded using the non-relevant documents (score 0) and the highly relevant documents (score 2). The expansion terms are: 100 terms with a "positive weight" from the highly relevant (score 2) documents, selected as those whose frequency diverges most from the collection frequency; and 100 terms with a "negative weight" from the non-relevant (score 0) documents, selected by the same criterion.

HitRF08.A1

  • Run ID: HitRF08.A1
  • Participant: HIT2
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/27/2008
  • Task: A
  • MD5: d2930ebb86afacd0b8277fd43e38165d
  • Run description: Indri is used to generate the baseline run.

HitRF08.B1

  • Run ID: HitRF08.B1
  • Participant: HIT2
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/27/2008
  • Task: B
  • MD5: 66082ffccbe36eeb67d280e64ef6fbdc
  • Run description: One relevant document is used by Indri.

HitRF08.C1

  • Run ID: HitRF08.C1
  • Participant: HIT2
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/27/2008
  • Task: C
  • MD5: 7d0988411bd2a54c73f90da3e610592c
  • Run description: Three relevant documents and three non-relevant documents are used by Indri.

HitRF08.D1

  • Run ID: HitRF08.D1
  • Participant: HIT2
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/27/2008
  • Task: D
  • MD5: 5233760a34aee7cc5a38d75838b7f5d0
  • Run description: Ten judged documents are used by Indri.

HitRF08.E1

  • Run ID: HitRF08.E1
  • Participant: HIT2
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/27/2008
  • Task: E
  • MD5: 8f361b248eb4e26b9d0678d60936205c
  • Run description: A large number of judged documents is used by Indri.

HKPU.A1

  • Run ID: HKPU.A1
  • Participant: HKPU
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/27/2008
  • Task: A
  • MD5: fe08c8207cc0dedd63b6edad027d03ca
  • Run description: We used the Terabyte track 2004, 2005, and 2006 topics to calibrate the system, so we suspect it is partially, though not fully, trained on the test topics. The system combines the passage scores by fuzzy disjunction.

HKPU.B1

  • Run ID: HKPU.B1
  • Participant: HKPU
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: a1605f290f7ba9a81cfd1c1f74837329
  • Run description: We use a document-context dependent term weight with RF. The system was calibrated by 5 even-numbered queries (802, 804, 806, 808, 810).

HKPU.B2

  • Run ID: HKPU.B2
  • Participant: HKPU
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: B
  • MD5: c498fe3f44d9551f4b2af1a65ee6f295
  • Run description: Used the document-context based retrieval model which is calibrated using 5 odd-numbered queries.

HKPU.C1

  • Run ID: HKPU.C1
  • Participant: HKPU
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: 0e8f864b6b89b736c28a1d80cff9de33
  • Run description: Used the document-context based term weights. The system was calibrated using 5 even-numbered queries (802, 804, 806, 808, and 810).

HKPU.C2

  • Run ID: HKPU.C2
  • Participant: HKPU
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: C
  • MD5: 148989c43937e7954a85c975a6bf7b3b
  • Run description: Used the document-context based retrieval model that was trained on 5 odd-numbered queries.

HKPU.D1

  • Run ID: HKPU.D1
  • Participant: HKPU
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: D
  • MD5: 0a6eea128c3b20cd162a990bd2dce8f4
  • Run description: We used our document-context based retrieval model that was calibrated by the odd number queries.

HKPU.E1

  • Run ID: HKPU.E1
  • Participant: HKPU
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: E
  • MD5: c24038414e0ea1cafbe08e6c67b97d03
  • Run description: We used 5 odd-numbered queries for training our document-context based retrieval model.

IowaSRF08.A1

  • Run ID: IowaSRF08.A1
  • Participant: UIowa-Srinivasan
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: A
  • MD5: 280f60e47b06f5cc78f0267fc9d5fd2f
  • Run description: The queries were expanded using the top ten retrieved documents and no relevance feedback documents. Five expansion terms were determined by their TFIDF scores within the group of retrieved documents. The terms were exponentially discounted and query-length discounted (the shorter the query, the less emphasis on the expansion terms). Querying was done using Lucene, with re-ranking of the results using a modified Okapi.
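The exponential and query-length discounting described for these runs could be sketched as follows. The particular decay constant and the length-scaling function are illustrative assumptions, not the group's reported choices.

```python
def expansion_weights(terms, query_len, decay=0.5):
    """Weight ranked expansion terms: the i-th term gets decay**i,
    scaled by query_len / (query_len + 1) so that shorter queries
    place less emphasis on expansion terms."""
    scale = query_len / (query_len + 1)
    return {t: scale * decay ** i for i, t in enumerate(terms)}
```

With a three-word query, the top expansion term would carry weight 0.75 and the second 0.375, decaying geometrically from there.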

IowaSRF08.B1

  • Run ID: IowaSRF08.B1
  • Participant: UIowa-Srinivasan
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: b58c81f4308c907a7afb86739aeecc81
  • Run description: The queries were expanded using the top ten retrieved documents and one relevance feedback document. Five expansion terms were determined by their TFIDF scores within the group of RF documents. The terms were exponentially discounted and query-length discounted (the shorter the query, the less emphasis on the expansion terms). Querying was done using Lucene, with re-ranking of the results using a modified Okapi.

IowaSRF08.C1

  • Run ID: IowaSRF08.C1
  • Participant: UIowa-Srinivasan
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: c769866429b7c6d5649c5a201c9cb762
  • Run description: The queries were expanded using the top ten retrieved documents and three relevance feedback documents. Five expansion terms were determined by their TFIDF scores within the group of RF documents. The terms were exponentially discounted and query-length discounted (the shorter the query, the less emphasis on the expansion terms). Querying was done using Lucene, with re-ranking of the results using a modified Okapi.

IowaSRF08.D1

  • Run ID: IowaSRF08.D1
  • Participant: UIowa-Srinivasan
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: D
  • MD5: e3a17253eb5d84faf393a17ac364dd03
  • Run description: The queries were expanded using the top ten retrieved documents and ten relevance feedback documents. Five expansion terms were determined by their TFIDF scores within the group of RF documents. The terms were exponentially discounted and query-length discounted (the shorter the query, the less emphasis on the expansion terms). Querying was done using Lucene, with re-ranking of the results using a modified Okapi.

IowaSRF08.E1

  • Run ID: IowaSRF08.E1
  • Participant: UIowa-Srinivasan
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: E
  • MD5: cc51a76a52779d36d3a94ec6bd94aac6
  • Run description: The queries were expanded using the top ten retrieved documents and a large number of relevance feedback documents. Five expansion terms were determined by their TFIDF scores within the group of RF documents. The terms were exponentially discounted and query-length discounted (the shorter the query, the less emphasis on the expansion terms). Querying was done using Lucene, with re-ranking of the results using a modified Okapi.

pris.A1

  • Run ID: pris.A1
  • Participant: BUPT_pris_
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: A
  • MD5: 447de53bd9b185740ac9ad6f12991080
  • Run description: This is the baseline retrieval without relevance information.

pris.B1

  • Run ID: pris.B1
  • Participant: BUPT_pris_
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: 6eeeea2138d75e3d8eae9e21e863ac1d
  • Run description: This run is B with one relevant document.

pris.B2

  • Run ID: pris.B2
  • Participant: BUPT_pris_
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: 01950f41add0bf0cde7efc1c6a270964
  • Run description: This run is B with one relevant document.

pris.C1

  • Run ID: pris.C1
  • Participant: BUPT_pris_
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: fc2d894892b2f81260fea854ccedb143
  • Run description: This run is C with three relevant documents and three nonrelevant documents.

pris.C2

  • Run ID: pris.C2
  • Participant: BUPT_pris_
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: e05848fbe925a8a71a5e9cbfb5e51674
  • Run description: This run is C with three relevant documents, three nonrelevant documents.

pris.D1

  • Run ID: pris.D1
  • Participant: BUPT_pris_
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: D
  • MD5: bf64d16100675905e2aad7aa78c42307
  • Run description: This run is D with ten judged documents.

pris.D2

  • Run ID: pris.D2
  • Participant: BUPT_pris_
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: D
  • MD5: 43dc95034ff350522b77d67986cc6d49
  • Run description: This run is D with ten judged documents.

pris.E1

  • Run ID: pris.E1
  • Participant: BUPT_pris_
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: E
  • MD5: 8e43f54121616964b126faf4803ca572
  • Run description: This run is E with a large number of judged documents.

pris.E2

  • Run ID: pris.E2
  • Participant: BUPT_pris_
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: E
  • MD5: 62f814509d90db9a748f0b6d91db2fa4
  • Run description: Task E run, using a large number of judged documents.

RMIT08.A1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: RMIT08.A1
  • Participant: rmit
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: A
  • MD5: 5a65f733499acd725f84aa52e06c88dd
  • Run description: Baseline run

RMIT08.B1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: RMIT08.B1
  • Participant: rmit
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: B
  • MD5: 9937c69ada0d54955e6e992ced2415a6
  • Run description: Single relevant document run. Terms are selected by their TFxIDF score; the top 25 scoring terms per document are used.
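The TFxIDF term selection described in this run can be sketched as follows. This is a minimal illustration assuming a plain tf × log(N/df) weighting; the function name and signature are hypothetical, not RMIT's actual code:

```python
import math
from collections import Counter

def top_tfidf_terms(doc_terms, df, num_docs, k=25):
    """Rank a feedback document's terms by tf*idf and keep the top k.

    doc_terms: tokens of the relevant document
    df:        term -> document frequency in the collection
    num_docs:  total number of documents in the collection
    """
    tf = Counter(doc_terms)
    # tf * log(N / df); unseen terms get df = 1 so the log stays defined
    scores = {t: tf[t] * math.log(num_docs / df.get(t, 1)) for t in tf}
    return [t for t, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]
```

The selected terms would then be added to the original query for the feedback retrieval run.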

RMIT08.C1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: RMIT08.C1
  • Participant: rmit
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: C
  • MD5: 451f98e7e677acb0a372132b8b9feeea
  • Run description: Multi-document run. For each document, a set of top-scoring terms (using TFxIDF) was identified. Up to 25 terms that occur in more than one document's set of top terms were selected (preference given to terms that occur across the most documents).

RMIT08.C2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: RMIT08.C2
  • Participant: rmit
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: C
  • MD5: 7cae10b033210ccd4cdeb09539d943d5
  • Run description: Multi-document run. All relevant documents were joined into a single large document, and the TFxIDF score of terms was computed over that large document. The top 25 terms were then selected.
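The two multi-document strategies above (C1: prefer terms that appear in several documents' top-term sets; C2: score terms over one large joined document) can be sketched as follows, again assuming a simple tf × log(N/df) weighting; names and signatures are illustrative only:

```python
import math
from collections import Counter

def vote_select(per_doc_top_terms, k=25):
    """C1-style: among each document's top-term set, keep up to k terms
    that occur in more than one set, most widespread first."""
    votes = Counter()
    for terms in per_doc_top_terms:
        votes.update(set(terms))                 # one vote per document
    return [t for t, c in votes.most_common() if c > 1][:k]

def concat_select(docs, df, num_docs, k=25):
    """C2-style: join all relevant documents into one large document,
    then take the top-k terms by tf*idf of that joined document."""
    joined = Counter(t for d in docs for t in d)
    scores = {t: joined[t] * math.log(num_docs / df.get(t, 1)) for t in joined}
    return [t for t, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]
```

The voting variant rewards agreement across relevant documents, while the concatenation variant lets a term frequent in a single long document dominate.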

RMIT08.D1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: RMIT08.D1
  • Participant: rmit
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: D
  • MD5: 4520b083e2d736ce6cbe982a2d374ddc
  • Run description: Multi-document run. For each document, a set of top-scoring terms (using TFxIDF) was identified. Up to 25 terms that occur in more than one document's set of top terms were selected (preference given to terms that occur across the most documents). Same as C1, just with more relevant documents to choose from.

RMIT08.D2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: RMIT08.D2
  • Participant: rmit
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: D
  • MD5: bf4303844d8463dd7f891da3e85823e6
  • Run description: Multi-document run. All relevant documents were joined into a single large document, and the TFxIDF score of terms was computed over that large document. The top 25 terms were then selected. Same as C2, just with more relevant documents to choose from.

RMIT08.E1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: RMIT08.E1
  • Participant: rmit
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: E
  • MD5: ceac3b631b6be707510f9ee747d3a127
  • Run description: Multi-document run. For each document, a set of top-scoring terms (using TFxIDF) was identified. Up to 25 terms that occur in more than one document's set of top terms were selected (preference given to terms that occur across the most documents). Same as C1 and D1, just with more relevant documents.

RMIT08.E2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: RMIT08.E2
  • Participant: rmit
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: E
  • MD5: 5800a1d894b9e9871e928a5c34e29ce0
  • Run description: Multi-document run. All relevant documents were joined into a single large document, and the TFxIDF score of terms was computed over that large document. The top 25 terms were then selected. Same as C2 and D2, just with more relevant documents.

SabRF08.A1

Participants | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: SabRF08.A1
  • Participant: sabir.buckley
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: A
  • MD5: 4e994d8b88cb2d30e5e5744090865dc9
  • Run description: Base SMART Lnu.ltu run. No feedback.

SabRF08.B1

Participants | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: SabRF08.B1
  • Participant: sabir.buckley
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: 57906fab58d02583a120e2366f6ff5b0
  • Run description: Rocchio feedback with a,b,c = 16,16,8, using one relevant document. Expanded by the top 50 terms.

SabRF08.C1

Participants | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: SabRF08.C1
  • Participant: sabir.buckley
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: 495c3dcd4dc3bc0791a4355b3e934a22
  • Run description: Rocchio feedback with a,b,c = 16,16,8, using three relevant/nonrelevant documents. Expanded by the top 50 terms.

SabRF08.D1

Participants | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: SabRF08.D1
  • Participant: sabir.buckley
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: D
  • MD5: ac9984ed02a18fae5c8f3b8555e93800
  • Run description: Rocchio feedback with a,b,c = 16,16,8, using 10 judged documents. Expanded by the top 50 terms.

SabRF08.E1

Participants | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: SabRF08.E1
  • Participant: sabir.buckley
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: E
  • MD5: 89a8e0d48b54d50ae4eb0c0124451a1a
  • Run description: Rocchio feedback with a,b,c = 16,16,8, using full judgments. Expanded by the top 50 terms.
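All of the Rocchio runs above use the update q' = a·q + (b/|R|)·Σ d(rel) − (c/|N|)·Σ d(nonrel) with a,b,c = 16,16,8, then expand by the top 50 terms. A minimal sketch on sparse term-weight dictionaries follows; the pruning of negatively weighted terms and the exact selection of expansion terms are assumptions, not SMART's precise behavior:

```python
from collections import defaultdict

def rocchio(query_vec, rel_docs, nonrel_docs, a=16.0, b=16.0, c=8.0, expand=50):
    """Rocchio feedback: boost the query toward the centroid of relevant
    documents and away from the centroid of nonrelevant ones."""
    new_q = defaultdict(float)
    for t, w in query_vec.items():
        new_q[t] += a * w
    for docs, sign, coef in ((rel_docs, 1.0, b), (nonrel_docs, -1.0, c)):
        for doc in docs:                      # skipped entirely if docs is empty
            for t, w in doc.items():
                new_q[t] += sign * coef * w / len(docs)
    # keep only positively weighted terms: original query terms
    # plus the top `expand` new terms (an assumed pruning rule)
    pos = {t: w for t, w in new_q.items() if w > 0}
    expansion = sorted((t for t in pos if t not in query_vec),
                       key=lambda t: -pos[t])[:expand]
    return {t: pos[t] for t in list(query_vec) + expansion if t in pos}
```

With equal a and b, one relevant document pulls as strongly as the original query itself, which matches the aggressive 16,16,8 setting.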

THUFB.A1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: THUFB.A1
  • Participant: THUIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: A
  • MD5: 07c4fcd1fa7a4d9d982129ae49686aa9
  • Run description: Baseline for all runs: THU's TMiner retrieval system with the BM2500 model.

THUFB.B1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: THUFB.B1
  • Participant: THUIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: 0bd088a3946ea0554d55fa17c8b1f18b
  • Run description: Query expansion; combination of results from the general queries and the expanded query.

THUFB.B2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: THUFB.B2
  • Participant: THUIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: 4b8435c82c3f8330390fa17ad5a6eb93
  • Run description: Result reranking using nearest-neighbor distance reweighting with feedback samples.

THUFB.C1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: THUFB.C1
  • Participant: THUIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: cf46293c9841939c80f0cece2ba37bcf
  • Run description: Query expansion; combination of results from the general queries and the expanded query.

THUFB.C2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: THUFB.C2
  • Participant: THUIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: 8a1ee63b60ef6968556203ff8311a16a
  • Run description: Result reranking using nearest-neighbor distance reweighting with feedback samples.

THUFB.D1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: THUFB.D1
  • Participant: THUIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: D
  • MD5: b0169ba1d12ddecb4607f7596f0cc5b4
  • Run description: Query expansion; combination of results from the general queries and the expanded query.

THUFB.D2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: THUFB.D2
  • Participant: THUIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: D
  • MD5: b65d3777c88355fc3880ffc323e8b3a3
  • Run description: Result reranking using nearest-neighbor distance reweighting with feedback samples.

THUFB.E1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: THUFB.E1
  • Participant: THUIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: E
  • MD5: 21f8e29044996f688b881f6508b444c7
  • Run description: Query expansion; combination of results from the general queries and the expanded query.

THUFB.E2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: THUFB.E2
  • Participant: THUIR
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: E
  • MD5: c3b5fb9274f5b7782096f58490a4fc33
  • Run description: Result reranking using nearest-neighbor distance reweighting with feedback samples.

uams08bl.A1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uams08bl.A1
  • Participant: UAms_De_Rijke
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: A
  • MD5: 71b94ff451c3945aef6e42aded670375
  • Run description: Baseline run using out-of-the-box Indri.

uams08bl.A2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uams08bl.A2
  • Participant: UAms_De_Rijke
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: A
  • MD5: 6a54e0c61341ff9695731a5c5b04e243
  • Run description: Baseline run using out-of-the-box Indri.

uams08m6.B1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uams08m6.B1
  • Participant: UAms_De_Rijke
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: 6911532ea3f19595c01795479aeb0211
  • Run description: Terms for query models are selected based on information from relevant and non-relevant documents (if available) and the background collection. Indri is used for the final retrieval run.

uams08m6.C1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uams08m6.C1
  • Participant: UAms_De_Rijke
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: 40ce5f72bbff3e586fa0e545a2904aa9
  • Run description: Terms for query models are selected based on information from relevant and non-relevant documents (if available) and the background collection. Indri is used for the final retrieval run.

uams08m6.D1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uams08m6.D1
  • Participant: UAms_De_Rijke
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: D
  • MD5: 666ce21a5059369982c398f91cad6b51
  • Run description: Terms for query models are selected based on information from relevant and non-relevant documents (if available) and the background collection. Indri is used for the final retrieval run.

uams08m6.E1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uams08m6.E1
  • Participant: UAms_De_Rijke
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: E
  • MD5: d36f9c06c6f4fe06335f9fe3fc2aea15
  • Run description: Terms for query models are selected based on information from relevant and non-relevant documents (if available) and the background collection. Indri is used for the final retrieval run.

uams08m9.B2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uams08m9.B2
  • Participant: UAms_De_Rijke
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: 37617e7e2184b15c84759085e129f857
  • Run description: Terms for query models are selected based on information from relevant documents and the background collection. Information from non-relevant documents is not used. Indri is used for the final retrieval run.

uams08m9.C2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uams08m9.C2
  • Participant: UAms_De_Rijke
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: 527b00270342bd95e55b842e8c93fd2a
  • Run description: Terms for query models are selected based on information from relevant documents and the background collection. Information from non-relevant documents is not used. Indri is used for the final retrieval run.

uams08m9.D2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uams08m9.D2
  • Participant: UAms_De_Rijke
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: D
  • MD5: e616a1d33ae97f38a9fb88f01a22d223
  • Run description: Terms for query models are selected based on information from relevant documents and the background collection. Information from non-relevant documents is not used. Indri is used for the final retrieval run.

uams08m9.E2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uams08m9.E2
  • Participant: UAms_De_Rijke
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: E
  • MD5: eb91e31afe49eea03323d77cdd4857e1
  • Run description: Terms for query models are selected based on information from relevant documents and the background collection. Information from non-relevant documents is not used. Indri is used for the final retrieval run.

UAmsR08CJ.B2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UAmsR08CJ.B2
  • Participant: UAmsterdam
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: bc8f29f9f3871a813fcc17ad486e86dc
  • Run description: Indri Language Model with Parsimonious Relevance Feedback using relevant and non-relevant documents, Pseudo Relevance Feedback (10 documents, 50 terms), and Jelinek-Mercer Smoothing.

UAmsR08CJ.C2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UAmsR08CJ.C2
  • Participant: UAmsterdam
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: 820a4e1b868b9b2a5832f9ce43763493
  • Run description: Indri Language Model with Parsimonious Relevance Feedback using relevant and non-relevant documents, Pseudo Relevance Feedback (10 documents, 50 terms), and Jelinek-Mercer Smoothing.

UAmsR08CJ.D2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UAmsR08CJ.D2
  • Participant: UAmsterdam
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: D
  • MD5: f063a98085981fdda0ed00925cc85169
  • Run description: Indri Language Model with Parsimonious Relevance Feedback using relevant and non-relevant documents, Pseudo Relevance Feedback (10 documents, 50 terms), and Jelinek-Mercer Smoothing.

UAmsR08CJ.E2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UAmsR08CJ.E2
  • Participant: UAmsterdam
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: E
  • MD5: 5ec38fdead60d6d6d37534ec8205e3ed
  • Run description: Indri Language Model with Parsimonious Relevance Feedback using relevant and non-relevant documents, Pseudo Relevance Feedback (10 documents, 50 terms), and Jelinek-Mercer Smoothing.

UAmsR08PD.A1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UAmsR08PD.A1
  • Participant: UAmsterdam
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: A
  • MD5: ae475d72bdf1a83e919271822a0da4f1
  • Run description: Indri Language Model with Pseudo Relevance Feedback (10 documents, 50 terms) and Dirichlet Smoothing
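Dirichlet smoothing, used in this run and in several Indri-based runs above, interpolates document and collection statistics: P(t|d) = (tf(t,d) + μ·P(t|C)) / (|d| + μ). A minimal query-likelihood scorer under that smoothing follows; the floor for terms unseen in the collection is an assumption for illustration, not Indri's implementation:

```python
import math
from collections import Counter

def dirichlet_score(query, doc, collection_tf, collection_len, mu=1500):
    """Log query likelihood of `doc` under Dirichlet smoothing:
    P(t|d) = (tf(t,d) + mu * P(t|C)) / (|d| + mu)."""
    tf = Counter(doc)
    dlen = len(doc)
    score = 0.0
    for t in query:
        # collection model P(t|C), with a tiny floor for unseen terms (assumed)
        p_c = collection_tf.get(t, 0.5) / collection_len
        score += math.log((tf[t] + mu * p_c) / (dlen + mu))
    return score
```

Larger μ leans harder on collection statistics, which is why the Brown runs above report tuning μ (1500 vs. 1700) against training data.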

UAmsR08PD.B1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UAmsR08PD.B1
  • Participant: UAmsterdam
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: d1f668438277c16fa340ac84062969d6
  • Run description: Indri Language Model with Parsimonious Relevance Feedback, Pseudo Relevance Feedback (10 documents, 50 terms), and Dirichlet Smoothing.

UAmsR08PD.C1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UAmsR08PD.C1
  • Participant: UAmsterdam
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: 68ed8733a0725edaac8d94a9b7bb13bc
  • Run description: Indri Language Model with Parsimonious Relevance Feedback, Pseudo Relevance Feedback (10 documents, 50 terms), and Dirichlet Smoothing.

UAmsR08PD.D1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UAmsR08PD.D1
  • Participant: UAmsterdam
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: D
  • MD5: 756ca5931a2ef69f19ed2765a89f008e
  • Run description: Indri Language Model with Parsimonious Relevance Feedback, Pseudo Relevance Feedback (10 documents, 50 terms), and Dirichlet Smoothing.

UAmsR08PD.E1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UAmsR08PD.E1
  • Participant: UAmsterdam
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: E
  • MD5: 39f1309d181434037a4482c4b28ac39e
  • Run description: Indri Language Model with Parsimonious Relevance Feedback, Pseudo Relevance Feedback (10 documents, 50 terms), and Dirichlet Smoothing.

UAmsR08PD.F1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UAmsR08PD.F1
  • Participant: UAmsterdam
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: F
  • MD5: 1db9c6cbc4f26107740cded2bf67cbde
  • Run description: Indri Language Model with Parsimonious Topical Feedback, Pseudo Relevance Feedback (10 documents, 50 terms), and Dirichlet Smoothing.

UIUC.A1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UIUC.A1
  • Participant: UIUC
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: A
  • MD5: 2c5011936327018e5654f4557a64f559
  • Run description: Language Model approach with Dirichlet smoothing method

UIUC.B1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UIUC.B1
  • Participant: UIUC
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: B
  • MD5: 140fc84926633d539210c2bdcc473b5a
  • Run description: Adaptive Relevance Feedback with Regularization

UIUC.B2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UIUC.B2
  • Participant: UIUC
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: B
  • MD5: 73669ce465355bde25a422ae7ef8ba84
  • Run description: Adaptive Relevance Feedback

UIUC.C1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UIUC.C1
  • Participant: UIUC
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: C
  • MD5: d62b263e22d351232f28ea818bd4e162
  • Run description: Adaptive Relevance Feedback with Regularization

UIUC.C2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UIUC.C2
  • Participant: UIUC
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: C
  • MD5: ebbd536325c68ed8f08aeb6c768a596c
  • Run description: Adaptive Relevance Feedback with Interpolation

UIUC.D1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UIUC.D1
  • Participant: UIUC
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: D
  • MD5: c083993889f6038bfc2987f672888626
  • Run description: Adaptive Relevance Feedback with Regularization

UIUC.D2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UIUC.D2
  • Participant: UIUC
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: D
  • MD5: 596bada51a1ba37575e16d7201f4cb60
  • Run description: Pure Adaptive Relevance Feedback without prior knowledge

UIUC.E1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UIUC.E1
  • Participant: UIUC
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: E
  • MD5: 1ee313ec14fc1aac462d208f01fbf067
  • Run description: Adaptive Relevance Feedback with Regularization

UIUC.E2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: UIUC.E2
  • Participant: UIUC
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: E
  • MD5: bbc6ec8a6f2f5438167e510d734cce43
  • Run description: Adaptive Relevance Feedback plus Pseudo Feedback

uogRF08.A1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uogRF08.A1
  • Participant: UoGtr
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: A
  • MD5: 4135f3e9ff3c0dadf238f35d2e7885b7
  • Run description: pseudo relevance feedback using surrogates

uogRF08.A2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uogRF08.A2
  • Participant: UoGtr
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: A
  • MD5: c3e88718365abdebd0f18bdbb835ce2e
  • Run description: pseudo relevance feedback

uogRF08.B1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uogRF08.B1
  • Participant: UoGtr
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: 04c52a68093fd6fe30ea8e6905de6816
  • Run description: positive relevance feedback using surrogates

uogRF08.B2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uogRF08.B2
  • Participant: UoGtr
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: B
  • MD5: 982706ae9fc28d1b33884f01c69d626d
  • Run description: positive relevance feedback

uogRF08.C1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uogRF08.C1
  • Participant: UoGtr
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: e2fd0fe5c82d9aeac643bbe1c798224c
  • Run description: positive relevance feedback using surrogates

uogRF08.C2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uogRF08.C2
  • Participant: UoGtr
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/28/2008
  • Task: C
  • MD5: 7fcb54ea54a996fa6ed3b6f3e7cfe03c
  • Run description: positive relevance feedback

uogRF08.D1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uogRF08.D1
  • Participant: UoGtr
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: D
  • MD5: 0691a26df01b9b5c826cb378cf549458
  • Run description: positive relevance feedback using surrogates

uogRF08.D2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uogRF08.D2
  • Participant: UoGtr
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: D
  • MD5: b9e0532f9905773cb3d9bb38048160b3
  • Run description: positive relevance feedback

uogRF08.E1

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uogRF08.E1
  • Participant: UoGtr
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: E
  • MD5: 9a475222366b96270c88e0d59fb24e8d
  • Run description: positive relevance feedback using surrogates

uogRF08.E2

Participants | Proceedings | Summary (mtc) | Summary (statAP) | Summary (top10) | Appendix

  • Run ID: uogRF08.E2
  • Participant: UoGtr
  • Track: Relevance Feedback
  • Year: 2008
  • Submission: 8/29/2008
  • Task: E
  • MD5: 2d86e588012f14bea936314373519b24
  • Run description: positive relevance feedback