Runs - Common Core 2017
ICT17ZCJL01
- Run ID: ICT17ZCJL01
- Participant: ICTNET
- Track: Common Core
- Year: 2017
- Submission: 6/16/2017
- Type: automatic
- Task: main
- MD5: 8fdc4809978c33a122d4be65c90074a4
- Run description: Title-only run using a Solr-based retrieval framework.
ICT17ZCJL02
- Run ID: ICT17ZCJL02
- Participant: ICTNET
- Track: Common Core
- Year: 2017
- Submission: 6/16/2017
- Type: automatic
- Task: main
- MD5: 5ff4a9d1a8982d195a89a68c57c17d7a
- Run description: Title-only run using a Solr-based retrieval framework; some words from the corresponding description field were added manually to the query word bags.
ICT17ZCJL03
- Run ID: ICT17ZCJL03
- Participant: ICTNET
- Track: Common Core
- Year: 2017
- Submission: 6/16/2017
- Type: automatic
- Task: main
- MD5: 60eeb9d21b0f96477937954ca50a69e1
- Run description: Query expansion using the Google News corpus, on top of ICT17ZCJL01.
ICT17ZCJL05
- Run ID: ICT17ZCJL05
- Participant: ICTNET
- Track: Common Core
- Year: 2017
- Submission: 7/25/2017
- Type: automatic
- Task: main
- MD5: f0ee68fc20fb4b2bad6bde71f36df789
- Run description: Automatic query expansion for the Solr search engine using word2vec vectors trained on the corpus (see the sketch below).
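A minimal sketch of the word2vec-style expansion this description points at, using gensim as a stand-in for whatever tooling ICTNET actually used; the model file name, neighbour count, and query format are illustrative assumptions:

```python
from gensim.models import Word2Vec

# Hypothetical word2vec model trained on the track corpus (assumed path).
model = Word2Vec.load("corpus_w2v.model")

def expand_query(terms, topn=3):
    """Append each query term's topn nearest neighbours to the word bag."""
    expanded = list(terms)
    for term in terms:
        if term in model.wv:  # skip out-of-vocabulary terms
            expanded.extend(w for w, _ in model.wv.most_similar(term, topn=topn))
    return expanded
```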
ICT17ZCJL06
- Run ID: ICT17ZCJL06
- Participant: ICTNET
- Track: Common Core
- Year: 2017
- Submission: 7/28/2017
- Type: automatic
- Task: main
- MD5: 4757e30cfb4c6350ecb6e5621b6377ff
- Run description: Uses the description field of each topic.
- Code: https://github.com/Didiao1758/trecFinalProject
ICT17ZCJL07
- Run ID: ICT17ZCJL07
- Participant: ICTNET
- Track: Common Core
- Year: 2017
- Submission: 7/28/2017
- Type: automatic
- Task: main
- MD5: f1b248ea2252005e1f5cf4d5e317c6fa
- Run description: Trims the weights of the query terms.
- Code: https://github.com/Didiao1758/trecFinalProject
IlpsUvABoir
- Run ID: IlpsUvABoir
- Participant: UvA.ILPS
- Track: Common Core
- Year: 2017
- Submission: 6/14/2017
- Type: automatic
- Task: main
- MD5: 3c4b3fb20d1a1cd0093c28d5f35a8b25
- Run description: This run is generated via retrieval models in Indri, where the model itself and its parameters are optimized using Bayesian optimization. External resources include Indri, pybo, and the existing judgements for TREC Robust track topics.
IlpsUvANvsm
- Run ID: IlpsUvANvsm
- Participant: UvA.ILPS
- Track: Common Core
- Year: 2017
- Submission: 6/14/2017
- Type: automatic
- Task: main
- MD5: 5a62715bd638bb0a8c7ff21a26ccb708
- Run description: Title field only. This run is generated by a latent vector space method, named NVSM, currently under review at a conference. For topics 312 and 348, the method did not return a ranking, as its vocabulary is limited to the top 60k most frequent terms; for those topics it falls back to a QLM with Dirichlet smoothing (mu = 1000), as the TREC submission system otherwise does not allow submission. The URL below contains the source code of the method.
- Code: http://github.com/cvangysel/cuNVSM
IlpsUvAQlmNvsm
- Run ID: IlpsUvAQlmNvsm
- Participant: UvA.ILPS
- Track: Common Core
- Year: 2017
- Submission: 6/14/2017
- Type: automatic
- Task: main
- MD5: 34b4fe1e4c25cd7f7e08f9a849781656
- Run description: Title field only. This run is an unsupervised combination (per-topic standardised scores; see the sketch below) of a QLM with Dirichlet smoothing (mu = 1000) and a latent vector space method, named NVSM, currently under review at a conference. The URL below contains the source code of the method.
- Code: http://github.com/cvangysel/cuNVSM
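A sketch of the per-topic standardised-score combination the description outlines, assuming each run is a {doc_id: score} map for a single topic; this is the generic z-score fusion recipe, not the team's code:

```python
from statistics import mean, stdev

def standardise(scores):
    """Z-score one run's scores within a single topic."""
    mu, sigma = mean(scores.values()), stdev(scores.values())
    if sigma == 0:
        return {doc: 0.0 for doc in scores}
    return {doc: (s - mu) / sigma for doc, s in scores.items()}

def combine(qlm, nvsm):
    """Rank documents by the sum of their standardised scores."""
    z_qlm, z_nvsm = standardise(qlm), standardise(nvsm)
    docs = set(z_qlm) | set(z_nvsm)
    return sorted(docs, key=lambda d: z_qlm.get(d, 0.0) + z_nvsm.get(d, 0.0),
                  reverse=True)
```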
ims_bm25_td
- Run ID: ims_bm25_td
- Participant: BASELINE
- Track: Common Core
- Year: 2017
- Submission: 6/16/2017
- Type: automatic
- Task: main
- MD5: 6d94337befb9cd97b3218aafbfff8cf2
- Run description: Lucene 6.6.0 using components in their default configuration: tokenization (Lucene StandardTokenizer); stop list (Indri stop list); stemmer (Lucene Krovetz stemmer); IR model (Lucene BM25; see the sketch below). Used topic fields: title and description.
- Code: https://bitbucket.org/frrncl/trec-core-2017
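For reference, the BM25 scoring function behind this baseline fits in a few lines. A sketch, assuming Lucene's BM25Similarity defaults (k1 = 1.2, b = 0.75) and Lucene's smoothed IDF:

```python
import math

def bm25_term_score(tf, df, doc_len, avg_len, n_docs, k1=1.2, b=0.75):
    """BM25 contribution of one query term to one document's score.

    tf: term frequency in the document; df: its document frequency;
    n_docs: collection size; doc_len/avg_len: document and mean length.
    """
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))
```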
ims_cmbsum
- Run ID: ims_cmbsum
- Participant: ims-core
- Track: Common Core
- Year: 2017
- Submission: 6/17/2017
- Type: automatic
- Task: main
- MD5: 140705d3b7aa578f25c1232aab47d98c
- Run description: This run merges a Grid-of-Points (GoP) of 330 weak open-source baselines, created with Lucene 6.6.0 using components in their default configuration. The merging algorithm is (unsupervised) CombSUM with min-max normalization (see the sketch below). The systems in the GoP originate from a factorial combination of the following components: tokenization (StandardTokenizer); stop list (nostop, indri, lucene, smart, snowball, terrier); stemmer (nostem, krovetz, lovins, porter, 5grams); IR model (bm25, dfichi, dfiis, dfrinb2, dfrinexpb2, dfrinl2, iblgd, ibspl, lmd, lmjm, lucene). Used topic fields: title and description.
- Code: https://bitbucket.org/frrncl/trec-core-2017
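A minimal sketch of CombSUM with min-max normalisation for one topic; runs are modelled as {doc_id: score} maps, which is an assumption about data layout rather than the group's implementation:

```python
def min_max(scores):
    """Normalise one run's scores to [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {doc: 0.0 for doc in scores}
    return {doc: (s - lo) / (hi - lo) for doc, s in scores.items()}

def combsum(runs):
    """Fuse runs by summing their min-max-normalised scores per document."""
    fused = {}
    for run in runs:
        for doc, s in min_max(run).items():
            fused[doc] = fused.get(doc, 0.0) + s
    return sorted(fused, key=fused.get, reverse=True)
```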
ims_dfrinl2_td
- Run ID: ims_dfrinl2_td
- Participant: BASELINE
- Track: Common Core
- Year: 2017
- Submission: 6/16/2017
- Type: automatic
- Task: main
- MD5: 0d760ffdad860bc57da682be3608d260
- Run description: Lucene 6.6.0 using components in their default configuration: tokenization (Lucene StandardTokenizer); stop list (Indri stop list); stemmer (Lucene Krovetz stemmer); IR model (Lucene DFR with the inverse document frequency model, Laplace's law of succession after-effect, and normalisation 2). Used topic fields: title and description.
- Code: https://bitbucket.org/frrncl/trec-core-2017
ims_wcmbsum_ap
- Run ID: ims_wcmbsum_ap
- Participant: ims-core
- Track: Common Core
- Year: 2017
- Submission: 6/17/2017
- Type: automatic
- Task: main
- MD5: be132b040346acdf98bbe02b23571304
- Run description: This run merges a Grid-of-Points (GoP) of 330 weak open-source baselines, created with Lucene 6.6.0 using components in their default configuration. The merging algorithm is weighted CombSUM with min-max normalization (see the sketch below): for each topic, each contributing run is weighted by its AP on that topic, computed by scoring the same set of 330 systems on the TREC 2004 Robust track (TREC 13). The systems in the GoP originate from a factorial combination of the following components: tokenization (StandardTokenizer); stop list (nostop, indri, lucene, smart, snowball, terrier); stemmer (nostem, krovetz, lovins, porter, 5grams); IR model (bm25, dfichi, dfiis, dfrinb2, dfrinexpb2, dfrinl2, iblgd, ibspl, lmd, lmjm, lucene). Used topic fields: title and description.
- Code: https://bitbucket.org/frrncl/trec-core-2017
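The weighted variant only scales each run's normalised scores by a per-topic weight (here the run's AP measured on the training collection). A sketch under the same data-layout assumptions as the CombSUM sketch above:

```python
def weighted_combsum(runs, weights):
    """runs: list of {doc_id: score} maps; weights: per-run AP for this topic."""
    fused = {}
    for run, weight in zip(runs, weights):
        lo, hi = min(run.values()), max(run.values())
        for doc, s in run.items():
            norm = (s - lo) / (hi - lo) if hi > lo else 0.0
            fused[doc] = fused.get(doc, 0.0) + weight * norm
    return sorted(fused, key=fused.get, reverse=True)
```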
ims_wcs_ap_uf
- Run ID: ims_wcs_ap_uf
- Participant: ims-core
- Track: Common Core
- Year: 2017
- Submission: 6/18/2017
- Type: automatic
- Task: main
- MD5: f32f11216ce0329e43d5c7e4e7632826
- Run description: This run adopts a two-level data fusion approach. The runs are created with Lucene 6.6.0 using components in their default configuration. In the first fusion level, we use the Lucene multi-scorer, an implementation of CombSUM without normalization, to merge 11 IR models (bm25, dfichi, dfiis, dfrinb2, dfrinexpb2, dfrinl2, iblgd, ibspl, lmd, lmjm, lucene); this level yields a mini Grid-of-Points (GoP) of 24 runs, originating from a factorial combination of the following components: tokenization (StandardTokenizer); stop list (nostop, indri, lucene, smart, snowball, terrier); stemmer (nostem, krovetz, lovins, porter); IR model (the Lucene multi-scorer above). In the second fusion level, we use weighted CombSUM with min-max normalization: for each topic, each contributing run is weighted by its AP on that topic, computed by scoring the same set of 24 systems on the TREC 2004 Robust track (TREC 13). In addition, we ensure that the unique documents are at the top of the result lists, ranked by their min-max normalized score weighted by the AP of the run returning them. Used topic fields: title and description.
- Code: https://bitbucket.org/frrncl/trec-core-2017
ims_wcs_err
- Run ID: ims_wcs_err
- Participant: ims-core
- Track: Common Core
- Year: 2017
- Submission: 6/22/2017
- Type: automatic
- Task: main
- MD5: d6cf75c81a8451a51acc07229ea4e03b
- Run description: This run merges a Grid-of-Points (GoP) of 330 weak open-source baselines, created with Lucene 6.6.0 using components in their default configuration. The merging algorithm is weighted CombSUM with min-max normalization: for each topic, each contributing run is weighted by its ERR on that topic, computed by scoring the same set of 330 systems on the TREC 2004 Robust track (TREC 13). The systems in the GoP originate from a factorial combination of the following components: tokenization (StandardTokenizer); stop list (nostop, indri, lucene, smart, snowball, terrier); stemmer (nostem, krovetz, lovins, porter, 5grams); IR model (bm25, dfichi, dfiis, dfrinb2, dfrinexpb2, dfrinl2, iblgd, ibspl, lmd, lmjm, lucene). Used topic fields: title and description.
- Code: https://bitbucket.org/frrncl/trec-core-2017
ims_wcs_ndcg
- Run ID: ims_wcs_ndcg
- Participant: ims-core
- Track: Common Core
- Year: 2017
- Submission: 6/20/2017
- Type: automatic
- Task: main
- MD5: 9f81dc1fe31b6ef395d19f37a9e31024
- Run description: This run merges a Grid-of-Points (GoP) of 330 weak open-source baselines, created with Lucene 6.6.0 using components in their default configuration. The merging algorithm is weighted CombSUM with min-max normalization: for each topic, each contributing run is weighted by its nDCG on that topic, computed by scoring the same set of 330 systems on the TREC 2004 Robust track (TREC 13). The systems in the GoP originate from a factorial combination of the following components: tokenization (StandardTokenizer); stop list (nostop, indri, lucene, smart, snowball, terrier); stemmer (nostem, krovetz, lovins, porter, 5grams); IR model (bm25, dfichi, dfiis, dfrinb2, dfrinexpb2, dfrinl2, iblgd, ibspl, lmd, lmjm, lucene). Used topic fields: title and description.
- Code: https://bitbucket.org/frrncl/trec-core-2017
ims_wcs_p10
- Run ID: ims_wcs_p10
- Participant: ims-core
- Track: Common Core
- Year: 2017
- Submission: 6/20/2017
- Type: automatic
- Task: main
- MD5: bdde1cc93019274b0edea6252c97769b
- Run description: This run merges a Grid-of-Points (GoP) of 330 weak open-source baselines, created with Lucene 6.6.0 using components in their default configuration. The merging algorithm is weighted CombSUM with min-max normalization: for each topic, each contributing run is weighted by its P@10 on that topic, computed by scoring the same set of 330 systems on the TREC 2004 Robust track (TREC 13). The systems in the GoP originate from a factorial combination of the following components: tokenization (StandardTokenizer); stop list (nostop, indri, lucene, smart, snowball, terrier); stemmer (nostem, krovetz, lovins, porter, 5grams); IR model (bm25, dfichi, dfiis, dfrinb2, dfrinexpb2, dfrinl2, iblgd, ibspl, lmd, lmjm, lucene). Used topic fields: title and description.
- Code: https://bitbucket.org/frrncl/trec-core-2017
ims_wcs_rbp
- Run ID: ims_wcs_rbp
- Participant: ims-core
- Track: Common Core
- Year: 2017
- Submission: 6/21/2017
- Type: automatic
- Task: main
- MD5: f2fccc912ad8cca48e2673741d232457
- Run description: This run merges a Grid-of-Points (GoP) of 330 weak open-source baselines, created with Lucene 6.6.0 using components in their default configuration. The merging algorithm is weighted CombSUM with min-max normalization: for each topic, each contributing run is weighted by its RBP on that topic, computed by scoring the same set of 330 systems on the TREC 2004 Robust track (TREC 13). The systems in the GoP originate from a factorial combination of the following components: tokenization (StandardTokenizer); stop list (nostop, indri, lucene, smart, snowball, terrier); stemmer (nostem, krovetz, lovins, porter, 5grams); IR model (bm25, dfichi, dfiis, dfrinb2, dfrinexpb2, dfrinl2, iblgd, ibspl, lmd, lmjm, lucene). Used topic fields: title and description.
- Code: https://bitbucket.org/frrncl/trec-core-2017
ims_wcs_recall
- Run ID: ims_wcs_recall
- Participant: ims-core
- Track: Common Core
- Year: 2017
- Submission: 6/22/2017
- Type: automatic
- Task: main
- MD5: 38391420273cce4fc1c3a25bee954043
- Run description: This run merges a Grid-of-Points (GoP) of 330 weak open-source baselines, created with Lucene 6.6.0 using components in their default configuration. The merging algorithm is weighted CombSUM with min-max normalization: for each topic, each contributing run is weighted by its Recall on that topic, computed by scoring the same set of 330 systems on the TREC 2004 Robust track (TREC 13). The systems in the GoP originate from a factorial combination of the following components: tokenization (StandardTokenizer); stop list (nostop, indri, lucene, smart, snowball, terrier); stemmer (nostem, krovetz, lovins, porter, 5grams); IR model (bm25, dfichi, dfiis, dfrinb2, dfrinexpb2, dfrinl2, iblgd, ibspl, lmd, lmjm, lucene). Used topic fields: title and description.
- Code: https://bitbucket.org/frrncl/trec-core-2017
ims_wcs_rprec
- Run ID: ims_wcs_rprec
- Participant: ims-core
- Track: Common Core
- Year: 2017
- Submission: 6/23/2017
- Type: automatic
- Task: main
- MD5: f90af3c08bf7725f5d601c5fce8d4c6c
- Run description: This run merges a Grid-of-Points (GoP) of 330 weak open-source baselines, created with Lucene 6.6.0 using components in their default configuration. The merging algorithm is weighted CombSUM with min-max normalization: for each topic, each contributing run is weighted by its R-prec on that topic, computed by scoring the same set of 330 systems on the TREC 2004 Robust track (TREC 13). The systems in the GoP originate from a factorial combination of the following components: tokenization (StandardTokenizer); stop list (nostop, indri, lucene, smart, snowball, terrier); stemmer (nostem, krovetz, lovins, porter, 5grams); IR model (bm25, dfichi, dfiis, dfrinb2, dfrinexpb2, dfrinl2, iblgd, ibspl, lmd, lmjm, lucene). Used topic fields: title and description.
- Code: https://bitbucket.org/frrncl/trec-core-2017
ims_wcs_twist
- Run ID: ims_wcs_twist
- Participant: ims-core
- Track: Common Core
- Year: 2017
- Submission: 6/21/2017
- Type: automatic
- Task: main
- MD5: 88cbef1e08b952cc6b7141c75f1dc7c0
- Run description: This run merges a Grid-of-Points (GoP) of 330 weak open-source baselines, created with Lucene 6.6.0 using components in their default configuration. The merging algorithm is weighted CombSUM with min-max normalization: for each topic, each contributing run is weighted by its Twist on that topic, computed by scoring the same set of 330 systems on the TREC 2004 Robust track (TREC 13). The systems in the GoP originate from a factorial combination of the following components: tokenization (StandardTokenizer); stop list (nostop, indri, lucene, smart, snowball, terrier); stemmer (nostem, krovetz, lovins, porter, 5grams); IR model (bm25, dfichi, dfiis, dfrinb2, dfrinexpb2, dfrinl2, iblgd, ibspl, lmd, lmjm, lucene). Used topic fields: title and description.
- Code: https://bitbucket.org/frrncl/trec-core-2017
mpiik10e105akDT
- Run ID: mpiik10e105akDT
- Participant: MPIID5
- Track: Common Core
- Year: 2017
- Submission: 6/19/2017
- Type: automatic
- Task: main
- MD5: cc699865b1dd6c82c64da6acd0db53f2
- Run description: BM25 + Neural IR model trained on Robust04
- Code: https://github.com/khui/trec-core-track-17
mpiik10e111akDT
- Run ID: mpiik10e111akDT
- Participant: MPIID5
- Track: Common Core
- Year: 2017
- Submission: 6/19/2017
- Type: automatic
- Task: main
- MD5: e069e6170fc284ef8f9a9309b12a498a
- Run description: BM25 + Neural IR model trained on Robust04
- Code: https://github.com/khui/trec-core-track-17
mpiik15e74akDT
- Run ID: mpiik15e74akDT
- Participant: MPIID5
- Track: Common Core
- Year: 2017
- Submission: 6/19/2017
- Type: automatic
- Task: main
- MD5: 605286505917463b3fc79679d510e9f8
- Run description: BM25 + Neural IR model trained on Robust04
- Code: https://github.com/khui/trec-core-track-17
MRGrandrel
- Run ID: MRGrandrel
- Participant: MRG_UWaterloo
- Track: Common Core
- Year: 2017
- Submission: 6/7/2017
- Type: manual
- Task: main
- MD5: 099192cb38f2724bea93a847f5609959
- Run description: Manual run using the core engine of the TREC Total Recall track Baseline Model Implementation (BMI). Docs judged: 42,587 (170/topic); docs judged relevant: 30,124 (70.7%; 120/topic); total judging time: 64.1 hrs (15.4 min/topic; 5.4 sec/doc). The run consists of all judged-relevant documents, in random order, followed by the remaining documents ranked by the final learned model (see the sketch below). NOTE: although random order for the judged-relevant documents is perhaps not realistic, it affords us an unbiased statistical estimate of precision (and hence a lower bound on R) regardless of the pooling depth.
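How the submitted ranking is assembled can be pictured as below; this is a sketch of the description, not MRG_UWaterloo's code, and both inputs are assumed to be document-id sequences:

```python
import random

def build_run(judged_relevant, model_ranked):
    """Judged-relevant docs in random order, then the rest by model score."""
    head = list(judged_relevant)
    random.shuffle(head)
    seen = set(head)
    return head + [doc for doc in model_ranked if doc not in seen]
```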
MRGrankall
- Run ID: MRGrankall
- Participant: MRG_UWaterloo
- Track: Common Core
- Year: 2017
- Submission: 6/7/2017
- Type: manual
- Task: main
- MD5: eefdf96d526de80972af86cb238e87e8
- Run description: NOTE: this run differs from MRGrandrel and MRGrankrel in that manually judged-relevant documents are not explicitly placed at the top of the ranking; all documents are ranked by the final model. Manual run using the core engine of the TREC Total Recall track Baseline Model Implementation (BMI). Docs judged: 42,587 (170/topic); docs judged relevant: 30,124 (70.7%; 120/topic); total judging time: 64.1 hrs (15.4 min/topic; 5.4 sec/doc). The run consists of the top 10,000 documents, ranked by the final learned model.
MRGrankrel
- Run ID: MRGrankrel
- Participant: MRG_UWaterloo
- Track: Common Core
- Year: 2017
- Submission: 6/7/2017
- Type: manual
- Task: main
- MD5: a543e76c88bd41673803ab56822bb75e
- Run description: NOTE: this run differs from MRGrandrel only in the order of the judged-relevant documents. Manual run using the core engine of the TREC Total Recall track Baseline Model Implementation (BMI). Docs judged: 42,587 (170/topic); docs judged relevant: 30,124 (70.7%; 120/topic); total judging time: 64.1 hrs (15.4 min/topic; 5.4 sec/doc). The run consists of all judged-relevant documents, ranked by the final learned model, followed by the remaining documents, also ranked by the final learned model. NOTE: although post-hoc ranking of the judged-relevant documents is perhaps not realistic, it gives us the best opportunity to contribute relevant documents to the pool.
RMITFDMQEA1
- Run ID: RMITFDMQEA1
- Participant: RMIT
- Track: Common Core
- Year: 2017
- Submission: 6/18/2017
- Type: automatic
- Task: main
- MD5: 814b42ba1e47c7eb763011b95db50da5
- Run description: Automatic run over titles using FDM and RM3 query expansion.
RMITRBCUQVT5M1
- Run ID: RMITRBCUQVT5M1
- Participant: RMIT
- Track: Common Core
- Year: 2017
- Submission: 6/18/2017
- Type: manual
- Task: main
- MD5: 1fdeb1b496dd69e80a01f938dbd518be
- Run description: A fused run using RBC over several hundred user-generated queries for each information need. The top five user query variations were fused to produce the final run (see the sketch below). Topics were run using SDM+RM3 and Okapi BM25. The "best" five were chosen based on their performance on the old TREC collection.
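A sketch of rank-biased fusion in the spirit of RBC: each document accumulates a geometrically decaying weight from each variation's ranking. The persistence parameter phi and its value are illustrative, not the values RMIT used:

```python
def rbc_fuse(rankings, phi=0.9):
    """rankings: list of doc-id lists (best first), one per query variation."""
    score = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking):
            score[doc] = score.get(doc, 0.0) + (1 - phi) * phi ** rank
    return sorted(score, key=score.get, reverse=True)
```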
RMITUQVBestM2
- Run ID: RMITUQVBestM2
- Participant: RMIT
- Track: Common Core
- Year: 2017
- Submission: 6/18/2017
- Type: manual
- Task: main
- MD5: cbfa2b567c5fb42caabdd8c470f09b22
- Run description: Many user query variations were gathered from a group of users based on the original topic descriptions and narratives. Queries were then run using FDM+RM3. The "best" variation for each topic was selected based on performance on the old TREC collection.
sab17coreA
- Run ID: sab17coreA
- Participant: Sabir
- Track: Common Core
- Year: 2017
- Submission: 6/19/2017
- Type: automatic
- Task: main
- MD5: 526b9fd5e5ff56fec0bd5deee63e4a87
- Run description: Standard SMART Lnu.ltu vector run, based on full topic
sab17coreE1
- Run ID: sab17coreE1
- Participant: Sabir
- Track: Common Core
- Year: 2017
- Submission: 6/19/2017
- Type: automatic
- Task: main
- MD5: 30563768ce2c1227acee4cfae7303a21
- Run description: Constructed a heavily expanded query using Rocchio feedback, expanding with all terms from relevant documents in collection v45nocr that occur at least 3 times in the collection. Base indexing Lnu.ltu; Rocchio weights 0, 16, 16 (zero weight on the original query, equal weights on relevant and non-relevant documents; see the sketch below). Will be used as input to a later optimized-weights run.
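A sketch of Rocchio expansion with the stated weights (0 on the original query, 16 each on the relevant and non-relevant centroids); term vectors are modelled as {term: weight} maps and the minimum-occurrence filter is omitted:

```python
from collections import defaultdict

def rocchio(query, rel_docs, nonrel_docs, alpha=0.0, beta=16.0, gamma=16.0):
    """Rocchio: alpha*q + beta*centroid(rel) - gamma*centroid(nonrel)."""
    new_query = defaultdict(float)
    for term, w in query.items():
        new_query[term] += alpha * w
    for doc in rel_docs:
        for term, w in doc.items():
            new_query[term] += beta * w / len(rel_docs)
    for doc in nonrel_docs:
        for term, w in doc.items():
            new_query[term] -= gamma * w / len(nonrel_docs)
    return {term: w for term, w in new_query.items() if w > 0}
```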
sab17coreO1
- Run ID: sab17coreO1
- Participant: Sabir
- Track: Common Core
- Year: 2017
- Submission: 6/19/2017
- Type: automatic
- Task: main
- MD5: 4a69ff258596cf981ebcb6478722eda1
- Run description: 250-term queries heavily optimized using terms from relevant documents of v45nocr. These used the terms of sab17coreE1 as a pool of candidate terms and chose terms to maximize performance on v45nocr. This should be the equivalent of the 2005 Robust run sab05ror1.
sabchmergeav45
- Run ID: sabchmergeav45
- Participant: Sabir
- Track: Common Core
- Year: 2017
- Submission: 8/1/2017
- Type: automatic
- Task: main
- MD5: d11a02d068817883afdec380a8a4309e
- Run description: Optimized queries initially based on judgements from a collection merging v45 with AQUAINT, varying the number of top terms according to how well the optimized query of that length performed on the AQUAINT collection. Top terms are sorted by pure Rocchio feedback with no weight given to the original query (i.e., terms ordered by average weight in relevant docs minus average weight in non-relevant docs). These queries are then merged with a pure original ltu query (equal weights for both).
sabchoiceaqv45
- Run ID: sabchoiceaqv45
- Participant: Sabir
- Track: Common Core
- Year: 2017
- Submission: 8/1/2017
- Type: automatic
- Task: main
- MD5: 6487452f587534e22c9c79c8e2e5935e
- Run description: Optimized queries initially based on judgements from a collection merging v45 with AQUAINT, varying the number of top terms according to how well the optimized query of that length performed on the AQUAINT collection. Top terms are sorted by pure Rocchio feedback with no weight given to the original query (i.e., terms ordered by average weight in relevant docs minus average weight in non-relevant docs). If the base query (no expansion/optimization) performs better on AQUAINT than any optimized query, it is used instead.
sabchoicev45
- Run ID: sabchoicev45
- Participant: Sabir
- Track: Common Core
- Year: 2017
- Submission: 8/1/2017
- Type: automatic
- Task: main
- MD5: 1c1a64f2cadca751085877aa837d0d12
- Run description: Optimized queries based on v45 judgements, varying the number of terms according to how well the optimized query of that length performed on the AQUAINT collection (for the 33 queries with judgements on AQUAINT; 50 terms were used otherwise). If the base query (no expansion or optimization) performed better on AQUAINT, it was used instead.
sabmerge50aqv45
- Run ID: sabmerge50aqv45
- Participant: Sabir
- Track: Common Core
- Year: 2017
- Submission: 8/1/2017
- Type: automatic
- Task: main
- MD5: 832dbe9188d6ff1a10689cf141f942ca
- Run description: Optimized queries initially based on judgements from a collection merging v45 with AQUAINT; all queries used the top 50 terms from pure Rocchio feedback with no weight given to the original query (i.e., terms ordered by average weight in relevant docs minus average weight in non-relevant docs). These queries are then merged with a pure original ltu query (equal weights for both).
sabopt50av45
- Run ID: sabopt50av45
- Participant: Sabir
- Track: Common Core
- Year: 2017
- Submission: 8/1/2017
- Type: automatic
- Task: main
- MD5: 3993cbda9ad143217f437d7df045af60
- Run description: Optimized queries based on judgements from a collection merging v45 with AQUAINT; all queries used the top 50 terms from pure Rocchio feedback with no weight given to the original query (i.e., terms ordered by average weight in relevant docs minus average weight in non-relevant docs).
sabopt50v45
- Run ID: sabopt50v45
- Participant: Sabir
- Track: Common Core
- Year: 2017
- Submission: 8/1/2017
- Type: automatic
- Task: main
- MD5: 23ea422184f0b859413ad402a9669863
- Run description: Optimized queries based on v45 judgements; all queries used the top 50 terms from pure Rocchio feedback with no weight given to the original query (i.e., terms ordered by average weight in relevant docs minus average weight in non-relevant docs).
tgncorpBASE
- Run ID: tgncorpBASE
- Participant: tgncorp
- Track: Common Core
- Year: 2017
- Submission: 6/11/2017
- Type: manual
- Task: main
- MD5: 61f166117ba80e24fedf9c853d4408b5
- Run description: Solr queries semi-automatically constructed from the topic descriptions and then augmented with information from WordNet and Wikipedia.
tgncorpBOOST
- Run ID: tgncorpBOOST
- Participant: tgncorp
- Track: Common Core
- Year: 2017
- Submission: 6/11/2017
- Type: manual
- Task: main
- MD5: fe08fd2dc25bb16b7d8add3582c508bd
- Run description: Solr queries semi-automatically constructed from the topic descriptions and then augmented with information from WordNet and Wikipedia. This run re-ranks based on the presence of auxiliary evidence.
udelIndri
- Run ID: udelIndri
- Participant: udel
- Track: Common Core
- Year: 2017
- Submission: 6/18/2017
- Type: automatic
- Task: main
- MD5: e7fd0826c62e5485a20dad36ea6894bb
- Run description: basic Indri run with default parameter settings
udelIndriB
- Run ID: udelIndriB
- Participant: udel
- Track: Common Core
- Year: 2017
- Submission: 6/18/2017
- Type: automatic
- Task: main
- MD5: abc2d0dbbb40ec393ae861f9b729bb96
- Run description: basic Indri run with default parameter settings
UDelInfoEXPint
- Run ID: UDelInfoEXPint
- Participant: udel_fang
- Track: Common Core
- Year: 2017
- Submission: 6/18/2017
- Type: automatic
- Task: main
- MD5: ddccf2d17a35cab14d94100ffaebfd0f
- Run description: The basic retrieval method is F2EXP. Axiomatic query expansion is applied with the original collection to select the expansion terms.
UDelInfoLOGext
- Run ID: UDelInfoLOGext
- Participant: udel_fang
- Track: Common Core
- Year: 2017
- Submission: 6/18/2017
- Type: automatic
- Task: main
- MD5: 9da3e5d8379ee17b980901c3ba03de60
- Run description: The basic retrieval method is F2LOG. Snippets from well-known search engines are used as sources for query expansion.
UDelInfoLOGint
- Run ID: UDelInfoLOGint
- Participant: udel_fang
- Track: Common Core
- Year: 2017
- Submission: 6/18/2017
- Type: automatic
- Task: main
- MD5: 720d5ff077a98dd2e6381589ee98f2c3
- Run description: The basic retrieval method is F2LOG. Axiomatic query expansion is applied with the original collection.
umass_baselnrm
- Run ID: umass_baselnrm
- Participant: BASELINE
- Track: Common Core
- Year: 2017
- Submission: 6/15/2017
- Type: automatic
- Task: main
- MD5: 663f7118071c8414ebe857334df1787e
- Run description: Baseline run using Galago's #rm (relevance model) operator, configured for the top 10 feedback documents and 10 expansion terms (see the sketch below).
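A sketch of relevance-model-style expansion with this run's settings (top 10 feedback documents, 10 expansion terms); the term weighting is a simplified stand-in for Galago's #rm, and the inputs are assumed data structures:

```python
from collections import Counter

def rm_expansion_terms(feedback_docs, doc_scores, n_terms=10):
    """feedback_docs: term lists of the top-10 docs; doc_scores: their P(Q|D)."""
    weights = Counter()
    for terms, score in zip(feedback_docs, doc_scores):
        for term, count in Counter(terms).items():
            weights[term] += score * count / len(terms)
    return [term for term, _ in weights.most_common(n_terms)]
```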
umass_baselnsdm
- Run ID: umass_baselnsdm
- Participant: BASELINE
- Track: Common Core
- Year: 2017
- Submission: 6/15/2017
- Type: automatic
- Task: main
- MD5: 04b0e6f06ac52e54d1dc6fca0fc0e2af
- Run description: Baseline run using Galago's #sdm (sequential dependence model) operator on topic title terms (see the sketch below).
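The #sdm operator can be pictured as a query rewrite over the title terms: unigrams, exact bigrams (#od:1), and unordered windows (#uw:8) in a weighted combination. The 0.8/0.15/0.05 weights below are the commonly cited Sequential Dependence Model defaults, assumed rather than taken from this run:

```python
def sdm_query(title):
    """Rewrite title terms into a Galago-style SDM query string."""
    terms = title.split()
    bigrams = [f"{a} {b}" for a, b in zip(terms, terms[1:])]
    ordered = " ".join(f"#od:1({p})" for p in bigrams)
    unordered = " ".join(f"#uw:8({p})" for p in bigrams)
    return (f"#combine:0=0.8:1=0.15:2=0.05("
            f"#combine({' '.join(terms)}) #combine({ordered}) #combine({unordered}))")

print(sdm_query("airport security measures"))
```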
umass_direlm
- Run ID: umass_direlm
- Participant: UMass
- Track: Common Core
- Year: 2017
- Submission: 6/15/2017
- Type: automatic
- Task: main
- MD5: de9227ddec2be3a0ece3b756cca823c3
- Run description: LambdaMART ranking with validation set using DIRE re-ranking model trained on topic titles.
umass_direlmnvs
- Run ID: umass_direlmnvs
- Participant: UMass
- Track: Common Core
- Year: 2017
- Submission: 6/15/2017
- Type: automatic
- Task: main
- MD5: 82fe89283dc8e6248f754b21f7ef16ea
- Run description: LambdaMART ranking without validation set using DIRE re-ranking model trained on topic titles.
umass_diremart
- Run ID: umass_diremart
- Participant: UMass
- Track: Common Core
- Year: 2017
- Submission: 6/15/2017
- Type: automatic
- Task: main
- MD5: 31950a78e06dba48a07b184114363946
- Run description: MART ranking with validation set using DIRE re-ranking model trained on topic titles.
umass_emb1
- Run ID: umass_emb1
- Participant: UMass
- Track: Common Core
- Year: 2017
- Submission: 7/31/2017
- Type: automatic
- Task: main
- MD5: 42ab9c5ee1d366a159a41cdc0d7da163
- Run description: Query expansion using word embeddings learned by neural nets, assuming conditional query term independence.
umass_erm
- Run ID: umass_erm
- Participant: UMass
- Track: Common Core
- Year: 2017
- Submission: 7/31/2017
- Type: automatic
- Task: main
- MD5: 84b5e8a0f4434b003133a17014b73b36
- Run description: Pseudo relevance feedback based on word embeddings and learning via neural nets.
umass_letor_lm
- Run ID: umass_letor_lm
- Participant: UMass
- Track: Common Core
- Year: 2017
- Submission: 7/31/2017
- Type: automatic
- Task: main
- MD5: b92144481041ab091d5fcc5098b2eff7
- Run description: LambdaMART re-ranking of features derived from topic titles using BM25, PL2, query likelihood, and relevance model scoring.
umass_letor_lmn
- Run ID: umass_letor_lmn
- Participant: UMass
- Track: Common Core
- Year: 2017
- Submission: 7/31/2017
- Type: automatic
- Task: main
- MD5: b9fddffd0ea5d56338c8e3e462b26c9b
- Run description: LambdaMART ranking trained without a validation set on topic titles.
umass_letor_m
- Run ID: umass_letor_m
- Participant: UMass
- Track: Common Core
- Year: 2017
- Submission: 7/31/2017
- Type: automatic
- Task: main
- MD5: 46cf650b0c8de14ed9ea0fe3e13a0afc
- Run description: MART re-ranking trained on topic titles.
umass_maxpas150
- Run ID: umass_maxpas150
- Participant: UMass
- Track: Common Core
- Year: 2017
- Submission: 7/31/2017
- Type: automatic
- Task: main
- MD5: 51f2dc6f3b1eebeccb1f643ade278001
- Run description: Max-passage scoring of topic titles with passage size 150 (see the sketch below).
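A sketch of max-passage scoring with a 150-term window; the per-passage scorer is a simple query-term overlap count standing in for the actual retrieval model, and the stride is an assumption:

```python
def max_passage_score(doc_terms, query_terms, size=150, step=75):
    """Slide a fixed-size window over the document; keep the best passage."""
    query = set(query_terms)
    best = 0.0
    for start in range(0, max(1, len(doc_terms) - size + 1), step):
        window = doc_terms[start:start + size]
        best = max(best, sum(1.0 for t in window if t in query))
    return best
```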
umass_maxpas50
- Run ID: umass_maxpas50
- Participant: UMass
- Track: Common Core
- Year: 2017
- Submission: 7/31/2017
- Type: automatic
- Task: main
- MD5: a1c3aa08802769859390d87ee46c63c7
- Run description: Max passage scoring of topic titles with passage size 50.
UWatMDS_AFuse
- Run ID: UWatMDS_AFuse
- Participant: UWaterlooMDS
- Track: Common Core
- Year: 2017
- Submission: 7/30/2017
- Type: manual
- Task: main
- MD5: 9fdab27a6ec9cf7a06c11f518814c56c
- Run description: The UWaterlooMDS team collected two sets of relevance judgements for all 250 topics using different TAR tools. We re-ranked the documents using each judgement set separately, then used reciprocal rank fusion to fuse the two ranked lists into this run (see the sketch below).
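A sketch of reciprocal rank fusion over the two ranked lists; k = 60 is the constant from Cormack et al.'s RRF paper and is assumed rather than confirmed for this run:

```python
def rrf(rankings, k=60):
    """rankings: list of doc-id lists, best first."""
    score = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            score[doc] = score.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(score, key=score.get, reverse=True)
```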
UWatMDS_AUnion
- Run ID: UWatMDS_AUnion
- Participant: UWaterlooMDS
- Track: Common Core
- Year: 2017
- Submission: 7/30/2017
- Type: manual
- Task: main
- MD5: e249ccaaf00d80a7ea4bdd003b20f49e
- Run description: The UWaterlooMDS team collected two sets of relevance judgements for all 250 topics using different TAR tools. We re-ranked the documents using the union of the two judgement sets.
UWatMDS_AWgtd
- Run ID: UWatMDS_AWgtd
- Participant: UWaterlooMDS
- Track: Common Core
- Year: 2017
- Submission: 7/31/2017
- Type: manual
- Task: main
- MD5: f27d3b189d850139513597f86910d6c1
- Run description: The UWaterlooMDS team collected two sets of relevance judgements for all 250 topics using different TAR tools. We assigned different weights to documents according to their relevance in the two judgement sets and trained a model, then used the model to re-rank all the documents.
UWatMDS_BFuse
- Run ID: UWatMDS_BFuse
- Participant: UWaterlooMDS
- Track: Common Core
- Year: 2017
- Submission: 7/30/2017
- Type: manual
- Task: main
- MD5: b1bc0183188080347bd3564003baea55
- Run description: The UWaterlooMDS team collected three sets of relevance judgements for the 50 NIST topics using different TAR tools. We re-ranked the documents using each judgement set separately, then used reciprocal rank fusion to fuse the three ranked lists into this run.
UWatMDS_BUnion
- Run ID: UWatMDS_BUnion
- Participant: UWaterlooMDS
- Track: Common Core
- Year: 2017
- Submission: 7/30/2017
- Type: manual
- Task: main
- MD5: 2b3c8935e449dd1573c88d3633e837f0
- Run description: The UWaterlooMDS team collected three sets of relevance judgements for the 50 NIST topics using different TAR tools. We re-ranked the documents using the union of the three judgement sets.
UWatMDS_BWgtd
- Run ID: UWatMDS_BWgtd
- Participant: UWaterlooMDS
- Track: Common Core
- Year: 2017
- Submission: 7/31/2017
- Type: manual
- Task: main
- MD5: 4a1b37497bb9e99ff84d189f1ea6a580
- Run description: The UWaterlooMDS team collected three sets of relevance judgements for the 50 NIST topics using different TAR tools. We assigned different weights to documents according to their relevance in the three judgement sets and trained a model, then used the model to re-rank all the documents.
UWatMDS_HT10
- Run ID: UWatMDS_HT10
- Participant: UWaterlooMDS
- Track: Common Core
- Year: 2017
- Submission: 6/17/2017
- Type: manual
- Task: main
- MD5: d6c8e9b641ebe1daa38d3e86354ed3a2
- Run description: This run applies a Horvitz-Thompson estimator to sample 10 documents from the UWatMDS_TARSv1 run. The documents were ranked by the classification scores of the ranking model and then sampled according to their reciprocal rank (see the sketch below). The 10 sampled documents were placed at the top of the run; the remaining documents are sorted by score.
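A sketch of the sampling step: documents are drawn with probability proportional to their reciprocal rank, and a Horvitz-Thompson estimate would then weight each sampled judgement by the inverse of its selection probability. Sampling with replacement is a simplification here:

```python
import random

def reciprocal_rank_sample(ranked_docs, n=10, seed=0):
    """Sample n docs with probability proportional to 1/rank."""
    rng = random.Random(seed)
    raw = [1.0 / (rank + 1) for rank in range(len(ranked_docs))]
    total = sum(raw)
    probs = {doc: w / total for doc, w in zip(ranked_docs, raw)}
    sample = rng.choices(ranked_docs, weights=raw, k=n)
    return sample, probs  # probs feed the Horvitz-Thompson estimator
```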
UWatMDS_TARSv1
- Run ID: UWatMDS_TARSv1
- Participant: UWaterlooMDS
- Track: Common Core
- Year: 2017
- Submission: 6/17/2017
- Type: manual
- Task: main
- MD5: 4242dab5bebd124e48a098a38c9983dd
- Run description: The UWaterlooMDS team used a TAR tool along with a search interface to search for and review relevant documents, producing three sets of reviewed documents: relevant, on-topic, and non-relevant. The run orders relevant documents first, on-topic documents second, and non-relevant documents last. We also built a ranking model from the reviewed documents and re-ranked each set.
UWatMDS_TARSv2
- Run ID: UWatMDS_TARSv2
- Participant: UWaterlooMDS
- Track: Common Core
- Year: 2017
- Submission: 6/18/2017
- Type: manual
- Task: main
- MD5: 53cd2bd5b7d5277090eda4d3b44309f0
- Run description: Like UWatMDS_TARSv1, this run orders relevant documents first, on-topic documents second, and non-relevant documents last, but the relative ranking within each set changed because we trained the ranking model with different weights for relevant and on-topic documents.
UWatMDS_ustudy
- Run ID: UWatMDS_ustudy
- Participant: UWaterlooMDS
- Track: Common Core
- Year: 2017
- Submission: 7/30/2017
- Type: manual
- Task: main
- MD5: 74e9367b0f83f2bd161f558d8b89255a
- Run description: The UWaterlooMDS team recruited real users to judge documents using a specially designed TAR tool along with a search interface, collecting three sets of reviewed documents per topic: highly relevant, relevant, and non-relevant. The run orders highly relevant documents first, relevant documents second, and non-relevant documents last. We also built a ranking model from the users' judgements and re-ranked each set.
WCrobust04
- Run ID: WCrobust04
- Participant: WaterlooCormack
- Track: Common Core
- Year: 2017
- Submission: 6/8/2017
- Type: automatic
- Task: main
- MD5: 33768045401d9c6aa1205c3a9042cc63
- Run description: Logistic regression (Sofia-ML, Cornell TF-IDF features, from the Total Recall BMI) trained on Robust 04 qrels (see the sketch below).
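The general recipe, sketched with scikit-learn as a stand-in for Sofia-ML; the variables holding the Robust 04 training documents and the new collection are assumptions, not the team's pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Assumed inputs: robust04_docs/robust04_labels are the judged documents and
# binary qrels for one topic on the old collection; new_docs/new_doc_ids are
# the documents of the new collection to rank for the same topic.
vectorizer = TfidfVectorizer(sublinear_tf=True)
X_train = vectorizer.fit_transform(robust04_docs)
classifier = LogisticRegression(max_iter=1000).fit(X_train, robust04_labels)

scores = classifier.decision_function(vectorizer.transform(new_docs))
run = sorted(zip(new_doc_ids, scores), key=lambda p: p[1], reverse=True)[:10000]
```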
WCrobust0405
- Run ID: WCrobust0405
- Participant: WaterlooCormack
- Track: Common Core
- Year: 2017
- Submission: 6/8/2017
- Type: automatic
- Task: main
- MD5: 791513540f70cb5cabfb1e2bf5fc777e
- Run description: Logistic Regression (Sofia-ML, Cornell TF-IDF features, from Total Recall BMI) trained on Robust 04 plus Robust 05 (for 50 of the topics) qrels.
WCrobust04W
- Run ID: WCrobust04W
- Participant: WaterlooCormack
- Track: Common Core
- Year: 2017
- Submission: 6/8/2017
- Type: automatic
- Task: main
- MD5: e528fcd0f1ce9b83869594cd41343248
- Run description: Logistic Regression (Sofia-ML, Cornell TF-IDF features, from Total Recall BMI) trained on Waterloo TREC 6 qrels (iffy=relevant) for 50 topics; Robust 04 qrels for the remainder.
webis_baseline
- Run ID: webis_baseline
- Participant: Webis
- Track: Common Core
- Year: 2017
- Submission: 8/2/2017
- Type: automatic
- Task: main
- MD5: ecb8f00ed8e1436971977f07a594c16f
- Run description: BM25 over headline, abstract, first paragraph, and body.
webis_baseline2
- Run ID: webis_baseline2
- Participant: Webis
- Track: Common Core
- Year: 2017
- Submission: 8/2/2017
- Type: automatic
- Task: main
- MD5: 0df62f6e95ab8ab035745fef590addab
- Run description: BM25 over headline, first paragraph, abstract, and body, combined disjunctively.
webis_reranked
- Run ID: webis_reranked
- Participant: Webis
- Track: Common Core
- Year: 2017
- Submission: 8/2/2017
- Type: automatic
- Task: main
- MD5: 7ddfc115d56fa0d416bc678ef5d9f0b8
- Run description: BM25 over headline, first paragraph, abstract, and body, combined disjunctively, afterwards re-ranked according to argumentativeness.
webis_reranked2
- Run ID: webis_reranked2
- Participant: Webis
- Track: Common Core
- Year: 2017
- Submission: 8/2/2017
- Type: automatic
- Task: main
- MD5: 94ed1e27993bd9c0be2fd7a44ad31f5b
- Run description: BM25 over headline, first paragraph, abstract, and body, combined disjunctively, afterwards re-ranked according to argumentativeness.