Runs - Deep Learning 2020
1
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: 1
- Participant: nvidia_ai_apps
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 92166ecb6c604d5abd20a022e912e150
- Run description: Positive passages for training were taken from qrels. Hard negatives were mined with BM25 (see the sketch below). We trained a DPR model on this training data and used it to mine even harder negatives. After that we trained a bert_base_uncased re-ranker on a combination of positive passages, negatives from BM25, and negatives from DPR.
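The hard-negative mining step lends itself to a minimal sketch; the ids, helper names, and cutoff k below are illustrative, not the team's actual pipeline:

```python
# Minimal sketch of BM25 hard-negative mining (hypothetical data/names):
# the highest-ranked passages that are NOT qrels positives become negatives.
def mine_hard_negatives(ranking, positives, k=10):
    """ranking: passage ids, best first; positives: qrels passage ids."""
    return [pid for pid in ranking if pid not in positives][:k]

qrels_positives = {"q1": {"p17", "p42"}}              # from qrels
bm25_run = {"q1": ["p42", "p3", "p9", "p17", "p51"]}  # BM25 ranking

# (query, positive, hard negative) triples for training DPR / the re-ranker
triples = [("q1", pos, neg)
           for pos in qrels_positives["q1"]
           for neg in mine_hard_negatives(bm25_run["q1"],
                                          qrels_positives["q1"], k=2)]
```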
2
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: 2
- Participant: nvidia_ai_apps
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 59e557d23012869469a6233786d7021d
- Run description: Positive passages for training were taken from qrels. Hard negatives were mined with BM25. We trained a DPR model on this training data and used it to mine even harder negatives. After that we trained a bert_base_uncased re-ranker on a combination of positive passages, negatives from BM25, and negatives from DPR. DPR was used for top-1000 retrieval, and the results were then re-ranked.
bcai_bertb_docv
Results | Participants | Input | Summary | Appendix
- Run ID: bcai_bertb_docv
- Participant: bcai
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: docs
- MD5: 6ff82269cdaf82006133e9041acb7db6
- Run description: BERT BASE on top of a classic IR pipeline
bcai_bertl_pass
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: bcai_bertl_pass
- Participant: bcai
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: passages
- MD5: 7e3b8c1aba09c5b98071f104bfe1f61f
- Run description: BERT LARGE on top of a classic IR pipeline
bcai_class_pass
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: bcai_class_pass
- Participant: bcai
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: df71dc4b2026deb3fc5ee83b0ff173bf
- Run description: a fusion of classic IR signals
bcai_classic
Results | Participants | Input | Summary | Appendix
- Run ID: bcai_classic
- Participant: bcai
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: c6e5daf986b6db89ebfb7fcd63e73a7a
- Run description: fusion of classic IR signals
bert_6
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: bert_6
- Participant: UAmsterdam
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 78b13a7d52890594b02abe36f6eb3862
- Run description: First six layers of pre-trained BERT as a base for an interaction-based ranker.
bigIR-BERT-R
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: bigIR-BERT-R
- Participant: QU
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: passages
- MD5: 69326d76a8424d09423521f01dbc1cf6
- Run description: A BERT-large model already trained on the MS MARCO passage training data was used to rerank the passages.
bigIR-DCT-T5-F
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: bigIR-DCT-T5-F
- Participant: QU
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: passages
- MD5: 1d9a3c7eca74975501fa0b7c3710ad9c
- Run description: First we expanded the passages using DeepCT, a Deep Contextualized Term Weighting framework that learns to map BERT's contextualized text representations to context-aware term weights for sentences and passages. Second, we indexed the expanded passages using anserini. Third, we adopted RM3 query expansion to retrieve the top 1000 passages for each query using anserini. Finally, we reranked the retrieved passages using a T5-base model already trained on the MS MARCO passage training data.
bigIR-DH-T5-F
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: bigIR-DH-T5-F
- Participant: QU
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: docs
- MD5: 4418367b6d3fb97fd25ea9840f57e8a2
- Run description: We first retrieved an initial set of documents using anserini with BM25 and RM3 query expansion. Second, we reranked the initial set using a T5-base model already trained on the MS MARCO passage training data. The reranking was done by feeding the model the query and the head of the document.
bigIR-DH-T5-R
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: bigIR-DH-T5-R
- Participant: QU
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: docs
- MD5: 9ba13c372fa02f44fe6201b2919d9a26
- Run description: We reranked the documents using a T5-base model already trained on the MS MARCO passage training data. The reranking was done by feeding the model the query and the head of the document.
bigIR-DT-T5-F
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: bigIR-DT-T5-F
- Participant: QU
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: docs
- MD5: f6e949141fd50c16c39e2e1ef734731e
- Run description: We first retrieved an initial set of documents using anserini with BM25 and RM3 query expansion. Second, we reranked the initial set using a T5-base model already trained on the MS MARCO passage training data. The reranking was done by feeding the model the query and the title of the document.
bigIR-DT-T5-R
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: bigIR-DT-T5-R
- Participant: QU
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: docs
- MD5: 6afda58c1332c2ebf0303e3b051c7ae8
- Run description: We reranked the documents using a T5-base model already trained on the MS MARCO passage training data. The reranking was done by feeding the model the query and the title of the document.
bigIR-DTH-T5-F
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: bigIR-DTH-T5-F
- Participant: QU
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: docs
- MD5: d511f9be1898f68c53758974dbed8468
- Run description: We first retrieved an initial set of documents using anserini with BM25 and RM3 query expansion. Second, we reranked the documents using a T5-base model already trained on the MS MARCO passage training data. The reranking was done by feeding the model the query and the title+head of the document.
bigIR-DTH-T5-R
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: bigIR-DTH-T5-R
- Participant: QU
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: docs
- MD5: 43811213e5df0b8bcf918f11c0008506
- Run description: We reranked the documents using a T5-base model already trained on the MS MARCO passage training data. The reranking was done by feeding the model the query and the title+head of the document.
bigIR-T5-BERT-F
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: bigIR-T5-BERT-F
- Participant: QU
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: passages
- MD5: a61597a4647fa6e7dc2595343e8a4fbf
- Run description: First we expanded the passages using the queries predicted by a T5 model trained on the MS MARCO passage dataset to predict queries that could be answered by a given passage. Second, we indexed the expanded passages using anserini. Third, we adopted RM3 query expansion to retrieve the top 1000 passages for each query using anserini. Finally, we reranked the retrieved passages using a BERT-large model already trained on the MS MARCO passage training data.
bigIR-T5-R
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: bigIR-T5-R
- Participant: QU
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: passages
- MD5: 9cf180fd1ae8d463c42a5a81ebb3cf48
- Run description: A T5-base model already trained on the MS MARCO passage training data was used to rerank the passages; the pointwise scoring scheme is sketched below.
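A hedged sketch of this pointwise T5 re-ranking scheme, with the public castorini/monot5-base-msmarco checkpoint standing in for the run's (unstated) model: a query-passage pair is scored by the probability of generating "true" as the first output token.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# assumption: a public monoT5 checkpoint as a stand-in for the run's model
tok = T5Tokenizer.from_pretrained("castorini/monot5-base-msmarco")
model = T5ForConditionalGeneration.from_pretrained("castorini/monot5-base-msmarco")

def t5_relevance(query, passage):
    """Score = P('true') vs P('false') for the first generated token."""
    enc = tok(f"Query: {query} Document: {passage} Relevant:",
              return_tensors="pt", truncation=True)
    start = torch.tensor([[model.config.decoder_start_token_id]])
    logits = model(**enc, decoder_input_ids=start).logits[0, -1]
    true_id, false_id = tok.encode("true")[0], tok.encode("false")[0]
    return torch.softmax(logits[[false_id, true_id]], dim=0)[1].item()

# rerank a candidate list by descending T5 score (toy passages)
candidates = {"p1": "a passage ...", "p2": "another passage ..."}
ranked = sorted(candidates, reverse=True,
                key=lambda p: t5_relevance("a query", candidates[p]))
```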
bigIR-T5xp-T5-F
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: bigIR-T5xp-T5-F
- Participant: QU
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: passages
- MD5: e514307b72c3e221a1289e7f69ad81f2
- Run description: First we expanded the passages using the queries predicted by a T5 model trained on the MS MARCO passage dataset to predict queries that could be answered by a given passage (see the expansion sketch below). Second, we indexed the expanded passages using anserini. Third, we adopted RM3 query expansion to retrieve the top 1000 passages for each query using anserini. Finally, we reranked the retrieved passages using a T5-base model already trained on the MS MARCO passage training data.
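A minimal sketch of the query-prediction expansion step, assuming the public castorini/doc2query-t5-base-msmarco checkpoint as a stand-in for the run's model; the sampling settings are illustrative:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# assumption: a public doc2query-T5 checkpoint, not the run's own weights
tok = T5Tokenizer.from_pretrained("castorini/doc2query-t5-base-msmarco")
model = T5ForConditionalGeneration.from_pretrained(
    "castorini/doc2query-t5-base-msmarco")

def expand(passage, n=3):
    """Append n sampled predicted queries to the passage before indexing."""
    enc = tok(passage, return_tensors="pt", truncation=True)
    outs = model.generate(**enc, max_length=64, do_sample=True,
                          top_k=10, num_return_sequences=n)
    queries = [tok.decode(o, skip_special_tokens=True) for o in outs]
    return passage + " " + " ".join(queries)
```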
BIT-run1
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: BIT-run1
- Participant: BIT.UA
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 6a5c67e701daebaca68e52330dafb8d6
- Run description: Our model follows a lightweight interaction-based approach; it is a direct evolution of the work in [1], and a more detailed description can be found in [2]. We also train word2vec embeddings on the corpus. [1] T. Almeida and S. Matos, "Calling Attention to Passages for Biomedical Question Answering," in Advances in Information Retrieval, 2020, pp. 69-77. [2] T. Almeida and S. Matos, "Frugal neural reranking: evaluation on the Covid-19 literature," open URL: https://openreview.net/pdf?id=TtcUlbEHkum
BIT-run2
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: BIT-run2
- Participant: BIT.UA
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 155832af758ef78fade99054c75a5cab
- Run description: Our model follows a lightweight interaction-based approach; it is a direct evolution of the work in [1], and a more detailed description can be found in [2]. This run is a combination of 4 runs, associated with different checkpoints on the val and test sets, combined using reciprocal rank fusion (sketched below). We also train word2vec embeddings on the corpus. [1] T. Almeida and S. Matos, "Calling Attention to Passages for Biomedical Question Answering," in Advances in Information Retrieval, 2020, pp. 69-77. [2] T. Almeida and S. Matos, "Frugal neural reranking: evaluation on the Covid-19 literature," open URL: https://openreview.net/pdf?id=TtcUlbEHkum
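Reciprocal rank fusion combines rankings by summed reciprocal ranks; a minimal sketch with the commonly used constant k=60 (the run's exact constant is not stated):

```python
from collections import defaultdict

def reciprocal_rank_fusion(runs, k=60):
    """runs: rankings (doc ids, best first). RRF(d) = sum_r 1/(k + rank_r(d))."""
    score = defaultdict(float)
    for run in runs:
        for rank, doc in enumerate(run, start=1):
            score[doc] += 1.0 / (k + rank)
    return sorted(score, key=score.get, reverse=True)

# fuse four checkpoint runs for one query (toy ids)
fused = reciprocal_rank_fusion([["d1", "d2", "d3"], ["d2", "d1", "d4"],
                                ["d1", "d4", "d2"], ["d3", "d2", "d1"]])
```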
BIT-run3
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: BIT-run3
- Participant: BIT.UA
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 2aca5bcc880ecbc4f3d9d0d6e8dc3968
- Run description: Our retrieval system uses a top-250 BM25 stage followed by a lightweight interaction-based model, which is a direct evolution of the work in [1]; a more detailed description can be found in [2]. We used the BM25 implementation of Elasticsearch, which was later fine-tuned. This run is a combination of 4 runs, associated with different checkpoints on the val and test sets, combined using reciprocal rank fusion. We also train word2vec embeddings on the corpus. [1] T. Almeida and S. Matos, "Calling Attention to Passages for Biomedical Question Answering," in Advances in Information Retrieval, 2020, pp. 69-77. [2] T. Almeida and S. Matos, "Frugal neural reranking: evaluation on the Covid-19 literature," open URL: https://openreview.net/pdf?id=TtcUlbEHkum
bl_bcai_mdl1_vs
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: bl_bcai_mdl1_vs
- Participant: bl_bcai
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: passages
- MD5: 390ba988f2754d139739b39170ef05fa
- Run description: BM25+IBM MODEL1
bl_bcai_mdl1_vt
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: bl_bcai_mdl1_vt
- Participant: bl_bcai
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: passages
- MD5: b2c38ea0cc1369be5d15e2458c3583c1
- Run description: BM25+IBM MODEL1
bl_bcai_model1
Results | Participants | Input | Summary | Appendix
- Run ID: bl_bcai_model1
- Participant: bl_bcai
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 4e50799d647fb5158bde6eff5026aa22
- Run description: BM25+IBM MODEL 1
bl_bcai_multfld
Results | Participants | Input | Summary | Appendix
- Run ID: bl_bcai_multfld
- Participant: bl_bcai
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: d34bb6166846ba9ef5c7621b72a4fa2e
- Run description: BM25 multifield
bl_bcai_prox
Results | Participants | Input | Summary | Appendix
- Run ID: bl_bcai_prox
- Participant: bl_bcai
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 23279c2d55ab39f90cac4cb9550e7947
- Run description: BM25+BM25 proximity
bm25_bert_token
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: bm25_bert_token
- Participant: UAmsterdam
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: c9299e9e872bd79d5215ad38c0b1cf88
- Run description: BM25 with BERT tokenization.
CoRT-bm25
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: CoRT-bm25
- Participant: HSRM-LAVIS
- Track: Deep Learning
- Year: 2020
- Submission: 8/4/2020
- Type: auto
- Task: passages
- MD5: 52eead2bd0155e2c784682131d16e04e
- Run description: CoRT (Complementary Ranking from Transformers) is a representation-focused first-stage ranking approach using a siamese query/passage encoder based on a pretrained ALBERT model ("albert-base-v2" hosted by huggingface.co). CoRT aims to act as a complementary retriever to term-based first-stage rankers, with the goal of compiling high-recall re-ranking candidates while requiring fewer candidates than BM25. This run comprises candidates from CoRT merged with BM25 (one possible merging scheme is sketched below), which could quickly be served as final search results or passed to an arbitrary re-ranker to increase precision.
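The description leaves the CoRT/BM25 merging step unspecified; purely as an illustration, one simple deduplicating round-robin interleave:

```python
def merge_candidates(cort_run, bm25_run, depth=1000):
    """Illustrative merge only (the run's actual strategy is not stated):
    alternate between the two rankings, skipping duplicates."""
    merged, seen = [], set()
    for pair in zip(cort_run, bm25_run):
        for pid in pair:
            if pid not in seen:
                seen.add(pid)
                merged.append(pid)
            if len(merged) == depth:
                return merged
    return merged
```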
CoRT-electra
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: CoRT-electra
- Participant: HSRM-LAVIS
- Track: Deep Learning
- Year: 2020
- Submission: 8/4/2020
- Type: auto
- Task: passages
- MD5: 6abc74fe14c750cfbfe705e4646b9082
- Run description: CoRT (Complementary Ranking from Transformers) is a representation-focused first-stage ranking approach using a siamese query/passage encoder based on a pretrained ALBERT model ("albert-base-v2" hosted by huggingface.co). CoRT aims to act as a complementary retriever to term-based first-stage rankers, with the goal of compiling high-recall re-ranking candidates while requiring fewer candidates than BM25. This run demonstrates the ranking quality after re-ranking candidates from CoRT merged with BM25, using a pretrained+finetuned ELECTRA discriminator.
CoRT-standalone
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: CoRT-standalone
- Participant: HSRM-LAVIS
- Track: Deep Learning
- Year: 2020
- Submission: 8/4/2020
- Type: auto
- Task: passages
- MD5: f9356c50a45447fb50e3e77fce017362
- Run description: CoRT (Complementary Ranking from Transformers) is a representation-focused first-stage ranking approach using a siamese query/passage encoder based on a pretrained ALBERT model ("albert-base-v2" hosted by huggingface.co). CoRT aims to act as a complementary retriever to term-based first-stage rankers, with the goal of compiling high-recall re-ranking candidates while requiring fewer candidates than BM25. This run comprises standalone candidates from CoRT, which eventually should be merged with rankings from BM25. To be precise, CoRT is not supposed to be used as a standalone ranker.
d_bm25
Results | Participants | Input | Summary | Appendix
- Run ID: d_bm25
- Participant: anserini
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: docs
- MD5: c859ab57f303f52707946423b52ea7ad
- Run description: Anserini baseline BM25
d_bm25rm3
Results | Participants | Input | Summary | Appendix
- Run ID: d_bm25rm3
- Participant: anserini
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: docs
- MD5: 9c36bce448b8b9426e4342160e57cd4c
- Run description: Anserini baseline BM25+RM3
d_d2q_bm25
Results | Participants | Input | Summary | Appendix
- Run ID: d_d2q_bm25
- Participant: anserini
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: docs
- MD5: 34d3b67dec0985839e4c3c91dfbbdf78
- Run description: Anserini baseline BM25 on index expanded with doc2query
d_d2q_bm25rm3
Results | Participants | Input | Summary | Appendix
- Run ID: d_d2q_bm25rm3
- Participant: anserini
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: docs
- MD5: c3e7ed3600316392e820f32ca428ccb9
- Run description: Anserini baseline BM25+RM3 on index expanded with doc2query
d_d2q_duo
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: d_d2q_duo
- Participant: h2oloo
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: docs
- MD5: e09293a15a298eb9842ec256f6e60bbf
- Run description: A pairwise reranker (duoT5) applied to the top-50 documents from a pointwise reranker (monoT5). monoT5 reranks the Anserini BM25 baseline on an index expanded with doc2query. The pairwise score aggregation is sketched below.
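A hedged sketch of the pairwise aggregation behind duoT5-style re-ranking, using the simple SUM aggregation from the duoT5 paper; pair_prob is a stand-in for the actual model:

```python
from itertools import permutations

def duo_rerank(doc_ids, pair_prob):
    """Aggregate pairwise preferences into a ranking. pair_prob(i, j) is
    duoT5's probability that document i beats document j; SUM aggregation
    scores each document by its total wins over all opponents."""
    score = {d: 0.0 for d in doc_ids}
    for i, j in permutations(doc_ids, 2):
        score[i] += pair_prob(i, j)
    return sorted(doc_ids, key=score.get, reverse=True)

# e.g. rerank the monoT5 top-50 with a (stub) pairwise model
top50 = [f"d{i}" for i in range(50)]
ranked = duo_rerank(top50, lambda i, j: 0.5)  # stub probability
```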
d_d2q_rm3_duo
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: d_d2q_rm3_duo
- Participant: h2oloo
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: docs
- MD5: a6def8f2d2465fb343a3b9cd860d7338
- Run description: A pairwise reranker (duoT5) applied to the top-50 documents from a pointwise reranker (monoT5). monoT5 reranks the Anserini BM25+RM3 baseline on an index expanded with doc2query.
d_rm3_duo
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: d_rm3_duo
- Participant: h2oloo
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: docs
- MD5: e9547fdd0936eb8ea9702115c83b6601
- Run description: A pairwise reranker (duoT5) applied to the top-50 documents from a pointwise reranker (monoT5). monoT5 reranks the Anserini BM25+RM3 baseline.
DLH_d_5_t_25
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: DLH_d_5_t_25
- Participant: RMIT
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 9e606010da7515fc73e51e0e7f4211ff
- Run description: Terrier DLH model, Krovetz stemming, BA query expansion with 5 documents and 25 terms.
DoRA_Large
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: DoRA_Large
- Participant: reSearch2vec
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 68813be0a045a65ec2902c5b5b7dd0c6
- Run description: Electra model with Dora pretraining, then 12 epochs of training.
DoRA_Large_1k
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: DoRA_Large_1k
- Participant: reSearch2vec
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: ce2566b3544fa158e3cd1bc616349565
- Run description: Electra model, Dora pretraining, 22 epochs of training
DoRA_Med
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: DoRA_Med
- Participant: reSearch2vec
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 4f463926767b4e13835f462dbb4e5405
- Run description: Electra transformer model, Dora pretraining, 12 epochs
DoRA_Small
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: DoRA_Small
- Participant: reSearch2vec
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 8d94004c422d74fc44d7a943ea9a85b3
- Run description: Electra model, Dora pretraining, 6 epochs of training.
fr_doc_roberta
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: fr_doc_roberta
- Participant: BITEM
- Track: Deep Learning
- Year: 2020
- Submission: 7/22/2020
- Type: auto
- Task: docs
- MD5: 9c79f0a7d66ea450305bf2b791867f25
- Run description: We trained a roberta-large model on passages; then, for document reranking, we split documents into passages and gave each document the maximum passage score over all its passages. We used anserini for the retrieval part.
fr_pass_roberta
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: fr_pass_roberta
- Participant: BITEM
- Track: Deep Learning
- Year: 2020
- Submission: 7/22/2020
- Type: auto
- Task: passages
- MD5: e39885691eca98faafe254a8678c9e36
- Run description: We trained a roberta-large model on passages; then, for document reranking, we split documents into passages and gave each document the maximum passage score over all its passages. We used anserini for the retrieval part.
ICIP_run1
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: ICIP_run1
- Participant: ICIP
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: docs
- MD5: 01cf815e03b3d5b3822e6537a9e4478b
- Run description: In ICIP_run1, we use the neural language model BERT to re-rank the candidate documents. Specifically, we utilize a BERT-Large that was first trained on the MS MARCO passage small train triples and then fine-tuned on the MS MARCO document training data. We produced the MS MARCO document training samples as follows: all documents are split into overlapping passages, each passage inherits the label of the document it comes from, and the passages are then fed into the BERT-Large trained on MS MARCO passages to filter out noisy training samples. The BERT re-ranker predicts the relevance of each passage to a query independently, and the document score is given by the score of the best passage (MaxP; sketched below). All candidate documents are re-ranked by the resulting document scores.
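A minimal sketch of the overlapping-passage split and MaxP document scoring; the window size and stride are illustrative, as the run does not state its values:

```python
def split_passages(tokens, size=150, stride=75):
    """Overlapping passage windows over a document's tokens
    (size/stride are illustrative assumptions)."""
    return [tokens[i:i + size]
            for i in range(0, max(len(tokens) - size, 0) + 1, stride)]

def maxp_score(passage_scores):
    """MaxP: the document score is its best passage's score."""
    return max(passage_scores)
```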
ICIP_run2
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: ICIP_run2
- Participant: ICIP
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: docs
- MD5: 6ecc62a3afa865e4a221d504b3bc9e87
- Run description: In ICIP_run2, we apply knowledge distillation to the BERT-Large that produced ICIP_run1. Specifically, the teacher model is the BERT-Large first trained on MS MARCO passage data and then on MS MARCO document data; the student model has 12 layers, with half the parameters of BERT-Large, distilled on the MS MARCO document training samples. The student re-ranker predicts the relevance of each passage to a query independently, and the document score is given by the score of the best passage (MaxP). All candidate documents are re-ranked by the resulting document scores.
ICIP_run3
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: ICIP_run3
- Participant: ICIP
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: docs
- MD5: 075129d22ee45cf6556d4c8e9a96f14e
- Run description: In ICIP_run3, we use a BERT-Large trained only on the MS MARCO passage small train triples. Unlike ICIP_run1, the BERT re-ranker is not further trained on MS MARCO document data, because the produced document training samples contain some noise. Instead, after predicting the relevance of each passage to a query independently, the document score is given by the average of the scores of the top-2 passages (2-Max-AvgP; sketched below). All candidate documents are re-ranked by the resulting document scores.
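The 2-Max-AvgP aggregation reduces to a few lines; a minimal sketch:

```python
def k_max_avg_score(passage_scores, k=2):
    """2-Max-AvgP (k=2): average the scores of the k best passages."""
    top = sorted(passage_scores, reverse=True)[:k]
    return sum(top) / len(top)
```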
indri-fdm
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: indri-fdm
- Participant: bl_rmit
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: d43f48392924d35e97e8fcd6d91053ce
- Run description: Indri FDM model of Metzler and Croft. Default params.
indri-lmds
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: indri-lmds
- Participant: bl_rmit
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 57a01eab9835931563fa7d35ca7c4825
- Run description: Indri language model, Dirichlet smoothing with mu=650, Krovetz stemming (scoring sketched below).
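For reference, a minimal sketch of Dirichlet-smoothed query likelihood with this run's mu=650 (Indri's internal implementation differs in engineering details):

```python
import math

def dirichlet_lm(query_terms, tf, doc_len, p_coll, mu=650):
    """Dirichlet-smoothed query likelihood. tf: term frequencies in the
    document; p_coll: collection language-model probability p(t|C)."""
    return sum(math.log((tf.get(t, 0) + mu * p_coll[t]) / (doc_len + mu))
               for t in query_terms)
```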
indri-sdm
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: indri-sdm
- Participant: bl_rmit
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: ff6f7980b8e3bb54617ca4e038025a8b
- Run description: Indri SDM model of Metzler and Croft. Default params.
indri-sdmf
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: indri-sdmf
- Participant: RMIT
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 75ab5010a44d8ebf103b7300611eee20
- Run description: Indri SDM Fields with title, url, and body, Krovetz stemming.
longformer_1
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: longformer_1
- Participant: USI
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: docs
- MD5: e0903c91f8764e38672e745ef74cce01
- Run description: We employ Longformer for the document re-ranking task. Specifically, we use LongformerForSequenceClassification from Huggingface.
med_1k
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: med_1k
- Participant: reSearch2vec
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 8479a294502d9cc9ad1778ad752fe655
- Run description: Electra model, Dora pretraining, 12 epochs of training
mpii_run1
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: mpii_run1
- Participant: mpii
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 8f7be3f9e3cd09ab4c865c3b5301f013
- Run description: We rerank the top 100 documents from the official baseline. Our fine-tuning approach is two-stage. First, we fine-tuned the ELECTRA-Base model on the MS MARCO passage dataset. The model is then used by the document ranking model PARADE and fine-tuned on the TREC Deep Learning Track 2019 test set for 500 steps (batch size 32).
mpii_run2
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: mpii_run2
- Participant: mpii
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 46d758ea7ce9724fc443aef9ab3e00bb
- Run description: We rerank the top 100 documents from the official baseline. Our fine-tuning approach is two-stage. First, we fine-tuned the ELECTRA-Base model on the MS MARCO passage dataset. The model is then used by the document ranking model PARADE-Max and fine-tuned on the TREC Deep Learning Track 2019 test set for 500 steps (batch size 32).
mpii_run3
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: mpii_run3
- Participant: mpii
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: f199c4040448dc1857937af9a524ad40
- Run description: We rerank the top 100 documents from the official baseline. Our fine-tuning approach is two-stage. First, we fine-tuned the ELECTRA-Base model on the MS MARCO passage dataset. The model is then used by the document ranking model PARADE-Attn and fine-tuned on the TREC Deep Learning Track 2019 test set for 500 steps (batch size 32).
ndrm1-full
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: ndrm1-full
- Participant: MSAI
- Track: Deep Learning
- Year: 2020
- Submission: 8/4/2020
- Type: auto
- Task: docs
- MD5: e65ae0315fd9022d05eb2afb15854a7b
- Run description: A Conformer-Kernel model with Query-Term-Independence (paper: https://arxiv.org/pdf/2007.10434.pdf). Specifically, NDRM1 model from https://github.com/bmitra-msft/TREC-Deep-Learning-Quick-Start evaluated in the full ranking setting. Input word embeddings were pretrained using word2vec on the provided collection.
ndrm1-re
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: ndrm1-re
- Participant: MSAI
- Track: Deep Learning
- Year: 2020
- Submission: 8/4/2020
- Type: auto
- Task: docs
- MD5: 2274ec9686ede43a8c8fb2e71d24d952
- Run description: A Conformer-Kernel model with Query-Term-Independence (paper: https://arxiv.org/pdf/2007.10434.pdf). Specifically, NDRM1 model from https://github.com/bmitra-msft/TREC-Deep-Learning-Quick-Start evaluated in the reranking setting. Input word embeddings were pretrained using word2vec on the provided collection.
ndrm3-full
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: ndrm3-full
- Participant: MSAI
- Track: Deep Learning
- Year: 2020
- Submission: 8/4/2020
- Type: auto
- Task: docs
- MD5: 8ef9d5e5e7728fbe93910c8b834db44d
- Run description: A Conformer-Kernel model with Query-Term-Independence (paper: https://arxiv.org/pdf/2007.10434.pdf). Specifically, NDRM3 model from https://github.com/bmitra-msft/TREC-Deep-Learning-Quick-Start evaluated in the full ranking setting. Input word embeddings were pretrained using word2vec on the provided collection.
ndrm3-orc-full
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: ndrm3-orc-full
- Participant: MSAI
- Track: Deep Learning
- Year: 2020
- Submission: 8/4/2020
- Type: auto
- Task: docs
- MD5: 89729367fff36622840ff97316a68a81
- Run description: A Conformer-Kernel model with Query-Term-Independence (paper: https://arxiv.org/pdf/2007.10434.pdf). Specifically, NDRM3 model from https://github.com/bmitra-msft/TREC-Deep-Learning-Quick-Start using ORCAS data as an additional document field evaluated in the full ranking setting. Input word embeddings were pretrained using word2vec on the provided collection.
ndrm3-orc-re
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: ndrm3-orc-re
- Participant: MSAI
- Track: Deep Learning
- Year: 2020
- Submission: 8/4/2020
- Type: auto
- Task: docs
- MD5: 660bd434d3568c1211e64b533455d5e8
- Run description: A Conformer-Kernel model with Query-Term-Independence (paper: https://arxiv.org/pdf/2007.10434.pdf). Specifically, NDRM3 model from https://github.com/bmitra-msft/TREC-Deep-Learning-Quick-Start using ORCAS data as an additional document field evaluated in the reranking setting. Input word embeddings were pretrained using word2vec on the provided collection.
ndrm3-re
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: ndrm3-re
- Participant: MSAI
- Track: Deep Learning
- Year: 2020
- Submission: 8/4/2020
- Type: auto
- Task: docs
- MD5: 1341b7d803a1a45e462aac38b26dd6bf
- Run description: A Conformer-Kernel model with Query-Term-Independence (paper: https://arxiv.org/pdf/2007.10434.pdf). Specifically, NDRM3 model from https://github.com/bmitra-msft/TREC-Deep-Learning-Quick-Start evaluated in the reranking setting. Input word embeddings were pretrained using word2vec on the provided collection.
NLE_pr1
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: NLE_pr1
- Participant: NLE
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 662c288838761d19295d670453321d66
- Run description: BERT pre-trained model; siamese BERT for first-stage ranking; ensemble of 8 BERT re-rankers.
NLE_pr2
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: NLE_pr2
- Participant: NLE
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 6e51c246d908835cd0ef1f7d626fa55a
- Run description: BERT pre-trained model; two siamese encoders for first-stage ranking: one fine-tuned BERT plus one RoBERTa-based model trained from scratch with MLM on MS MARCO; ensemble of 8 BERT re-rankers (fine-tuned), 4 ELECTRA re-rankers (fine-tuned), and 3 RoBERTa re-rankers trained from scratch with MLM on MS MARCO.
NLE_pr3
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: NLE_pr3
- Participant: NLE
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 9dcc80d2172789799c27f7bb6b67f92a
- Run description: BERT pre-trained model; two siamese encoders for first-stage ranking: one fine-tuned BERT plus one RoBERTa-based model trained from scratch with MLM on MS MARCO; ensemble of 8 BERT re-rankers (fine-tuned), 4 ELECTRA re-rankers (fine-tuned), and 3 RoBERTa re-rankers trained from scratch with MLM on MS MARCO, plus 5 BERT re-rankers (fine-tuned) and 1 RoBERTa re-ranker trained from scratch with MLM on MS MARCO re-ranking BM25 results.
nlm-bert-rr
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: nlm-bert-rr
- Participant: NLM
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 5c9f9be1194c58a66e0b6d9d1dc8ded9
- Run description: For this run, we fine-tuned a BERT model on the classification of passage relevance using the passage ranking training data, then used the model to generate relevance scores for the top 1000 passages provided by the organizers (scoring sketched below).
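A hedged sketch of scoring passages with a binary relevance classifier; the bert-base-uncased checkpoint here is a placeholder, since the run's fine-tuned weights are not published with this description:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# placeholder checkpoint; the run used its own fine-tuned BERT
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def relevance(query, passage):
    """P(relevant) from a binary classifier over the [query, passage] pair."""
    enc = tok(query, passage, return_tensors="pt",
              truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()
```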
nlm-bm25-prf-1
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: nlm-bm25-prf-1
- Participant: NLM
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: docs
- MD5: ff5ff612b41d464670b8e803e5512764
- Run description: BM25 retrieval baseline with pseudo-relevance feedback.
nlm-bm25-prf-2
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: nlm-bm25-prf-2
- Participant: NLM
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: docs
- MD5: 7ac9f628d2e82f2ae2029e8cd04b1eeb
- Run description: BM25 retrieval baseline with pseudo-relevance feedback and a different tokenization model.
nlm-ens-bst-2
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: nlm-ens-bst-2
- Participant: NLM
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 948ca65ed28758d3afcaf8e369e9042d
- Run description: For this run, we fine-tuned a BERT model on the classification of passage relevance using the passage ranking task training data, then used the model to generate relevance scores for the top 1000 passages retrieved by different search methods with low pairwise retrieval correlation. A boost-based ensemble method was then applied to re-rank these top-1000 lists and select the top 1000 passages for this run.
nlm-ens-bst-3
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: nlm-ens-bst-3
- Participant: NLM
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: e666b400e2a1297b67fc91bf5932d82e
- Run description: For this run, we fine-tuned a BERT model on the classification of passage relevance using the passage ranking task training data, then used the model to generate relevance scores for the top 1000 passages retrieved by different search methods with low pairwise retrieval correlation. A boost-based ensemble method was then applied to re-rank these top-1000 lists and select the top 1000 passages for this run.
nlm-prfun-bert
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: nlm-prfun-bert
- Participant: NLM
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 4407a4e23cfc39f79df8f85585595734
- Run description: For this run, we fine-tuned a BERT model on the classification of passage relevance using the passage ranking training data, then used the model to generate relevance scores for the top 1000 passages retrieved by our nlm-bm25-prf-u run.
p_bm25
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: p_bm25
- Participant: anserini
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: passages
- MD5: f2735d4d03b9cf4cbdba800f634ee057
- Run description: Anserini baseline BM25
p_bm25rm3
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: p_bm25rm3
- Participant: anserini
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: passages
- MD5: 19532dbc2bce3326b5b0c511eb28fe04
- Run description: Anserini baseline BM25+RM3
p_bm25rm3_duo
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: p_bm25rm3_duo
- Participant: h2oloo
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: passages
- MD5: ae8fca3b0803de064953fc06bfdf635a
- Run description: A pairwise reranker (duoT5) applied to the top-50 documents from a pointwise reranker (monoT5). monoT5 reranks the Anserini BM25+RM3 baseline.
p_d2q_bm25
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: p_d2q_bm25
- Participant: anserini
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: passages
- MD5: 1f8c62aec7ae0f01a0b12e2dc14f2897
- Run description: Anserini baseline BM25 on index expanded with doc2query
p_d2q_bm25_duo
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: p_d2q_bm25_duo
- Participant: h2oloo
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: passages
- MD5: 1e9d5447df11c03758664744a085158c
- Run description: A pairwise reranker (duoT5) applied to the top-50 documents from a pointwise reranker (monoT5). monoT5 reranks the Anserini BM25 baseline on an index expanded with doc2query.
p_d2q_bm25rm3
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: p_d2q_bm25rm3
- Participant: anserini
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: passages
- MD5: b097037a9eca0383a0d40862d8840585
- Run description: Anserini baseline BM25+RM3 on index expanded with doc2query
p_d2q_rm3_duo
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: p_d2q_rm3_duo
- Participant: h2oloo
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: passages
- MD5: 381775f81ac08249909df6e25d12205b
- Run description: A pairwise reranker (duoT5) applied to the top-50 documents from a pointwise reranker (monoT5). monoT5 reranks the Anserini BM25+RM3 baseline on an index expanded with doc2query.
pash_f1
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: pash_f1
- Participant: PASH
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 0e28291867a55cab7f7cfd5473f5c336
- Run description: We pretrained a modified bert-large model on msmarco-docs.tsv from the Document Ranking Task. Next, we used multiple recall mechanisms such as query expansion, document expansion, machine translation, and passage similarity to broaden recall. We then fine-tuned the model on query-passage pairs with 1:1 positive and negative labels. Finally, we ensembled 10 more different models.
pash_f2
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: pash_f2
- Participant: PASH
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: a37c20648bd12eb8a59113025a7f8707
- Run description: We pretrained a modified bert-large model on msmarco-docs.tsv from the Document Ranking Task. Next, we used multiple recall mechanisms such as query expansion, document expansion, machine translation, and passage similarity to broaden recall. We then fine-tuned the model on query-passage pairs with 1:1 positive and negative labels.
pash_f3
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: pash_f3
- Participant: PASH
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: ce2bfcc1e5b9c8b2328d70de3be5199c
- Run description: We pretrained a modified bert-large model on msmarco-docs.tsv from the Document Ranking Task. Next, we used multiple recall mechanisms such as query expansion, document expansion, machine translation, and passage similarity to broaden recall. We then fine-tuned the model on query-passage pairs with 1:1 positive and negative labels. Finally, we ensembled 10 more different models.
pash_r1
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: pash_r1
- Participant: PASH
- Track: Deep Learning
- Year: 2020
- Submission: 7/30/2020
- Type: auto
- Task: passages
- MD5: 0d227b5d78eac191737ac12eb4a0bb4f
- Run description: We pretrained a modified bert-large model on msmarco-docs.tsv from the Document Ranking Task, then fine-tuned the model on query-passage pairs with 1:1 positive and negative labels.
pash_r2
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: pash_r2
- Participant: PASH
- Track: Deep Learning
- Year: 2020
- Submission: 8/4/2020
- Type: auto
- Task: passages
- MD5: af3fd7baa0ae8b81016e13044a000bdb
- Run description: We pretrained a modified bert-large model on msmarco-docs.tsv from the Document Ranking Task, then fine-tuned the model on query-passage pairs with 1:1 positive and negative labels. We then ensembled 10 different models.
pash_r3
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: pash_r3
- Participant: PASH
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 62dd4105af8a814df30f9b8c1f8965c6
- Run description: We pretrained a modified bert-large model on msmarco-docs.tsv from the Document Ranking Task, then fine-tuned the model on query-passage pairs with 1:1 positive and negative labels. We then ensembled 10 different models, and finally ensembled two other machine learning algorithms.
pinganNLP1
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: pinganNLP1
- Participant: pinganNLP
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: passages
- MD5: 7abaa0f22c224f1c1fb10cd65dc2deba
- Run description: We used the training data to pretrain a BERT model and to fine-tune XLNet, and finally ensembled several models.
pinganNLP2
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: pinganNLP2
- Participant: pinganNLP
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: passages
- MD5: 7c67c6273cf5c6048597c0a092a9d809
- Run description: We used the training data to pretrain a BERT model and to fine-tune XLNet, and finally ensembled several models.
pinganNLP3
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: pinganNLP3
- Participant: pinganNLP
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: dfdac4e3923c8de3d6d994a5e54ed0ad
- Run description: We pretrained a BERT model with the training data, fine-tuned XLNet with the training data, and finally ensembled several models.
relemb_mlm_0_2
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: relemb_mlm_0_2
- Participant: UAmsterdam
- Track: Deep Learning
- Year: 2020
- Submission: 8/4/2020
- Type: auto
- Task: passages
- MD5: c4b1a8157cebf523f6bd0d7d2defa25c
- Run description: We altered the masked language modeling task for pre-training BERT while training on ORCAS and used the pre-trained BERT as a base. Subsequently, we trained an interaction-based BERT ranker on top.
rindri-bm25
Results | Participants | Input | Summary | Appendix
- Run ID: rindri-bm25
- Participant: bl_rmit
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: docs
- MD5: b86875a869f8d834b60b188f7f0f8629
- Run description: Indri BM25, Krovetz stemming, k1=1.6, b=0.7.
RMIT-Bart
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: RMIT-Bart
- Participant: RMIT
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: dc35a0f5df426e9b6e56cacd719e6cf5
- Run description: Pairwise ranker on top of a BART transformer.
RMIT_DFRee
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: RMIT_DFRee
- Participant: RMIT
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: a6ebfd5396bd3c5e288c27b6f4091329
- Run description: Terrier DFRee Ranker with bigrams, Bo1 query expansion, Krovetz stemming, 5 documents, 50 terms.
RMIT_DPH
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: RMIT_DPH
- Participant: RMIT
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 07e3f27d342f156728b60467a222f28a
- Run description: Terrier DPH Ranker with bigrams, Bo1 query expansion, Krovetz stemming, 5 documents, 50 terms.
rmit_indri-fdm
Results | Participants | Input | Summary | Appendix
- Run ID: rmit_indri-fdm
- Participant: bl_rmit
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: ae95eba56d002aa948f0249cc7465965
- Run description: Indri FDM, default params, Krovetz stemming.
rmit_indri-sdm
Results | Participants | Input | Summary | Appendix
- Run ID: rmit_indri-sdm
- Participant: bl_rmit
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: d0a76bc436c64830ed58e2f81d7896fb
- Run description: Indri SDM, default params, Krovetz stemming.
roberta-large
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: roberta-large
- Participant: BITEM
- Track: Deep Learning
- Year: 2020
- Submission: 7/22/2020
- Type: auto
- Task: docs
- MD5: 6a9bbda4d4881d6ef58f3efdd384c02b
- Run description: We trained a roberta-large model on passages; then, for document reranking, we split documents into passages and gave each document the maximum passage score over all its passages.
rr-pass-roberta
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: rr-pass-roberta
- Participant: BITEM
- Track: Deep Learning
- Year: 2020
- Submission: 7/22/2020
- Type: auto
- Task: passages
- MD5: 64912a55777991303f3900eb8b25aa24
- Run description: We trained a roberta-large model on passages; then, for document reranking, we split documents into passages and gave each document the maximum passage score over all its passages.
rterrier-dph
Results | Participants | Input | Summary | Appendix
- Run ID: rterrier-dph
- Participant: bl_rmit
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: e37972a0dea143163f84754815c8f327
- Run description: Terrier DPH, default params, Krovetz stemming.
rterrier-dph_sd
Results | Participants | Input | Summary | Appendix
- Run ID: rterrier-dph_sd
- Participant: bl_rmit
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: b0a0e680038d042b5ecb85afeb5696cd
- Run description: Terrier DPH, bigrams, default params, Krovetz stemming.
rterrier-expC2
Results | Participants | Input | Summary | Appendix
- Run ID: rterrier-expC2
- Participant: bl_rmit
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 8873c6f7c5dfc27663101913d1b84348
- Run description: Terrier in_expC2, default params, Krovetz stemming.
rterrier-tfidf
Results | Participants | Input | Summary | Appendix
- Run ID: rterrier-tfidf
- Participant: bl_rmit
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 15683acaea4b46c9423e80f1c78f63b0
- Run description: Terrier in_expC2, default params, Krovetz stemming.
rterrier-tfidf2
Results | Participants | Input | Summary | Appendix
- Run ID: rterrier-tfidf2
- Participant: bl_rmit
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 764f6ea17df9bb6aaa9771ecdf10383a
- Run description: Terrier Lemur tfidf, default params, Krovetz stemming.
small_1k
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: small_1k
- Participant: reSearch2vec
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: ef658199c9559567d93ceb486f256693
- Run description: Electra model, Dora pretraining, 4 epochs of training
terrier-BM25
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: terrier-BM25
- Participant: bl_rmit
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 796275a90a00744eff7debc6284ed2ab
- Run description: Terrier BM25 model, k1=0.9, b=0.4, Krovetz stemming (term scoring sketched below).
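For reference, one common BM25 formulation with this run's parameters (Terrier's exact variant differs in minor details):

```python
import math

def bm25_term(tf, df, N, dl, avg_dl, k1=0.9, b=0.4):
    """One term's BM25 contribution: idf times a saturated,
    length-normalized term frequency."""
    idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * dl / avg_dl))
```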
terrier-DPH
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: terrier-DPH
- Participant: bl_rmit
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: 0005693dab8cda80f064fac3435d7e2e
- Run description: Terrier DPH model, Krovetz stemming.
terrier-InL2
Results | Participants | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: terrier-InL2
- Participant: bl_rmit
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: c54cc4e0a287c8123fa87bdee3e95d91
- Run description: Terrier InL2 model. Default params. Krovetz stemming.
terrier-jskls
Results | Participants | Input | Summary | Appendix
- Run ID: terrier-jskls
- Participant: bl_rmit
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: ddae939a72e3393fed7a6fb021f267f2
- Run description: Terrier JS_KLs, default params, Krovetz stemming, bigrams, KL QE, 1 document and 10 terms.
TF_IDF_d_2_t_50
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: TF_IDF_d_2_t_50
- Participant: RMIT
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: passages
- MD5: efeeaacd80062726c3d789136798871d
- Run description: Terrier TFIDF model, Krovetz stemming, BA query expansion with 2 documents and 50 terms.
TUW-TK-2Layer
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: TUW-TK-2Layer
- Participant: TU_Vienna
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: passages
- MD5: fa1471ef40520b25e5571c9f08bfbb33
- Run description: 2-layer TK model from https://arxiv.org/abs/2002.01854; for TREC'20 we also pre-trained the Transformer layers on a masked language modeling task (not in the initial paper).
TUW-TK-Sparse
Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (passages-eval) | Appendix
- Run ID: TUW-TK-Sparse
- Participant: TU_Vienna
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: passages
- MD5: 93a3a75a2de0ccbfc0998b376c170ae2
- Run description: Sparse & contextualized stopword adaptation of the TK base model, published at CIKM'20; the paper will be available by TREC.
TUW-TKL-2k
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: TUW-TKL-2k
- Participant: TU_Vienna
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: docs
- MD5: 76430b8bde744c2750f0c0b97b09130c
- Run description: TKL model with 2,000-token input, from https://arxiv.org/abs/2005.04908.
TUW-TKL-4k
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: TUW-TKL-4k
- Participant: TU_Vienna
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: docs
- MD5: 3a228855501292e0ea2e7858b2312b3c
- Run description: TKL model with 4,000-token input, from https://arxiv.org/abs/2005.04908.
uob_runid1
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: uob_runid1
- Participant: UoB
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: docs
- MD5: 4c6e1c28f661a7b221ad439533fb73d2
- Run description: Used a BERT model pre-trained on MS MARCO and fine-tuned using passage-level training data. Aimed to cheaply pre-select 4 meaningful passages to determine the relevance of a document rather than running every passage through the model. Extracted keywords/named entities from the query and used text windows around their occurrences in the document body as input to the model. Took the max passage score as the document score.
uob_runid2
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: uob_runid2
- Participant: UoB
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: docs
- MD5: da2a1b449e8dafa634df80deaf6c4e2c
- Run description: Used a BERT model pre-trained on MS MARCO and fine-tuned using passage-level training data. Aimed to pre-select meaningful passages to determine the relevance of a document rather than running every passage through the model. Split each document into passages and used GloVe embeddings to encode the passages and the query. Assigned each passage a score by similarity to the query and used the top 4 as input to the passage model (a selection sketch follows).
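A hedged sketch of the similarity-based passage pre-selection; mean pooling of GloVe vectors is an assumption, as the description does not state the pooling method:

```python
import numpy as np

def select_passages(query_vecs, passages_vecs, k=4):
    """Pick the k passages whose mean-pooled GloVe embedding is most
    cosine-similar to the query's (pooling choice is an assumption)."""
    q = np.asarray(query_vecs).mean(axis=0)
    q /= np.linalg.norm(q)

    def sim(vecs):
        p = np.asarray(vecs).mean(axis=0)
        return float(q @ p / np.linalg.norm(p))

    order = sorted(range(len(passages_vecs)),
                   key=lambda i: sim(passages_vecs[i]), reverse=True)
    return order[:k]
```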
uob_runid3
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: uob_runid3
- Participant: UoB
- Track: Deep Learning
- Year: 2020
- Submission: 8/5/2020
- Type: auto
- Task: docs
- MD5: bd2914f8e8ebbe9136ae8d53fe2e430a
- Run description: Used a BERT model pre-trained on MS MARCO and fine-tuned using passage-level training data. Aimed to pre-select meaningful passages to determine the relevance of a document rather than running every passage through the model. Split each document into passages and used TextRank with GloVe embeddings to find the most important passages in a document. Used the top 4 as input to the passage-level model, taking the max score as the document score.
uogTr31oR
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: uogTr31oR
- Participant: UoGTr
- Track: Deep Learning
- Year: 2020
- Submission: 8/7/2020
- Type: auto
- Task: docs
- MD5: 85e4d3a751dacc1dbf54ec8436d8115c
- Run description: Uses 31 features to re-rank a candidate set obtained by DPH Divergence from Randomness. Learning to rank uses LightGBM; features include traditional and neural models such as BERT & ColBERT. ORCAS is included as a field. Run created using PyTerrier.
uogTrBaseDPH
Results | Participants | Input | Summary | Appendix
- Run ID: uogTrBaseDPH
- Participant: bl_uogTr
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 5f313422ff17453b4e478c1a008be71d
- Run description: Terrier's DPH model from the Divergence from Randomness framework. Run was created using PyTerrier.
uogTrBaseDPHQ
Results | Participants | Input | Summary | Appendix
- Run ID: uogTrBaseDPHQ
- Participant: bl_uogTr
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: ffd11c5599dac75011ec4676b4df7dd6
- Run description: Terrier's DPH model and Bo1 automatic query expansion from the Divergence from Randomness framework. Run was created using PyTerrier.
uogTrBaseL16
Results | Participants | Input | Summary | Appendix
- Run ID: uogTrBaseL16
- Participant: bl_uogTr
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 1e7b7bd5a7a9946f3f957f613f044871
- Run description: LightGBM re-ranking of 16 non-neural features; candidate set identified using DPH; run created using PyTerrier. An illustrative LTR sketch follows.
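A hedged sketch of LambdaMART-style re-ranking with LightGBM on synthetic data; the 16 features, labels, and training regime here are placeholders, not uogTr's actual setup:

```python
import lightgbm as lgb
import numpy as np

# synthetic stand-ins: one row per (query, doc) candidate,
# 16 non-neural features, grouped by query for listwise training
X = np.random.rand(1000, 16)
y = np.random.randint(0, 2, 1000)   # relevance labels
groups = [100] * 10                 # 10 queries x 100 candidates each

ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=100)
ranker.fit(X, y, group=groups)
scores = ranker.predict(X[:100])    # re-score one query's candidate set
```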
uogTrBaseL17o
Results | Participants | Input | Summary | Appendix
- Run ID: uogTrBaseL17o
- Participant: bl_uogTr
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 1227bd0756141b1aab4f5ba65dd3af54
- Run description: LightGBM re-ranking of 16 non-neural features + ORCAS as a field; candidate set identified using DPH; Run created using PyTerrier
uogTrBaseQL16
Results | Participants | Input | Summary | Appendix
- Run ID: uogTrBaseQL16
- Participant: bl_uogTr
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 65f251728fe34b22adff6c4020cdc54f
- Run description: LightGBM re-ranking of 16 non-neural features; candidate set identified using DPH & Bo1; Run created using PyTerrier
uogTrBaseQL17o
Results | Participants | Input | Summary | Appendix
- Run ID: uogTrBaseQL17o
- Participant: bl_uogTr
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: b135b34cf2d357623b012909fa32f90b
- Run description: LightGBM re-ranking of 16 non-neural features + ORCAS as a field; candidate set identified using DPH & Bo1 query expansion; Run created using PyTerrier
uogTrQCBMP
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: uogTrQCBMP
- Participant: UoGTr
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 055a710f31eff4ba2de9276bb1e69788
- Run description: ColBERT MaxPassage re-ranking of a candidate set created using the Divergence from Randomness DPH + Bo1 query expansion models. Run created using PyTerrier.
uogTrT20
Results | Participants | Proceedings | Input | Summary | Appendix
- Run ID: uogTrT20
- Participant: UoGTr
- Track: Deep Learning
- Year: 2020
- Submission: 8/6/2020
- Type: auto
- Task: docs
- MD5: 3be19f1e00fec2f5f2de00a747beb129
- Run description: An initial retrieval using the DPH Divergence from Randomness model is followed by applying TTTTT in a novel manner to perform query expansion to form the candidate set. 20 features are used to re-rank the candidate set. Learning to rank uses LightGBM; features include traditional and neural models such as ColBERT. Run created using PyTerrier.