Runs - Round 5 2020
BioInfo-run1
- Run ID: BioInfo-run1
- Participant: BioinformaticsUA
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: b37e05c1fe39a4121c846f6cb83ceb1a
- Run description: This run uses the open baseline "rd5_borda_1000" (from the UIowaS team) and applies a neural ranking model [1] to rerank the top 10 documents. REFs: [1] T. Almeida and S. Matos, "Calling Attention to Passages for Biomedical Question Answering," in Advances in Information Retrieval, 2020, pp. 69--77.
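Each run in this list carries an MD5 checksum for the submitted run file. A minimal sketch for verifying a downloaded run file against the listed digest (the local filename is hypothetical):

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 hex digest of a file, reading in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical local filename; compare against the MD5 listed above.
assert md5_of("BioInfo-run1.txt") == "b37e05c1fe39a4121c846f6cb83ceb1a"
```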
BioInfo-run2
- Run ID: BioInfo-run2
- Participant: BioinformaticsUA
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 7c79117d622d8e54eebf52ff9db265f7
- Run description: This run performs rbf fusion over 6 runs. These runs were obtained by reranking the open baseline "rd5_borda_1000" (from the UIowaS team) using the neural ranking model [1] to rerank the top 15 documents. REFs: [1] T. Almeida and S. Matos, "Calling Attention to Passages for Biomedical Question Answering," in Advances in Information Retrieval, 2020, pp. 69--77.
BioInfo-run3
- Run ID: BioInfo-run3
- Participant: BioinformaticsUA
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 1ffa707d22f51bbfffff613ff7fd0be9
- Run description: This run performs rbf fusion over 8 runs. These runs were obtained by reranking the open baseline "rd5_borda_1000" (from the UIowaS team) using the neural ranking model [1] to rerank the top 10 documents. REFs: [1] T. Almeida and S. Matos, "Calling Attention to Passages for Biomedical Question Answering," in Advances in Information Retrieval, 2020, pp. 69--77.
BioInfo-run4
- Run ID: BioInfo-run4
- Participant: BioinformaticsUA
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: automatic
- MD5: 23289aadecc0af2a585f5f1e5fdbf454
- Run description: This run tries to replicate the pipeline used in the first batch, i.e., it explores a BM25 + reranking approach [1]. REFs: [1] T. Almeida and S. Matos, "Calling Attention to Passages for Biomedical Question Answering," in Advances in Information Retrieval, 2020, pp. 69--77.
BioInfo-run5
- Run ID: BioInfo-run5
- Participant: BioinformaticsUA
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: automatic
- MD5: 862e11306e0ffd54b6e68f9490d3d75d
- Run description: This run tries to replicate the pipeline used in the first batch, i.e., it explores a BM25 + reranking approach [1], trained with BioASQ data. REFs: [1] T. Almeida and S. Matos, "Calling Attention to Passages for Biomedical Question Answering," in Advances in Information Retrieval, 2020, pp. 69--77.
BioInfo-run6
- Run ID: BioInfo-run6
- Participant: BioinformaticsUA
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: automatic
- MD5: 9df8f925e48e341bc112df1e0fc7a9a0
- Run description: This run tries to replicate the pipeline used in the first round, i.e., it explores a BM25 + reranking approach [1], trained with BioASQ data. REFs: [1] T. Almeida and S. Matos, "Calling Attention to Passages for Biomedical Question Answering," in Advances in Information Retrieval, 2020, pp. 69--77.
bm25_bl_run5
- Run ID: bm25_bl_run5
- Participant: UH_UAQ
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 6e32de342f0ae5df892f11b420c8c56e
- Run description: BM25 baseline
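To make the format of these baselines concrete, here is a rough sketch of a BM25 baseline that writes a run in the standard six-column TREC format, using the rank_bm25 package (an illustration, not the team's implementation; topics, doc_ids, and doc_texts are assumed inputs):

```python
from rank_bm25 import BM25Okapi

def write_bm25_run(topics, doc_ids, doc_texts, run_tag="bm25_baseline", k=1000):
    # topics: [(topic_id, query_string), ...]; doc_texts align with doc_ids.
    bm25 = BM25Okapi([t.lower().split() for t in doc_texts])
    with open(f"{run_tag}.txt", "w") as out:
        for topic_id, query in topics:
            scores = bm25.get_scores(query.lower().split())
            ranked = sorted(zip(doc_ids, scores), key=lambda x: -x[1])[:k]
            for rank, (docid, score) in enumerate(ranked, start=1):
                # topic-id Q0 doc-id rank score run-tag
                out.write(f"{topic_id} Q0 {docid} {rank} {score:.4f} {run_tag}\n")
```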
bm25L1_bilstm_run
- Run ID: bm25L1_bilstm_run
- Participant: UH_UAQ
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 0aff3b5e27c06e5e61a2f3500322df4a
- Run description: BM25L + BERT + BiLSTM (abstracts only, original qrels)
bm25L1_linear_run
- Run ID: bm25L1_linear_run
- Participant: UH_UAQ
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: f47b78294e767e92415fa1d0693ab839
- Run description: BM25L + BERT + linear layer (abstracts only, original qrels)
bm25L2_bilstm_run
- Run ID: bm25L2_bilstm_run
- Participant: UH_UAQ
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: f7f7b73882e02c13416080baff81d86f
- Run description: BM25L2 + BERT + BiLSTM (abstracts only)
bm25L2_linear_run
- Run ID: bm25L2_linear_run
- Participant: UH_UAQ
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: e4046c53e2be30d707c033e2c10a25c1
- Run description: BM25L + BERT + linear layer (abstracts only)
bm25L_bilstm_run
- Run ID: bm25L_bilstm_run
- Participant: UH_UAQ
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: 93a906a64c6906ce8816f62aa1244a02
- Run description: BM25L + BERT + BiLSTM trained on qrels
bm25L_bl_run5
- Run ID: bm25L_bl_run5
- Participant: UH_UAQ
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: f35706a07f0d31a397b1e5c43006d123
- Run description: BM25L baseline
BRPHJ_bert
- Run ID: BRPHJ_bert
- Participant: BRPHJ
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: automatic
- MD5: 35fdda3dea21f34e729274505d5f49b7
- Run description: Data from previous rounds is used to train the model. BM25 hits were re-ranked using the trained BioBERT classification model.
BRPHJ_BM25
- Run ID: BRPHJ_BM25
- Participant: BRPHJ
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: automatic
- MD5: 79075999001f46a67712a26ed5e17e6e
- Run description: The BM25 technique is used to generate the hits. The query, question, and narrative fields are combined to form the full query. A recency factor is included in the relevance decision.
BRPHJ_logistic
- Run ID: BRPHJ_logistic
- Participant: BRPHJ
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: automatic
- MD5: 6965bb51f6f5ddcd03b61901fffcb5dc
- Run description: BM25 results were re-ranked using a logistic regression model trained on previous-round data with tf-idf inputs.
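A minimal sketch of this tf-idf + logistic regression reranking idea, with training pairs assumed to come from earlier-round qrels (scikit-learn used for illustration, not the team's code):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def train_qrel_classifier(train_texts, train_labels):
    # train_texts: judged abstracts from previous rounds; labels: 1 = relevant.
    vec = TfidfVectorizer(max_features=50000, stop_words="english")
    clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(train_texts),
                                                train_labels)
    return vec, clf

def rerank(vec, clf, bm25_hits):
    # bm25_hits: [(docid, text), ...]; reorder by P(relevant | text).
    probs = clf.predict_proba(vec.transform([t for _, t in bm25_hits]))[:, 1]
    return sorted(zip((d for d, _ in bm25_hits), probs), key=lambda x: -x[1])
```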
CincyMedIR-0-2-3-4
- Run ID: CincyMedIR-0-2-3-4
- Participant: CincyMedIR
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: f256681d3414284dcc37631382d620f5
- Run description: ElasticSearch: [title, abstract], cross_fields; Query: query, question, narrative; query_weight: 0.2, rescore_query_weight: 0.8; Learning To Rank: Coordinate Ascent.
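For readers unfamiliar with the notation, these parameter strings map onto Elasticsearch's rescore API. Here is a sketch of the request body implied by this run's query_weight/rescore_query_weight values; how the topic fields are split between the base and rescore queries, the window size, and the index name are assumptions, and the learning-to-rank stage is omitted:

```python
def build_body(query: str, question: str, narrative: str) -> dict:
    # cross_fields multi_match over the two indexed fields.
    match = lambda text: {"multi_match": {"query": text,
                                          "fields": ["title", "abstract"],
                                          "type": "cross_fields"}}
    return {
        "query": match(query),
        "rescore": {
            "window_size": 100,  # assumed
            "query": {
                "rescore_query": match(f"{question} {narrative}"),
                "query_weight": 0.2,
                "rescore_query_weight": 0.8,
            },
        },
    }

# es.search(index="cord19", body=build_body(q, qn, nr))  # elasticsearch-py
```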
CincyMedIR-0-4-1-3
- Run ID: CincyMedIR-0-4-1-3
- Participant: CincyMedIR
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 0d11e050cf8aef8ef0507068817b2fb8
- Run description: ElasticSearch: [title, abstract, metamap_10_term_title_abstract], cross_fields; Query: query, question, narrative, lexigram; query_weight: 1, rescore_query_weight: 1 [total]; Learning To Rank: AdaRank.
CincyMedIR-1
- Run ID: CincyMedIR-1
- Participant: CincyMedIR
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 3c32c1c61e4223979eab07834c427448
- Run description: ElasticSearch: [title, abstract, metamap_00_term_title_abstract], cross_fields; Query: query, question, narrative, lexigram; query_weight: 1, rescore_query_weight: 1 [total]; Learning To Rank: none.
CincyMedIR-1-2-1-3
- Run ID: CincyMedIR-1-2-1-3
- Participant: CincyMedIR
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: c122fe82986a17248f8a61cd621744f5
- Run description: ElasticSearch: [title, abstract], cross_fields; Query: query, question, narrative, lexigram; query_weight: 1, rescore_query_weight: 1 [total]; Learning To Rank: AdaRank.
CincyMedIR-1-4-1-3
- Run ID: CincyMedIR-1-4-1-3
- Participant: CincyMedIR
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 8f1b876d50042041b6f40f5c16ac88d7
- Run description: ElasticSearch: [title, abstract, metamap_00_term_title_abstract], cross_fields; Query: query, question, narrative, lexigram; query_weight: 1, rescore_query_weight: 1 [total]; Learning To Rank: AdaRank.
CincyMedIR-1-6-4-3
- Run ID: CincyMedIR-1-6-4-3
- Participant: CincyMedIR
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 523d43bededae117d538ad9c5287bbb4
- Run description: ElasticSearch: [title, abstract, metamap_10_term_title_abstract], cross_fields; Query: query, question, narrative, lexigram, metamap_10_term; query_weight: 0.4, rescore_query_weight: 0.6 [total]; Learning To Rank: AdaRank.
CincyMedIR-20-5-4
- Run ID: CincyMedIR-20-5-4
- Participant: CincyMedIR
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: a38c38c5a17d9a071b83afc8afd724f4
- Run description: ElasticSearch: [title, abstract], cross_fields; Query: query, question, narrative; query_weight: 1, rescore_query_weight: 1 [multiply]; Learning To Rank: Coordinate Ascent.
CincyMedIR-s-20-5-4
- Run ID: CincyMedIR-s-20-5-4
- Participant: CincyMedIR
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 323f7ab733f359a5bacd4e90d4b3320c
- Run description: ElasticSearch: [title, abstract], cross_fields; Query: query, question, narrative, lexigram; query_weight: 1, rescore_query_weight: 1 [multiply]; Learning To Rank: Coordinate Ascent.
covidex.r5.1s
- Run ID: covidex.r5.1s
- Participant: covidex
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 6ef74ae993609bcf7319dc71381e5f54
- Run description: A reciprocal rank fusion of two baseline runs that were independently reranked by a pointwise reranker (monoT5). The reranker was trained on MedMARCO (MacAvaney et al., SLEDGE, 2020). Baseline runs are rows 7 and 8 from https://github.com/castorini/anserini/blob/master/docs/experiments-covid.md#round-5
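Reciprocal rank fusion (RRF), used by this and many other runs this round, combines ranked lists by summing 1/(k + rank) per document. A minimal sketch with the conventional k=60 from Cormack et al. (2009), since the run does not state its exact constant:

```python
from collections import defaultdict

def rrf(runs, k=60, depth=1000):
    # runs: iterable of docid lists, each in rank order (best first).
    scores = defaultdict(float)
    for ranking in runs:
        for rank, docid in enumerate(ranking[:depth], start=1):
            scores[docid] += 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda x: -x[1])
```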
covidex.r5.1s.lr
- Run ID: covidex.r5.1s.lr
- Participant: covidex
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: 9c27f2dc09e8f5a6089055de34787042
- Run description: Interpolation (alpha=0.6) of covidex.r5.1s scores and scores from a logistic regression classifier trained on qrels of rounds 1, 2, 3, and 4 with tf-idf as input.
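A sketch of the score interpolation these .lr runs describe; only the alpha=0.6 mixing weight comes from the run description, and min-max normalization of the two score distributions is an assumption:

```python
def interpolate(rerank_scores: dict, clf_scores: dict, alpha: float = 0.6) -> dict:
    # Each argument maps docid -> score; returns the blended scores.
    def norm(s):
        lo, hi = min(s.values()), max(s.values())
        return {d: (v - lo) / ((hi - lo) or 1.0) for d, v in s.items()}
    a, b = norm(rerank_scores), norm(clf_scores)
    return {d: alpha * a.get(d, 0.0) + (1 - alpha) * b.get(d, 0.0)
            for d in set(a) | set(b)}
```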
covidex.r5.2s
- Run ID: covidex.r5.2s
- Participant: covidex
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 7d34ee725fe2f01bf8cd3e0af1e3308f
- Run description: A reciprocal rank fusion of two baseline runs that were independently reranked by a pointwise reranker (monoT5) then by a pairwise reranker (duoT5). Both rerankers were trained on MedMARCO (MacAvaney et al., SLEDGE, 2020). Baseline runs are rows 7 and 8 from https://github.com/castorini/anserini/blob/master/docs/experiments-covid.md#round-5
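A hedged sketch of monoT5-style pointwise scoring as used in these runs: the model reads "Query: ... Document: ... Relevant:" and the score is the probability of generating the token "true". The checkpoint shown is the public monoT5 model, which is an assumption about these particular runs:

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("castorini/monot5-base-msmarco")
model = T5ForConditionalGeneration.from_pretrained("castorini/monot5-base-msmarco")
TRUE_ID, FALSE_ID = tok.encode("true")[0], tok.encode("false")[0]

def monot5_score(query: str, doc: str) -> float:
    enc = tok(f"Query: {query} Document: {doc} Relevant:",
              return_tensors="pt", truncation=True, max_length=512)
    out = model.generate(**enc, max_new_tokens=1,
                         output_scores=True, return_dict_in_generate=True)
    logits = out.scores[0][0, [TRUE_ID, FALSE_ID]]
    return torch.softmax(logits, dim=0)[0].item()  # P("true")
```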
covidex.r5.2s.lr
- Run ID: covidex.r5.2s.lr
- Participant: covidex
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: 733d99a883df5656a2388344bf7ac8cb
- Run description: Interpolation (alpha=0.6) of covidex.r5.2s scores and scores from a logistic regression classifier trained on qrels of rounds 1, 2, 3, and 4 with tf-idf as input.
covidex.r5.d2q.1s
- Run ID: covidex.r5.d2q.1s
- Participant: covidex
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: afc32ba76570ecf8fa487ad172d3ab25
- Run description: A reciprocal rank fusion of two baseline runs that were independently reranked by a pointwise reranker (monoT5). Documents were expanded with doc2query prior to indexing. The reranker was trained on MedMARCO (MacAvaney et al., SLEDGE, 2020). Baseline runs are rows 7 and 8 from https://github.com/castorini/anserini/blob/master/docs/experiments-covid-doc2query.md#round-5
covidex.r5.d2q.1s.lr
- Run ID: covidex.r5.d2q.1s.lr
- Participant: covidex
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: f9f7b882cd128b57968394e841f1ec20
- Run description: Interpolation (alpha=0.6) of covidex.r5.d2q.1s scores and scores from a logistic regression classifier trained on qrels of rounds 1, 2, 3, and 4 with tf-idf as input.
covidex.r5.d2q.2s
- Run ID: covidex.r5.d2q.2s
- Participant: covidex
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 5ccdf160f28f13cf396db17adb2ac919
- Run description: A reciprocal rank fusion of two baseline runs that were independently reranked by a pointwise reranker (monoT5) then by a pairwise reranker (duoT5). Documents were expanded with doc2query prior to indexing. Both rerankers were trained on MedMARCO (MacAvaney et al., SLEDGE, 2020). Baseline runs are rows 7 and 8 from https://github.com/castorini/anserini/blob/master/docs/experiments-covid-doc2query.md#round-5
covidex.r5.d2q.2s.lr
- Run ID: covidex.r5.d2q.2s.lr
- Participant: covidex
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: 2cda0d4b6c29da9e3f209b9d7f61372f
- Run description: Interpolation (alpha=0.6) of covidex.r5.d2q.2s scores and scores from a logistic regression classifier trained on qrels of rounds 1, 2, 3, and 4 with tf-idf as input.
CSIROmedFR
- Run ID: CSIROmedFR
- Participant: CSIROmed
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: b94924f5082796f13f249f18354cc63b
- Run description: Round-robin reciprocal rank fusion between CSIROmedNIP, BM25 baseline and Anserini baseline number 9 (round 5).
CSIROmedNIP
- Run ID: CSIROmedNIP
- Participant: CSIROmed
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 125485fef95a254a0e7f79eaa2622581
- Run description: Similar to the Round 4/5 CSIROmedNIR run, but with analytical normalization and the narrative query field, querying only the abstract/full text for BM25 and the abstract for cosine scoring.
CSIROmedNIR
- Run ID: CSIROmedNIR
- Participant: CSIROmed
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: d34dfda32d214f7b9517b1d1e46f91f8
- Run description: The same as round 4 CSIROmedNIR. Baseline run.
DoRA_MSMARCO_1k
- Run ID: DoRA_MSMARCO_1k
- Participant: reSearch2vec
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 4457c9532cf8e1118c61d047d13b1ef4
- Run description: DoRA pretraining on SciBERT, then 3 epochs on the MS MARCO dataset, then fine-tuning on previous relevance judgments. BM25 is used to get the top 1k, which is then reranked.
DoRA_MSMARCO_1k_C
- Run ID: DoRA_MSMARCO_1k_C
- Participant: reSearch2vec
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: ea912589b6421b510322fd6d53a01f76
- Run description: Corrected run: DoRA pretraining, 3 epochs on MS MARCO, fine-tuning on previous judgments.
DoRA_MSMARCO_6k
- Run ID: DoRA_MSMARCO_6k
- Participant: reSearch2vec
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 74cc854d0c3bebaf258f711db313a773
- Run description: DoRA pretraining with SciBERT, 3 epochs on MS MARCO, finished with fine-tuning on previous relevance judgments. BM25 for the top 6k, which is then reranked.
DoRA_NO_Judgments_1k
- Run ID: DoRA_NO_Judgments_1k
- Participant: reSearch2vec
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: automatic
- MD5: 0973733d80fd29ce397b7b25f73dbd69
- Run description: DoRA pretraining on SciBERT, with no training on judgment scores. The top 1000 documents for each topic are retrieved with BM25 and then reranked.
DoRA_NO_Judgments_6k
- Run ID: DoRA_NO_Judgments_6k
- Participant: reSearch2vec
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: automatic
- MD5: 104f09fd2312f3243221914fa348f5bd
- Run description: DoRA pretraining on SciBERT. BM25 for the top 6k, which is then reranked.
DoRAWithJudgments_1k
- Run ID: DoRAWithJudgments_1k
- Participant: reSearch2vec
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 3a0dc60b8210e8ba7fdb8a7f2e9afa7c
- Run description: DoRA pretraining with Transformers SciBERT as the base, then trained on previous rounds' judgments. BM25 for the top 1k, which is then reranked.
DoRAWithJudgments_6k
- Run ID: DoRAWithJudgments_6k
- Participant: reSearch2vec
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 018ba3af57915fb2328348b2ad8b345c
- Run description: DoRA pretraining with SciBERT, finished with fine-tuning on previous relevance judgments. BM25 for the top 6k, which is then reranked.
elhuyar_prf_nof99d
- Run ID: elhuyar_prf_nof99d
- Participant: Elhuyar_NLP_team
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: d3dabff8ff74cb842fb56870848cabce
- Run description: We tackle this document retrieval task in two steps: (a) a first ranking and (b) re-ranking. To obtain the first ranking of relevant documents for the queries, we use a language-modeling-based information retrieval approach (Ponte & Croft, 1998) including pseudo-relevance feedback. For that purpose, we used the Indri search engine (Strohman, 2005), which combines Bayesian networks with language models. Full articles are indexed, and titles and abstracts are expanded. When building the query, different weights are assigned to the query, question, and narrative fields. Then, we re-rank based on BERT, following a strategy similar to the one proposed by Nogueira and Cho (2019). We tuned the Clinical BERT model (Alsentzer et al., 2019) to the task of identifying relevant query-abstract pairs using a silver dataset composed of titles and their corresponding abstracts from the COVID-19 Open Research dataset and the qrels of the previous rounds. Indri and tuned Clinical BERT scores are linearly combined, and re-ranking is performed according to that new score. In this run we used a weight of 0.99 for the Clinical BERT score. Documents judged in previous rounds are removed from the ranking.
elhuyar_prf_nof99p
- Run ID: elhuyar_prf_nof99p
- Participant: Elhuyar_NLP_team
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 7602af4c690b7009d5c55a9504700f5d
- Run description: We tackle this document retrieval task in two steps: (a) a first ranking and (b) re-ranking. To obtain the first ranking of relevant documents for the queries, we use a language-modeling-based information retrieval approach (Ponte & Croft, 1998) including pseudo-relevance feedback. For that purpose, we used the Indri search engine (Strohman, 2005), which combines Bayesian networks with language models. Full articles are indexed, and titles and abstracts are expanded. When building the query, different weights are assigned to the query, question, and narrative fields. Then, we re-rank based on BERT, following a strategy similar to the one proposed by Nogueira and Cho (2019). We tuned the Clinical BERT model (Alsentzer et al., 2019) to the task of identifying relevant query-abstract pairs using a silver dataset composed of titles and their corresponding abstracts from the COVID-19 Open Research dataset and the qrels of the previous rounds. Indri and tuned Clinical BERT scores are linearly combined, and re-ranking is performed according to that new score. In this run we used a weight of 0.99 for the Clinical BERT score.
elhuyar_prf_nof9p
- Run ID: elhuyar_prf_nof9p
- Participant: Elhuyar_NLP_team
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 4998a9a2116527709eb956c96cc690f7
- Run description: We tackle this document retrieval task in two steps: (a) a first ranking and (b) re-ranking. To obtain the first ranking of relevant documents for the queries, we use a language-modeling-based information retrieval approach (Ponte & Croft, 1998) including pseudo-relevance feedback. For that purpose, we used the Indri search engine (Strohman, 2005), which combines Bayesian networks with language models. Full articles are indexed, and titles and abstracts are expanded. When building the query, different weights are assigned to the query, question, and narrative fields. Then, we re-rank based on BERT, following a strategy similar to the one proposed by Nogueira and Cho (2019). We tuned the Clinical BERT model (Alsentzer et al., 2019) to the task of identifying relevant query-abstract pairs using a silver dataset composed of titles and their corresponding abstracts from the COVID-19 Open Research dataset and the qrels of the previous rounds. Indri and tuned Clinical BERT scores are linearly combined, and re-ranking is performed according to that new score. In this run we used a weight of 0.9 for the Clinical BERT score.
elhuyar_rrf_nof09p
- Run ID: elhuyar_rrf_nof09p
- Participant: Elhuyar_NLP_team
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 89dda341cfca6de20b5c92c9351c78e7
- Run description: We tackle this document retrieval task in two steps: (a) a first ranking and (b) re-ranking. To obtain the first ranking of relevant documents for the queries, we use a language-modeling-based information retrieval approach (Ponte & Croft, 1998) including relevance feedback based on the qrels of the previous rounds. For that purpose, we used the Indri search engine (Strohman, 2005), which combines Bayesian networks with language models. Full articles are indexed, and titles and abstracts are expanded. When building the query, different weights are assigned to the query, question, and narrative fields. Then, we re-rank based on BERT, following a strategy similar to the one proposed by Nogueira and Cho (2019). We tuned the Clinical BERT model (Alsentzer et al., 2019) to the task of identifying relevant query-abstract pairs using a silver dataset composed of titles and their corresponding abstracts from the COVID-19 Open Research dataset and the qrels of the previous rounds. Indri and tuned Clinical BERT scores are linearly combined, and re-ranking is performed according to that new score. In this run we used a weight of 0.9 for the Clinical BERT score.
elhuyar_rrf_nof99p
- Run ID: elhuyar_rrf_nof99p
- Participant: Elhuyar_NLP_team
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: a7eb9ea33106f837e1f39a87056679ac
- Run description: We tackle this document retrieval task in two steps: (a) a first ranking and (b) re-ranking. To obtain the first ranking of relevant documents for the queries, we use a language-modeling-based information retrieval approach (Ponte & Croft, 1998) including relevance feedback based on the qrels of the previous rounds. For that purpose, we used the Indri search engine (Strohman, 2005), which combines Bayesian networks with language models. Full articles are indexed, and titles and abstracts are expanded. When building the query, different weights are assigned to the query, question, and narrative fields. Then, we re-rank based on BERT, following a strategy similar to the one proposed by Nogueira and Cho (2019). We tuned the Clinical BERT model (Alsentzer et al., 2019) to the task of identifying relevant query-abstract pairs using a silver dataset composed of titles and their corresponding abstracts from the COVID-19 Open Research dataset and the qrels of the previous rounds. Indri and tuned Clinical BERT scores are linearly combined, and re-ranking is performed according to that new score. In this run we used a weight of 0.99 for the Clinical BERT score.
fc3-qrel-hidden
- Run ID: fc3-qrel-hidden
- Participant: fcavalier
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: manual
- MD5: bdefe1892b7f9b9a8c28904dac094273
- Run description: Find docids not in the round 1-4 judgment file that have titles, authors, and/or s2_id matching previously judged documents under different docids. The result is 172 rows per topic on average. (There are fewer than 1000 rows on purpose.) Topics 46-50 are new for round 5, and since they have no published previous judgments, the submission contains just one dummy result for each of them, per the requirements.
final.qruir.f.txt
- Run ID: final.qruir.f.txt
- Participant: ruir
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: manual
- MD5: 02fc1370cd47bbb7cfd797432afae2b6
- Run description: Filtered on Evidence Partners data.
final.qruir.txt
- Run ID: final.qruir.txt
- Participant: ruir
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: manual
- MD5: 8a98f6c22e77ee41d16da01324e8904a
- Run description: Anserini query-only fusion.
final.qruir33.txt
- Run ID: final.qruir33.txt
- Participant: ruir
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: manual
- MD5: d6523b64dd3e9d33ec6c14ee228cdab7
- Run description: Anserini query-only fusion, but adding task terms on the full-text index; task classification was done manually.
HKPU-BM25-dPRF
- Run ID: HKPU-BM25-dPRF
- Participant: HKPU
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 3abc9645b9dd1c51f7ef56355000ddba
- Run description: The index is built from the combined title and abstract fields of the metadata file. Retrieval is performed by the BM25 model, on long queries consisting of the combined Query, Question and Narrative. Document-based retrieval with pseudo-relevance feedback is employed.
HKPU-Gos1-dPRF
- Run ID: HKPU-Gos1-dPRF
- Participant: HKPU
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 0b54befbc95718ebc47fe1967227a735
- Run description: The index is built from the combined title and abstract fields of the metadata file. Retrieval is performed with scoring function #1 presented in the Conclusion section (p. 469) of the paper by Goswami et al. (Goswami et al., Exploring the space of information retrieval term scoring functions, Information Processing and Management, 53 (2017), pp. 454-472), on long queries consisting of the combined Query, Question and Narrative. Document-based retrieval with pseudo-relevance feedback is employed. The method applied in this run is the same as our Round 4 submission HKPU-Gos1-pPRF, except that document-based retrieval is used in this run (HKPU-Gos1-dPRF), while passage-based retrieval was used for the previous HKPU-Gos1-pPRF. The difference between document-based and passage-based retrieval in the current task is not expected to be significant, as the documents mainly consist of short texts containing the title and the abstract.
HKPU-LGD-dPRF
- Run ID: HKPU-LGD-dPRF
- Participant: HKPU
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 0646c0893777061676d74a0ed85bffee
- Run description: The index is built from the combined title and abstract fields of the metadata file. Retrieval is performed by the information-based LGD model (Clinchant, S. and Gaussier, E., Information-based models for ad hoc IR. In Proceedings of the ACM SIGIR (2010), pp. 234-241), on long queries consisting of the combined Query, Question and Narrative. Document-based retrieval with pseudo-relevance feedback is employed.
HKPU-MATF-dPRF
- Run ID: HKPU-MATF-dPRF
- Participant: HKPU
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: a134ceb6598edd723b331653f56f8862
- Run description: The index is built from the combined title and abstract fields of the metadata file. Retrieval is performed by the MATF model (Paik, J.H., A novel tf-idf weighting scheme for effective ranking. In Proceedings of the ACM SIGIR (2013), pp. 343-352), on long queries consisting of the combined Query, Question and Narrative. Document-based retrieval with pseudo-relevance feedback is employed. The method applied in this run is the same as our Round 4 submission HKPU-MATF-pPRF, except that document-based retrieval is used in this run (HKPU-MATF-dPRF), while passage-based retrieval was used for the previous HKPU-MATF-pPRF. The difference between document-based and passage-based retrieval in the current task is not expected to be significant, as the documents mainly consist of short texts containing the title and the abstract.
HKPU-MVD-dPRF
- Run ID: HKPU-MVD-dPRF
- Participant: HKPU
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: e48719b3ab5686cbcddca74e24b2c0fc
- Run description: The index is built from the combined title and abstract fields of the metadata file. Retrieval is performed by the MVD model (Paik, J.H., A probabilistic model for information retrieval based on maximum value distribution. In Proceedings of the ACM SIGIR (2015), pp. 585-594), on long queries consisting of the combined Query, Question and Narrative. Document-based retrieval with pseudo-relevance feedback is employed.
HKPU-PL2-dPRF
- Run ID: HKPU-PL2-dPRF
- Participant: HKPU
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: c05321dfd59f6624e3945454e80b877d
- Run description: The index is built from the combined title and abstract fields of the metadata file. Retrieval is performed by the PL2 model of the Divergence from Randomness framework (Amati, G. and van Rijsbergen, C.J., Probabilistic models of information retrieval based on measuring the divergence from randomness, ACM Transactions on Information Systems, 20, 4 (2002), 357-389), on long queries consisting of the combined Query, Question and Narrative. Document-based retrieval with pseudo-relevance feedback is employed.
HKPU-RM3-dPRF
- Run ID: HKPU-RM3-dPRF
- Participant: HKPU
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 89eb742bd16fd8fb4c5545b7ea106424
- Run description: The index is built from the combined title and abstract fields of the metadata file. Retrieval is performed by the RM3 variant of the Relevance Model (Lavrenko, V. and Croft, W.B., Relevance-based language model, In Proceedings of the ACM SIGIR (2001), pp. 120-127; and Abdul-Jaleel et al., UMass at TREC 2004. In Proceedings of TREC-13, pp. 715-725), on long queries consisting of the combined Query, Question and Narrative.
HKPU-SPUD-dPRF
- Run ID: HKPU-SPUD-dPRF
- Participant: HKPU
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: aa8fe914088bbbf47c2cb2c7788fc2d2
- Run description: The index is built from the combined title and abstract fields of the metadata file. Retrieval is performed by the SPUD model (Cummins et al., A Polya urn document language model for improved information retrieval. ACM TOIS 33, 4, Article 21 (2015), pp. 1-34), on long queries consisting of the combined Query, Question and Narrative. Document-based retrieval with pseudo-relevance feedback is employed. The method applied in this run is the same as our Round 4 submission HKPU-SPUD-pPRF, except that document-based retrieval is used in this run (HKPU-SPUD-dPRF), while passage-based retrieval was used for the previous HKPU-SPUD-pPRF. The difference between document-based and passage-based retrieval in the current task is not expected to be significant, as the documents mainly consist of short texts containing the title and the abstract.
jlbasernd5-jlQErnd5
- Run ID: jlbasernd5-jlQErnd5
- Participant: julielab
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: automatic
- MD5: 338abffaca859e9e13775489501347cc
- Run description: Reciprocal rank fusion approach with the following settings: ElasticSearch with BM25 where k1=3.9 and b=0.55 (settings taken from SLEDGE paper). Index documents are the document paragraphs. Fused run 1: Stop word filtered query as mandatory clause. Stop word filtered question and narrative as optional clause. Fused run 2: Manually added synonyms to covid-indicating words ("coronavirus") and some model organisms.
MacEwan-base
- Run ID: MacEwan-base
- Participant: MacEwan_Business
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: automatic
- MD5: 1a94b8773a3eaa024fc3fc65e065bb3b
- Run description: Bag of words, combining all three topic fields, no special weighting, against body and abstract text.
mpiid5_run1
- Run ID: mpiid5_run1
- Participant: mpiid5
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 9e82ee91ba1bf54b99a34377b2cd569f
- Run description: We re-rank the top 2000 documents from the Anserini RM3 baseline (9th row). For the re-ranking method, we use the ELECTRA-Base model fine-tuned on the MSMARCO passage dataset. The model is later fine-tuned on the TREC COVID round 1-4 full-text collection. We use the question queries for re-ranking.
mpiid5_run2
- Run ID: mpiid5_run2
- Participant: mpiid5
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 8526b65b436c5a9de1f02143d61a381e
- Run description: We re-rank the top 2000 documents from the Anserini Fusion2 baseline (8th row). For the re-ranking method, we use the ELECTRA-Base model fine-tuned on the MSMARCO passage dataset. The model is later fine-tuned on the TREC COVID round 1-4 full-text collection. We use the question queries for re-ranking.
poznan_baseline
- Run ID: poznan_baseline
- Participant: POZNAN
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: automatic
- MD5: 8099fe768cebd65fec142095e12ffbcd
- Run description: Baseline run. Documents retrieved by Terrier. IR model - DFR_BM25 with PRF d=30;t=300. Index of abstracts. Concatenation of all types of queries.
poznan_rerank1
- Run ID: poznan_rerank1
- Participant: POZNAN
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 00f3e242beae288151b31dac0ecd273b
- Run description: Reranked baseline. Each document and query is converted into a list of word embeddings. Document and query matrices are multiplied (vectors are normalized to obtain similarities). Processed matrices of annotated documents are fed into a neural network (Conv2D → MaxPooling → Flatten → DNN → Concatenate → DNN with softmax). The network is used to rerank the baseline: each document-query pair from the baseline ranking is scored, and the weighted score is added to the DFR_BM25 score. Training strategy: leave-one-out n-fold.
poznan_rerank2
- Run ID: poznan_rerank2
- Participant: POZNAN
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: b095cb4fd659172ebd3a4b56dd71dfd2
- Run description: The same technique as the first reranking. This submission uses "narrative" queries for reranking. The first submission uses queries annotated as "query".
r5.d2q.fusion1
- Run ID: r5.d2q.fusion1
- Participant: anserini
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 48208a2ad3be32aeb3354b9d810b89fb
- Run description: Anserini doc2query fusion run corresponding to row 7 in table for Round 5 at https://github.com/castorini/anserini/blob/master/docs/experiments-covid-doc2query.md
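The doc2query expansion behind these d2q runs generates synthetic queries per document and appends them to the document text before indexing. A sketch using the public doc2query-T5 checkpoint (treating it as the exact model behind these runs is an assumption):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("castorini/doc2query-t5-base-msmarco")
model = T5ForConditionalGeneration.from_pretrained("castorini/doc2query-t5-base-msmarco")

def expand(doc_text: str, n_queries: int = 5) -> str:
    enc = tok(doc_text, return_tensors="pt", truncation=True, max_length=512)
    outs = model.generate(**enc, max_length=64, do_sample=True, top_k=10,
                          num_return_sequences=n_queries)
    preds = tok.batch_decode(outs, skip_special_tokens=True)
    return doc_text + " " + " ".join(preds)  # index this expanded text
```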
r5.d2q.fusion2
- Run ID: r5.d2q.fusion2
- Participant: anserini
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: f61a3f31f09174ae9dc38db3ea17957c
- Run description: Anserini doc2query fusion run corresponding to row 8 in table for Round 5 at https://github.com/castorini/anserini/blob/master/docs/experiments-covid-doc2query.md
r5.d2q.qqabs
- Run ID: r5.d2q.qqabs
- Participant: anserini
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: d19586e82459a3196aa8fefad0c44668
- Run description: Anserini doc2query run corresponding to row 1 in table for Round 5 at https://github.com/castorini/anserini/blob/master/docs/experiments-covid-doc2query.md
r5.d2q.rf
- Run ID: r5.d2q.rf
- Participant: anserini
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: 87d4887ac816d13401ac68bc7b6cad25
- Run description: Anserini doc2query relevance feedback run corresponding to row 9 in table for Round 5 at https://github.com/castorini/anserini/blob/master/docs/experiments-covid-doc2query.md
r5.fusion1
- Run ID: r5.fusion1
- Participant: anserini
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 12122c12089c2b07a8f6c7247aebe2f6
- Run description: Anserini fusion run corresponding to row 7 in table for Round 5 at https://github.com/castorini/anserini/blob/master/docs/experiments-covid.md
r5.fusion2
- Run ID: r5.fusion2
- Participant: anserini
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: ff1a0bac315de6703b937c552b351e2a
- Run description: Anserini fusion run corresponding to row 8 in table for Round 5 at https://github.com/castorini/anserini/blob/master/docs/experiments-covid.md
r5.qqabs
- Run ID: r5.qqabs
- Participant: anserini
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: dfa3170b52e2907c6f977949ae108ec3
- Run description: Anserini run corresponding to row 1 in table for Round 5 at https://github.com/castorini/anserini/blob/master/docs/experiments-covid.md
r5.rf
- Run ID: r5.rf
- Participant: anserini
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: 74e2a73b5ffd2908dc23b14c765171a1
- Run description: Anserini relevance feedback run corresponding to row 9 in table for Round 5 at https://github.com/castorini/anserini/blob/master/docs/experiments-covid.md
rk_bdl_brx_logit
- Run ID: rk_bdl_brx_logit
- Participant: risklick
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: e746ffc8ff320bfb76e06598d3073224
- Run description: A logistic regression model trained on round 4 QREL, using features (query, question, narrative, title, abstract, etc.) from the BM25, DFR and LM Dirichlet information retrieval models and from deep neural language models (BERT, RoBERTa and XLNet).
rk_bm25_bs
- Run ID: rk_bm25_bs
- Participant: risklick
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 0393c087b639e993fb1cf7a2c5dda2d7
- Run description: Baseline BM25 model using the metadata index and trained on round 4 queries.
rk_bm25_dfr_lmd_rrf
- Run ID: rk_bm25_dfr_lmd_rrf
- Participant: risklick
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 2625dbf604a237ed277b6b83bcab0a9e
- Run description: A combination of BM25, DFR and LM Dirichlet runs using reciprocal-rank fusion (k=60) on metadata and full text indices.
rk_ir_bdl_trf_brx_lm
- Run ID: rk_ir_bdl_trf_brx_lm
- Participant: risklick
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: eb4a35b2636022c3373273f05a5ad0ba
- Run description: A learning-to-rank model using the LambdaMART algorithm with features from classic information retrieval models (BM25, DFR and LM Dirichlet) and deep neural language models (BERT, RoBERTa and XLNet) extracted from metadata and full text indices.
rk_ir_bdl_trf_brx_rr
- Run ID: rk_ir_bdl_trf_brx_rr
- Participant: risklick
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 2b2eed7908b9f84a652113605c223eb8
- Run description: A combination of classic information retrieval models (BM25, DFR and LM Dirichlet) with neural language models. Reciprocal-rank fusion with k=60 was used to combine the runs.
rk_ir_trf_logit_rr
- Run ID: rk_ir_trf_logit_rr
- Participant: risklick
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 79a0ffb015b5bd92467b0371b7980a5a
- Run description: A combination of information retrieval (BM25, DFR, LM Dirichlet), neural language (BERT, RoBERTa, XLNet) and logistic regression models trained on round 4 QREL. Reciprocal-rank fusion with k=60 was used to combine the runs.
rk_trf_brx_rrf
- Run ID: rk_trf_brx_rrf
- Participant: risklick
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 3dc0bad20180e60942eb8920e39a81d2
- Run description: A combination of deep neural language model (BERT, RoBERTa and XLNet) runs fine-tuned on round 4 QREL and combined using reciprocal-rank fusion. Classic IR models (BM25, DFR, XLNet) were used as the baseline run.
run1_C_Arf_SciB
- Run ID: run1_C_Arf_SciB
- Participant: CIR
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: e479d9c2a9ab5dbece5e12bc4784a500
- Run description: Fusion of: (1) Anserini RF baseline (2) Sparse Embedding retrieval model trained on MedMARCO (3) SciBERT reranker trained on MedMARCO.
run2_C-Arf_SciB
- Run ID: run2_C-Arf_SciB
- Participant: CIR
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5: 13c37a251a769c7bc99d8a1865283d2b
- Run description: Fusion of: (1) Sparse-Dense Embedding retrieval model trained on MedMARCO, using Anserini-rf baseline as initial ranking, (2) SciBERT reranker trained on MedMARCO.
sab20.5.1.meta.docs
- Run ID: sab20.5.1.meta.docs
- Participant: sabir
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 7a85bef782ee7b77d0c7aa854cbeb809
- Run description: Same retrieval and indexing as in Round 1 metadocs run. Standard SMART vector run based on Lnu docs, ltu query weighting. Separate inverted files for metadata and JSON docs. Final score is 1.5 * metadata score + JSON score.
sab20.5.2.dfo.metado
- Run ID: sab20.5.2.dfo.metado
- Participant: sabir
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: 8bf061d10e8f3a23c1c8b326f545af5d
- Run description: SMART vector DFO run. Same indexing and retrieval as used for the Round 2 sab20.2.dfo.metadocs run. Base Lnu.ltu weights. Run the DFO algorithm (the runs and parameters are described in my TREC 2005 Routing track report and later ones, e.g., the 2017 Core track). Use relevance info on the Round 1-4 collection to expand and optimize weights on that collection (using only metadata documents), and then run exactly that query (with the same dictionary) on the Round 5 collection. This run on Round 5 used separate Lnu-weighted inverted files for metadata and JSON docs. Score = 1.2 * metadata + JSON. Expanded to 250 terms (far too many). Used the full narrative and ltu weights for the new topics in Round 5.
sab20.5.2.meta.docs
- Run ID: sab20.5.2.meta.docs
- Participant: sabir
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 13896800eb89845a0f156028ecea6245
- Run description: Same retrieval and indexing as in Round 2 metadocs run. SMART vector run. Lnu.ltu weights. Separate Lnu weighted inverted files for metadata and JSON docs. Score = 1.2 * metadata + JSON. Used full topics including narrative.
sab20.5.3.dfo
- Run ID: sab20.5.3.dfo
- Participant: sabir
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: 943be5956b3c341ba5b327d0180a2561
- Run description: SMART vector DFO run. Same retrieval and indexing as in the Round 3 dfo run. Base Lnu.ltu weights, with doc indexing = 0.5 * metadoc_Lnu_weighting + 0.7 * JSON_Lnu_weighting if a JSON doc exists (= straight Lnu weighting if only metadata info). Run the DFO algorithm (the runs are described in my TREC 2005 Routing track report and later ones, e.g., the 2017 Core track). Use relevance info on the Rounds 1+2+3+4 collections to expand and optimize weights on that collection. Conservative run - expand to 15 terms.
sab20.5.3.metadocs_m
- Run ID: sab20.5.3.metadocs_m
- Participant: sabir
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 1b6f1066411bd413a25e164e1e34324a
- Run description: Same retrieval and indexing as in the Round 3 metadocs_m run. Standard SMART vector run based on Lnu doc, ltu query weighting. Doc indexing: if only metadata info exists for a docid, that is used with Lnu weights. Each JSON doc is assigned final indexing as 0.5 * Metadata_Lnu_vector + 0.7 * JSON_Lnu_vector. After inverted retrieval, the highest similarity for each cord_uid is used.
sab20.5.4.dfo
- Run ID: sab20.5.4.dfo
- Participant: sabir
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: f1cdb3a2cda65fc08a724f352e6b24f4
- Run description: SMART vector DFO run. Same retrieval parameters as in the Round 4 dfo run. The Round 4 official run indexing was buggy (I indexed Round 4 metadata but Round 3 JSON docs). Here: base Lnu.ltu weights, with doc indexing = 0.4 * metadoc_Lnu_weighting + 0.6 * JSON_Lnu_weighting if a JSON doc exists (= straight Lnu weighting if only metadata info). Run the DFO algorithm (the runs are described in my TREC 2005 Routing track report and later ones, e.g., the 2017 Core track). Use relevance info on the Rounds 1+2+3+4 collections to expand and optimize weights on that collection. Expand to the top 50 terms. Optimize ignoring the top 30 nonrel docs (to accommodate incomplete judgments).
sab20.5.dfo
- Run ID: sab20.5.dfo
- Participant: sabir
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: 1ce83603da05bb5dab333fed77c25d84
- Run description: SMART vector DFO run. Base Lnu.ltu weights, with doc indexing = 0.4 * metadoc_Lnu_weighting + 0.6 * JSON_Lnu_weighting if a JSON doc exists (= straight Lnu weighting if only metadata info). Run the DFO algorithm (the runs are described in my TREC 2005 Routing track report and later ones, e.g., the 2017 Core track). Use relevance info on the Rounds 1+2+3+4 collections to expand and optimize weights on that collection. Expand to the top 70 terms. Optimize ignoring the top 40 nonrel docs (to accommodate incomplete judgments).
sab20.5.metadocs_m
- Run ID: sab20.5.metadocs_m
- Participant: sabir
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 4e636096e874d4a2be268853a82d69a9
- Run description: Standard SMART vector run based on Lnu docs (pivot 130, slope 0.12 for JSON; pivot 110, slope 0.24 for metadata), ltu query weighting. Doc indexing: if only metadata info exists for a docid, that is used with Lnu weights. Each JSON doc is assigned final indexing as 0.4 * Metadata_Lnu_vector + 0.6 * JSON_Lnu_vector. After inverted retrieval, the highest similarity for each cord_uid is used.
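For reference, the Lnu document weighting named throughout the sabir runs is SMART's log-average tf with pivoted unique-term normalization (Singhal et al.). A sketch using the pivot/slope values quoted above; anything beyond the formula itself is an assumption:

```python
import math

def lnu_weights(tf: dict, pivot: float, slope: float) -> dict:
    # tf: term -> raw term frequency for one document.
    avg_tf = sum(tf.values()) / len(tf)
    norm = (1.0 - slope) * pivot + slope * len(tf)  # pivoted #unique terms
    return {t: ((1.0 + math.log(f)) / (1.0 + math.log(avg_tf))) / norm
            for t, f in tf.items()}

# Per the description, a CORD-19 doc with JSON full text gets the vector
# 0.4 * lnu_weights(meta_tf, 110, 0.24) + 0.6 * lnu_weights(json_tf, 130, 0.12),
# combined term by term.
```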
SFDC-enc45-refus12
- Run ID: SFDC-enc45-refus12
- Participant: SFDC
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: 07d40c9b71b0cda3c458b7a56f958f1a
- Run description: Encoder 4/5 runs + fusion runs reranked
uab.base
- Run ID: uab.base
- Participant: UAlbertaSearch
- Track: Round 5
- Year: 2020
- Submission: 7/30/2020
- Type: automatic
- MD5: 63d4345244dd8af6b5b0ca74d4611990
- Run description: BM25+ ranking on documents using an index on the full text (title+abstract+body). Uses the query part of the topics as query terms (stop words removed). Top 25 docs are reranked using a term proximity score.
uab.idf
- Run ID: uab.idf
- Participant: UAlbertaSearch
- Track: Round 5
- Year: 2020
- Submission: 7/31/2020
- Type: automatic
- MD5: 9f4788a353ffb4afb31784e5dd44f844
- Run description: BM25+ ranking on documents using an index on the full text (title+abstract+body). Uses the query part of the topics as query terms (stop words removed). Top 25 docs are reranked using a term proximity score. Uses an idf of 0.3 for the term "coronavirus".
ucd_cs_r1
- Run ID: ucd_cs_r1
- Participant: UCD_CS
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: 61f855f5d4654e02e9f1d914c09dc284
- Run description: This run is re-ranked from Anserini's RF run using document abstracts. The reranker is fine-tuned from the SciBERT base checkpoint on previous TREC-COVID qrels.
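A sketch of a SciBERT cross-encoder reranker of the kind described here: score (query, abstract) pairs with a sequence-classification head, which would first be fine-tuned on earlier-round qrels (the training loop is omitted; the checkpoint shown is the public SciBERT base, whose classification head is untrained until fine-tuned):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "allenai/scibert_scivocab_uncased", num_labels=2)  # fine-tune on qrels first

def score(query: str, abstract: str) -> float:
    enc = tok(query, abstract, truncation=True, max_length=512,
              return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()  # P(relevant)
```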
ucd_cs_r2
- Run ID: ucd_cs_r2
- Participant: UCD_CS
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: a26ae36e4cacfeb4363fe8fcb6bb00b7
- Run description: This run is re-ranked from Anserini's RF run using document full text. The reranker is a logistic regression model trained with tf-idf weighting.
ucd_cs_r3
- Run ID: ucd_cs_r3
- Participant: UCD_CS
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: 139d711b5c451d781c86bdf46bab42e3
- Run description: This run is the RRF fusion of run 1 and run 2.
udel_fang_ltr_split
- Run ID: udel_fang_ltr_split
- Participant: udel_fang
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: 701f35ae4431ecc23bb068ccfda362d3
- Run description: We build an index with title and abstract from the metadata file. Non-stopwords in the query and entities tagged by SciSpacy in question and narrative fields are assigned the weight ratio of 2:3:1 to form the query. We generate a run using relevance feedback on the first 45 queries and pseudo relevance feedback on the last 5 queries. LambdaMART is used to re-rank the first 200 results. The features we use include BM25, SciBERT, recency, and so on. We tune the hyper-parameters of relevance feedback and pseudo relevance feedback based on different validation methods.
udel_fang_ltr_uni
- Run ID: udel_fang_ltr_uni
- Participant: udel_fang
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: 0fa8d5d4930c0f1f325c613cd924c2d6
- Run description: We build an index with title and abstract from the metadata file. Non-stopwords in the query and entities tagged by SciSpacy in question and narrative fields are assigned the weight ratio of 2:3:1 to form the query. We generate a run using relevance feedback on the first 45 queries and pseudo relevance feedback on the last 5 queries. LambdaMART is used to re-rank the first 200 results. The features we use include BM25, SciBERT, recency, and so on. We tune the hyper-parameters of relevance feedback and pseudo relevance feedback based on the same validation methods.
udel_fang_nir
- Run ID: udel_fang_nir
- Participant: udel_fang
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5: b6107f646313dd00baeadfa6fbb5d373
- Run description: We build an index with title and abstract from the metadata file. Non-stopwords in the query and entities tagged by SciSpacy in the question and narrative fields are assigned a weight ratio of 2:3:1 to form the query. We generate a run using relevance feedback on the first 45 queries and pseudo-relevance feedback on the last 5 queries. SciBERT is used to re-rank the first 1000 results; it is fine-tuned on the whole MS MARCO dataset.
UIowaS_Run1
- Run ID: UIowaS_Run1
- Participant: UIowaS
- Track: Round 5
- Year: 2020
- Submission: 8/1/2020
- Type: feedback
- MD5: 7768d350555fc801d4b9af7de2a137d7
- Run description: Reciprocal rank fusion of two runs, both employing Terrier BM25 weighting with relevance feedback. Both use all query fields; we added the word Covid to the question field of all topics. In the first run we used 10 relevant documents to expand the query by 300 terms; in the second run we used 30 documents to expand the query by 1000 terms. Reciprocal rank fusion was done on these two runs. Retrieval was done against the metadata title and abstract fields. For the 5 new topics we did a Terrier TF_IDF retrieval feedback run with 10 documents and 20 expansion query terms. The queries were as above, but the dataset was limited to filtered documents from the metadata title and abstract: each retained document has to contain a word from a pre-defined list described in our run 1 documentation. We also filtered out documents older than 1990.
UIowaS_Run2
- Run ID: UIowaS_Run2
- Participant: UIowaS
- Track: Round 5
- Year: 2020
- Submission: 8/1/2020
- Type: feedback
- MD5: 7100df2aa2e361932916b35ed9b4688e
- Run description: Reciprocal rank fusion of two runs, both employing Terrier BM25 weighting with relevance feedback. Both use all query fields; we added the word Covid to the question field of all topics. In the first run we used 10 relevant documents to expand the query by 300 terms; in the second run we used 30 documents to expand the query by 1000 terms. Reciprocal rank fusion was done on these two runs. Retrieval was done against the metadata title and abstract fields. For the 5 new topics we did a Terrier TF_IDF retrieval feedback run with 10 documents and 20 expansion query terms. The queries were as above, but the dataset was limited to filtered documents from the metadata title and abstract: each retained document has to contain a word from a pre-defined list described in our run 1 documentation. Unlike our run 1, we did not do any additional filtering based on the date of the document.
UIowaS_Run3
- Run ID: UIowaS_Run3
- Participant: UIowaS
- Track: Round 5
- Year: 2020
- Submission: 8/1/2020
- Type: feedback
- MD5: 21c8e6884022e6fabaea3e3f4ebde3a8
- Run description: Borda merge of two runs, both employing Terrier BM25 weighting with relevance feedback. Both use all query fields; we added the word Covid to the question field of all topics. In the first run we used 10 relevant documents to expand the query by 300 terms; in the second run we used 30 documents to expand the query by 1000 terms. Borda merge was done on these two runs. Retrieval was done against the metadata title and abstract fields. For the 5 new topics we did a Terrier TF_IDF retrieval feedback run with 10 documents and 20 expansion query terms. The queries were as above, but the dataset was limited to filtered documents from the metadata title and abstract: each retained document has to contain a word from a pre-defined list described in our run 1 documentation. Unlike run 1, we do not filter out any documents based on their date.
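A minimal sketch of the Borda merge described above: each run awards a document points inversely proportional to its rank, and the points are summed (the depth and tie handling are assumptions):

```python
from collections import defaultdict

def borda_merge(runs, depth=1000):
    # runs: iterable of docid lists in rank order; top doc gets `depth` points.
    points = defaultdict(float)
    for ranking in runs:
        for rank, docid in enumerate(ranking[:depth], start=1):
            points[docid] += depth - rank + 1
    return sorted(points.items(), key=lambda x: -x[1])
```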
uogTrDPH_QE_RF
- Run ID: uogTrDPH_QE_RF
- Participant: uogTr
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5: 8e41b4b382a4bfb96c8cf989e8e1a1c3
- Run description: A relevance feedback-based run using DFR query expansion, built on PyTerrier.
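The PyTerrier pattern for a DFR retrieval + query expansion pipeline of this kind looks roughly as follows; Bo1 is one DFR expansion model and the index/topic paths are placeholders, so this is a sketch of the family of pipelines, not the team's exact configuration:

```python
import pyterrier as pt

if not pt.started():
    pt.init()

index = pt.IndexFactory.of("/path/to/cord19-index")   # hypothetical path
dph = pt.BatchRetrieve(index, wmodel="DPH")
bo1 = pt.rewrite.Bo1QueryExpansion(index)

# Retrieve, expand the query from the top-ranked documents, retrieve again.
pipeline = dph >> bo1 >> dph
topics = pt.io.read_topics("topics.trec")             # hypothetical topics file
results = pipeline.transform(topics)
```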
uogTrDPH_QE_RF_CB
- Run ID: uogTrDPH_QE_RF_CB
- Participant: uogTr
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5:
b3f006daf05b064acc203febd8750f25
- Run description: A relevance-feedback run using DFR query expansion, built with pyTerrier, whose scores are linearly combined with those of a SciColBERT model.
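pyTerrier expresses such linear combinations directly with its transformer operators. A minimal sketch continuing the pipeline above; the 0.5 weights and the stand-in scoring function score_pair are illustrative assumptions, since the SciColBERT component itself is not described in this entry:

```python
# Stand-in for a neural scorer; score_pair(query, text) is a hypothetical function.
neural = pt.apply.doc_score(lambda row: score_pair(row["query"], row["abstract"]))

# Fetch abstracts for re-scoring, then interpolate DFR and neural scores.
rerank = dph_qe >> pt.text.get_text(index, "abstract") >> neural
combined = 0.5 * dph_qe + 0.5 * rerank

results = combined.search("coronavirus social distancing impact")
```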
uogTrDPH_QE_RF_SB¶
Results
| Participants
| Input
| Appendix
- Run ID: uogTrDPH_QE_RF_SB
- Participant: uogTr
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5:
a9a37d67ae82b43e257538e3d611c0bf
- Run description: A relevance-feedback run using DFR query expansion, built with pyTerrier, whose scores are linearly combined with those of a SciBERT model trained on MS MARCO medical queries.
uogTrDPH_QE_RF_SB_B¶
Results
| Participants
| Input
| Appendix
- Run ID: uogTrDPH_QE_RF_SB_B
- Participant: uogTr
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5:
e765a6733fbb5bb7b4b32916d970a820
- Run description: A relevance-feedback run using DFR query expansion, built with pyTerrier, whose scores are linearly combined with those of a SciBERT model trained on MS MARCO medical queries and with BERT.
uogTrDPH_QE_RF_SB_CB¶
Results
| Participants
| Input
| Appendix
- Run ID: uogTrDPH_QE_RF_SB_CB
- Participant: uogTr
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5:
6c6d321a10890f3d1431fdc382427b9b
- Run description: A relevance-feedback run using DFR query expansion, built with pyTerrier, whose scores are linearly combined with those of a SciBERT model trained on MS MARCO medical queries and with SciColBERT.
uogTrDPH_QE_SB¶
Results
| Participants
| Input
| Appendix
- Run ID: uogTrDPH_QE_SB
- Participant: uogTr
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5:
a652abead1cfe70b86d7ab40340c39b2
- Run description: An automatic DFR query-expansion run, built with pyTerrier, whose scores are linearly combined with those of a SciBERT model trained on MS MARCO medical queries.
uogTrDPH_QE_SB_B¶
Results
| Participants
| Input
| Appendix
- Run ID: uogTrDPH_QE_SB_B
- Participant: uogTr
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5:
aa4191aab5ccc1988b626d267345cfb9
- Run description: An automatic DFR query-expansion run, built with pyTerrier, whose scores are linearly combined with those of a SciBERT model trained on MS MARCO medical queries and with BERT.
uogTrDPH_QE_SB_CB¶
Results
| Participants
| Input
| Appendix
- Run ID: uogTrDPH_QE_SB_CB
- Participant: uogTr
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: automatic
- MD5:
0de18b184fde6c8da3be78abe3fa57b5
- Run description: An automatic DFR query-expansion run, built with pyTerrier, whose scores are linearly combined with those of a SciBERT model trained on MS MARCO medical queries and with SciColBERT.
UPrrf102-r5¶
Results
| Participants
| Input
| Appendix
- Run ID: UPrrf102-r5
- Participant: unique_ptr
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5:
4435030951e363a211100d42a380ef2a
- Run description: A reciprocal rank fusion of (a) Anserini & Terrier runs based on different indexing schemes and topic variants, including relevance feedback; (b) neural retrieval runs based on synthetic query generation (https://arxiv.org/abs/2004.14503); (c) TF-Ranking + multiple BERT variants with softmax loss (https://arxiv.org/abs/2004.08476), trained on MS-Marco; and (d) TF-Ranking + multiple BERT variants and topic variants fine-tuned with softmax loss on the relevance judgments from Rounds 1-4.
UPrrf102-wt-r5¶
Results
| Participants
| Input
| Appendix
- Run ID: UPrrf102-wt-r5
- Participant: unique_ptr
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5:
9a84ff6be637cf73082b4cef1094601b
- Run description: A weighted reciprocal rank fusion of (a) Anserini & Terrier runs based on different indexing schemes and topic variants, including relevance feedback; (b) neural retrieval runs based on synthetic query generation (https://arxiv.org/abs/2004.14503); (c) TF-Ranking + multiple BERT variants with softmax loss (https://arxiv.org/abs/2004.08476), trained on MS-Marco; and (d) TF-Ranking + multiple BERT variants and topic variants fine-tuned with softmax loss on the relevance judgments from Rounds 1-4. For weighting the runs, we use a simple scheme that doubles, within the reciprocal rank fusion, the weight of any run that has access to relevance judgments (see the sketch after this entry).
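The doubling scheme amounts to a per-run weight in the RRF sum. A minimal sketch extending the RRF example given earlier; the placeholder ranked lists are illustrative, and only the 1.0/2.0 weighting follows the description:

```python
from collections import defaultdict

def weighted_rrf(runs, weights, k=60):
    """Weighted RRF: score(d) = sum over runs i of w_i / (k + rank_i(d))."""
    scores = defaultdict(float)
    for run, w in zip(runs, weights):
        for rank, doc_id in enumerate(run, start=1):
            scores[doc_id] += w / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Placeholder component runs; per the description, runs that saw relevance
# judgments (feedback runs) get weight 2.0, all others weight 1.0.
automatic_run_a = ["d3", "d1", "d7"]
automatic_run_b = ["d1", "d9", "d3"]
feedback_run = ["d7", "d3", "d2"]
fused = weighted_rrf([automatic_run_a, automatic_run_b, feedback_run],
                     weights=[1.0, 1.0, 2.0])
```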
UPrrf80-r5¶
Results
| Participants
| Input
| Appendix
- Run ID: UPrrf80-r5
- Participant: unique_ptr
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5:
dd0ac2124079df2495946b71b522670a
- Run description: A reciprocal rank fusion of (a) Anserini & Terrier runs based on different indexing schemes and topic variants; (b) neural retrieval runs based on synthetic query generation (https://arxiv.org/abs/2004.14503); and (c) TF-Ranking + multiple BERT variants with softmax loss (https://arxiv.org/abs/2004.08476), trained on MS-Marco. This is a fully automatic run, with no extra tuning based on the existing judgments from TREC-COVID.
UPrrf89-r5¶
Results
| Participants
| Input
| Appendix
- Run ID: UPrrf89-r5
- Participant: unique_ptr
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5:
aecc437290061d5cf2d8411126d2e3f4
- Run description: A reciprocal rank fusion of (a) Anserini & Terrier runs based on different indexing schemes and topic variants; (b) neural retrieval runs based on synthetic query generation (https://arxiv.org/abs/2004.14503); (c) TF-Ranking + multiple BERT variants with softmax loss (https://arxiv.org/abs/2004.08476), trained on MS-Marco; and (d) TF-Ranking + multiple BERT variants with softmax loss, trained on BioASQ data (https://arxiv.org/abs/1809.01682). This is a fully automatic run, with no extra tuning based on the existing judgments from TREC-COVID.
UPrrf93-r5¶
Results
| Participants
| Input
| Appendix
- Run ID: UPrrf93-r5
- Participant: unique_ptr
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5:
6731ad0d64a618670c389d5573334fac
- Run description: A reciprocal rank fusion of (a) Anserini & Terrier runs based on different indexing schemes and topic variants, including relevance feedback; (b) neural retrieval runs based on synthetic query generation (https://arxiv.org/abs/2004.14503); (c) TF-Ranking + multiple BERT variants with softmax loss (https://arxiv.org/abs/2004.08476), trained on MS-Marco; and (d) TF-Ranking + multiple BERT variants using the question topic field, fine-tuned with softmax loss on the relevance judgments from Rounds 1-4.
UPrrf93-wt-r5¶
Results
| Participants
| Input
| Appendix
- Run ID: UPrrf93-wt-r5
- Participant: unique_ptr
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: feedback
- MD5:
12db63f36692089091d7178304258555
- Run description: A weighted reciprocal rank fusion of (a) Anserini & Terrier runs based on different indexing schemes and topic variants, including relevance feedback; (b) neural retrieval runs based on synthetic query generation (https://arxiv.org/abs/2004.14503); (c) TF-Ranking + multiple BERT variants with softmax loss (https://arxiv.org/abs/2004.08476), trained on MS-Marco; and (d) TF-Ranking + multiple BERT variants using the question topic field, fine-tuned with softmax loss on the relevance judgments from Rounds 1-4. For weighting the runs, we use a simple scheme that doubles, within the reciprocal rank fusion, the weight of any run that has access to relevance judgments (see the weighted-RRF sketch under UPrrf102-wt-r5).
uw_base1¶
Results
| Participants
| Input
| Appendix
- Run ID: uw_base1
- Participant: WiscIRLab
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5:
06bfab970917a99226a96bdb90530c78
- Run description: Baseline run using the title query field.
uw_base2¶
Results
| Participants
| Input
| Appendix
- Run ID: uw_base2
- Participant: WiscIRLab
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: automatic
- MD5:
071f1010791cf869d3037cec9b9e4d19
- Run description: Baseline run using all query fields.
uw_crowd1¶
Results
| Participants
| Input
| Appendix
- Run ID: uw_crowd1
- Participant: WiscIRLab
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: manual
- MD5:
6d4179a3980f49421cd67128d41252b8
- Run description: Search using crowdsourced queries.
uw_crowd2¶
Results
| Participants
| Input
| Appendix
- Run ID: uw_crowd2
- Participant: WiscIRLab
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: manual
- MD5:
7ee6f0c4b0b5893ba4c0f13577e7b05e
- Run description: Search using crowdsourced queries.
uw_fb1¶
Results
| Participants
| Input
| Appendix
- Run ID: uw_fb1
- Participant: WiscIRLab
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5:
5d06009750f2add1b539ef5ef8b7119b
- Run description: Using judged results as relevance feedback.
uw_fb2¶
Results
| Participants
| Input
| Appendix
- Run ID: uw_fb2
- Participant: WiscIRLab
- Track: Round 5
- Year: 2020
- Submission: 8/3/2020
- Type: feedback
- MD5:
29fbb423f2cb3d1414b4a1cb20f4c21c
- Run description: Using judged results as relevance feedback.
xj4wang_run1¶
Results
| Participants
| Input
| Appendix
- Run ID: xj4wang_run1
- Participant: xj4wang
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: manual
- MD5:
2f095c3d25a02a1ddd9f0e71ab0cc53f
- Run description: The retrieval model used is BMI (Baseline Model Implementation), provided as a starter by Gordon Cormack for the TREC 2015/2016 Total Recall Track, with human assessors in place of the server (manual processing) [1]. In more detail: it uses the CAL (Continuous Active Learning) method, starting with 1 synthetic file created from the given topics, word for word; this method is described by Grossman and Cormack in [4] (a minimal sketch follows this entry). Feature vectors are created using the BMI tools [1]. SofiaML is used as the learner. The weighting scheme was chosen largely based on the work of Cormack and Grossman in [2]. Stopping conditions for manual labeling were chosen largely based on the work of Grossman et al. in [3]. References: [1] https://cormack.uwaterloo.ca/trecvm/ [2] https://doi.org/10.1145/2600428.2609601 [3] https://trec.nist.gov/pubs/trec25/papers/Overview-TR.pdf [4] https://cormack.uwaterloo.ca/caldemo/AprMay16_EdiscoveryBulletin.pdf
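To make the CAL loop concrete, here is a minimal sketch that substitutes a TF-IDF featurizer and scikit-learn logistic regression for BMI's feature tools and SofiaML; the corpus, topic text, seeding with presumed non-relevant documents, iteration count, and the human_judgment stub are all illustrative assumptions rather than the submitted configuration:

```python
import random
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder corpus and topic; the runs used CORD-19 documents and the
# Round 5 topic text, word for word, as a synthetic relevant document.
corpus = ["first document text ...", "second document text ...",
          "third document text ...", "fourth document text ..."]
topic = "coronavirus remdesivir clinical outcomes"

vectors = TfidfVectorizer().fit_transform(corpus + [topic])
docs, seed = vectors[:-1], vectors[-1]

def human_judgment(doc_index):
    # Stub standing in for the human assessor's relevance call (1 rel / 0 not).
    return 1 if doc_index == 0 else 0

# Seed training set: the synthetic document labeled relevant, plus a couple
# of random documents presumed non-relevant at startup.
presumed = random.sample(range(docs.shape[0]), k=2)
train_X = [seed] + [docs[i] for i in presumed]
train_y = [1] + [0] * len(presumed)
judged = set()

for _ in range(3):   # a few iterations; the real runs apply a stopping condition [3]
    model = LogisticRegression().fit(vstack(train_X), train_y)
    ranking = np.argsort(-model.decision_function(docs))
    top = next(i for i in ranking if i not in judged)   # best unjudged document
    label = human_judgment(top)                         # human in the loop
    judged.add(top)
    train_X.append(docs[top])
    train_y.append(label)
```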
xj4wang_run2¶
Results
| Participants
| Input
| Appendix
- Run ID: xj4wang_run2
- Participant: xj4wang
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: manual
- MD5:
a5a300c36e5b7a634e6fc745dc5b318c
- Run description: The retrieval model used is BMI (Baseline Model Implementation), provided as a starter by Gordon Cormack for the TREC 2015/2016 Total Recall Track, with human assessors in place of the server (manual processing) [1]. In more detail: it uses the CAL (Continuous Active Learning) method, starting with 1 synthetic file created from the given topics, word for word; this method is described by Grossman and Cormack in [4] (see the sketch under xj4wang_run1). Feature vectors are created using the BMI tools [1]. SofiaML is used as the learner. The weighting scheme was chosen largely based on the work of Cormack and Grossman in [2]. Stopping conditions for manual labeling were chosen largely based on the work of Grossman et al. in [3]. References: [1] https://cormack.uwaterloo.ca/trecvm/ [2] https://doi.org/10.1145/2600428.2609601 [3] https://trec.nist.gov/pubs/trec25/papers/Overview-TR.pdf [4] https://cormack.uwaterloo.ca/caldemo/AprMay16_EdiscoveryBulletin.pdf
xj4wang_run3¶
Results
| Participants
| Input
| Appendix
- Run ID: xj4wang_run3
- Participant: xj4wang
- Track: Round 5
- Year: 2020
- Submission: 8/2/2020
- Type: manual
- MD5:
afc15a15d4e8e673df073a0ff3f28d3c
- Run description: The retrieval model used is BMI (Baseline Model Implementation), provided as a starter by Gordon Cormack for the TREC 2015/2016 Total Recall Track, with human assessors in place of the server (manual processing) [1]. In more detail: it uses the CAL (Continuous Active Learning) method, starting with 1 synthetic file created from the given topics, word for word; this method is described by Grossman and Cormack in [4] (see the sketch under xj4wang_run1). Feature vectors are created using the BMI tools [1]. SofiaML is used as the learner. The weighting scheme was chosen largely based on the work of Cormack and Grossman in [2]. Stopping conditions for manual labeling were chosen largely based on the work of Grossman et al. in [3]. References: [1] https://cormack.uwaterloo.ca/trecvm/ [2] https://doi.org/10.1145/2600428.2609601 [3] https://trec.nist.gov/pubs/trec25/papers/Overview-TR.pdf [4] https://cormack.uwaterloo.ca/caldemo/AprMay16_EdiscoveryBulletin.pdf