Runs - Precision Medicine 2020

baseline

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: baseline
  • Participant: BIT.UA
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 051e34f43d672e37dff19c4cd788b578
  • Run description: This run uses the Elasticsearch engine with the BM25 weighting scheme, fine-tuned on the 2019 data. The query is simply the concatenation of all topic fields. Additionally, we prepare a synonym expansion for the genes.
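
A minimal sketch of the query construction this run describes: all topic fields concatenated into one query string, with gene synonyms appended. The field names and the synonym table are hypothetical placeholders, not the run's actual configuration.

```python
# Concatenate all topic fields into a single query string and expand the
# gene with synonyms, per the run description. GENE_SYNONYMS and the
# "abstract" field name are illustrative placeholders.
GENE_SYNONYMS = {"BRAF": ["B-Raf", "BRAF1"]}  # hypothetical entries

def build_query(topic):
    terms = [topic["disease"], topic["gene"], topic["treatment"]]
    terms += GENE_SYNONYMS.get(topic["gene"], [])
    return {"query": {"match": {"abstract": " ".join(terms)}}}

body = build_query({"disease": "melanoma", "gene": "BRAF",
                    "treatment": "vemurafenib"})
```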

bm25

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: bm25
  • Participant: ASCFDA
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 783f0d3972885a80ede4aa9fdc1a6237
  • Run description: Abstracts are retrieved using Anserini's BM25 with default parameters.

bm25_p10

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: bm25_p10
  • Participant: ims_unipd
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: c911ae0309e1d45a8b9ccb1d92dbd1cc
  • Run description: Elasticsearch BM25 with k1 = 1.2, b = 0.75.
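
The stated parameters map onto Elasticsearch's standard index settings for the BM25 similarity; a sketch of that settings body (the run's full index mapping is not public):

```python
# Elasticsearch index settings selecting BM25 with k1 = 1.2, b = 0.75.
# This is the standard "index.similarity" settings shape.
settings = {
    "settings": {
        "index": {
            "similarity": {
                "default": {"type": "BM25", "k1": 1.2, "b": 0.75}
            }
        }
    }
}
```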

bm25_synonyms

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: bm25_synonyms
  • Participant: vohcolab
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/26/2020
  • Type: automatic
  • Task: primary
  • MD5: 74608b9b16e26620c5a0131fe4f968f3
  • Run description: Elasticsearch BM25 with b = 0.75, k1 = 1.2 (disease, gene, treatment + synonyms).

CincyMedIR28dgt

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: CincyMedIR28dgt
  • Participant: CincyMedIR
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: b5740810774a2c109c642a63dae4b93c
  • Run description: Query: disease+gene+treatment; ElasticSearch: most_fields, hit against title and abstract; learning to rank: model 8, score mode: multiply with query_weight = 1 and rescore_query_weight = 1

CincyMedIR_20

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: CincyMedIR_20
  • Participant: CincyMedIR
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: 0177f0061f15774b4add32f69dc53397
  • Run description: Query: disease+gene; ElasticSearch: most_fields, hit against title and abstract

CincyMedIR_28

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: CincyMedIR_28
  • Participant: CincyMedIR
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: f2ea4b49f495a87e29bb8e16631842b5
  • Run description: ElasticSearch: most_fields, hit against title and abstract; learning to rank: model 8, score mode: multiply with query_weight = 1 and rescore_query_weight = 1

CincyMedIR_28_t

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: CincyMedIR_28_t
  • Participant: CincyMedIR
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: 4b8b0d3dfa2eaab57b426e41729671cb
  • Run description: Query: treatment; ElasticSearch: most_fields, hit against title and abstract; learning to rank: model 8, score mode: multiply with query_weight = 1 and rescore_query_weight = 1

CincyMedIR_dgt

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: CincyMedIR_dgt
  • Participant: CincyMedIR
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: 231ab44ce9c19f65c6c88edd114b11d9
  • Run description: Query: disease+gene+treatment; ElasticSearch: most_fields, hit against title and abstract

CornellTech1

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: CornellTech1
  • Participant: CTIR
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 796622597fc5c250940d5bfef5d64b70
  • Run description: We design our own basic IR model.

CornellTech2

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: CornellTech2
  • Participant: CTIR
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 3c95edb2bb67948f45094864504578b6
  • Run description: A neural network is applied to re-rank the result of CornellTech1

CSIROmed_rlxRR

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: CSIROmed_rlxRR
  • Participant: CSIROmed
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: 893f7a7fb8f456b9f03cc621684a9a0d
  • Run description: Base ranking is DFR from Apache Solr (scoring all topic fields equally). Reranking is done with a BioBERT model fine-tuned on TREC PM 2017-2019 qrels.

CSIROmed_rRRa

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: CSIROmed_rRRa
  • Participant: CSIROmed
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: 499c692a1a6f0069f01ce9e35f22cf35
  • Run description: Base ranking is DFR from Apache Solr (scoring all topic fields equally). Reranking is done with a BioBERT model fine-tuned on TREC PM 2017-2019 qrels, with query augmentation using drug names from document keywords.

CSIROmed_sRRa

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: CSIROmed_sRRa
  • Participant: CSIROmed
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: 89bd9c976270942f832015c4e1f74ce1
  • Run description: Base ranking is DFR from Apache Solr, with priority given to documents that match the treatment term. Reranking is done with a BioBERT model fine-tuned on TREC PM 2017-2019 qrels, with query augmentation using drug names from document keywords.

CSIROmed_strDFR

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: CSIROmed_strDFR
  • Participant: CSIROmed
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: b64a92f21a954add6049f72d6c8fb486
  • Run description: Run based on DFR from Apache Solr. Priority is given to documents that match the treatment term.

CSIROmed_strRR

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: CSIROmed_strRR
  • Participant: CSIROmed
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: f8ec4348b30d7135f9d3684a2ea73b64
  • Run description: Base ranking is DFR from Apache Solr, with priority given to documents that match the treatment term. Reranking is done with a BioBERT model fine-tuned on TREC PM 2017-2019 qrels.

DA_DCU_IBM_1

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: DA_DCU_IBM_1
  • Participant: DA_IICT
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 8719cb4891ad2e1356a9b7cb35dba23d
  • Run description: A structured representation of an indexed document (comprising fields such as meshheading, chemlist, etc.) is matched with different prior weights against each field of a query (e.g. title, desc, etc.). The final score of a query-document match is an aggregated score of the individual matches across the query-document field pairs. This run also uses pre-retrieval query expansion to further enrich the query. Specifically, the set of terms added for each query term corresponds to the nearest neighbors of the query term's vector in an embedded space of word vectors (in our experiments, we used pre-trained PubMed skip-gram vectors). The retrieval model used is BM25.
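
The nearest-neighbor expansion step can be sketched in a few lines: for each query term, add the terms whose embedding vectors are closest by cosine similarity. The toy vectors below are placeholders for the pre-trained PubMed skip-gram embeddings the run mentions.

```python
import numpy as np

# Toy word vectors standing in for pre-trained PubMed skip-gram embeddings.
VECS = {
    "melanoma":  np.array([0.9, 0.1, 0.0]),
    "carcinoma": np.array([0.8, 0.2, 0.1]),
    "vehicle":   np.array([0.0, 0.1, 0.9]),
}

def expand(term, k=1):
    """Return the k nearest neighbours of `term` by cosine similarity."""
    q = VECS[term]
    sims = {
        w: float(v @ q / (np.linalg.norm(v) * np.linalg.norm(q)))
        for w, v in VECS.items() if w != term
    }
    return sorted(sims, key=sims.get, reverse=True)[:k]
```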

DA_DCU_IBM_2

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: DA_DCU_IBM_2
  • Participant: DA_IICT
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 289452df610253060138740bedc6ce9d
  • Run description: Relevance feedback (RM3) applied on the outcome of run DA_DCU_IBM_1.
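
RM3, as used here, interpolates the original query model with a term distribution estimated from pseudo-relevant documents. A simplified sketch (the mixing weight and feedback set sizes are illustrative, not the run's settings):

```python
from collections import Counter

def rm3_terms(query_terms, feedback_docs, alpha=0.5, n_terms=5):
    """RM3-style expansion: mix the uniform query model with a term
    distribution estimated from the pseudo-relevant documents."""
    counts = Counter(t for doc in feedback_docs for t in doc)
    total = sum(counts.values())
    fb = {t: c / total for t, c in counts.items()}      # feedback model
    q = {t: 1 / len(query_terms) for t in query_terms}  # query model
    mixed = {t: alpha * q.get(t, 0.0) + (1 - alpha) * fb.get(t, 0.0)
             for t in set(q) | set(fb)}
    return sorted(mixed, key=mixed.get, reverse=True)[:n_terms]
```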

DA_DCU_IBM_3

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: DA_DCU_IBM_3
  • Participant: DA_IICT
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 10349f0f204f28895de7f63a6cd34872
  • Run description: An ablation of run DA_DCU_IBM_1 in which we do not apply the nearest-neighbor-based pre-retrieval expansion. Instead, we use the average distance between the positions of matched query terms within each document to obtain a modified score. This positional score is then multiplied with the BM25 score to obtain the final ranking.
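
A sketch of the positional re-scoring idea: the tighter the matched query terms cluster in a document, the larger the multiplier applied to the BM25 score. The exact combination function is an assumption; the run only states that the positional score multiplies BM25.

```python
def positional_score(doc_tokens, query_terms):
    """Inverse of the average gap between matched query-term positions;
    tighter matches yield a larger multiplier (illustrative choice)."""
    pos = [i for i, t in enumerate(doc_tokens) if t in query_terms]
    if len(pos) < 2:
        return 0.0
    gaps = [b - a for a, b in zip(pos, pos[1:])]
    return 1.0 / (sum(gaps) / len(gaps))

def final_score(bm25, doc_tokens, query_terms):
    # Positional score multiplied with BM25, per the run description.
    return bm25 * positional_score(doc_tokens, query_terms)
```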

DA_DCU_IBM_4

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: DA_DCU_IBM_4
  • Participant: DA_IICT
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 535523ff5da575a03cde3bfe5b44739e
  • Run description: Similar to run DA_DCU_IBM_1. Instead of applying RM3 on the target collection, we use a different collection (specifically, the RCT collection of TREC-PM 2018) to construct the post-retrieval expanded query. Specifically, the set of pseudo-relevant documents used to estimate feedback weights is obtained from the top-documents retrieved from the RCT collection. Selected terms from this distribution are then used to re-retrieve from the target collection.

damoespb1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: damoespb1
  • Participant: ALIBABA
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 76049e5d6c09089a2b97b87e8f89eb39
  • Run description: We expand each topic with field synonyms from Genetics Home Reference and use Elasticsearch to retrieve relevant PubMed documents. We utilize features including the Elasticsearch score, publication type, publication citation count, and the prediction of a pre-trained BioBERT model to re-rank the retrieved documents. We pre-train BioBERT on data from previous TREC PM challenges.

damoespb2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: damoespb2
  • Participant: ALIBABA
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: a7289963a5647c21ead6d5b044f4ecb4
  • Run description: We expand each topic with field synonyms from Genetics Home Reference and use Elasticsearch to retrieve relevant PubMed documents. We utilize features including the Elasticsearch score, publication type, publication citation count, and the prediction of a pre-trained BioBERT model to re-rank the retrieved documents. We pre-train BioBERT on data from previous TREC PM challenges.

damoespcbh1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: damoespcbh1
  • Participant: ALIBABA
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: manual
  • Task: primary
  • MD5: 4cc57bf0fb3735742b0cfc9e911866f6
  • Run description: We expand each topic with field synonyms from Genetics Home Reference and use Elasticsearch to retrieve relevant PubMed documents. We utilize features including the Elasticsearch score, publication type, publication citation count, the prediction of a pre-trained BioBERT model, and human-in-the-loop active learning with BioBERT to re-rank the retrieved documents. We pre-train BioBERT on data from previous TREC PM challenges. We use private relevance annotations in the active learning process.

damoespcbh2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: damoespcbh2
  • Participant: ALIBABA
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: manual
  • Task: primary
  • MD5: 920876df682c0ab691011c86ad6084dc
  • Run description: We expand each topic with field synonyms from Genetics Home Reference and use Elasticsearch to retrieve relevant PubMed documents. We utilize features including the Elasticsearch score, publication type, publication citation count, the prediction of a pre-trained BioBERT model, and human-in-the-loop active learning with BioBERT to re-rank the retrieved documents. We pre-train BioBERT on data from previous TREC PM challenges. We use private relevance annotations in the active learning process.

damoespcbh3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: damoespcbh3
  • Participant: ALIBABA
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: manual
  • Task: primary
  • MD5: f1c0bb04af5eb1cf68f2f17547bf98d8
  • Run description: We expand each topic with field synonyms from Genetics Home Reference and use Elasticsearch to retrieve relevant PubMed documents. We utilize features including the Elasticsearch score, publication type, publication citation count, the prediction of a pre-trained BioBERT model, and human-in-the-loop active learning with BioBERT to re-rank the retrieved documents. We pre-train BioBERT on data from previous TREC PM challenges. We use private relevance annotations in the active learning process.

duoT5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: duoT5
  • Participant: h2oloo
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: 9aa2241510ed29bb7651b0f43776b6b4
  • Run description: A pairwise reranker (duoT5) using top-50 documents from a pointwise reranker (monoT5). monoT5 re-ranks the top 1000 documents from the Anserini BM25 baseline.
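
The pairwise stage can be sketched as follows: duoT5 scores each ordered pair of candidates, and every document is ranked by the sum of its pairwise scores. The `pair_score` stand-in below replaces the actual duoT5 model, which is not reproduced here; the sum aggregation is one common choice.

```python
def pairwise_rerank(docs, pair_score):
    """Rank docs by the sum of pairwise preference scores (duoT5-style)."""
    totals = {d: 0.0 for d in docs}
    for a in docs:
        for b in docs:
            if a != b:
                totals[a] += pair_score(a, b)
    return sorted(docs, key=lambda d: totals[d], reverse=True)

# Toy stand-in scorer: prefers the "better" document of each pair.
quality = {"d1": 3, "d2": 1, "d3": 2}
ranked = pairwise_rerank(["d2", "d3", "d1"],
                         lambda a, b: 1.0 if quality[a] > quality[b] else 0.0)
```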

duoT5rct

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: duoT5rct
  • Participant: h2oloo
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: 88f605a03d1ba6dca29a3e0ce3f90dcf
  • Run description: A pairwise reranker (duoT5) using top-50 documents from a pointwise reranker (monoT5). monoT5 re-ranks the top 1000 documents from the Anserini BM25 baseline. The keywords "meta-analysis" and "randomized controlled trial (RCT)" are added to the query.

ebm

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: ebm
  • Participant: PINGAN_NLP
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/25/2020
  • Type: automatic
  • Task: primary
  • MD5: 1b42a92d4b5b1e6b27238617e2117f69
  • Run description: This result is generated by our retrieval model. We first use the emb-nlp dataset to train a treatment classifier, and then use the past three years' datasets to train a relevance classifier. We then retrieve from the PubMed Elasticsearch index and finally pass all candidate papers to these two models.

ens

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: ens
  • Participant: PINGAN_NLP
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/25/2020
  • Type: automatic
  • Task: primary
  • MD5: 9890bcde87bd3913e7ea5f2c341f409a
  • Run description: Ensemble of the three results generated from runs ebm, PA1run, and r1st.

f_CTD_run1

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: f_CTD_run1
  • Participant: READ-Biomed
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: c72928ed44558a20444572571e6f71db
  • Run description: Apache Lucene, Embeddings Similarity

f_CTD_run2

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: f_CTD_run2
  • Participant: READ-Biomed
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: 6358cab6f6c95984fc75d5ae650ea8c1
  • Run description: Apache Lucene, Embeddings Similarity

f_run0

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: f_run0
  • Participant: READ-Biomed
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: fa9d8fdcf8262f93a92f27740e46ea0c
  • Run description: Apache Lucene

f_run1

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: f_run1
  • Participant: READ-Biomed
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: 6053a983ea4879e78176e36feed9c50d
  • Run description: Apache Lucene

monoT5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: monoT5
  • Participant: h2oloo
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: 5f38faf775c060fc7d42e20f8592ae43
  • Run description: A pointwise reranker (monoT5) using the top 1000 documents from the Anserini BM25 baseline.

monoT5e1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: monoT5e1
  • Participant: h2oloo
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: dcba3f0f0cc1ac9e3728ad9292e3b3e8
  • Run description: A pointwise reranker (monoT5) using the top 1000 documents from the Anserini BM25 baseline. The keywords "meta-analysis" and "randomized controlled trial (RCT)" are added to the query. We re-rank the top 100 results from here to prefer documents that score lower according to the original query, i.e., we penalize matches that are purely in terms of genetic mutation, disease, and treatment and show little indication of high strength of evidence.

monoT5rct

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: monoT5rct
  • Participant: h2oloo
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: 7ff68b6c7ace9180050fbeb632e3f808
  • Run description: A pointwise reranker (monoT5) using the top 1000 documents from the Anserini BM25 baseline. The keywords "meta-analysis" and "randomized controlled trial (RCT)" are added to the query.

nnrun1

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: nnrun1
  • Participant: BIT.UA
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 9d80de6b5bdf430a938dfa034c570ed3
  • Run description: This run follows a two-stage retrieval process. In the first stage, we use the Elasticsearch engine with the BM25 weighting scheme, fine-tuned on the 2019 data. The query is simply the concatenation of all topic fields. Additionally, we prepare a synonym expansion for the genes. In the second stage, we explore a neural interaction model [1] trained on the BioASQ data to rerank the retrieved documents. The query was formatted to resemble BioASQ input (a natural language question). [1] Tiago Almeida and Sérgio Matos. 2020. Calling attention to passages for biomedical question answering. In Advances in Information Retrieval, pages 69-77, Cham. Springer International Publishing.

nnrun2

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: nnrun2
  • Participant: BIT.UA
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 32b1f1613a65225e1722d892bab21f37
  • Run description: This run follows a two-stage retrieval process. In the first stage, we use the Elasticsearch engine with the BM25 weighting scheme, fine-tuned on the 2019 data. The query is simply the concatenation of all topic fields. Additionally, we prepare a synonym expansion for the genes. In the second stage, we explore a neural interaction model [1] trained on the BioASQ data to rerank the retrieved documents. The query was formatted to resemble BioASQ input (a natural language question). [1] Tiago Almeida and Sérgio Matos. 2020. Calling attention to passages for biomedical question answering. In Advances in Information Retrieval, pages 69-77, Cham. Springer International Publishing.

nnrun3

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: nnrun3
  • Participant: BIT.UA
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 6770a03a29df28fb359e2476e046cb4f
  • Run description: This run follows a two-stage retrieval process. In the first stage, we use the Elasticsearch engine with the BM25 weighting scheme, fine-tuned on the 2019 data. The query is simply the concatenation of all topic fields. Additionally, we prepare a synonym expansion for the genes. In the second stage, we explore a neural interaction model [1] trained on the BioASQ data to rerank the retrieved documents. The query was formatted to resemble BioASQ input (a natural language question). [1] Tiago Almeida and Sérgio Matos. 2020. Calling attention to passages for biomedical question answering. In Advances in Information Retrieval, pages 69-77, Cham. Springer International Publishing.

PA1run

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: PA1run
  • Participant: PINGAN_NLP
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/23/2020
  • Type: automatic
  • Task: primary
  • MD5: 96766c0e723630b09d123cfc2d1a6147
  • Run description: This result is generated by our retrieval model. We first retrieve from the PubMed Elasticsearch index, then run a relevance classification model (BioBERT); finally, the data are passed through a tier-ranking classification model (another BioBERT).

pozadditional

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: pozadditional
  • Participant: POZNAN
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: 8decb3ee3d67ba7df3dd7a4270c32535
  • Run description: In addition to poznan_baseline and poznan_reranked, we created a set of passages for each treatment method. The passage was used as part of the query. Additionally, the passages were used as part of the training set.

pozbaseline

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: pozbaseline
  • Participant: POZNAN
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: 6ee85fcdfba086474220344473aca406
  • Run description: We used our own parser and extracted the information we deemed most useful from the document corpus. We used the Terrier software with the DFR_BM25 model (d=300; t=30) to perform retrieval.

pozreranked

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: pozreranked
  • Participant: POZNAN
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/28/2020
  • Type: automatic
  • Task: primary
  • MD5: 328a6b329e629c704c81b00cb2a1c216
  • Run description: We used our own parser and extracted the information we deemed most useful from the document corpus. We used the Terrier software with the DFR_BM25 model (d=300; t=30) to perform retrieval. We then used a neural network setup to rerank documents. We used the 2019 data to create and train a neural network that classifies a document as either "not relevant" or "relevant" to a disease. We used the network to predict the relevance of ranked documents and adjusted the scores accordingly.

r1st

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: r1st
  • Participant: PINGAN_NLP
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/25/2020
  • Type: automatic
  • Task: primary
  • MD5: 9ac2c8909eaddba988b1cc0c70dfc66d
  • Run description: This result is generated by our retrieval model. We first retrieve from the PubMed Elasticsearch index; we then manually labeled about 2000 PubMed papers. With these labeled papers, we train a relevance BioBERT model and a tier BioBERT model. Finally, we pass all candidate papers to these two models. In this run, we weight relevance heavily.

rrf

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: rrf
  • Participant: BIT.UA
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 8efba2eda54e806328d68766a3669445
  • Run description: This run corresponds to the reciprocal rank fusion of the other four submitted runs.
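
Reciprocal rank fusion is simple enough to sketch in full: each run contributes 1 / (k + rank) per document, and documents are sorted by their summed contributions (k = 60 is the conventional constant; the run does not state its value).

```python
def rrf(runs, k=60):
    """Fuse ranked lists by reciprocal rank: score(d) = sum 1/(k + rank)."""
    scores = {}
    for ranking in runs:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "b" is ranked first in two of three runs, so it wins the fusion.
fused = rrf([["a", "b", "c"], ["b", "a", "c"], ["b", "c", "a"]])
```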

rrf_p10

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: rrf_p10
  • Participant: ims_unipd
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 807b6f6748b08e2830f0692492c8913f
  • Run description: Reciprocal rank fusion with k = 60 of: Elasticsearch BM25 (k1 = 1.2, b = 0.75, title weight = 0.1, abstract weight = 0.9); DFR (basic_model='if', after_effect='b', normalization='h2', title weight = 0.5, abstract weight = 0.2); QLM (mu = 2000, title weight = 0.1, abstract weight = 0.5).

rrf_prf_infndcg

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: rrf_prf_infndcg
  • Participant: ims_unipd
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 05a8d4210e49c75bfe44e53434f0c5f4
  • Run description: Reciprocal rank fusion with k = 60 and RM3 pseudo-relevance feedback of: Elasticsearch BM25 (k1 = 1.2, b = 0.75, title weight = 0.1, abstract weight = 0.8; 5 docs, 30 terms, title); DFR (basic_model='if', after_effect='b', normalization='h2', title weight = 0.3, abstract weight = 0.5; 5 docs, 30 terms, title); QLM (mu = 2000, title weight = 0.2, abstract weight = 0.7; 10 docs, 30 terms, title).

rrf_prf_p10

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: rrf_prf_p10
  • Participant: ims_unipd
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 33a1231e9e0c90874ed83371fbbd7eab
  • Run description: Reciprocal rank fusion with k = 60 and RM3 pseudo-relevance feedback of: Elasticsearch BM25 (k1 = 1.2, b = 0.75, title weight = 0.1, abstract weight = 0.9; 30 docs, 30 terms, abstract); DFR (basic_model='if', after_effect='b', normalization='h2', title weight = 0.5, abstract weight = 0.2; 10 docs, 30 terms, title); QLM (mu = 2000, title weight = 0.1, abstract weight = 0.5; 10 docs, 10 terms, abstract).

rrf_prf_rprec

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: rrf_prf_rprec
  • Participant: ims_unipd
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: bb26654d7bd11781303029a88171eec3
  • Run description: Reciprocal rank fusion with k = 60 and RM3 pseudo-relevance feedback of: Elasticsearch BM25 (k1 = 1.2, b = 0.75, title weight = 0.2, abstract weight = 0.5; 10 docs, 30 terms, abstract); DFR (basic_model='if', after_effect='b', normalization='h2', title weight = 0.2, abstract weight = 0.3; 10 docs, 30 terms, abstract); QLM (mu = 2000, title weight = 0.1, abstract weight = 0.6; 5 docs, 30 terms, title).

run_bm25

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: run_bm25
  • Participant: vohcolab
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/26/2020
  • Type: automatic
  • Task: primary
  • MD5: 02dd857b9bd7b77b925073f0cb8978ff
  • Run description: Elasticsearch BM25 with b = 0.75, k1 = 1.2 (disease, gene, treatment).

sibtm_run1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: sibtm_run1
  • Participant: BITEM
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 38a1fc850f490695597d762fd0d2201b
  • Run description: Run 1 is our baseline run. This run is a combination of an exact query (containing all query items: the disease, the drug and the gene) and a relaxed query (containing two of the three query items). Drugs, genes and diseases are normalized and are searched in a pre-annotated version of the MEDLINE collection.

sibtm_run2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: sibtm_run2
  • Participant: BITEM
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 1f81e088fee05b1af06068f0a0d53718
  • Run description: Run 2 is a re-ranking of the results obtained in run 1, based on a classifier for precision medicine. Titles and abstracts are used to assign the label "precision medicine" or "not precision medicine" to each document. Documents labelled "precision medicine" are then favored in the ranking.

sibtm_run3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: sibtm_run3
  • Participant: BITEM
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 095f1e0eee5ea301971373b30eae52ef
  • Run description: Run 3 is a re-ranking of the results obtained in run 2, based on a classifier for document focus regarding the topic. For each document, a focus score is calculated for the gene and the disease: it represents the percentage of mentions of the topic item among all items mentioned in the document. The classifier is built using these focus scores as well as the density of genes, diseases, and drugs in the document.
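
The focus score defined above reduces to a simple fraction: the topic item's mentions over all mentions of that entity type in the document. A minimal sketch, assuming mentions are available as a flat list of normalized entity names:

```python
def focus_score(topic_item, mentions):
    """Fraction of entity mentions that match the topic item, e.g. the
    topic gene among all gene mentions in the document. Returns 0.0 for
    documents with no mentions of that entity type."""
    if not mentions:
        return 0.0
    return mentions.count(topic_item) / len(mentions)
```

For instance, a document mentioning BRAF twice alongside KRAS and TP53 gets a gene focus score of 0.5 for a BRAF topic.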

sibtm_run4

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: sibtm_run4
  • Participant: BITEM
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: f2dbe32c99c77828b9bebf03fae167e9
  • Run description: Run 4 is a re-ranking of the results obtained in run 3. Results are further re-ranked based on the publication type, as well as the strength of the evidence.

sibtm_run5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: sibtm_run5
  • Participant: BITEM
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 5864a3847de2f7068c1f7fc4740dc574
  • Run description: Run 5 is a re-ranking of the results obtained in run 3. Results are re-ranked based on the publication type, the strength of the evidence, the group sizes, and the diversity of ages, genders, and ethnicities mentioned in the document.

tier1st

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: tier1st
  • Participant: PINGAN_NLP
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/25/2020
  • Type: automatic
  • Task: primary
  • MD5: 96646f450b1560cb457b064d93aedbe0
  • Run description: This result was generated by our retrieval model. We first retrieve from a PubMed Elasticsearch index; we then manually labeled about 2000 PubMed papers. With these labeled papers, we train a relevance BioBERT model and a tier BioBERT model. Finally, we pass all candidate papers to these two models. In this run, we weight heavily on tier.

uog_ufmg_DFRee

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: uog_ufmg_DFRee
  • Participant: UoGTr
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 4eceb139f262e67ccea833870b89abc1
  • Run description: An automatic run using the DFRee ranking model, built on pyTerrier.

uog_ufmg_s_dfr0

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: uog_ufmg_s_dfr0
  • Participant: UoGTr
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 636848c31e8386d060ffc02454706dde
  • Run description: An automatic run with documents from an initial DFRee ranking built on pyTerrier, re-ranked by a SciBERT model fine-tuned on MSMarco medical queries.

uog_ufmg_s_dfr5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: uog_ufmg_s_dfr5
  • Participant: UoGTr
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 89bfe077c21a8a953c7d955189a88842
  • Run description: An automatic run linearly combining the DFRee ranking model with a SciBERT model trained on MSMarco medical queries, built on pyTerrier.
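
A linear combination of two rankers' scores is typically computed after per-run normalization. The sketch below min-max normalizes each score dictionary and takes a weighted sum; the weight `alpha` and the normalization choice are illustrative assumptions, not the values the UoGTr team used:

```python
def minmax(scores):
    """Min-max normalize a {doc_id: score} dict to [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {d: 0.0 for d in scores}
    return {d: (s - lo) / (hi - lo) for d, s in scores.items()}


def combine(dfree_scores, scibert_scores, alpha=0.5):
    """Weighted sum of normalized DFRee and SciBERT scores.
    Documents missing from one run contribute 0 for that component."""
    a, b = minmax(dfree_scores), minmax(scibert_scores)
    docs = set(a) | set(b)
    return {d: alpha * a.get(d, 0.0) + (1 - alpha) * b.get(d, 0.0)
            for d in docs}
```

In pyTerrier itself, this kind of combination is usually expressed by weighting and adding retrieval transformers, but the arithmetic is the same as above.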

uog_ufmg_sb_df5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: uog_ufmg_sb_df5
  • Participant: UoGTr
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: d7f5bcc9dacacdd27b5f78e139800a4d
  • Run description: An automatic run built on pyTerrier with a linear combination of scores from the DFRee ranking and scores from a SciBERT model fine-tuned on MSMarco medical queries (topics fixed).

uog_ufmg_secL2R

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: uog_ufmg_secL2R
  • Participant: UoGTr
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 42b0fbe55fecb1aa80c5d020cbb31999
  • Run description: An automatic run combining scores of DFRee and a SciBERT model (trained on MSMarco medical queries) on different, automatically classified sections of the abstracts, using L2R with LightGBM and pyTerrier.

uwbm25

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: uwbm25
  • Participant: MRG_UWaterloo
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: ff815aadea8d3f90b2b5b9dfd59cb90c
  • Run description: BM25 via Wumpus search with default parameters.

uwman

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: uwman
  • Participant: MRG_UWaterloo
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: manual
  • Task: primary
  • MD5: fecd0349ee1040a0bb769f6ed292c0c9
  • Run description: Manual screening and relevance feedback.

uwpr

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: uwpr
  • Participant: MRG_UWaterloo
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: automatic
  • Task: primary
  • MD5: 95cb8b0cfd585eb8b93396029bef88a0
  • Run description: Pseudo-relevance feedback: top 20 records from BM25; feedback via logistic regression.
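
The pseudo-relevance-feedback loop described above can be sketched as: treat the top-20 BM25 documents as positive examples and the remaining retrieved documents as negatives, fit a logistic-regression scorer, and re-rank everything by its output. The feature representation, learning rate, and plain gradient-descent trainer below are illustrative assumptions; the team's actual features and solver are not specified:

```python
import math


def prf_logreg(features, top_k_ids, lr=0.5, epochs=200):
    """Pseudo-relevance feedback with a tiny logistic regression.

    features   -- {doc_id: feature vector (list of floats)}; hypothetical
                  representation for illustration.
    top_k_ids  -- ids of the top-k first-pass (e.g. BM25) documents,
                  treated as pseudo-positives; all others as negatives.
    Returns doc ids sorted by the learned relevance score, descending.
    """
    dim = len(next(iter(features.values())))
    w, b = [0.0] * dim, 0.0
    data = [(x, 1.0 if d in top_k_ids else 0.0)
            for d, x in features.items()]
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - y                        # gradient of log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    scores = {d: sum(wi * xi for wi, xi in zip(w, x)) + b
              for d, x in features.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Documents resembling the pseudo-positives rise toward the top of the re-ranked list, which is the intended effect of the feedback step.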

uwr

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: uwr
  • Participant: MRG_UWaterloo
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: manual
  • Task: primary
  • MD5: 5f3f2aefbe609853c430285c5329b08b
  • Run description: Manual relevance feedback - positive only

uwrn

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Summary (evidence-eval) | Appendix

  • Run ID: uwrn
  • Participant: MRG_UWaterloo
  • Track: Precision Medicine
  • Year: 2020
  • Submission: 8/27/2020
  • Type: manual
  • Task: primary
  • MD5: d4567749883eb5e042c1f893751a61c2
  • Run description: Manual relevance feedback - positive and negative