
Runs - Health Misinformation 2020

adhoc_run1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: adhoc_run1
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: e979d184137c5c6baffb988fcff54c0a
  • Run description: Run1 - BM25_description (no re-ranking) - baseline
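
A minimal sketch of what such a BM25-over-description baseline could look like, assuming the collection has been indexed with Pyserini; the index path, topic text, and run tag below are placeholders, not taken from the actual submission:

    from pyserini.search.lucene import LuceneSearcher

    searcher = LuceneSearcher("indexes/cc-news-2020")   # hypothetical index path
    searcher.set_bm25(k1=0.9, b=0.4)                    # Anserini-style BM25 defaults

    def run_topic(topic_id, description, k=1000, tag="bm25_description"):
        hits = searcher.search(description, k=k)
        # Emit lines in TREC run format: qid Q0 docid rank score tag
        return [f"{topic_id} Q0 {h.docid} {rank} {h.score:.4f} {tag}"
                for rank, h in enumerate(hits, start=1)]

    print("\n".join(run_topic("1", "Can vitamin D cure COVID-19?")))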

adhoc_run10

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: adhoc_run10
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: 4d46f1dd725f07397cb880efe47cc9c1
  • Run description: Run10 - RM3_description + re-ranking using only rev_missinfo zscore of the top N per query

adhoc_run11

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: adhoc_run11
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: 32c3fcdeec3f5f7972966ce0c8d47e6c
  • Run description: Run11 - Reciprocal rank fusion of run1, run9 and run10
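
A minimal sketch of reciprocal rank fusion over three runs; the constant k=60 is the commonly used default, not necessarily what the team used:

    from collections import defaultdict

    def rrf(rankings, k=60):
        """Fuse several rankings (lists of docids, best first) with reciprocal rank fusion."""
        scores = defaultdict(float)
        for ranking in rankings:
            for rank, docid in enumerate(ranking, start=1):
                scores[docid] += 1.0 / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)

    run1 = ["d3", "d1", "d2"]
    run9 = ["d1", "d3", "d4"]
    run10 = ["d2", "d1", "d3"]
    print(rrf([run1, run9, run10]))   # e.g. ['d1', 'd3', 'd2', 'd4']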

adhoc_run12

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: adhoc_run12
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: 63ebe2c5bb0f781ddf747973ad0474b2
  • Run description: Run12 - RM3_description + re-ranking using only rev_cred zscore of the top 200 per query

adhoc_run13

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: adhoc_run13
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: f85128ecca3ff44f85724cf3172cbfce
  • Run description: Run13 - RM3_description + re-ranking using only rev_missinfo zscore of the top 200 per query

adhoc_run2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: adhoc_run2
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: 5ca10df72da7ead6359de2b5a99274e6
  • Run description: RM3 baseline

adhoc_run3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: adhoc_run3
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: d89b472686eb1395ae35c3305ebf2133
  • Run description: Run3 - BM25_description + re-ranking using average of (rel, cred, miss-info) zscore of the top 100 per query.
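
A minimal sketch of the z-score re-ranking idea, assuming per-document relevance, credibility, and misinformation scores have already been computed for the top-k of a topic; the weights argument also covers the 0.6/0.2/0.2 weighted variants in adhoc_run5 and adhoc_run6 below:

    import numpy as np

    def zscore(x):
        x = np.asarray(x, dtype=float)
        std = x.std()
        return (x - x.mean()) / std if std > 0 else np.zeros_like(x)

    def rerank_top_k(docids, rel, cred, missinfo, k=100, weights=(1/3, 1/3, 1/3)):
        """Re-rank the top-k docs by a (weighted) average of per-signal z-scores;
        documents below rank k keep their original order."""
        head, tail = list(docids[:k]), list(docids[k:])
        combined = (weights[0] * zscore(rel[:k])
                    + weights[1] * zscore(cred[:k])
                    + weights[2] * zscore(missinfo[:k]))
        order = np.argsort(-combined)          # highest combined score first
        return [head[i] for i in order] + tail

    # Toy example: 3 candidate documents with precomputed signal scores.
    docids   = ["d1", "d2", "d3"]
    rel      = [9.1, 7.4, 6.0]
    cred     = [0.2, 0.9, 0.8]
    missinfo = [0.1, 0.8, 0.7]
    print(rerank_top_k(docids, rel, cred, missinfo, k=3))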

adhoc_run4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: adhoc_run4
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: 33c9dd79f34be92ec127112fba6301a7
  • Run description: Run4 - RM3_description + re-ranking using average of (rel, cred, miss-info) zscore of the top N per query

adhoc_run5

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: adhoc_run5
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: a32359bbb89522e34ddda9c449a111a6
  • Run description: Run5 - BM25_description + re-ranking using a weighted average of (0.6 rel, 0.2 cred, 0.2 miss-info) zscores of the top N per query

adhoc_run6

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: adhoc_run6
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: 578c9fce37afeefd751babc1c17949a5
  • Run description: Run6 - RM3_description + re-ranking using a weighted average of (0.6 rel, 0.2 cred, 0.2 miss-info) zscores of the top N per query

adhoc_run7

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: adhoc_run7
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: f25043d9af1ccc9b90df92b55497ba40
  • Run description: Run7 - RM3_description + re-ranking the top N per query using the Euclidean distance between [rel, cred, miss-info] and [maxRel, maxCred, maxMissinfo], where maxCred is the maximum credibility score of the topic
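
A minimal sketch of distance-to-ideal re-ranking, covering both the Euclidean variant used here and the Chebyshev variant of adhoc_run8; the per-topic maxima act as the ideal point and smaller distances rank higher (the score arrays are assumed to be precomputed):

    import numpy as np

    def distance_rerank(docids, rel, cred, missinfo, metric="euclidean", k=100):
        """Re-rank the top-k docs by their distance to the per-topic ideal point
        (maxRel, maxCred, maxMissinfo); closer documents are ranked higher."""
        head, tail = list(docids[:k]), list(docids[k:])
        X = np.column_stack([rel[:k], cred[:k], missinfo[:k]]).astype(float)
        ideal = X.max(axis=0)
        if metric == "euclidean":
            dist = np.linalg.norm(X - ideal, axis=1)
        elif metric == "chebyshev":                 # adhoc_run8 variant
            dist = np.abs(X - ideal).max(axis=1)
        else:
            raise ValueError(f"unknown metric: {metric}")
        order = np.argsort(dist)                    # smallest distance first
        return [head[i] for i in order] + tail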

adhoc_run8

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: adhoc_run8
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: c8bbb5ac81b32514d54298d2e7e48122
  • Run description: Run8 - RM3_description + re-ranking the top N per query using the Chebyshev distance between [rel, cred, miss-info] and [maxRel, maxCred, maxMissinfo], where maxRel is the maximum relevance score of the topic

adhoc_run9

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: adhoc_run9
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: 8a90a5e85e2670b5bff336f48ea950c9
  • Run description: Run9 - RM3_description + re-ranking using only rev_cred zscore of the top N per query

bm25-desc

Results | Participants | Input | Summary | Appendix

  • Run ID: bm25-desc
  • Participant: UWaterlooMDS
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 8/31/2020
  • Type: auto
  • Task: recall
  • MD5: bd46017e8d3946b90214fc7a02ff1d2b
  • Run description: Anserini default BM25. Description field used as query.

bm25-title

Results | Participants | Input | Summary | Appendix

  • Run ID: bm25-title
  • Participant: UWaterlooMDS
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 8/31/2020
  • Type: auto
  • Task: recall
  • MD5: 040b4c2fa47b8130caaf740cf19247e1
  • Run description: Anserini default BM25. Title field used as query.

bm25_desc

Results | Participants | Input | Summary | Appendix

  • Run ID: bm25_desc
  • Participant: UWaterlooMDS
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 8/31/2020
  • Type: auto
  • Task: adhoc
  • MD5: ab290fa9f25df1e5321316d75153ff6f
  • Run description: Anserini default BM25. Title field used as query.

bm25_title

Results | Participants | Input | Summary | Appendix

  • Run ID: bm25_title
  • Participant: UWaterlooMDS
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 8/31/2020
  • Type: auto
  • Task: adhoc
  • MD5: 1c301bdabfedefc387780c7357371e87
  • Run description: Anserini default BM25. Title field used as query.

CiTIUSCrdAdh

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CiTIUSCrdAdh
  • Participant: CiTIUS
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 8/14/2020
  • Type: auto
  • Task: adhoc
  • MD5: 98a1b054d9f3ce4ad9dfe02efb9a6a94
  • Run description: For producing this run, we first did a BM25 retrieval baseline using the title field. After that, we reranked the n retrieved documents based on a generalistic credibility classifier trained by us. Finally, we kept the top thousand documents.

CiTIUSCrdRelAdh

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CiTIUSCrdRelAdh
  • Participant: CiTIUS
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 8/17/2020
  • Type: auto
  • Task: adhoc
  • MD5: fdce06f768d58cdeac05a8b73eba64f2
  • Run description: For producing this run, we first did a BM25 retrieval baseline using the title field. After that, we evaluated the n retrieved documents based on a generalistic credibility classifier trained by us. Finally, to produce the final rank, we used a voting method, Borda Count, to combine both rankings and we kept the first top thousand documents.
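
A minimal sketch of Borda-count fusion of two rankings (here the BM25 ranking and the credibility-classifier ranking), under the simple convention that rank r in a list of n documents contributes n - r points:

    from collections import defaultdict

    def borda(rankings, depth=1000):
        """Combine rankings with Borda count; the highest total wins."""
        points = defaultdict(int)
        for ranking in rankings:
            n = min(len(ranking), depth)
            for r, docid in enumerate(ranking[:n]):
                points[docid] += n - r
        fused = sorted(points, key=points.get, reverse=True)
        return fused[:depth]

    bm25_rank = ["d1", "d2", "d3"]
    cred_rank = ["d3", "d1", "d2"]
    print(borda([bm25_rank, cred_rank]))   # ['d1', 'd3', 'd2']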

CiTIUSCrdRelTot

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CiTIUSCrdRelTot
  • Participant: CiTIUS
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 8/17/2020
  • Type: auto
  • Task: recall
  • MD5: d1dd5cd854244133fa9301065613139d
  • Run description: For producing this run, we first did a BM25 retrieval baseline using the title field. After that, we evaluated the n retrieved documents based on a generalistic credibility classifier trained by us. Finally, to produce the final rank, we used a voting method, Borda Count, to combine both rankings and we kept the first top ten thousand documents.

CiTIUSCrdTot

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CiTIUSCrdTot
  • Participant: CiTIUS
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 8/17/2020
  • Type: auto
  • Task: recall
  • MD5: 4fc498d45b8c0b4b57ac549b0e5f94eb
  • Run description: For producing this run, we first did a BM25 retrieval baseline using the title field. After that, we reranked the n retrieved documents based on a generalistic credibility classifier trained by us. Finally, we kept the top documents.

CiTIUSSimAdh

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CiTIUSSimAdh
  • Participant: CiTIUS
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 8/17/2020
  • Type: manual
  • Task: adhoc
  • MD5: 53039a55fc16b42527fc1a4e8807f32c
  • Run description: For producing this run, we first did a BM25 retrieval baseline using the title field. After that, we manually built a query using both description and answer fields. For example, for topic 1 the result query should be "Vitamin D does not cure COVID-19". Then, we reranked the n retrieved documents based on maximum sentence similarity between the query and all sentences in each document. Finally, we kept the top thousand documents.
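
A minimal sketch of maximum-sentence-similarity scoring, using a sentence-transformers encoder as a stand-in; the actual similarity model used by CiTIUS is not specified in this description:

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")   # placeholder encoder, not the team's model

    def doc_score(query, sentences):
        """Score a document by its best-matching sentence with respect to the query."""
        q_emb = model.encode(query, convert_to_tensor=True)
        s_emb = model.encode(sentences, convert_to_tensor=True)
        return util.cos_sim(q_emb, s_emb).max().item()

    query = "Vitamin D does not cure COVID-19"
    sentences = ["There is no evidence that vitamin D cures COVID-19.",
                 "The weather in Madrid was sunny."]
    print(doc_score(query, sentences))   # documents are then reranked by this score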

CiTIUSSimRelAdh

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CiTIUSSimRelAdh
  • Participant: CiTIUS
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 8/17/2020
  • Type: manual
  • Task: adhoc
  • MD5: 99b51e945e04b6c2c2e76226dfa3aaa8
  • Run description: For producing this run, we first did a BM25 retrieval baseline using the title field. After that, we manually built a query using both description and answer fields. For example, for topic 1 the result query should be "Vitamin D does not cure COVID-19". Then, we evaluated the n retrieved documents based on the maximum sentence similarity between the query and all sentences in each document. Finally, to produce the final rank, we used a voting method, Borda Count, to combine both rankings and we kept the first top thousand documents.

CiTIUSSimTot

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CiTIUSSimTot
  • Participant: CiTIUS
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 8/17/2020
  • Type: manual
  • Task: recall
  • MD5: 116a99e99ef288a56bf16ee017b011b5
  • Run description: For producing this run, we first did a BM25 retrieval baseline using the title field. After that, we manually built a query using both description and answer fields. For example, for topic 1 the result query should be "Vitamin D cures COVID-19". Then, we evaluated the n retrieved documents based on maximum sentence similarity between the query and all sentences in each document. Finally, to produce the final rank, we used a voting method, Borda Count, to combine both rankings and we kept the top ten thousand documents.

cn-ax-rer

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: cn-ax-rer
  • Participant: Webis
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: de32569385bcadc20d580c9b603a988a
  • Run description: We indexed the CC-News collection into Elastic ChatNoir and retrieve documents using the title of the topics. The top-20 documents of each topic are then reranked with three argumentative axioms.

cn-descr-2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: cn-descr-2
  • Participant: Webis
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: 6dd194d2d05114eafd2ac06d859f94f0
  • Run description: We indexed the CC-News collection into Elastic ChatNoir and retrieve documents using the description of the topics.

cn-kq

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: cn-kq
  • Participant: Webis
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: manual
  • Task: adhoc
  • MD5: d22db62e9c865d5905cf29ec683e8374
  • Run description: For each topic, humans submitted multiple queries against the UI of ChatNoir to identify relevant documents (6 minutes per topic). Relevance was judged solely on the shown snippets/titles of the SERP entries. We calculate keyqueries for the identified relevant documents and use team-draft-interleaving to combine all keyqueries for a topic to produce the final ranking.
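
A minimal sketch of the team-draft-interleaving step, treating each keyquery's ranking as a team and keeping only the merged ranking; the team-attribution bookkeeping normally used for click-based evaluation is omitted:

    import random

    def team_draft_interleave(rankings, depth=1000, seed=0):
        """Merge several rankings team-draft style: in each round the teams
        (rankings) pick, in random order, their best not-yet-chosen document."""
        rng = random.Random(seed)
        merged, chosen = [], set()
        pool = {d for ranking in rankings for d in ranking}
        while len(merged) < depth and len(chosen) < len(pool):
            order = list(range(len(rankings)))
            rng.shuffle(order)
            for t in order:
                pick = next((d for d in rankings[t] if d not in chosen), None)
                if pick is not None:
                    merged.append(pick)
                    chosen.add(pick)
        return merged[:depth]

    keyquery_runs = [["d1", "d2", "d3"], ["d4", "d1", "d5"], ["d2", "d6"]]
    print(team_draft_interleave(keyquery_runs, depth=6))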

cn-kq-t-td

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: cn-kq-t-td
  • Participant: Webis
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: manual
  • Task: adhoc
  • MD5: 1ee1806867185c35776015f1a1e63d25
  • Run description: For each topic, humans submitted multiple queries against the UI of ChatNoir to identify relevant documents (6 minutes per topic). Relevance was judged solely on the shown snippets/titles of the SERP entries. We calculate keyqueries for the identified relevant documents and use team-draft-interleaving to combine all keyqueries and the original title of a topic to produce the final ranking and move relevant documents to first ranking positions.

cn-kq-td

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: cn-kq-td
  • Participant: Webis
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: manual
  • Task: adhoc
  • MD5: a2cea66d2c174ecd77195130e9643997
  • Run description: For each topic, humans submitted multiple queries against the UI of ChatNoir to identify relevant documents (6 minutes per topic). Relevance was judged solely on the shown snippets/titles of the SERP entries. We calculate keyqueries for the identified relevant documents and use team-draft-interleaving to combine all keyqueries of a topic to produce the final ranking and move relevant documents to first ranking positions.

cn-m-title

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: cn-m-title
  • Participant: Webis
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: manual
  • Task: adhoc
  • MD5: 4ed2bbfa5bbf1cdb853e2ecad518b0d6
  • Run description: For each topic, humans submitted multiple queries against the UI of ChatNoir to identify relevant documents (6 minutes per topic). Relevance was judged solely on the shown snippets/titles of the SERP entries. This run ranks identified relevant documents before documents retrieved by submitting the title against ChatNoir.

cn-title-2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: cn-title-2
  • Participant: Webis
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: 9a2a2d511b256f633effc1bba52786a5
  • Run description: We indexed the CC-News collection into Elastic ChatNoir and retrieve documents using the title of the topics.

h2oloo.m1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: h2oloo.m1
  • Participant: h2oloo
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: bb02d9e4a68ccd1820f0ecf8776cd2b2
  • Run description: Anserini's BM25 with default parameters.

h2oloo.m10

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: h2oloo.m10
  • Participant: h2oloo
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: 06aa6ee92f29005a9d992d123189453c
  • Run description: A pointwise reranker (monoT5) using the top 1000 documents from the Anserini BM25 baseline. All topics prefixed with "Clinical Studies, FDA, CDC, Health Officials, WHO or researchers say".

h2oloo.m2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: h2oloo.m2
  • Participant: h2oloo
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: 90dcbde082235b11e2e85eade4cca849
  • Run description: A pointwise reranker (monoT5) using the top 1000 documents from the Anserini BM25 baseline.
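
A minimal sketch of pointwise monoT5 scoring, using the public castorini/monot5-base-msmarco checkpoint as a stand-in (the exact checkpoint and batching used by h2oloo are not stated here); each candidate document is scored by the probability of the token "true" after the standard "Query: ... Document: ... Relevant:" prompt:

    import torch
    from transformers import T5Tokenizer, T5ForConditionalGeneration

    tok = T5Tokenizer.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("castorini/monot5-base-msmarco").eval()
    TRUE_ID, FALSE_ID = tok.encode("true")[0], tok.encode("false")[0]

    @torch.no_grad()
    def mono_t5_score(query, doc):
        """Probability of the token 'true' for the monoT5 relevance prompt."""
        prompt = f"Query: {query} Document: {doc} Relevant:"
        inputs = tok(prompt, return_tensors="pt", truncation=True, max_length=512)
        start = torch.tensor([[model.config.decoder_start_token_id]])
        logits = model(**inputs, decoder_input_ids=start).logits[0, 0]
        probs = torch.softmax(logits[[FALSE_ID, TRUE_ID]], dim=0)
        return probs[1].item()

    # Re-rank the BM25 top-1000 of a topic by descending monoT5 score:
    # reranked = sorted(candidates, key=lambda d: mono_t5_score(topic_desc, d.text), reverse=True)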

h2oloo.m3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: h2oloo.m3
  • Participant: h2oloo
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: c1f30943a9c8cf729db3b3009bab8de0
  • Run description: A pointwise reranker (monoT5) using the top 1000 documents from the Anserini BM25 baseline.

h2oloo.m4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: h2oloo.m4
  • Participant: h2oloo
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: 11f9b74cbb69722baf787b96f9cf2062
  • Run description: A pairwise reranker (duoT5) using top-50 documents from a pointwise reranker (monoT5). monoT5 re-ranks the top 1000 documents from the Anserini BM25 baseline.

h2oloo.m5

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: h2oloo.m5
  • Participant: h2oloo
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: 04891304af15d5723fc82fa1cfa6006f
  • Run description: A pairwise reranker (duoT5) using top-50 documents from a pointwise reranker (monoT5). monoT5 re-ranks the top 1000 documents from the Anserini BM25 baseline.

h2oloo.m7

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: h2oloo.m7
  • Participant: h2oloo
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: e8bba6715698d42d379901f5f986070f
  • Run description: First, a pointwise reranker (monoT5) is used with the top 1000 documents from the Anserini BM25 baseline. Then, T5 trained on effectiveness judgments of TREC MISINFO'19 is used to re-rank top 1000 segments from unique documents.

h2oloo.m8

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: h2oloo.m8
  • Participant: h2oloo
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: 011f375229eb48fc4546cf9e13a20dae
  • Run description: First, a pointwise reranker (monoT5) is used with the top 1000 documents from the Anserini BM25 baseline. Then, T5 trained on effectiveness judgments of TREC MISINFO'19 is used to re-rank the top 1000 segments from unique documents. The submission is the sum of probabilities from the pointwise reranker and the TREC MISINFO'19 system.

h2oloo.m9

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: h2oloo.m9
  • Participant: h2oloo
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: 8a37cc4dcad84e544a2bff733fe81567
  • Run description: A pointwise reranker (monoT5) using the top 1000 documents from the Anserini BM25 baseline. All topics prefixed with "Clinical Studies, FDA, CDC, Health Officials, WHO or researchers say". A linear combination with original query score is used.

NLM_BNU_E_GH

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLM_BNU_E_GH
  • Participant: NLM
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: 2c9b0e97581a81e0a349d9c7e35b0d4a
  • Run description: Reranking with Gaussian HITS (hub/authority) scores and PageRank scores based on the BNU_E retrieval ensemble.

NLM_BNU_E_NLI_C

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLM_BNU_E_NLI_C
  • Participant: NLM
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: recall
  • MD5: 70cc95dd524b13bbb8ef2513d7858470
  • Run description: Based on a sentence-level index of the CCN Covid collection: Top-10000 documents retrieved with a BM25 based model and an NGram boost, documents scored incrementally with each relevant sentence, then re-ranked with an ensemble of models (T5, BERT-base, BERT-large). Each question from the topic descriptions was then transformed into an affirmative sentence automatically using syntactic rules, and a Natural Language Inference model (RoBERTa) trained on MNLI was used to infer whether the most relevant sentence from the documents had an entailment/neutral/contradiction relation with the transformed topic. The final ranking was performed by putting the documents contradicting the reference answer first, then the remaining documents in order of relevance.

NLM_BNU_ENS_NLI

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLM_BNU_ENS_NLI
  • Participant: NLM
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: 3c6be3cce28c20ebb1ee90d9fcf3efd6
  • Run description: Based on a sentence-level index of the CCN Covid collection: Top-10000 documents retrieved with a BM25 based model and an NGram boost, documents scored incrementally with each relevant sentence, then re-ranked with an ensemble of models (T5, BERT-base, BERT-large). Each question from the topic descriptions was then transformed into an affirmative sentence automatically using syntactic rules, and a Natural Language Inference model (RoBERTa) trained on MNLI was used to infer whether the most relevant sentence from the documents had an entailment/neutral/contradiction relation with the transformed topic. The final ranking was performed by putting the documents validating the reference answer first (in order of relevance), then the remaining documents (still ranked by relevance).
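
A minimal sketch of the sentence-level NLI step, using the public roberta-large-mnli checkpoint as a stand-in for the MNLI-trained model; the most relevant sentence serves as the premise and the affirmative form of the topic as the hypothesis:

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
    model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli").eval()

    @torch.no_grad()
    def nli_label(premise, hypothesis):
        """Return the NLI label (CONTRADICTION / NEUTRAL / ENTAILMENT) for the pair."""
        inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
        probs = torch.softmax(model(**inputs).logits[0], dim=-1)
        return model.config.id2label[int(probs.argmax())]

    claim = "Vitamin D cures COVID-19."   # affirmative form of the topic question
    sentence = "Researchers found no evidence that vitamin D cures COVID-19."
    print(nli_label(sentence, claim))     # expected: CONTRADICTION

Documents whose best sentence entails the reference answer can then be moved to the top (as in this adhoc run), while the recall variant above instead promotes contradicting documents.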

NLM_BNU_T5_CTM

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLM_BNU_T5_CTM
  • Participant: NLM
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: 4aad3c4c65630fc485191a88806b4592
  • Run description: Based on a sentence-level index of the CCN Covid collection: Top-10000 documents retrieved with a BM25 based model and an NGram boost, documents scored incrementally with each relevant sentence, then re-ranked with a T5 relevance-based ranking model. Only the most relevant sentence from the IR-based search was then classified as containing "false" or "true" claims using both an external dataset and an in-house dataset (with the same final T5 model used in NLM_CTM_R1 and NLM_CTM_R2).

NLM_CTM_R1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLM_CTM_R1
  • Participant: NLM
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: c4386e21648bb669e73fc8d4f33d3db0
  • Run description: Top-10000 documents retrieved with a BM25 model, re-ranked with a T5 relevance-based ranking model. Each sentence in the documents was then classified as containing "false" or "true" claims using both an external dataset and an in-house dataset. A voting method was applied to turn the sentence classifications into a document-level classification.
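
A minimal sketch of turning per-sentence claim labels into a document-level label by majority vote; the tie-breaking rule is an assumption, not taken from the run description:

    from collections import Counter

    def document_label(sentence_labels):
        """Majority vote over per-sentence 'true'/'false' claim labels; ties fall
        back to 'true' here (the run's actual tie-breaking rule is not stated)."""
        counts = Counter(sentence_labels)
        return "false" if counts["false"] > counts["true"] else "true"

    print(document_label(["true", "false", "false"]))   # -> false
    print(document_label(["true", "true", "false"]))    # -> true

For NLM_CTM_R1_C below, only documents whose voted label is "false" are then kept for the Total Recall task.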

NLM_CTM_R1_C

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLM_CTM_R1_C
  • Participant: NLM
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: recall
  • MD5: f0963739cdf3c6b4230d62ee6d5f1020
  • Run description: Top-10000 documents retrieved with a BM25 model, re-ranked with a T5 relevance-based re-ranking model. Each sentence in the documents was then classified as containing "false" or "true" claims using both an external dataset and an in-house dataset. A voting method was applied to turn the sentence classifications into a document-level classification. Only documents classified as "false" were selected for the Total Recall task.

NLM_CTM_R2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLM_CTM_R2
  • Participant: NLM
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: 913c3575f5e713dedb508ef0231dbfea
  • Run description: Top-1000 documents retrieved with a result-fusion/ensemble IR method, re-ranked with a T5 relevance-based ranking model. Each sentence in the documents was then classified as containing "false" or "true" claims using both an external dataset and an in-house dataset. A voting method was applied to turn the sentence classifications into a document-level classification.

NLM_E3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLM_E3
  • Participant: NLM
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: a5cdaa7c2a5727b7a2732193c1fa03c7
  • Run description: End-to-end rank-based ensemble of 4 Adhoc runs based on different methods.

NLM_E4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLM_E4
  • Participant: NLM
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: b7431f35036b238575bc3b71bd1308ef
  • Run description: End-to-end rank-based ensemble of 5 Adhoc runs based on different methods.

NLM_TME

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLM_TME
  • Participant: NLM
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: 66320a233c1240a48f5fc243a0ca6ea8
  • Run description: Ensemble relevance method with 8 retrieval approaches.

NLM_TME_GH

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLM_TME_GH
  • Participant: NLM
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: b84f713332322813ea186a1bc9950ca4
  • Run description: Reranking with Gaussian HITS (hub/authority) scores and PageRank scores based on the TME retrieval ensemble and an incrementally increased influence of the source score for lower ranks.

NLM_TME_NLIR

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLM_TME_NLIR
  • Participant: NLM
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: 84b39ce84e332be6c268c0083bacd804
  • Run description: Ensemble retrieval method combining 4 conventional IR models and 4 deep-learning based re-ranking models. Results were filtered to keep only documents with sentences mentioning both the subject and object entities in the questions. The most relevant sentence of each document was used to detect entailment, neutrality, or contradiction with the affirmative form of the topic. The results were then re-ranked according to their contradiction/entailment scores.

NLM_TME_NLIR_C

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLM_TME_NLIR_C
  • Participant: NLM
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: recall
  • MD5: 43827c84d9dc638543fe26088d791a33
  • Run description: Ensemble retrieval method combining 4 conventional IR models, and 4 deep-learning based re-ranking models. Results were filtered to keep only documents with sentences mentioning both the subject and object entities of the questions. The most relevant sentence of each document was used to detect entailment, neutrality, or contradiction with the affirmative form of the topic. The results were then re-ranked according to their contradiction with the reference answer.

RSL_BM25

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: RSL_BM25
  • Participant: RealSakaiLab
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: 67ab8afa346525019fcdaa586c09db91
  • Run description: BM25 baseline

RSL_BM25LC

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: RSL_BM25LC
  • Participant: RealSakaiLab
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: adhoc
  • MD5: 3e42cceb4fc8e0e4591c620f76f726b1
  • Run description: In this run, we used a language identification model to filter out documents that are not in English and then trained a news category classifier to identify documents in relevant categories. The probability output of the classifier is used to boost the baseline BM25 score, and the documents are then reranked by the boosted scores.

RSL_BM25LM

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: RSL_BM25LM
  • Participant: RealSakaiLab
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: ab99bbdda48abc4a7de6709ef8ce29c0
  • Run description: Filter out documents that are not in English, calculate a majority score using the similarity between retrieved documents to model their credibility, boost the BM25 score with the majority score, and then rerank.

RSL_BM25LMC

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: RSL_BM25LMC
  • Participant: RealSakaiLab
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: fc4e334575479ed83d8424e6c9d4f4fa
  • Run description: A combined run of RSL_BM25LC and RSL_BM25LM: score(RSL_BM25LMC) = score(RSL_BM25LC) + score(RSL_BM25LM).

run1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: run1
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: recall
  • MD5: 537611a90936dfd30d7c9272a80927e2
  • Run description: Run1 - BM25_description + re-ranking using average of (rel, cred-1, miss-info-1) zscore of the top 200 per query. Credibility scores were generated using a LogReg model pre-trained on the Microsoft Credibility Dataset. Misinformation scores were computed based on a stance detection model.

run10

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: run10
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: recall
  • MD5: ab046783a905e1c837afdce834e96e5f
  • Run description: Run10 - RM3_description + re-ranking the top 300 per query using the Euclidean distance between [rel, rev_cred, rev_miss-info] and [maxRel, minCred, minMissinfo], where minCred is the minimum credibility score of the topic

run11

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: run11
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: recall
  • MD5: 317e0f4a623c7741a5e7057bf465c3ae
  • Run description: Run11 - Reciprocal rank fusion of all runs.

run2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: run2
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: recall
  • MD5: 8150a443ca93441d130ae5767568577e
  • Run description: Run2 - RM3_description + re-ranking using average of (rev_cred, rev_miss-info) zscore of the top 100 per query. Credibility scores were generated using a LogReg model pre-trained on the Microsoft Credibility Dataset. Misinformation scores were computed based on a stance detection model.

run3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: run3
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: recall
  • MD5: ecfc39703ee430f28e23169578560a64
  • Run description: Run3 - RM3_description + re-ranking using average of (rev_rel, rev_cred, rev_miss-info) zscore of the top 100 per query. Credibility scores were generated using a LogReg model pre-trained on the Microsoft Credibility Dataset. Misinformation scores were computed based on a stance detection model.

run4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: run4
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: recall
  • MD5: 3bc6c7fde7d740830b4b7236e1002297
  • Run description: Run4 - RM3_description + re-ranking using only rev_cred zscore of the top N per query. Credibility scores were generated using a LogReg model pre-trained on the Microsoft Credibility Dataset.

run5

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: run5
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: recall
  • MD5: 84c7805cad611a979e0e1c799615d570
  • Run description: Run5 - RM3_description + re-ranking using only rev_missinfo zscore of the top 100 per query. Misinformation scores were computed using a stance detection model.

run6

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: run6
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: recall
  • MD5: 98ce539b04bbd85db165e804bc4226df
  • Run description: Run6 - a reciprocal rank fusion of run4 and run5.

run7

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: run7
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: recall
  • MD5: f00c2d8d933d3408faf61346c9723290
  • Run description: Run7 - RM3_description + re-ranking the top N per query using the Euclidean distance between [rel, rev_cred, rev_miss-info] and [maxRel, minCred, minMissinfo], where minCred is the minimum credibility score of the topic

run8

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: run8
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: recall
  • MD5: 6d5eecb55b0892ef9ee9a32571d0c04e
  • Run description: Run8 - RM3_description + re-ranking the top N per query using the Chebyshev distance between [rel, cred, miss-info] and [maxRel, maxCred, maxMissinfo], where maxRel is the maximum relevance score of the topic

run9

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: run9
  • Participant: KU
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: auto
  • Task: recall
  • MD5: 164044db2bd2717a389b1ccfa7f1d125
  • Run description: Run9 - RM3_description + re-ranking using only rev_cred zscore of the top 200 per query

THUIRRuleBased

Results | Participants | Input | Summary | Appendix

  • Run ID: THUIRRuleBased
  • Participant: THUIR
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/1/2020
  • Type: manual
  • Task: recall
  • MD5: 9cedb7048cb4c92fd50df383bc1c6e70
  • Run description: We first used the BM25 algorithm to recall documents containing "COVID" and various treatments. After that, we manually constructed queries from the effect term and answer fields for further recall.

vohbm25

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: vohbm25
  • Participant: vohcolab
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: adhoc
  • MD5: 031d3eac0239f0b7956bbcc081ad2172
  • Run description: BM25

vohbm25rm3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: vohbm25rm3
  • Participant: vohcolab
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: auto
  • Task: recall
  • MD5: 4e3124c9e19d9a32c541c75c7d152012
  • Run description: BM25 + RM3

vohcolabEvSim

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: vohcolabEvSim
  • Participant: vohcolab
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: manual
  • Task: adhoc
  • MD5: e83bb86fa599f90ecf5da7424542b688
  • Run description: An initial retrieval using BM25; then, for each topic: 1. create a corpus with the retrieved docs + evidence_text (obtained by crawling the evidence URL); 2. create a TF-IDF representation normalized with L1; 3. smooth the TF-IDF with collection statistics; 4. compare all documents in the corpus with the evidence_text through KL divergence between the TF-IDF document vectors; 5. rerank based on similarity.
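
A minimal sketch of the evidence-similarity re-ranking, assuming the evidence text has already been crawled; it follows the listed steps with scikit-learn TF-IDF, a simple linear smoothing with collection statistics, and one-directional KL divergence (the recall variants below instead use symmetric KL and rank the most divergent documents first):

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    def kl(p, q, eps=1e-12):
        """KL(p || q) over smoothed, re-normalised distributions."""
        p, q = p + eps, q + eps
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q)))

    def rerank_by_evidence(docs, evidence_text, smoothing=0.1):
        """Rank documents by KL divergence between their L1-normalised TF-IDF
        vectors (smoothed with collection statistics) and the evidence vector;
        the most similar documents (lowest divergence) come first."""
        vec = TfidfVectorizer(norm="l1")
        X = vec.fit_transform(docs + [evidence_text]).toarray()
        collection = X.mean(axis=0)                        # collection statistics
        X = (1 - smoothing) * X + smoothing * collection   # smooth every vector
        evidence = X[-1]
        divs = [kl(X[i], evidence) for i in range(len(docs))]
        return [int(i) for i in np.argsort(divs)]          # indices into docs, best first

    docs = ["vitamin d has no proven effect on covid-19 outcomes",
            "stock markets rallied on tuesday"]
    evidence = "clinical studies show vitamin d does not cure covid-19"
    print(rerank_by_evidence(docs, evidence))              # e.g. [0, 1]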

vohEvDiv_colm

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: vohEvDiv_colm
  • Participant: vohcolab
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: manual
  • Task: recall
  • MD5: 98c74dc79e23126ac7b46e759b17eccc
  • Run description: Initial retrieval of 1k documents with BM25 and RM3; then, for each topic: 1. create a corpus of retrieved documents + evidence text (obtained by crawling the URL); 2. create a TF representation; 3. compute symmetric KL divergence, with the collection as background model, between each document and the evidence's mean text statistics (word frequencies as a probabilistic model); 4. rerank starting from the farthest away from the evidence.

vohEvDivTfidf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: vohEvDivTfidf
  • Participant: vohcolab
  • Track: Health Misinformation
  • Year: 2020
  • Submission: 9/2/2020
  • Type: manual
  • Task: recall
  • MD5: 587966fcdbe39a7d91bb61f62a168cfd
  • Run description: Return an initial 1k documents with BM25+RM3; then, for each topic: 1. create a corpus of retrieved documents + evidence_text (extracted from the evidence field's URL); 2. create a TF-IDF representation; 3. compare the divergence between docs and evidence_text through symmetric KL divergence with a smoothed background model (collection); 4. rerank documents in descending order of divergence (highest divergence first).