
Runs - Clinical Decision Support 2016

AutoDes

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: AutoDes
  • Participant: FDUDMIIP
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/26/2016
  • Type: automatic
  • MD5: abd0de9ffd4080670096a4058e32b44f
  • Run description: We used a language model based on Indri.

AutoNote

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: AutoNote
  • Participant: FDUDMIIP
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/26/2016
  • Type: automatic
  • MD5: 3159817cb469bbf73bad82aa2b8bb4c4
  • Run description: We used a language model based on Indri.

AutoSummary

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: AutoSummary
  • Participant: FDUDMIIP
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/26/2016
  • Type: automatic
  • MD5: 7f42f9ff5f7e370984196265e89402e9
  • Run description: We used a language model based on Indri.

AutoSummary1

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: AutoSummary1
  • Participant: FDUDMIIP
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/26/2016
  • Type: automatic
  • MD5: 376e1e164bb1a7157e1d384ecaef1639
  • Run description: We used a language model based on Indri.

cbnun1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: cbnun1
  • Participant: cbnu
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 1e268c7e50d6f250dfc67619e104e704
  • Run description: pseudo relevance feedback based on word embeddings and disease-centered clusters (topic type: notes)
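A rough sketch of what embedding-based pseudo relevance feedback can look like (not the cbnu team's actual implementation; the toy vectors and term names below are made up): the query is expanded with the vocabulary terms closest to it in embedding space.

```python
import math

# Toy illustration of query expansion via word embeddings.
# The 2-d vectors here are hypothetical; real systems use
# word2vec-style vectors trained on a medical corpus.

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def expand_query(query_terms, embeddings, k=2):
    """Return the k non-query terms most similar to any query term."""
    scores = {}
    for term, vec in embeddings.items():
        if term in query_terms:
            continue
        sims = [cosine(vec, embeddings[q]) for q in query_terms if q in embeddings]
        if sims:
            scores[term] = max(sims)
    return sorted(scores, key=scores.get, reverse=True)[:k]

emb = {
    "fever":   [1.0, 0.0],
    "pyrexia": [0.9, 0.1],
    "cough":   [0.8, 0.3],
    "car":     [0.0, 1.0],
}
```

With these toy vectors, `expand_query(["fever"], emb, k=1)` returns `["pyrexia"]` — the nearest non-query term.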

cbnus1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: cbnus1
  • Participant: cbnu
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 92403e672b069f43ff561757310ad7c6
  • Run description: pseudo relevance feedback using word embeddings and disease-centered clusters (topic summary type)

cbnus2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: cbnus2
  • Participant: cbnu
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: a100ad484f4bc7a6057301903818dbdf
  • Run description: re-ranking based on word embeddings and disease-centered clusters (topic summary type)

CCNUDESR2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: CCNUDESR2
  • Participant: CCNU2016TREC
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/25/2016
  • Type: automatic
  • MD5: 7ae671ea59f5c1ab2b1dc915c7b2585e
  • Run description: Modify the IDF of BM25 in the first step, and then use pseudo relevance feedback.
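For context, the BM25 components this description refers to modifying look roughly like the following (a textbook formulation, not the CCNU team's modified variant):

```python
import math

def bm25_idf(df, n_docs):
    """Standard BM25 IDF for a term appearing in df of n_docs documents."""
    return math.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)

def bm25_term_score(tf, df, n_docs, doc_len, avg_doc_len, k1=1.2, b=0.75):
    """BM25 contribution of one query term to one document's score."""
    tf_norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return bm25_idf(df, n_docs) * tf_norm
```

Rare terms get a higher IDF and the tf component saturates; modifying either factor, as the run description suggests, changes the ranking.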

CCNUNOTER1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: CCNUNOTER1
  • Participant: CCNU2016TREC
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/25/2016
  • Type: automatic
  • MD5: a94677907f303463696457795fd0804d
  • Run description: Modify the IDF of BM25 in the first step, and then use pseudo relevance feedback.

CCNUNOTER2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: CCNUNOTER2
  • Participant: CCNU2016TREC
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/25/2016
  • Type: automatic
  • MD5: 95dddd38f12851dd88cd9a61bb1cca99
  • Run description: Modify the TF of MATF in the first step, and then use pseudo relevance feedback.

CCNUNOTER3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: CCNUNOTER3
  • Participant: CCNU2016TREC
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/25/2016
  • Type: automatic
  • MD5: 620ac76cb87f822925185bc272c88529
  • Run description: Modify the TF of MATF and the IDF of BM25 in the first step, combine them, and then use pseudo relevance feedback.

CCNUSUMR1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: CCNUSUMR1
  • Participant: CCNU2016TREC
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/25/2016
  • Type: automatic
  • MD5: 959052d0512b233b6cee218928cd5128
  • Run description: First, modify BM25 and MATF separately; then linearly combine them.
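A linear combination of two retrieval models' scores, as described, can be sketched like this (the min-max normalisation and example weights are illustrative assumptions, not the CCNU team's actual settings):

```python
def fuse_scores(runs, weights):
    """Linearly interpolate per-document scores from several runs,
    min-max normalising each run first so the scales are comparable."""
    fused = {}
    for run, w in zip(runs, weights):
        lo, hi = min(run.values()), max(run.values())
        for doc, score in run.items():
            norm = (score - lo) / (hi - lo) if hi > lo else 0.0
            fused[doc] = fused.get(doc, 0.0) + w * norm
    return sorted(fused, key=fused.get, reverse=True)

# Hypothetical per-document scores from two models.
bm25_run = {"doc1": 10.0, "doc2": 5.0}
matf_run = {"doc1": 0.1, "doc2": 0.9}
```

With weights (0.7, 0.3) the fused ranking favours doc1; flipping the weights favours doc2.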

CSIROdSum

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: CSIROdSum
  • Participant: CSIROmed
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: ad7584872955a583394bb5369f98ee1b
  • Run description: Description and summary fields were fed into Solr. Abstracts and titles were boosted over paper bodies, and matching phrases were boosted in ranking. MetaMap-identified concepts from titles and abstracts were also indexed and searched.

CSIROmeta

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: CSIROmeta
  • Participant: CSIROmed
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: decaf51db60c9db5457d28e2dc54fc28
  • Run description: MetaMap was run over all three fields, and the keywords were searched in Solr. Abstracts and titles were boosted over paper bodies, and matching phrases were boosted in ranking. MetaMap-identified concepts from titles and abstracts were also indexed and searched.

CSIROmnul

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: CSIROmnul
  • Participant: CSIROmed
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: manual
  • MD5: 8dbca157b6ffcdec6e06840a3ea982e1
  • Run description: Used a noun-chunk extractor to extract key phrases from each topic; the extracted key phrases were checked against a dictionary of medical abbreviations and expanded if found. Synonyms of disease names were added from the Mayo Clinic or Wikipedia entry for that disease (if applicable).

CSIROnote

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: CSIROnote
  • Participant: CSIROmed
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 7acfaacdc26db48cbaf1a30f28d8f433
  • Run description: Supplied notes were sanitised and fed directly into the Solr search engine as queries. Abstracts and titles were boosted over paper bodies, and matching phrases were boosted in ranking. MetaMap-identified concepts from titles and abstracts were also indexed and searched.

CSIROsumm

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: CSIROsumm
  • Participant: CSIROmed
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: feded3232bb507aee2ca32bae2535820
  • Run description: MetaMap was run over summary fields. The original summary and keywords from the MetaMap output were fed into Solr. Abstracts and titles were boosted over paper bodies, and matching phrases were boosted in ranking. MetaMap-identified concepts from titles and abstracts were also indexed and searched.

d2vCombIrit

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: d2vCombIrit
  • Participant: IRIT
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/26/2016
  • Type: automatic
  • MD5: cccfe8b21b86508cd66f2dc3d96ca602
  • Run description: Ranking with a neural network that takes into account the text and the descriptions of concepts extracted from the text (using MeSH). Scores are combined with BM25 ranking.

d2vDescIrit

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: d2vDescIrit
  • Participant: IRIT
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/26/2016
  • Type: automatic
  • MD5: c87571711d127db5439cc374c94d7a6c
  • Run description: Ranking with a neural network that takes into account the text and the descriptions of concepts extracted from the text (using MeSH).

dacmmf

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: dacmmf
  • Participant: HAUT
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 61bb0b50814c56eefddfcc154a377204
  • Run description: Abstract-based. Medical Text Indexer for query expansion; Article classification; Multi-model fusion using description.

DAdescTM

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: DAdescTM
  • Participant: DA_IICT
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: a23c3a60425ac3d8863a9c7459f8b39b
  • Run description: Run using description as query and topics from topic modeling for pseudo relevance feedback.

DAnote

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: DAnote
  • Participant: DA_IICT
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 79fe9bbe978161d11f9d06343a2a1044
  • Run description: Run using note as query and In_expC2 retrieval model.

DAnoteRoc

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: DAnoteRoc
  • Participant: DA_IICT
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 214757e25830adbd11e4464e77a73540
  • Run description: Run using note as query and Rocchio for pseudo relevance feedback.
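Rocchio pseudo relevance feedback, as named here, moves the query vector toward the centroid of the top-ranked documents. A minimal sketch (sparse term-weight dicts and the alpha/beta values are illustrative, not the DA_IICT configuration):

```python
def rocchio(query_vec, feedback_docs, alpha=1.0, beta=0.75):
    """Rocchio update: new_q = alpha * q + beta * centroid(feedback docs).
    Vectors are sparse dicts mapping term -> weight."""
    terms = set(query_vec) | {t for d in feedback_docs for t in d}
    n = len(feedback_docs)
    centroid = {t: sum(d.get(t, 0.0) for d in feedback_docs) / n for t in terms}
    return {t: alpha * query_vec.get(t, 0.0) + beta * centroid[t] for t in terms}
```

Terms frequent in the feedback documents (e.g. a co-occurring diagnosis) enter the expanded query with positive weight even if absent from the original query.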

DAnoteTM

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: DAnoteTM
  • Participant: DA_IICT
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 522ccada91472a638ca127ca9765f2ab
  • Run description: Run using note as query and topics from topic modeling for pseudo relevance feedback.

DAsummTM

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: DAsummTM
  • Participant: DA_IICT
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 61815e1d9f7e3bdbda4cae1e504c3381
  • Run description: Run using summary as query and topics from topic modeling for pseudo relevance feedback.

DDPHBo1CM

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: DDPHBo1CM
  • Participant: IAII_PUT
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 0dabe1822699a56338f219357ec3307a
  • Run description: Document retrieval using Terrier (with DPH and Bo1) and query expansion: keywords for the topic type and MeSH for medical terms in the description.

DDPHBo1MWRe

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: DDPHBo1MWRe
  • Participant: IAII_PUT
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: b43950133e804301103c3b74ca893735
  • Run description: Hybrid mode, with different methods used depending on score. For the most relevant documents, DPH with Bo1 using the description; then query expansions were gradually added (MeSH and topic-type keywords, then word2vec expansion); finally the description text was removed, leaving only the query expansion.

descUIOWAS2

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: descUIOWAS2
  • Participant: UIowaS
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 121dd1bb5ed885c2f242983beb553191
  • Run description: We start by processing topics2016.xml and extracting the 30 topics. Given the type (note, summary, or description) of topic, we use only that text category as input in a particular run. We use MetaMap to extract all the UMLS concepts in the text and use a subset of these extracted concepts to create the query for each topic. We also extract the age and gender of the patient from the text and use them in the query. Lastly, we use Indri to run the query on the pre-indexed PMC dataset to get the final ranked list.
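The last step of a pipeline like the one described (turning selected concepts plus demographics into an Indri query) can be sketched as follows; `#combine` is real Indri query-language syntax, but the term selection and formatting here are simplified assumptions, not the UIowaS implementation:

```python
def build_indri_query(concepts, demographic_terms):
    """Assemble an Indri #combine query from UMLS concept strings and
    demographic terms (e.g. age group, gender) extracted from a topic."""
    terms = list(concepts) + list(demographic_terms)
    return "#combine( " + " ".join(terms) + " )"
```

For example, `build_indri_query(["heart failure", "dyspnea"], ["elderly", "female"])` yields `#combine( heart failure dyspnea elderly female )`, which Indri scores as a weighted combination of the terms' beliefs.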

DUTHaaRPF

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: DUTHaaRPF
  • Participant: DUTH
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: ba545db62ae96326413042cd3edda289
  • Run description: Using UMLS MetaMap mappings, the complete set of atoms for the CUIs of interest was extracted and added to the SUMMARY to create a query. RPF with 3 docs / 10 terms was used (Indri 5.8).

DUTHmaRPF

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: DUTHmaRPF
  • Participant: DUTH
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 9422e6b4db6d64b2dc53e05a8b34d000
  • Run description: Using UMLS MetaMap mappings, multiple atoms for the CUIs of interest were extracted and added to the SUMMARY to create a query. RPF with 3 docs / 10 terms was used (Indri 5.8).

DUTHsaRPF

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: DUTHsaRPF
  • Participant: DUTH
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: c0c3d731a11746b918e35799b0296b4c
  • Run description: Using UMLS MetaMap mappings, the first atom for each CUI of interest was extracted and added to the SUMMARY to create a query. RPF with 3 docs / 10 terms was used (Indri 5.8).

ECNUmanual

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ECNUmanual
  • Participant: ECNU
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: manual
  • MD5: bbdc02c6a1a7a8fbc89a063dcdad033b
  • Run description: We invited a medical PhD student to provide the diagnosis for each topic using only the content of the note. We retrieved using Terrier with the PL2 model.
  • Code: https://github.com/heyunh2015/cds2016.git

ECNUrun1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ECNUrun1
  • Participant: ECNU
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: f226b53302c3edee3b313d47f1316b73
  • Run description: Put the original summary content into the Google engine, then used the words that appear both in the top-10 retrieval results and in the MeSH dictionary for query expansion.
  • Code: https://github.com/heyunh2015/cds2016.git

ECNUrun3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ECNUrun3
  • Participant: ECNU
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: feb7541c65d4b041caad9440fdbfa665
  • Run description: Query expansion with pseudo relevance feedback and terms extracted from the Google engine. Combines the results from a language model, BM25, and a log-logistic DFR model.
  • Code: https://github.com/heyunh2015/cds2016.git

ECNUrun4

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ECNUrun4
  • Participant: ECNU
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: f6aa61738c1640d78f850fabcaf606eb
  • Run description: Used the KDBA system to find the query terms that appear in DBpedia. We then searched those words in MeSH, and the results were processed and added to the query as expansion terms. Finally, we retrieved with Terrier.
  • Code: https://github.com/heyunh2015/cds2016.git

ECNUrun5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ECNUrun5
  • Participant: ECNU
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 97e9ed406dca089d7e2f9307ad5492c3
  • Run description: Put the original summary content into the Google engine, then used the words that appear both in the top-10 retrieval results and in the MeSH dictionary for query expansion.
  • Code: https://github.com/heyunh2015/cds2016.git

ETHDescRR

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ETHDescRR
  • Participant: ETH
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 769ebe24c6a8d9f471e3d34cb440ce1f
  • Run description: We use a range of query expansion and re-ranking features based on MeSH.

ETHNote

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ETHNote
  • Participant: ETH
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 1179e30af96b9ddaf1e110921b80177c
  • Run description: We use a range of query expansion and re-ranking features based on MeSH.

ETHNoteRR

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ETHNoteRR
  • Participant: ETH
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: c5377e6711975f56074353c58b427e4b
  • Run description: We use a range of query expansion and re-ranking features based on MeSH.

ETHSumm

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ETHSumm
  • Participant: ETH
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 5f7797be61fb4826df9528696642b77f
  • Run description: We use a range of query expansion and re-ranking features based on MeSH.

ETHSummRR

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ETHSummRR
  • Participant: ETH
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 6f0364939a6a415b5a5988207777dc82
  • Run description: We use a range of query expansion and re-ranking features based on MeSH.

lsbn

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: lsbn
  • Participant: hany-miner
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: e6b047cfebd8e528e4d0a06ed36c9411
  • Run description: Key features of this run were bigrams, BM25, and a child-vs-adult filter. No SVM feature was added in this run.
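A child-vs-adult filter of the kind these runs mention can be approximated with simple pattern matching. This sketch (the age cut-off and keyword list are assumptions, not the hany-miner implementation) classifies a text as "child" or "adult":

```python
import re

# Hypothetical keyword list for detecting pediatric topics.
CHILD_WORDS = re.compile(r"\b(infant|newborn|child|boy|girl|pediatric|paediatric)\b", re.I)

def age_group(text):
    """Classify a topic or document snippet as 'child' or 'adult'.
    Falls back to 'adult' when no age cue is found."""
    m = re.search(r"(\d+)[- ]year[- ]old", text)
    if m:
        # Assumed cut-off: under 18 counts as a child.
        return "child" if int(m.group(1)) < 18 else "adult"
    if CHILD_WORDS.search(text):
        return "child"
    return "adult"
```

Documents whose group disagrees with the topic's group would then be filtered out of the ranked list.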

lssbd

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: lssbd
  • Participant: hany-miner
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 78fb4b63f35f53b0a93a1cc3d8bf98f7
  • Run description: Key features of this run were bigrams, BM25, SVM, and a child-vs-adult filter. First, an SVM (LIBSVM) is used to classify the three topic types; then Lucene is used to search documents. Synonyms from the Metathesaurus were added. After retrieving relevant documents, an age-range filter determines whether the topic and documents relate to a child or an adult and filters out those that do not match.

lssbn

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: lssbn
  • Participant: hany-miner
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 03f0800e33c9ea407171c43be1a7eb09
  • Run description: Key features of this run were bigrams, BM25, SVM, and a child-vs-adult filter. First, an SVM (LIBSVM) is used to classify the three topic types; then Lucene is used to search documents. Synonyms from the Metathesaurus were added. After retrieving relevant documents, an age-range filter determines whether the topic and documents relate to a child or an adult and filters out those that do not match.

lssbs

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: lssbs
  • Participant: hany-miner
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 18a71748c9a35f688f6e3cc89d35bfbc
  • Run description: Key features of this run were bigrams, BM25, SVM, and a child-vs-adult filter. First, an SVM (LIBSVM) is used to classify the three topic types; then Lucene is used to search documents. Synonyms from the Metathesaurus were added. After retrieving relevant documents, an age-range filter determines whether the topic and documents relate to a child or an adult and filters out those that do not match.

LucBase

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: LucBase
  • Participant: SCIAICLTeam
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/22/2016
  • Type: automatic
  • MD5: 93643863bf3348fa94623f6b088c6a4b
  • Run description: This run used Lucene to index the corpus. We passed the topic summary as the query and returned the top 20 documents for each topic.

LucNote

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: LucNote
  • Participant: SCIAICLTeam
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/22/2016
  • Type: automatic
  • MD5: 2f85f67576580067b976529525d43e3a
  • Run description: This run used Lucene to find documents for each topic. We passed in the note field of each topic as queries and returned the top 20 documents.

LucNoteFrame

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: LucNoteFrame
  • Participant: SCIAICLTeam
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/26/2016
  • Type: automatic
  • MD5: 724fce981b491e1e1d74d01efffb75b0
  • Run description: First, the corpus is searched using Lucene for the top 30 documents for each topic note. These documents are then passed into our framing program, which compares elements of each document to the topics and narrows the documents into a top 10 list. The run also uses UMLS Metamap and the Stanford POS tagger to assist in document parsing.

LucWeight

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: LucWeight
  • Participant: SCIAICLTeam
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/26/2016
  • Type: automatic
  • MD5: afed83af2876331768e67a176244b993
  • Run description: This run combined Lucene with machine learning. The weights of symptoms found through machine learning were used to help boost certain symptoms in the Lucene queries.

LucWghtFrame

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: LucWghtFrame
  • Participant: SCIAICLTeam
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/25/2016
  • Type: automatic
  • MD5: a187d83add609187019039916cea4a67
  • Run description: First, the corpus is searched using Lucene for the top 30 documents for each topic summary. These documents are then passed into our framing program, which compares elements of each document to the topics and narrows the documents into a top-10 list. The run also uses UMLS MetaMap and the Stanford POS tagger to assist in document parsing.

ManualRun

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ManualRun
  • Participant: nch_risi
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: manual
  • MD5: 45edaaa42c16535bf0e5d6afcfbe495d
  • Run description: A group of human experts worked together to come up with keywords and used Google to search. There is a discrepancy between our Googled list of PMCs and the provided PMCIDs, but we include only the provided PMCIDs. This eliminates about 50% of relevant entries.

mayoad

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: mayoad
  • Participant: MayoNLPTeam
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 0584a4c1bbeb83fb58b19d73c7d5867f
  • Run description: Using the description field in each topic, we applied an ensemble model of (1) a Part-of-Speech query term weighting model; (2) a Markov Random Field Model of extracted medical phrases; and (3) Query Expansion using MeSH terms.

mayoan

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: mayoan
  • Participant: MayoNLPTeam
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: baca97e1af7bc89fa2a666938c5b5507
  • Run description: Using the note field in each topic, we applied an ensemble model of (1) a Part-of-Speech query term weighting model; (2) a Markov Random Field Model of extracted medical phrases (by MedTagger); and (3) Query Expansion using MeSH terms.

mayoas

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: mayoas
  • Participant: MayoNLPTeam
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: acfb19e713ef24c6aa341ef16f9dd9dd
  • Run description: Using the summary field in each topic, we applied an ensemble model of (1) a Part-of-Speech query term weighting model; (2) a Markov Random Field Model of extracted medical phrases; and (3) Query Expansion using MeSH terms.

mayomd

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: mayomd
  • Participant: MayoNLPTeam
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: manual
  • MD5: b0120655be72bcb2dc69997a32c3acd1
  • Run description: Using the description field in each topic, we applied an ensemble model of (1) a Part-of-Speech query term weighting model; (2) a Markov Random Field Model of manually extracted medical phrases; and (3) Query Expansion using MeSH terms.

mayomn

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: mayomn
  • Participant: MayoNLPTeam
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: manual
  • MD5: 7ce484cb254c5b55265b3aa5c5530589
  • Run description: Using the note field in each topic, we applied an ensemble model of (1) a Part-of-Speech query term weighting model; (2) a Markov Random Field Model of manually extracted medical phrases; and (3) Query Expansion using MeSH terms.

MRKPrfNote

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: MRKPrfNote
  • Participant: MERCKKGAA
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 88cda94ce45ab409dada2f0e8ce6c7d5
  • Run description: Search with notes on title, abstract, and body.

MRKSumCln

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: MRKSumCln
  • Participant: MERCKKGAA
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 6260f8ef885554e8fc1a6f6dfed2b0a5
  • Run description: Summary search on title, abstract, and body with pseudo relevance feedback. Summaries were cleaned to include only noun phrases.

MRKUmlsSolr

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: MRKUmlsSolr
  • Participant: MERCKKGAA
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 9b15463be7b7e1398be68e2be7314fbe
  • Run description: Search using summaries on title, abstract, and body, with pseudo relevance feedback and UMLS query expansion enabled.

MrkUmlsXgb

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: MrkUmlsXgb
  • Participant: MERCKKGAA
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 3ce43374863b75d4c2a5a136c0db4cff
  • Run description: Search using summaries on title, abstract, and body, with pseudo relevance feedback, UMLS query expansion, gradient-boosting-based reranking, and Wikipedia-based word vectors enabled.

nacmmf

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: nacmmf
  • Participant: HAUT
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 00b3ff2373e422164a3c55baeb47cc4b
  • Run description: Abstract-based. Medical Text Indexer for query expansion; article classification; multi-model fusion using note.

NDPHBo1C

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: NDPHBo1C
  • Participant: IAII_PUT
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 7ffa7ef63b7344c7666e128c2af65c3b
  • Run description: Document retrieval using Terrier (with DPH and Bo1) and query expansion: keywords for the topic type.

NDPHBo1CM

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: NDPHBo1CM
  • Participant: IAII_PUT
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 6e738ac07c820b8e1263cd621feb9a48
  • Run description: Document retrieval using Terrier (with DPH and Bo1) and query expansion: keywords for the topic type and MeSH for medical terms in the note.

nkuRun1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: nkuRun1
  • Participant: NKU
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 114135a4ab958a63f5c92be75cb44487
  • Run description: We extracted the main part of the articles and built a new collection, then indexed and retrieved with the Terrier system. We combined the results retrieved by the BM25 and In_expB2 models, and expanded the topic type with some synonyms.

nkuRun2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: nkuRun2
  • Participant: NKU
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/21/2016
  • Type: automatic
  • MD5: 22553a7e8865b1ab2d7b93730cd2c04e
  • Run description: We extracted the main part of the articles and built a new collection, then indexed and retrieved with the Terrier system using the BM25 weighting model and pseudo relevance feedback. We also expanded the topic type with some synonyms.

nkuRun3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: nkuRun3
  • Participant: NKU
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/19/2016
  • Type: automatic
  • MD5: 8f56fb556b3aa0322e049c66f1111bdb
  • Run description: We extracted the main part of each article and built a new collection, then indexed and retrieved with the Terrier system using the In_expB2 weighting model and pseudo-relevance feedback.

nkuRun4

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: nkuRun4
  • Participant: NKU
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/21/2016
  • Type: automatic
  • MD5: 3f051209af7e3d54362e716c3ade5e72
  • Run description: We extracted the main part of each article and built a new collection, then indexed and retrieved with the Terrier system using the BM25 weighting model and pseudo-relevance feedback. We also expanded the topic type with synonyms.

nkuRun5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: nkuRun5
  • Participant: NKU
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/19/2016
  • Type: automatic
  • MD5: 8b73a9cf4282201ba67800c93df61eac
  • Run description: We extracted the main part of each article and built a new collection, then indexed and retrieved with the Terrier system using the In_expB2 weighting model and pseudo-relevance feedback.

NLMrun1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: NLMrun1
  • Participant: NLM_NIH
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: dd63ab6d896e0b8a6d653d4fdc5b5a52
  • Run description: NLMrun1: an automatic run that uses the summaries and combines the results of two IR models, In_expB2 and TF-IDF.

NLMrun2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: NLMrun2
  • Participant: NLM_NIH
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 29d0b2f68be1adf0247fba8b58a4582c
  • Run description: NLMrun2: uses MeSH for query expansion and BM25 for IR.

NLMrun3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: NLMrun3
  • Participant: NLM_NIH
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: aa39caf7b48b96a7b3582fc8b3b0f7e8
  • Run description: NLMrun3: uses the IR model BM25 and the MeSH terms extracted from the summaries.

NLMrun4

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: NLMrun4
  • Participant: NLM_NIH
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 051ba28fc761f8c8b63d41b873da4233
  • Run description: NLMrun4: uses MeSH terms extracted from the notes and BM25 model for IR.

NLMrun5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: NLMrun5
  • Participant: NLM_NIH
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: b5fb363c135e7554a199efe385fc715c
  • Run description: NLMrun5: uses MeSH terms extracted from the notes and combines two IR models, TF-IDF and In_expB2.

NoteES

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: NoteES
  • Participant: nch_risi
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: f29d0108abd6c27e3313184e5431a22a
  • Run description: UMLS used to provide keywords; Elasticsearch (ES) used for ranking, using the notes only. Last submission.

noteUIOWAS1

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: noteUIOWAS1
  • Participant: UIowaS
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 4f0d59ee6e71563fe296408a908f6e95
  • Run description: We start by processing topics2016.xml and extracting the 30 topics. Given the type (note, summary, or description) of a topic, we use only that text category as input in a particular run. We use MetaMap to extract all the UMLS concepts in the text and use a subset of these extracted concepts to create the query for each topic. We also extract the age and gender of the patient from the text and use them in the query. Lastly, we use Indri to run the query on the pre-indexed PMC dataset to get the final ranked list.
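
The age/gender extraction step described above can be sketched with simple regular expressions. The patterns and function below are illustrative assumptions, not the UIowaS team's actual rules:

```python
import re

def extract_age_gender(text):
    """Hypothetical extraction of patient age and gender from topic text."""
    age = None
    m = re.search(r"(\d{1,3})[- ]?(?:year[- ]?old|y/?o)", text, re.IGNORECASE)
    if m:
        age = int(m.group(1))
    gender = None
    if re.search(r"\b(male|man|boy)\b", text, re.IGNORECASE):
        gender = "male"
    if re.search(r"\b(female|woman|girl)\b", text, re.IGNORECASE):
        gender = "female"
    return age, gender

print(extract_age_gender("A 78-year-old male presents with shortness of breath."))
```

The extracted attributes would then be appended to the Indri query alongside the selected UMLS concepts.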

prna1sum

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: prna1sum
  • Participant: prna
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 309f9bc29be3b063d5cfde54fab1a43f
  • Run description: We used an automated, multi-step method to extract relevant biomedical articles for each given topic. We performed clinical concept extraction with ontology mapping to identify important IDF-weighted topical keywords from the given topic summaries, which were used to extract relevant diagnosis, test, and treatment concepts from Wikipedia clinical-medicine category articles embedded in a knowledge-graph architecture. Ultimately, the Wiki concepts related to relevant diagnoses, tests, and treatments were used to map pertinent biomedical articles, which were further filtered by named-entity and demographic information and ordered by publication date and importance relative to the extracted Wiki keywords.

prna2desc

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: prna2desc
  • Participant: prna
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 96c6f62ff161e664dbdebb67dd3f75e8
  • Run description: We used an automated, multi-step method to extract relevant biomedical articles for each given topic. We performed clinical concept extraction with ontology mapping to identify important IDF-weighted topical keywords from the given topic descriptions, which were used to extract relevant diagnosis concepts from Wikipedia clinical-medicine category articles embedded in a knowledge-graph architecture. Ultimately, the Wiki concepts related to relevant diagnoses, tests, and treatments were used to map pertinent biomedical articles, which were further filtered by named-entity and demographic information and ordered by publication date and importance relative to the extracted Wiki keywords.

prna3note

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: prna3note
  • Participant: prna
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 2e6566db9beb5152980ec61a1600ede4
  • Run description: We used an automated, multi-step method to extract relevant biomedical articles for each given topic. We performed clinical concept extraction with ontology mapping to identify important IDF-weighted topical keywords from the given topic notes, which were used to extract relevant diagnosis concepts from Wikipedia clinical-medicine category articles. Ultimately, the Wiki concepts related to relevant diagnoses, tests, and treatments were used to map pertinent biomedical articles, which were further filtered by named-entity and demographic information and ordered by publication date and importance relative to the extracted Wiki keywords.

prna4note

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: prna4note
  • Participant: prna
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 6a0455caa94523f547fa22d50a85ff6b
  • Run description: We used an automated, multi-step method to extract relevant biomedical articles for each given topic. We performed clinical concept extraction with ontology mapping to identify important IDF-weighted topical keywords from the given topic notes, which were used to extract relevant diagnosis concepts from Wikipedia clinical-medicine category articles embedded in a knowledge-graph architecture. Ultimately, the Wiki concepts related to relevant diagnoses, tests, and treatments were used to map pertinent biomedical articles, which were further filtered by named-entity and demographic information and ordered by publication date and importance relative to the extracted Wiki keywords.

prna5note

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: prna5note
  • Participant: prna
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 0ae2cd7f2606e39403e2e7b84fc03518
  • Run description: We used an automated, multi-step method to extract relevant biomedical articles for each given topic. For each topic note, we predicted the differential-diagnosis concepts using a deep learning model trained on a collection of MIMIC II notes and Wikipedia clinical-medicine category articles. Ultimately, the Wiki concepts related to relevant diagnoses, tests, and treatments were used to map pertinent biomedical articles, which were further filtered by named-entity and demographic information and ordered by publication date and importance relative to the extracted Wiki keywords.

RONE

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: RONE
  • Participant: hany-miner
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: ca6975a8b73227f4d6475a252885dff6
  • Run description: This run used the vector space model described in the article at http://www.r-bloggers.com/build-a-search-engine-in-20-minutes-or-less/, which is a basic search model implemented in R. A term-document frequency matrix was created first for all 1.25 million documents, and the Description text from the list of topics was used to query the vector space.
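
A term-document vector-space ranker of this kind can be sketched in a few lines (Python rather than R here; a toy illustration of TF-IDF cosine scoring, not the run's actual code):

```python
import math
from collections import Counter

def tfidf_search(docs, query):
    """Rank docs against a query by cosine over tf*idf weights (toy sketch)."""
    tokenized = [d.lower().split() for d in docs]
    n_docs = len(docs)
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))  # document frequency per term
    idf = {t: math.log(n_docs / df[t]) for t in df}

    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}

    qv = vec(query.lower().split())
    scores = []
    for i, toks in enumerate(tokenized):
        dv = vec(toks)
        dot = sum(qv[t] * dv.get(t, 0.0) for t in qv)
        norm = math.sqrt(sum(v * v for v in dv.values())) or 1.0
        scores.append((dot / norm, i))
    return sorted(scores, reverse=True)

docs = ["chest pain and dyspnea", "knee injury after fall", "acute chest pain"]
print(tfidf_search(docs, "chest pain"))
```

At the scale described (1.25 million documents), the matrix would of course be stored sparsely rather than built per query.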

run1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: run1
  • Participant: iris
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 5a8c4378015c502aa414891791239fa1
  • Run description: Extract UMLS terms, predict the diagnosis with Wikipedia, and expand the original query with the prediction.

run2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: run2
  • Participant: iris
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 2115de0c3af6893fb2af593da8ce5968
  • Run description: Extract UMLS terms, predict the diagnosis with Wikipedia, and expand the original query with the prediction.

run3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: run3
  • Participant: iris
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: ed053ef2d4340fe4dd69274e15776c9c
  • Run description: Extract UMLS terms, predict the diagnosis with Wikipedia, and expand the original query with the prediction.

run4

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: run4
  • Participant: iris
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: manual
  • MD5: dd7959713e78eb91f62f6faf21dca413
  • Run description: Extract UMLS terms, predict the diagnosis with Wikipedia, and expand the original query with the prediction.

run5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: run5
  • Participant: iris
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: manual
  • MD5: 5f20385412899d9b77a8b5a66da3ba2c
  • Run description: Extract UMLS terms, predict the diagnosis with Wikipedia, and expand the original query with the prediction.

sacmmf

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: sacmmf
  • Participant: HAUT
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 5bee7038479ef6e666340e1f033c6329
  • Run description: Abstract-based. Medical Text Indexer for query expansion; article classification; multi-model fusion. Uses the summary.

SDPHBo1NE

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: SDPHBo1NE
  • Participant: IAII_PUT
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 222a6d3c9df6a2dc9b4ca0749864c278
  • Run description: This is a baseline Terrier search with DPH Bo1 option and no additional query expansions.

SumClsRerank

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: SumClsRerank
  • Participant: nch_risi
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: ff5f4f611d1c296db4e9b8e56b44fa2d
  • Run description: UMLS for keyword extraction and expansion. Three categories of keywords: previous, present, and undecided. Elasticsearch provides an initial ranked list of candidates. A logistic regression classifier, trained on the 2014-2015 data, is used for reranking.

SumCmbRank

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: SumCmbRank
  • Participant: nch_risi
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: f6e6e8cdbbe5ec1acb6b608baee21264
  • Run description: UMLS for keyword extraction and expansion. Three categories of keywords: previous, present, and undecided. Elasticsearch provides an initial ranked list of candidates. A logistic regression classifier, trained on the 2014-2015 data, is used for reranking. The average of the ES rank and the classifier rank is used as the final rank.
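
Averaging the ranks of two systems is a standard fusion recipe; a minimal sketch (the tie-breaking and the treatment of documents missing from one list are assumptions, not nch_risi's exact code):

```python
def average_rank_fusion(run_a, run_b):
    """Combine two ranked lists of doc ids by average rank.
    Docs missing from one list get that list's worst rank + 1 (assumption)."""
    def ranks(run):
        return {doc: i + 1 for i, doc in enumerate(run)}
    ra, rb = ranks(run_a), ranks(run_b)
    docs = set(ra) | set(rb)
    fallback_a, fallback_b = len(run_a) + 1, len(run_b) + 1
    avg = {d: (ra.get(d, fallback_a) + rb.get(d, fallback_b)) / 2 for d in docs}
    # sort by average rank; break ties deterministically by doc id
    return sorted(docs, key=lambda d: (avg[d], d))

print(average_rank_fusion(["d1", "d2", "d3"], ["d2", "d1", "d3"]))
```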

SumES

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: SumES
  • Participant: nch_risi
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 83849e80c13eb32a8c64834d38ceff28
  • Run description: UMLS for keyword extraction and expansion. Three categories of keywords: previous, present, and undecided. Elasticsearch provides an initial ranked list of candidates. This run is ES ranking (BM25) only, boosting title, abstract, and body by 2, 4, and 1, and boosting previous, present, and undecided keywords by 2(1), 3, and 2.
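
In Elasticsearch, per-field boosts like those described (title 2, abstract 4, body 1) are typically expressed with a `multi_match` query. The sketch below only builds the query body; the field names are assumptions about the team's index mapping:

```python
def build_es_query(keywords):
    """Assemble an Elasticsearch multi_match body with per-field boosts.
    Field names (title/abstract/body) are illustrative assumptions."""
    return {
        "query": {
            "multi_match": {
                "query": " ".join(keywords),
                "fields": ["title^2", "abstract^4", "body^1"],
            }
        }
    }

print(build_es_query(["chest", "pain", "dyspnea"]))
```

The per-keyword-category boosts (previous/present/undecided) would be layered on top, e.g. via a `bool` query with boosted `should` clauses.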

summUIOWAS3

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: summUIOWAS3
  • Participant: UIowaS
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 72941ae515b67d5b1a1c9a9afded5b05
  • Run description: We start by processing topics2016.xml and extracting the 30 topics. Given the type (note, summary, or description) of a topic, we use only that text category as input in a particular run. We use MetaMap to extract all the UMLS concepts in the text and use a subset of these extracted concepts to create the query for each topic. We also extract the age and gender of the patient from the text and use them in the query. Lastly, we use Indri to run the query on the pre-indexed PMC dataset to get the final ranked list.

UDelInfoCDS1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UDelInfoCDS1
  • Participant: udel_fang
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 7501f50d1ec6820be5eb05f259edfe00
  • Run description: Use the note query. Only keep the root noun phrases as the query. External resources: cTAKES, MetaMap.

UDelInfoCDS2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UDelInfoCDS2
  • Participant: udel_fang
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 588533b76cf05751e4edab9b97618908
  • Run description: Use the note query. Keep the root noun phrases as the query and add the UMLS expansion for the abbreviations. External resources: cTAKES, MetaMap.

UDelInfoCDS3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UDelInfoCDS3
  • Participant: udel_fang
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: fbfb5c682ed02895ce5650e946e84dba
  • Run description: Use the note query. Keep the root noun phrases and add the UMLS and MediLexicon expansions together. External resources: cTAKES, MetaMap, MediLexicon.

UDelInfoCDS4

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UDelInfoCDS4
  • Participant: udel_fang
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 67b2e7b34720e9b6db25345956df1829
  • Run description: Use the description query. Only keep the root noun phrases as the query. External resources: cTAKES, MetaMap.

UDelInfoCDS5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UDelInfoCDS5
  • Participant: udel_fang
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: bf7778e8dadba5c938b1b9b5f9b5a2d2
  • Run description: Use the summary query. Pseudo-relevance feedback is applied to the original query.

udelNB

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: udelNB
  • Participant: udel
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 5fab8e7bea08281418d26faa2884a024
  • Run description: A baseline run by the information retrieval lab at the University of Delaware. It uses the note version of the queries and is produced by the Terrier IR platform with the In_expB2 model and KL divergence for query expansion.

udelNRef

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: udelNRef
  • Participant: udel
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 0f7a8b33cad4ec53d92ac6b2e227b6ee
  • Run description: A run by the information retrieval lab at the University of Delaware. It uses the note version of the queries and is produced by the Terrier IR platform with the In_expB2 model and KL divergence for query expansion. In addition, this run uses an automatic reformulation approach based on term re-weighting: terms that are not relevant to the clinical query are removed, while other terms are either boosted (because they are important) or have their weights reduced (because they are less important).

udelSB

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: udelSB
  • Participant: udel
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 84577981fd52d920e21ee7f877800a25
  • Run description: A baseline run by the information retrieval lab at the University of Delaware. It uses the summary version of the queries and is produced by the Terrier IR platform with the In_expB2 model and KL divergence for query expansion.

udelSDI

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: udelSDI
  • Participant: udel
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 0f170ecd3f0d6196bbd3851a7c3925c3
  • Run description: A run by the information retrieval lab at the University of Delaware. It uses the summary version of the queries and is produced by the Terrier IR platform with the In_expB2 model and KL divergence for query expansion. In addition, this run uses an external resource (Mayo Clinic) to infer the possible diagnoses for the last 20 topics (tests and treatments) by expanding with the 5 most relevant diagnoses for each clinical case.

udelSRef

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: udelSRef
  • Participant: udel
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 26cf81e9ed8d13260616e9f2579279f3
  • Run description: A run by the information retrieval lab at the University of Delaware. It uses the summary version of the queries and is produced by the Terrier IR platform with the In_expB2 model and KL divergence for query expansion. In addition, this run uses an automatic reformulation approach based on term re-weighting: terms that are not relevant to the clinical query are removed, while other terms are either boosted (because they are important) or have their weights reduced (because they are less important).

UNTIIANA

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UNTIIANA
  • Participant: UNTIIA
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: ef959b10c9bb88d5f9d76cbd1f882733
  • Run description: This run uses the Note section. The Terrier IR system is used for indexing and retrieval, and Python regular expressions are used extensively in preprocessing the PubMed dataset.

UNTIIANM

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UNTIIANM
  • Participant: UNTIIA
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: manual
  • MD5: 1d1594978793127030fa69db5b670cd0
  • Run description: This run is generated using the key terms in the Note section, extracted manually by a medical expert. Additionally, we used the Terrier IR system for indexing and retrieval and the UMLS Metathesaurus to get the full forms of acronyms and abbreviations in the note section.

UNTIIANMERG

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UNTIIANMERG
  • Participant: UNTIIA
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 74312f7df9b4182d159545f2818c398b
  • Run description: This is an automatic run generated using the Note section of the topics. We used the Terrier IR system for indexing and retrieval. For this run, the results of 5 different weighting models are merged and re-ranked based on their normalized similarity scores.
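
Merging runs on normalized scores is commonly done by min-max-scaling each run to [0, 1] and summing (CombSUM on normalized scores). The exact normalization UNTIIA used is not stated, so this is only one plausible sketch:

```python
def normalize_and_merge(runs):
    """Merge several model runs ({doc: score}) by min-max-normalizing each
    run's scores to [0, 1] and summing. One plausible fusion recipe only."""
    merged = {}
    for run in runs:
        lo, hi = min(run.values()), max(run.values())
        span = (hi - lo) or 1.0  # guard against a constant-score run
        for doc, score in run.items():
            merged[doc] = merged.get(doc, 0.0) + (score - lo) / span
    return sorted(merged, key=merged.get, reverse=True)

runs = [{"a": 10.0, "b": 5.0, "c": 0.0}, {"a": 1.0, "b": 3.0, "c": 2.0}]
print(normalize_and_merge(runs))
```

With five weighting models, `runs` would simply hold five score dictionaries instead of two.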

UNTIIASA

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UNTIIASA
  • Participant: UNTIIA
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 2590878f6861e367f950f50f491c3fd2
  • Run description: This run uses the summary section of the topic as the query; the Terrier IR system is used for indexing and retrieval. Additionally, Python regular expressions are used extensively in preprocessing the PubMed articles.

UNTIIASMERG

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UNTIIASMERG
  • Participant: UNTIIA
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/27/2016
  • Type: automatic
  • MD5: 502aa5cbf947a82e9ce53d0ced8d4567
  • Run description: This run uses the summary section of the topics, and we used the Terrier IR system for indexing and retrieval. The ranking is a merge of results from 5 different weighting models; the re-ranking is based on the normalized score after merging.

UWM0

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UWM0
  • Participant: UWM
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/20/2016
  • Type: automatic
  • MD5: 597d88ffd42b860d53a3e6ef04783666
  • Run description: Base run. The summary fields from the original topics (http://trec-cds.appspot.com/topics2016.xml) were used as queries. Bayesian smoothing with a Dirichlet prior and the Porter stemmer were set as the defaults for retrieval in Terrier.

UWM1

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UWM1
  • Participant: UWM
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/20/2016
  • Type: automatic
  • MD5: 8b0426e8ee4e06dccb8bfc4c4a6eac50
  • Run description: Query expansion with the words of reference titles. The top 5 retrieved documents for each topic were chosen from the base run. The frequency of words appearing in the reference titles of those top 5 documents was counted, and frequently occurring words were selected for query expansion in addition to the base-run query. Selected words, ordered by descending frequency, were added up to a maximum of 10; the words sharing the lowest frequency were removed if the total number of words exceeded 10. The optimal number of top retrieved documents (5) and the optimal number of words (10) were chosen by referring to results on the 2015 CDS Track dataset. To collect reference and citing-article data for each retrieved document, the Europe PMC Articles RESTful API (http://europepmc.org/RestfulWebService) was used. Stop words were removed.
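
The frequency cutoff described (cap at 10 words, dropping the whole tied group at the lowest frequency if it would push the total over 10) can be sketched as follows; the tokenization and tie handling are my reading of the description, not UWM's code:

```python
from collections import Counter

def expansion_terms(titles, stopwords, max_terms=10):
    """Select up to max_terms frequent title words for query expansion.
    If words beyond the cutoff share the lowest kept frequency, the whole
    tied group is dropped, per the run description. Illustrative sketch."""
    counts = Counter(w for title in titles for w in title.lower().split()
                     if w not in stopwords)
    ranked = counts.most_common()
    selected = [w for w, _ in ranked[:max_terms]]
    if len(ranked) > max_terms:
        lowest = counts[selected[-1]]
        if counts[ranked[max_terms][0]] == lowest:
            # keeping all ties would exceed the limit: drop the tied group
            selected = [w for w in selected if counts[w] > lowest]
    return selected

print(expansion_terms(["a a a a a b b b", "c c c"], set(), max_terms=2))
```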

UWM2

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UWM2
  • Participant: UWM
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/20/2016
  • Type: automatic
  • MD5: b7b7e7b58a74355116cbc2dca17583e6
  • Run description: Query expansion with the words of titles of articles citing the top 5 retrieved documents for each topic. As with UWM1, the top 5 retrieved documents for each topic were chosen from the base run. The frequency of words appearing in the titles of the citing articles was counted, and frequently occurring words were selected for query expansion in addition to the base-run query. Some expanded queries are identical to the original queries when the retrieved documents are not cited by other articles. Selected words, ordered by descending frequency, were added up to a maximum of 10; the words sharing the lowest frequency were removed if the total number of words exceeded 10. The optimal number of top retrieved documents (5) and the optimal number of words (10) were chosen by referring to results on the 2015 CDS Track dataset. To collect reference and citing-article data for each retrieved document, the Europe PMC Articles RESTful API (http://europepmc.org/RestfulWebService) was used. Stop words were removed.

WHUIRGroup1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: WHUIRGroup1
  • Participant: WHUIRGroup
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 913643215657666c9e6f547a108e74ad
  • Run description: We used the nxml2txt tool to process the PubMed documents, then used Indri to index them. For the topics, we chose the notes field. We used smartStopWords for both the index and the queries. Finally, we used the Indri language model to get the result.
  • Code: https://github.com/spyysalo/nxml2txt

WHUIRGroup2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: WHUIRGroup2
  • Participant: WHUIRGroup
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 07e941d97b8192545d8a199958845108
  • Run description: We used Indri to index the PubMed documents; the fields used are "abstract", "title", and "body". We used smartStopWords for both the index and the queries. We then used word2vec to expand the queries and ran the expanded queries with the Indri LM model to get the run result.

WHUIRGroup4

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: WHUIRGroup4
  • Participant: WHUIRGroup
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: c554b0f6e9a5443c5ce22ede4ebe0fc9
  • Run description: We used the nxml2txt tool to process the PubMed documents and used Indri to build Index1. We also used Indri to index the PubMed documents on the "abstract", "title", and "body" fields, producing Index2. We used smartStopWords for both indices and the queries. We ran the summary queries against these indices with BM25, TF-IDF, and LM, yielding 6 runs. For each run we took the query-document score and rank and used these numbers as features, giving 12 features per query-document pair. We used RankLib to train a model and predict the result; the training data are the CDS 2015 topics run against these two indices. The learning method is LambdaMART.
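
Assembling the 12-feature vectors (score and rank from each of the 6 runs) for a learning-to-rank tool like RankLib could look like the sketch below; the run data layout and the fill-in values for missing pairs are assumptions:

```python
def build_features(runs, pairs, worst_rank=1001):
    """Build one learning-to-rank feature vector per (query, doc) pair:
    for each run, the pair's retrieval score and rank. With 6 runs this
    yields the 12 features described. Missing pairs get a score of 0.0
    and a worst-case rank (assumed handling). Sketch only."""
    features = {}
    for pair in pairs:
        vec = []
        for run in runs:  # each run maps (query, doc) -> (score, rank)
            score, rank = run.get(pair, (0.0, worst_rank))
            vec.extend([score, rank])
        features[pair] = vec
    return features

runs = [{("q1", "d1"): (2.5, 1)}, {}]  # two runs shown for brevity
print(build_features(runs, [("q1", "d1")]))
```

The vectors would then be written out in RankLib's SVMlight-style input format for LambdaMART training.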
  • Code: https://sourceforge.net/p/lemur/wiki/RankLib/ https://github.com/spyysalo/nxml2txt

WHUIRGroup5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: WHUIRGroup5
  • Participant: WHUIRGroup
  • Track: Clinical Decision Support
  • Year: 2016
  • Submission: 7/28/2016
  • Type: automatic
  • MD5: 61b235ca94cc1025b5c78b3bb670cbe2
  • Run description: We used the same approach as the WHUIRGroup2 run, only replacing the queries: in this run we use the notes instead of the summaries.

WHUIRGroup6

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix