Runs - Clinical Decision Support 2014

atigeo1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: atigeo1
  • Participant: atigeo
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: d425043d6b4617ab2d156d98ad6d3f3d
  • Run description: indri.age.none.norm.mesh.final_topics.20140725.indri.none.neg.none.mesh.final_topics.20140725 Uses NegEx & Metamap

atigeo2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: atigeo2
  • Participant: atigeo
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 44e3fce7b574bb8d83ff6e90e314504b
  • Run description: indri.age.none.norm.mesh.final_topics.20140725.indri.none.neg.none.mesh.final_topics.20140725 Uses NegEx & Metamap

atigeo3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: atigeo3
  • Participant: atigeo
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 1be8492588c9755c8d3c8e07abca3694
  • Run description: solr.age.neg.norm.none.final_topics.20140725.indri.age.neg.norm.mesh.final_topics.20140725 Uses NegEx & Metamap

atigeo4

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: atigeo4
  • Participant: atigeo
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 0ca40fe17ce44175c4fbdd896784400d
  • Run description: solr.age.neg.norm.none.final_topics.20140725.indri.age.neg.norm.mesh.final_topics.20140725 Uses NegEx & Metamap

atigeo5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: atigeo5
  • Participant: atigeo
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 07c0e1bc09883c9553a7113fb425b1d3
  • Run description: indri.age.neg.norm.mesh.final_topics.20140725 Uses NegEx & Metamap

bacon

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: bacon
  • Participant: BigPig
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: d51456d54cc5ea30090c4893aab3c6c5
  • Run description: metamap

BiTeMSIBtex1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: BiTeMSIBtex1
  • Participant: BiTeM_SIBtex
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/22/2014
  • Type: automatic
  • Task: main
  • MD5: d73f5f6a3bc5eea9de1ba95aaaf11cd8
  • Run description: These are the strategies we used for run 1. We indexed at the document level with a single index, and we added MeSH terms to the text. MeSH terms were both imported from MEDLINE (and repeated 10 times) and locally mapped with a classical Rabin-Karp algorithm (repeated as many times as they were mapped). MeSH terms were also mapped and added to the queries. We used an additional "MeSHtarget" strategy: MeSHtargets are intended to represent the question type (diagnosis, test, or treatment). We exploited the UMLS Semantic Network to obtain three MeSH subsets, one per type. Then, each time we mapped a MeSH term belonging to one of these subsets, we added a specific term to the document ("meshtargetdiag", "meshtargettreat", or "meshtargettest"). The query also contained the requested MeSHtarget (e.g. "meshtargetdiag" for queries 1 to 10), repeated three times. Reranking: if the article type was review-article or case-report, its score received a +20% boost. IR was performed with Terrier, using stopword removal, Porter stemming, the BM25 weighting scheme, and query expansion. All numbers were removed from the queries. External resources: MeSH.
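The local mapping step above relies on Rabin-Karp matching. As a rough illustration (a hypothetical sketch, not the BiTeM_SIBtex code), a rolling-hash occurrence counter might look like:

```python
def rabin_karp_count(text: str, pattern: str, base: int = 256, mod: int = 10**9 + 7) -> int:
    """Count occurrences of pattern in text using a Rabin-Karp rolling hash."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return 0
    high = pow(base, m - 1, mod)  # weight of the character leaving the window
    p_hash = t_hash = 0
    for i in range(m):  # hash the pattern and the first window
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    count = 0
    for i in range(n - m + 1):
        if p_hash == t_hash and text[i:i + m] == pattern:  # verify on hash hit
            count += 1
        if i < n - m:  # roll the window one character forward
            t_hash = ((t_hash - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return count
```

Each mapped term's count would then drive how many times it is repeated in the index.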

BiTeMSIBtex2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: BiTeMSIBtex2
  • Participant: BiTeM_SIBtex
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/22/2014
  • Type: automatic
  • Task: main
  • MD5: 503aa037b3c61763f8d9e7d19cbc6c79
  • Run description: These are the strategies we used for run 2. Same strategies as run 1, but we used two different indexes: one for text and MeSHtargets, the other for MeSH terms and MeSHtargets. Queries contained, respectively, text plus MeSHtargets and MeSH terms plus MeSHtargets. Results from the two indexes were merged by linear combination (equal weight for both indexes). External resources: MeSH.

BiTeMSIBtex3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: BiTeMSIBtex3
  • Participant: BiTeM_SIBtex
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/23/2014
  • Type: automatic
  • Task: main
  • MD5: f223e31a39cf01e260742431f53053d6
  • Run description: These are the strategies we used for run 3. Same strategies as run 2, but we indexed at the section level, creating one document per section. MeSH terms retrieved from MEDLINE were repeated three times instead of ten. We retrieved sections but kept the document id, keeping the score of the first retrieved section. The MeSHtargets strategy was not applied. External resources: MeSH.

BiTeMSIBtex4

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: BiTeMSIBtex4
  • Participant: BiTeM_SIBtex
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/22/2014
  • Type: automatic
  • Task: main
  • MD5: 46d105f2c722d932c9a889e5b03a627f
  • Run description: These are the strategies we used for run 4. Same strategies as run 3, but the MeSHtargets strategy was applied. External resources: MeSH.

BiTeMSIBtex5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: BiTeMSIBtex5
  • Participant: BiTeM_SIBtex
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/22/2014
  • Type: automatic
  • Task: main
  • MD5: 44d275ef8229cd857ae54aa7c8173198
  • Run description: These are the strategies we used for run 5. Same strategies as run 4, plus a citation-network reranking strategy. For each document, we extracted the citations that belong to the collection. Then we reranked the whole run: for each retrieved document i (with score RSVi), every document it cites (whether already retrieved or not) received a 0.2*RSVi boost. This means that documents cited by many top-retrieved documents receive a strong boost. External resources: MeSH.
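The citation boost described above can be sketched as follows (a hypothetical illustration, assuming `run` maps document ids to RSV scores and `citations` maps each document to the ids it cites):

```python
def citation_rerank(run, citations, alpha=0.2):
    """Citation-network reranking: each document cited by a retrieved
    document i gains alpha * RSV_i, added to its own score (0 if it
    was not retrieved). Returns (doc_id, score) pairs, best first."""
    boosted = dict(run)  # start from the original RSV scores
    for doc_id, rsv in run.items():
        for cited in citations.get(doc_id, []):
            boosted[cited] = boosted.get(cited, 0.0) + alpha * rsv
    return sorted(boosted.items(), key=lambda kv: kv[1], reverse=True)
```

Documents cited by many highly ranked documents accumulate several boosts, which matches the behaviour the description claims.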

BM25

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: BM25
  • Participant: DawitAfshin
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/27/2014
  • Type: manual
  • Task: main
  • MD5: 4cf9091438d8207db7a0dcde7fab42f3
  • Run description: This method uses query expansion/optimization to get the retrieved results. Terrier version 3.6 was used to index and retrieve using BM25. The summary queries were used to retrieve files from the collection.

BM25EXP

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: BM25EXP
  • Participant: DawitAfshin
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/27/2014
  • Type: manual
  • Task: main
  • MD5: d359041603a1b61cdb2699e916b1f135
  • Run description: This method uses query expansion/optimization to get the retrieved results. Terrier version 3.6 was used to index and retrieve using BM25. The summary queries were used to retrieve files from the collection. SNOMED CT, METAMAP, UMLS Metathesaurus were used to expand the query.

bolgogi

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: bolgogi
  • Participant: BigPig
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 74aa406fab2f9e07f3742fcb6500e7dd
  • Run description: metamap, lucene-bm25similarity

DAIICTdqep

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: DAIICTdqep
  • Participant: DA_IICT
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 2c5b209dfc6746689835ce4827ca94e2
  • Run description: The run DAIICTdqep is generated using a parameter-free automatic query expansion technique.

DAIICTdqer8

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: DAIICTdqer8
  • Participant: DA_IICT
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 88fa69800f19bf40b091c3a5cf735e55
  • Run description: The run DAIICTdqer8 is generated using the Rocchio automatic query expansion technique with the beta parameter set to 0.8.
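Rocchio expansion with beta = 0.8 shifts the query toward the centroid of the feedback documents. A minimal sketch, assuming simple term-weight dictionaries (an illustration, not the DA_IICT implementation):

```python
from collections import Counter

def rocchio_expand(query_vec, feedback_docs, alpha=1.0, beta=0.8):
    """Rocchio feedback: q' = alpha * q + (beta / |R|) * sum of the
    feedback document vectors. Vectors are {term: weight} dicts."""
    expanded = Counter({t: alpha * w for t, w in query_vec.items()})
    n = len(feedback_docs)
    for doc in feedback_docs:
        for term, w in doc.items():
            expanded[term] += beta * w / n  # centroid contribution
    return dict(expanded)
```

With beta = 0.8 the feedback centroid weighs almost as much as the original query terms, so expansion terms can rank highly.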

DAIICTf

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: DAIICTf
  • Participant: DA_IICT
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: fcd78f1fd15e136f072e677e6961e748
  • Run description: The submitted run DAIICTf is a fused result (via the combSUM fusion method) of four different experiments with various automatic query expansion techniques.
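combSUM fuses runs by summing each document's scores across runs; scores are usually normalized first. A sketch under that assumption (min-max normalization is a common choice, but the exact variant is not specified here):

```python
def comb_sum(runs):
    """combSUM fusion: min-max normalize each run's scores to [0, 1],
    then sum per document. Runs are {doc_id: score} dicts."""
    fused = {}
    for run in runs:
        lo, hi = min(run.values()), max(run.values())
        span = (hi - lo) or 1.0  # avoid division by zero on constant runs
        for doc, s in run.items():
            fused[doc] = fused.get(doc, 0.0) + (s - lo) / span
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```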

DAIICTsqer8

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: DAIICTsqer8
  • Participant: DA_IICT
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: d82eda003b6a69079a990b588f94fc79
  • Run description: The run DAIICTsqer8 is generated using the Rocchio automatic query expansion technique with the beta parameter set to 0.8.

DAIICTzf

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: DAIICTzf
  • Participant: DA_IICT
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 97e65a25d8d702816a6d12b2f289d51e
  • Run description: The submitted run DAIICTzf is a fused result (via the z-fusion method) of four different experiments with various automatic query expansion techniques.
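z-fusion is commonly implemented by converting each run's scores to z-scores before summing, which makes runs with different score scales comparable. A sketch under that assumption (not necessarily the exact DA_IICT variant):

```python
import statistics

def z_fusion(runs):
    """z-score fusion: standardize each run's scores, then sum per
    document. Runs are {doc_id: score} dicts."""
    fused = {}
    for run in runs:
        mu = statistics.mean(run.values())
        sigma = statistics.pstdev(run.values()) or 1.0  # guard constant runs
        for doc, s in run.items():
            fused[doc] = fused.get(doc, 0.0) + (s - mu) / sigma
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```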

descript50ex

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: descript50ex
  • Participant: CSEIITV
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/27/2014
  • Type: automatic
  • Task: main
  • MD5: c01d23c8fac85b47dd9bbb89c19f986a
  • Run description: Descriptions were used in this run, and only 50 documents were ranked per topic. Lucene was used for the initial pruning of documents.

ecnuBig

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ecnuBig
  • Participant: ecnu
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/27/2014
  • Type: automatic
  • Task: main
  • MD5: 292f621b8cc9bb83fb7de64da38a093e
  • Run description: We retrieve with the Terrier and Indri search engines using the BM25, DFR-BM25, TF-IDF, BB2, and PL2 weighting models, then combine all of the results.

ecnuIndex

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ecnuIndex
  • Participant: ecnu
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/27/2014
  • Type: automatic
  • Task: main
  • MD5: d9a2d3d4de80dfa6bcef56433b46fbc8
  • Run description: We use the Terrier search engine with the BM25F weighting model. We create separate indexes for title, abstract, text, table, figure, and reference. We retrieve with the six indexes and combine the results at the end.

ecnuSmall

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ecnuSmall
  • Participant: ecnu
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/27/2014
  • Type: automatic
  • Task: main
  • MD5: e7e3e15c6e1310273f773097a0c59ab6
  • Run description: We retrieve with Terrier using the BM25 weighting model and with Indri using the TF-IDF model, then combine the results.

ecnuWeight

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ecnuWeight
  • Participant: ecnu
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/27/2014
  • Type: automatic
  • Task: main
  • MD5: b60a77ce777dc4cbbe8bdec2b0c21f1b
  • Run description: During query expansion, we automatically assign weights to the keywords in the query. Then we retrieve with Indri.

GuHNegProxL

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: GuHNegProxL
  • Participant: Georgetown
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: a556047dded04abc14ed7dbd5e98e325
  • Run description: Georgetown Run 2: The Lemur Indri search engine was used for indexing and retrieval (language modeling with Dirichlet smoothing); the UMLS Metathesaurus and MetaMap were used to detect medical terms and provide synonyms. The Stanford NLP parser was used to add nouns to the list of medical terms for topics classified as difficult. IDF filtering was applied. NegEx was used to detect negated phrases. Overly common words among queries were also detected by indexing the queries and applying an IDF threshold. Weights of terms common among queries and of terms mapped from negated phrases were reduced.

GuHSINeg

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: GuHSINeg
  • Participant: Georgetown
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 447aac3b7f003b4271b403d2bd776307
  • Run description: Georgetown Run 1: The Lemur Indri search engine was used for indexing and retrieval (language modeling with Dirichlet smoothing); the UMLS Metathesaurus and MetaMap were used to detect medical terms and provide synonyms. The Stanford NLP parser was used to add nouns to the list of medical terms for topics classified as difficult. IDF filtering was applied, followed by a second, stricter filtering of the medical terms using the MetaMap classifications. NegEx was used to detect negated phrases. Weights of terms mapped from negated phrases were reduced.

GuHSINegL

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: GuHSINegL
  • Participant: Georgetown
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: 2e85e2368c629c5cb6f348ae1af2ccb3
  • Run description: Georgetown Run 5: The same approach as run 1, but with a further reduced weight for terms mapped from negated phrases.

GuHSNegProxH

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: GuHSNegProxH
  • Participant: Georgetown
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 140292e61f7878679913347e8236b1de
  • Run description: Georgetown Run 3: The Lemur Indri search engine was used for indexing and retrieval (language modeling with Dirichlet smoothing); the UMLS Metathesaurus and MetaMap were used to detect medical terms and provide synonyms. The Stanford NLP parser was used to add nouns to the list of medical terms for topics classified as difficult. IDF filtering was applied, followed by a second, stricter filtering of the medical terms using the MetaMap classifications. NegEx was used to detect negated phrases. Overly common words among queries were also detected by indexing the queries and applying an IDF threshold. Weights of terms common among queries and of terms mapped from negated phrases were reduced.

GuHSNegProxL

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: GuHSNegProxL
  • Participant: Georgetown
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 50273b9424fbba9d56558613a8704026
  • Run description: Georgetown Run 4: The Lemur Indri search engine was used for indexing and retrieval (language modeling with Dirichlet smoothing); the UMLS Metathesaurus and MetaMap were used to detect medical terms and provide synonyms. The Stanford NLP parser was used to add nouns to the list of medical terms for topics classified as difficult. IDF filtering was applied. NegEx was used to detect negated phrases. Overly common words among queries were also detected by indexing the queries and applying an IDF threshold. Weights of terms common among queries and of terms mapped from negated phrases were reduced. This run differs from run 3 by using a lower threshold for words common among queries.
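The "overly common words among queries" detection used in the Georgetown runs can be illustrated by treating the query set as a tiny collection and thresholding IDF (a hypothetical sketch; the actual thresholds are not given):

```python
import math

def common_query_terms(queries, idf_threshold=1.0):
    """Treat the queries themselves as a mini-collection; terms whose
    IDF (log N/df over queries) falls below the threshold occur in too
    many queries and are candidates for down-weighting."""
    n = len(queries)
    df = {}
    for q in queries:  # document frequency of each term across queries
        for t in set(q):
            df[t] = df.get(t, 0) + 1
    return {t for t, d in df.items() if math.log(n / d) < idf_threshold}
```

Run 4's "lower threshold" would correspond to a smaller `idf_threshold`, flagging fewer terms as common.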

hltcoe5drf

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: hltcoe5drf
  • Participant: hltcoe
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: 45bca907669bc407e6a5847e34e56890
  • Run description: JHU HAIRCUT system - statistical language model. Padded, word-spanning character 5-grams as tokens. Relevance Feedback. No domain customization. No resources. Descriptions instead of summaries from topics.

hltcoe5s

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: hltcoe5s
  • Participant: hltcoe
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: 1636d9761eb62940f75ed438662f6dca
  • Run description: JHU HAIRCUT system - statistical language model. Padded, word-spanning character 5-grams as tokens. No relevance Feedback employed. No domain customization. No resources. (Note: all three submitted 5-gram runs suffered from a build error where approx. 3.5% of the corpus was mistakenly not indexed. At submission time we're rebuilding the index, but it'll be some hours still before corrected runs are available. The word-based run used the full text corpus, excepting two problematic documents our parser didn't like.)

hltcoe5srf

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: hltcoe5srf
  • Participant: hltcoe
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: d4caee60eb21bb259fb3f8f45303af15
  • Run description: JHU HAIRCUT system - statistical language model. Padded, word-spanning character 5-grams as tokens. Relevance Feedback. No domain customization. No resources.

hltcoewsrf

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: hltcoewsrf
  • Participant: hltcoe
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: b7119072746b7661b8737aa8eca9bfb7
  • Run description: JHU HAIRCUT system - statistical language model. Words as tokens. Relevance Feedback. No domain customization. No resources.

icd

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: icd
  • Participant: cuhk_sls
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: b6c14694e76aa08eec867636740d6ab9
  • Run description: Words not found in the ICD-10 data were used to filter out terms in the query file. External resources are Terrier and ICD-10.

icdqe

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: icdqe
  • Participant: cuhk_sls
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: ca0936fe98d700d654cdf993af535dc8
  • Run description: ICD-10 was indexed so that each term is made into a document. Queries were first submitted to this ICD-10-based system to locate terms that can be appended to the original query. These modified queries were then used on the original document collection. External resources: Terrier and ICD-10.

InL2c1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: InL2c1
  • Participant: DawitAfshin
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/27/2014
  • Type: manual
  • Task: main
  • MD5: 5678968bc065e6dcd7d06297a9716f53
  • Run description: This method uses query expansion/optimization to get the retrieved results. Terrier version 3.6 was used to index and retrieve using InL2c1. The summary queries were used to retrieve files from the collection.

InL2c1EXP

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: InL2c1EXP
  • Participant: DawitAfshin
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/27/2014
  • Type: manual
  • Task: main
  • MD5: d0efafb485c54ea746a46d035a78b38d
  • Run description: This method uses query expansion/optimization to get the retrieved results. Terrier version 3.6 was used to index and retrieve using InL2c1. The summary queries were used to retrieve files from the collection. SNOMED CT, METAMAP, UMLS Metathesaurus were used to expand the query.

IRGURUN1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: IRGURUN1
  • Participant: georgetown_ir
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 2ee914bdb73589f4e846270ad3e6fb4c
  • Run description: The documents were indexed using ElasticSearch (a modern implementation of Lucene) and retrieved with a Divergence from Randomness (DFR) model. The original query is both expanded using pseudo-relevance feedback and contracted by filtering query terms based on their presence on health-related Wikipedia pages. The retrieved results were classified into treatment, test, or diagnosis using an SVM trained on a set of manually annotated papers randomly sampled from the collection. Terms were used as features (except stopwords), as well as MetaMap concepts in the title and in the abstract (when available). Finally, results were clustered and reranked.

IRGURUN2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: IRGURUN2
  • Participant: georgetown_ir
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 74a5d15a21501a1e442c725e5e0f6911
  • Run description: The documents were indexed using ElasticSearch (a modern implementation of Lucene) and retrieved with a Divergence from Randomness (DFR) model. The original query is both expanded using pseudo-relevance feedback and contracted by filtering query terms based on their presence on health-related Wikipedia pages. For queries asking for a test or a diagnosis, biographical information was extracted from the retrieved documents and the query via pattern matching and used to rerank the result list. For treatments, an SVM was trained on a set of manually annotated papers randomly sampled from the collection. We kept as features those terms whose stemmed version matched the stemmed version of a set of seed words related to treatments.

IRGURUN3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: IRGURUN3
  • Participant: georgetown_ir
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 285a693c2328012959fac08d764d44ed
  • Run description: The documents were indexed using ElasticSearch (a modern implementation of Lucene) and retrieved with a Divergence from Randomness (DFR) model. The original query is both expanded using pseudo-relevance feedback and contracted by filtering query terms based on their presence on health-related Wikipedia pages. Documents were reranked based on the presence of the terms treatment, test, and diagnosis, with different sections weighted by the normalized inverse of their length.

IRGURUN4

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: IRGURUN4
  • Participant: georgetown_ir
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 1ea7fde5f0ac3ba20a875ffc82ab3210
  • Run description: The documents were indexed using ElasticSearch (a modern implementation of Lucene) and retrieved with a Divergence from Randomness (DFR) model. The original query is both expanded using pseudo-relevance feedback and contracted by filtering query terms based on their presence on health-related Wikipedia pages. Documents were reranked based on how many MetaMap concepts they shared with the original query.

IRGURUN5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: IRGURUN5
  • Participant: georgetown_ir
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 1c56347d6dd86a94bc9d6231404f3271
  • Run description: Fusion retrieval system that combines the methods used in the other four submitted runs to generate a list of ranked results for each query. Each system is weighted equally.

KISTI01

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: KISTI01
  • Participant: KISTI
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: ca9f4c9d5735cc1ca8933f9cc01b01e3
  • Run description: Simple baseline using Lucene's language model

KISTI02

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: KISTI02
  • Participant: KISTI
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 951ae5cf4c1a8d491ab3422fe737244b
  • Run description: Reranking using PRF by considering abbreviations based on the baseline

KISTI03

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: KISTI03
  • Participant: KISTI
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 97edc8fdc6abfae565761c5c3a906dea
  • Run description: Reranking using ESA with ICD10-hierarchy

KISTI04

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: KISTI04
  • Participant: KISTI
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: a29cbde487b9fa2202fe8643e1e82a24
  • Run description: Reranking using MetaMap

KISTI05

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: KISTI05
  • Participant: KISTI
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: c89116a89593961646a58d11167bdd43
  • Run description: Reranking using MetaMap

manual

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: manual
  • Participant: cuhk_sls
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: manual
  • Task: main
  • MD5: ff5f94b8a57a3658e3f51e13cfcdfcab
  • Run description: Entirely manual modification of the queries: a domain expert selected the words that she deemed important for understanding the case. This should be an interesting baseline for our other techniques, which either add or remove terms automatically. The only external resource is Terrier.

MERCK1

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: MERCK1
  • Participant: Merck_DA
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 60e66ba755648abc67f0d29e795eef56
  • Run description: 1. Named entity recognition with the MER skill cartridge 2. Indexing with Luxid 3. Query with raw summary text

MERCK2

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: MERCK2
  • Participant: Merck_DA
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 2324cf4191a043e65fb8d90c0367e138
  • Run description: 1. Named entity recognition with MER skill cartridge 2. Indexing with Luxid 3. Query with pre-indexed entities

MERCK3

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: MERCK3
  • Participant: Merck_DA
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: a4cc825f0216aa0e471dcbc2bb8fcd44
  • Run description: 1. Named entity recognition with the MER skill cartridge 2. Indexing with Luxid 3. Query with pre-indexed entities, restricted to "Case reports"

mesh

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: mesh
  • Participant: cuhk_sls
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: d34c74994d2ec3e9605991a44f9233a2
  • Run description: Words not found in the MeSH data were used to filter out terms in the query file. External resources are Terrier and MeSH.

MIIjmab

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: MIIjmab
  • Participant: UCLA_MII
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/25/2014
  • Type: automatic
  • Task: main
  • MD5: f4b7417925701f7a129aa673408c4bde
  • Run description: Lucene English Analyzer, Jelinek-Mercer similarity metric, indexed with unigrams and UMLS CUIs. Queries were composed automatically, expanding query based on population attributes, temporal keywords, and clinical question type. Searched titles and abstracts only.

MIIjmboost

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: MIIjmboost
  • Participant: UCLA_MII
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/25/2014
  • Type: automatic
  • Task: main
  • MD5: acf395af51a0083d4e4dbbfbe0ab2197
  • Run description: Lucene English Analyzer, Jelinek-Mercer similarity metric, indexed with unigrams and UMLS CUIs. Queries were composed automatically, expanding query based on population attributes, temporal keywords, and clinical question type. Boosted some semantic types in the query: population, symptoms, findings, diseases.

MIIjmignore

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: MIIjmignore
  • Participant: UCLA_MII
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/25/2014
  • Type: automatic
  • Task: main
  • MD5: 7a6f12af73ae6a2ee2a783d81be92027
  • Run description: Lucene English Analyzer, Jelinek-Mercer similarity metric, indexed with unigrams and UMLS CUIs. Queries were composed automatically, expanding query based on population attributes, temporal keywords, and clinical question type.

MIItfauto

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: MIItfauto
  • Participant: UCLA_MII
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/25/2014
  • Type: automatic
  • Task: main
  • MD5: 11a785195461ab578d07e62d07f47c72
  • Run description: Lucene Standard Analyzer, default similarity metric, indexed with unigrams and UMLS CUIs. Queries were composed automatically, expanding query based on population attributes, temporal keywords, and clinical question type.

MIItfman

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: MIItfman
  • Participant: UCLA_MII
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/25/2014
  • Type: manual
  • Task: main
  • MD5: 8be1fce1151b42e7b3489e24c6d1e53c
  • Run description: Lucene Standard Analyzer, default similarity metric, indexed with unigrams and UMLS CUIs. Queries were composed manually by a clinician. Explicit mention of diagnosis in query was allowed for "treatment" and "test" questions.

myrun

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: myrun
  • Participant: IKMLAB
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: b2b812044c963986095e1edce8a648b9
  • Run description: We use tf-idf to score every document against the topic.

NOVASEARCH1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: NOVASEARCH1
  • Participant: NovaSearch
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 6f987373e17c8a6b7a2ed6aa13e2aa0a
  • Run description: Indexing with Lucene. BM25L retrieval function, query expansion with MeSH, and pseudo-relevance feedback with top results. External data: MeSH thesaurus for query expansion.
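BM25L, the retrieval function used in the NOVASEARCH runs, counters classic BM25's bias against long documents by adding a small shift delta to the normalized term frequency (Lv and Zhai). A rough sketch under assumed parameter values (k1, b, and delta here are common defaults, not the run's reported settings):

```python
import math

def bm25l_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75, delta=0.5):
    """BM25L: length-normalized tf shifted by delta so long documents
    are not over-penalized. corpus is a list of tokenized documents."""
    n = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n
    score = 0.0
    for t in query_terms:
        df = sum(1 for d in corpus if t in d)
        if df == 0:
            continue
        idf = math.log((n + 1) / (df + 0.5))
        tf = doc_terms.count(t)
        if tf == 0:
            continue  # absent terms contribute nothing
        c = tf / (1 - b + b * len(doc_terms) / avgdl)
        score += idf * (k1 + 1) * (c + delta) / (k1 + c + delta)
    return score
```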

NOVASEARCH2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: NOVASEARCH2
  • Participant: NovaSearch
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 77cf7b5f814b0c263788dce16865b65d
  • Run description: Indexing with Lucene. BM25L retrieval function, query expansion with MeSH, and pseudo-relevance feedback with top results. Article weighting for question type based on MeSH term count by category. External data: MeSH thesaurus for query expansion and weighting.

NOVASEARCH3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: NOVASEARCH3
  • Participant: NovaSearch
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: c9db4b6e8800ad6d690ae1801fcb2c5e
  • Run description: Late fusion of NOVASEARCH1 and NOVASEARCH2 runs using Reciprocal Rank Fusion (RRF). External data: MeSH thesaurus for query expansion and weighting
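Reciprocal Rank Fusion, used here for late fusion, scores each document by summing 1/(k + rank) over its positions in the input rankings. A minimal sketch (k = 60 is the constant from the original RRF paper; the runs' actual setting is not stated):

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over input rankings
    of 1/(k + rank(d)); documents missing from a ranking add nothing."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Because only ranks matter, RRF needs no score normalization across the fused runs, which makes it a convenient way to merge output from different retrieval functions.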

NOVASEARCH4

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: NOVASEARCH4
  • Participant: NovaSearch
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 4797596bce52631be3b6f4032e7bcba4
  • Run description: Combination of BM25L, BM25+, language models, and TFIDF using RRF. Individual runs used query expansion with MeSH, pseudo-relevance feedback with top results, and article weighting for question type based on MeSH term count by category. External data: MeSH thesaurus for query expansion and weighting.

NOVASEARCH5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: NOVASEARCH5
  • Participant: NovaSearch
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 0c6ec72df33c40f0bc42a2d28ec0d215
  • Run description: Indexing with Lucene. BM25L retrieval function, query expansion with MeSH, and pseudo-relevance feedback using top results from journals with the highest impact factor. Article weighting for question type based on MeSH term count by category. External data: MeSH thesaurus for query expansion and weighting, and the 2013 JCR Impact Factors list for pseudo-relevance feedback.

ohsuAbstDef

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ohsuAbstDef
  • Participant: OHSU
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: manual
  • Task: main
  • MD5: 681d5b25295332390c44700a8ef6a0c0
  • Run description: Manually constructed queries using our search interface, run only over the abstracts of the articles.

ohsuBodyDef

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ohsuBodyDef
  • Participant: OHSU
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: manual
  • Task: main
  • MD5: 2954885cc5357576b2f1ebb204a44c5d
  • Run description: Manually constructed queries using our search interface, run over the full text of the articles.

ohsuOrigAbst

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ohsuOrigAbst
  • Participant: OHSU
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: manual
  • Task: main
  • MD5: 9b63c63abbab319a7eb4d9f097cf4321
  • Run description: First attempt at queries, with no knowledge of article body text, run only on abstracts.

ohsuOrigBody

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: ohsuOrigBody
  • Participant: OHSU
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: manual
  • Task: main
  • MD5: 597afb71397e152260d348d2b2740eb7
  • Run description: First attempt at queries, with no knowledge of article body text, run only on body text. Does blindly indexing article full text help?

origexp

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: origexp
  • Participant: cuhk_sls
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: a8ff2bd56b1b77189dae8f0b7e213f61
  • Run description: The three types of cases and their synonyms were added to each query. Synonyms were obtained from WordNet. External resources are Terrier and WordNet.

prise1

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: prise1
  • Participant: super_kxlab
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/15/2014
  • Type: automatic
  • Task: main
  • MD5: bdba59670206af2aa0deaae15c9b1ea5
  • Run description: Step 1: preprocess. Step 2: extract keywords. Step 3: compute tf-idf. Step 4: compute cosine similarity. Step 5: rank. The third column, "docno", is the number found in the article under the "pmcid" tag.
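The five steps this run lists can be sketched end to end. This is a generic tf-idf/cosine ranker, not the participants' code; the smoothed idf and whitespace tokenization are assumptions:

```python
import math
from collections import Counter

def rank_documents(topic, docs):
    """Rank document indices by tf-idf cosine similarity to the topic."""
    # Steps 2-3: keyword extraction (whitespace tokens) and tf-idf weighting
    df = Counter()
    for d in docs:
        df.update(set(d.split()))
    n = len(docs)

    def vec(text):
        tf = Counter(text.split())
        return {t: c * math.log((n + 1) / (df[t] + 1)) for t, c in tf.items()}

    # Step 4: cosine similarity between topic vector and document vector
    def cosine(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    q = vec(topic)
    # Step 5: rank, highest similarity first
    return sorted(range(n), key=lambda i: cosine(q, vec(docs[i])), reverse=True)
```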

prna1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: prna1
  • Participant: Philips
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: 1a293c276f64b516cc1690b7daac88d1
  • Run description: We used an automated multiple-steps driven method to extract relevant biomedical articles corresponding to each given topic. We performed clinical concepts extraction with ontology mapping for identifying important topical keywords, which were used to extract clinical concepts from relevant Wiki articles. The Wiki concepts were used in mapping pertinent biomedical articles, which were further filtered by named entity information, and ordered by publication date and importance in relation to the extracted Wiki keywords.

Run1BoWC

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: Run1BoWC
  • Participant: LIMSI
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 330c9eaa27750f3057463050d20e41b2
  • Run description: Baseline run: 1. A manually constructed MeSH query for each dimension (diagnosis/test/treatment) was sent to PubMed. Based on the results, we constructed three subcorpora from the given corpus. 2. We then performed plain-text retrieval with BM25 in Terrier for each topic against its relevant subcorpus.

Run2MeSHDi

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: Run2MeSHDi
  • Participant: LIMSI
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 31ccb1a3e31bcf60efe94c05bbbf79f7
  • Run description: 1. We built a symptom-checker-type system based on disease-symptom association information extracted from OrphaNet and the Disease Symptom Knowledge Database. 2. For each topic we retrieved 5 disease hypotheses, which were converted to their corresponding MeSH terms. 3. For each topic we queried PubMed with a query constructed from the MeSH terms of the retrieved diseases and a generic (manually constructed) query for the relevant dimension (diagnosis/test/treatment).

Run3MeSHDiCa

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: Run3MeSHDiCa
  • Participant: LIMSI
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 5d845a389941ffafc4319db6905195a3
  • Run description: 1. We built a symptom-checker-type system based on disease-symptom association information extracted from OrphaNet and the Disease Symptom Knowledge Database. 2. For each topic we retrieved 5 disease hypotheses, which were converted to their corresponding MeSH terms. 3. For each topic we queried PubMed with a query constructed from the MeSH terms of the retrieved diseases, MeSH terms extracted from the topics (with MetaMap and Restrict-to-MeSH), and a generic (manually constructed) query for the relevant dimension (diagnosis/test/treatment).

Run4BoWDiCa

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: Run4BoWDiCa
  • Participant: LIMSI
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: 7b76afb684f7d8570c367cf767733abb
  • Run description: 1. We built a symptom-checker-type system based on disease-symptom association information extracted from OrphaNet and the Disease Symptom Knowledge Database. 2. For each topic we retrieved 5 disease hypotheses, whose raw-text name variants (extracted from UMLS, OrphaNet, and the Disease Symptom Knowledge Database) were combined with the topic summary to form bag-of-words queries. 3. We performed plain-text retrieval using Terrier (BM25 scoring).

Run5BoWDiCaS

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: Run5BoWDiCaS
  • Participant: LIMSI
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: b3a1ec1b4ba0f67772f6e8178d591e84
  • Run description: 1. We built a symptom-checker-type system based on disease-symptom association information extracted from OrphaNet and the Disease Symptom Knowledge Database. 2. For each topic we retrieved 5 disease hypotheses, whose raw-text name variants (extracted from UMLS, OrphaNet, and the Disease Symptom Knowledge Database), as well as their symptoms (also name variants), were combined with the topic summary to form bag-of-words queries. 3. We performed plain-text retrieval using Terrier (BM25 scoring).

runSystem2

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: runSystem2
  • Participant: ir.cs.sfsu
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 674eed97cb4f0d3ffdc607f3b613f823
  • Run description: The run takes a given description and uses the MetaMap API to categorize its semantic types. Different semantic types are given different weights, and some are also expanded through MeSH. The query is run through the Indri retrieval model. For the treatment and test question types, treatment and test names gathered from WebMD are used in post-processing, including removal of results and adjustment of retrieved scores.

samgyupsal

Results | Participants | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: samgyupsal
  • Participant: BigPig
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: ef019168df9a06589173fe5383864e87
  • Run description: MetaMap; Lucene with BM25 similarity.

SNUMedinfo1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: SNUMedinfo1
  • Participant: SNUMedinfo
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/27/2014
  • Type: automatic
  • Task: main
  • MD5: bc7cf01aa9dd83c3a909be8663346149
  • Run description: Query expansion: MeSH, MEDLINE. Task classification: SVM, Clinical Hedges Database.

SNUMedinfo2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: SNUMedinfo2
  • Participant: SNUMedinfo
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/27/2014
  • Type: automatic
  • Task: main
  • MD5: e758660b9cb605a0130f9e230707dfa8
  • Run description: Query expansion: MeSH, MEDLINE. Task classification: SVM, Clinical Hedges Database.

SNUMedinfo3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: SNUMedinfo3
  • Participant: SNUMedinfo
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/27/2014
  • Type: automatic
  • Task: main
  • MD5: 186af407b7b9c46b66f8bf2695160b4a
  • Run description: Query expansion: MeSH, MEDLINE. Task classification: SVM, Clinical Hedges Database.

SNUMedinfo4

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: SNUMedinfo4
  • Participant: SNUMedinfo
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/27/2014
  • Type: automatic
  • Task: main
  • MD5: 5afe2708cf4bd85e451a8615bcc8446d
  • Run description: Query expansion: MeSH, MEDLINE. Task classification: SVM, Clinical Hedges Database.

SNUMedinfo6

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: SNUMedinfo6
  • Participant: SNUMedinfo
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/27/2014
  • Type: automatic
  • Task: main
  • MD5: 1477543594f22043e769517e0d8f7658
  • Run description: Query expansion: MeSH, MEDLINE. Task classification: SVM, Clinical Hedges Database.

summary50ex

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: summary50ex
  • Participant: CSEIITV
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/27/2014
  • Type: automatic
  • Task: main
  • MD5: 4a690c27ca9ebf1622eafc6b1c7702f4
  • Run description: Summaries were used in this run. Only 50 documents were ranked in this run. For initial pruning of documents we used Lucene.

tudorComb1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: tudorComb1
  • Participant: HENRI_TUDOR_LUX
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: b00bcf3232fac727a51c6c9c4d38aaed
  • Run description: + ignore low-idf terms + query terms from the description field + combination of three state-of-the-art weighting models, namely BM25, LGD, and In_expB2 + query expansion using 30 expansion terms from the top-20 documents.

tudorComb2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: tudorComb2
  • Participant: HENRI_TUDOR_LUX
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: 5ffcaf8c48aec1e926afee4bf90369ee
  • Run description: + ignore low-idf terms + query terms from the description field + combination of three state-of-the-art weighting models, namely BM25, LGD, and In_expB2 + query expansion using 30 expansion terms from the top-20 documents.

tudorComb3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: tudorComb3
  • Participant: HENRI_TUDOR_LUX
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: 27ee3d7aa2a630e025dcc759d4b8d4a6
  • Run description: + consider low-idf terms + query terms from the description field + combination of three state-of-the-art weighting models, namely BM25, LGD, and In_expB2 + query expansion using 30 expansion terms from the top-20 documents.

tudorComb4

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: tudorComb4
  • Participant: HENRI_TUDOR_LUX
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: automatic
  • Task: main
  • MD5: 8815aa4e8b48650ca54e3b328c96ee1f
  • Run description: + consider low-idf terms + query terms from the summary field + combination of three state-of-the-art weighting models, namely BM25, LGD, and In_expB2 + query expansion using 30 expansion terms from the top-20 documents.

tudorCombm

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: tudorCombm
  • Participant: HENRI_TUDOR_LUX
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/29/2014
  • Type: manual
  • Task: main
  • MD5: 76fd38064e37b51574c4de5a511e79c2
  • Run description: + ignore low-idf terms + query terms from the description field + combination of three state-of-the-art weighting models, namely BM25, LGD, and In_expB2 + query expansion using 30 expansion terms from the top-20 documents + query expansion with the long forms of abbreviations (manual) + manual query weighting: give higher weight to medical query terms.

TUW1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: TUW1
  • Participant: TUW
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: ccc0723e5e567b2c62e71b80603bec2a
  • Run description: In this run, Indri is used as the base search engine. We tested different combinations of query formulations and indexes (18 in all). MetaMap and Freebase were used in some variations.

TUW2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: TUW2
  • Participant: TUW
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 2c9eb4d5b3e71d6f86e9f04906849385
  • Run description: In this run, Lucene is used as the base search engine. We tested different combinations of query formulations and indexes (18 in all). MetaMap and Freebase were used in some variations.

TUW3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: TUW3
  • Participant: TUW
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: cadbe5e48836dcf02a7a78a729e3460e
  • Run description: In this run, Xapian is used as the base search engine. We tested different combinations of query formulations and indexes (18 in all). MetaMap and Freebase were used in some variations.

TUW4

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: TUW4
  • Participant: TUW
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 3a2d0b49000b99b80d4b623719fe5315
  • Run description: In this run, the previous 3 runs were combined. We tested different combinations of query formulations and indexes (18 for each). MetaMap and Freebase were used in some variations.

TUW5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: TUW5
  • Participant: TUW
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: dd2da902fa6307fa428e6ea924e04f60
  • Run description: In this run, we experimented with word2vec. No external resources were used.

UDInfoCDS1

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UDInfoCDS1
  • Participant: udel_fang
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/17/2014
  • Type: automatic
  • Task: main
  • MD5: a73e0665d4a61adad592ed7ae007b52c
  • Run description: This run is a concept based run. The case databases website is also used to generate the query expansion. External resources: MetaMap, Case Databases.

UDInfoCDS2

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UDInfoCDS2
  • Participant: udel_fang
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/17/2014
  • Type: automatic
  • Task: main
  • MD5: ed4fdd62462fc58a7248eb65c67752f9
  • Run description: This run is a concept based run. The case databases website and UMLS are used to generate the query expansion. External resources: MetaMap, Case Databases, UMLS.

UDInfoCDS3

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UDInfoCDS3
  • Participant: udel_fang
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/17/2014
  • Type: automatic
  • Task: main
  • MD5: 0b0ab55884e381f8f56cd06e79ca5d5b
  • Run description: This run is a term based run. The case databases website is used to generate the query expansion. External resources: Case Databases.

UDInfoCDS4

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UDInfoCDS4
  • Participant: udel_fang
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/17/2014
  • Type: automatic
  • Task: main
  • MD5: 36558f792b76c5c9d28f3b600f5c8474
  • Run description: This run is a concept based run. The UMLS database is used to generate the query expansion. External resources: UMLS.

UDInfoCDS5

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UDInfoCDS5
  • Participant: udel_fang
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/17/2014
  • Type: automatic
  • Task: main
  • MD5: 9657efbda86c4e76308c77f9e8c48f1a
  • Run description: This run is a term based run. Only the summary query is used to generate the results. No external resource is used.

UTD0BL

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UTD0BL
  • Participant: UTDHLTRI
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: f4ea6256d289b655efb9b7e6494a63e6
  • Run description: Baseline run, BM25 retrieval model using Wikipedia-inspired keywords.

UTD1QE

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UTD1QE
  • Participant: UTDHLTRI
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 512f705fc45491c8077d6a040e4535c9
  • Run description: BM25 retrieval model incorporating Wikipedia, UMLS, and SNOMED information for Query Expansion.

UTD2LDA

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UTD2LDA
  • Participant: UTDHLTRI
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: d9f60a8c1e119a7c8704e36424dc42ed
  • Run description: Matrix-based cosine similarity on LDA(k=100)-mapped topics and documents.

UTD3W2VE

Results | Participants | Proceedings | Input | Summary (trec_eval) | Summary (sample-eval) | Appendix

  • Run ID: UTD3W2VE
  • Participant: UTDHLTRI
  • Track: Clinical Decision Support
  • Year: 2014
  • Submission: 7/28/2014
  • Type: automatic
  • Task: main
  • MD5: 1cd10ae0006063f9c56e15efeb573c65
  • Run description: BM25-based retrieval model using Google's word2vec word embeddings for query expansion.
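As a toy illustration of the embedding-based query expansion this run describes: each query term is augmented with its nearest neighbors by cosine similarity in the vector space. The two-dimensional vectors below are stand-ins for real word2vec embeddings, and `expand_query` is an illustrative helper, not the participants' code:

```python
import math

def expand_query(query_terms, vectors, topn=2):
    """Expand a query with each term's nearest neighbors by cosine
    similarity. vectors maps word -> embedding (list of floats)."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    expanded = list(query_terms)
    for t in query_terms:
        if t not in vectors:
            continue  # out-of-vocabulary terms are kept but not expanded
        neighbors = sorted(
            (w for w in vectors if w != t),
            key=lambda w: cos(vectors[t], vectors[w]),
            reverse=True,
        )
        expanded.extend(neighbors[:topn])
    return expanded
```

The expanded term list would then be handed to the BM25 retrieval model in place of the original query.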