Runs - Dynamic Domain 2015

BASE_INDRI_50

Participants | Input | Summary | Appendix

  • Run ID: BASE_INDRI_50
  • Participant: DDTJU
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: fce526d2078135d30ff49929f89c17af
  • Run description: Naive baseline for Dynamic Domain: the dataset is indexed with Indri. For each query, the system iterates 10 times, returning 5 results to the JIG in each iteration.
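
As a rough illustration of this retrieve-and-iterate loop, the Python sketch below pages through a ranking five documents at a time for ten iterations. indri_search and jig_feedback are illustrative stubs, not the participants' code or the jig API.

    # Minimal sketch, not the submitted run: page through an Indri-style
    # ranking for 10 iterations, handing 5 documents to the jig each time.
    def indri_search(query, count, offset):
        """Stub: the real run queried an Indri index over the DD dataset."""
        return [f"doc_{offset + i}" for i in range(count)]

    def jig_feedback(query, docs):
        """Stub: the TREC DD jig would return relevance judgments here."""
        return {doc: [] for doc in docs}

    def run_topic(query, iterations=10, per_iteration=5):
        returned = []
        for i in range(iterations):
            docs = indri_search(query, count=per_iteration, offset=i * per_iteration)
            feedback = jig_feedback(query, docs)  # the naive baseline ignores this
            returned.extend(docs)
        return returned

    print(len(run_topic("ebola treatment")))  # 50 documents over 10 iterations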

baseline

Participants | Input | Summary | Appendix

  • Run ID: baseline
  • Participant: BUPT_PRIS
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: f57f0415dab7a509f7d6feea0891e58f
  • Run description: We build a separate INDRI index for each domain. Given a query, our system runs INDRI once and returns the top 1000 documents in order.

DDTJU_EXPLORE

Participants | Input | Summary | Appendix

  • Run ID: DDTJU_EXPLORE
  • Participant: DDTJU
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/1/2015
  • Type: automatic
  • Task: main
  • MD5: 1539f4c6ed92f2b216885f1725066fc8
  • Run description: Explores using JIG feedback, with an emphasis on exploitation over exploration.

GU_RUN3_SIMI

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: GU_RUN3_SIMI
  • Participant: georgetown
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: ea58cf9b63f022d2664006c1383000d5
  • Run description: Lemur+Similarity_Score

GU_RUN4_SIMI

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: GU_RUN4_SIMI
  • Participant: georgetown
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: 6839a073b6f04ba9c3c04fbccae8580b
  • Run description: RELEVANCE+SIMILARITY

lm

Participants | Input | Summary | Appendix

  • Run ID: lm
  • Participant: KonanU
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/1/2015
  • Type: automatic
  • Task: main
  • MD5: b1675dc4846810bf579103b6aa74e758
  • Run description: query language model

lmrf

Participants | Input | Summary | Appendix

  • Run ID: lmrf
  • Participant: KonanU
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/1/2015
  • Type: automatic
  • Task: main
  • MD5: 9400232372ffdb7f67db43ba6dfdcf3e
  • Run description: query language model + relevance feedback
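
The description is terse, so the Python sketch below is one plausible reading: Dirichlet-smoothed query-likelihood scoring plus a naive feedback step that adds frequent terms from judged-relevant documents. The smoothing value and the feedback rule are assumptions, not KonanU's actual configuration.

    import math
    from collections import Counter

    MU = 2000  # Dirichlet smoothing parameter (a conventional default, assumed)

    def score(query_terms, doc_terms, coll_tf, coll_len):
        """log P(q | d) under a Dirichlet-smoothed document language model."""
        doc_tf, doc_len = Counter(doc_terms), len(doc_terms)
        total = 0.0
        for t in query_terms:
            p_coll = max(coll_tf.get(t, 0), 0.5) / coll_len  # background estimate
            total += math.log((doc_tf[t] + MU * p_coll) / (doc_len + MU))
        return total

    def expand(query_terms, relevant_docs, k=10):
        """Naive relevance feedback: add the k most frequent terms from relevant docs."""
        feedback = Counter(t for doc in relevant_docs for t in doc)
        return query_terms + [t for t, _ in feedback.most_common(k)]

    docs = {"d1": "ebola vaccine trial results".split(),
            "d2": "football scores and highlights".split()}
    coll = Counter(t for d in docs.values() for t in d)
    q = "ebola vaccine".split()
    ranking = sorted(docs, key=lambda d: score(q, docs[d], coll, sum(coll.values())),
                     reverse=True)
    print(ranking)  # ['d1', 'd2']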

multir

Participants | Input | Summary | Appendix

  • Run ID: multir
  • Participant: BUPT_PRIS
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: 5b37d1f047e329aa3f430bdc53cc7031
  • Run description: To begin, our system returns INDRI results in order. From the 'on_topic' feedback, it extracts a new query and runs INDRI again to obtain more documents that are likely to be 'on_topic'. In addition, it uses a TF-IDF model to remove documents similar to 'off_topic' ones.

okapi

Participants | Input | Summary | Appendix

  • Run ID: okapi
  • Participant: KonanU
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/1/2015
  • Type: automatic
  • Task: main
  • MD5: f6a019e48c304b8ef14ddf8fd51f4b45
  • Run description: baseline okapi

simti

Participants | Input | Summary | Appendix

  • Run ID: simti
  • Participant: BUPT_PRIS
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: 62db1c92abe3d21e5693e6c51fc789dc
  • Run description: From the thousands of documents INDRI returns, we build a TF-IDF model to compute the similarity between each pair of documents. To begin, our system returns INDRI results in order. Using the feedback, our system returns documents similar to 'on_topic' ones.
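
The similarity step can be pictured as a TF-IDF cosine-similarity ranking of the remaining candidates against the documents judged 'on_topic' so far, as in the Python sketch below; the texts, candidate pool, and promotion rule are placeholders, not the BUPT_PRIS pipeline.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    candidates = {            # doc_id -> text, standing in for the INDRI result pool
        "d1": "ebola outbreak response in west africa",
        "d2": "ebola treatment centers and medical staff",
        "d3": "football match results from last weekend",
    }
    on_topic_ids = ["d1"]     # documents the jig has judged relevant so far

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(candidates.values())
    ids = list(candidates)

    # Rank the remaining candidates by their maximum cosine similarity to any
    # on-topic document, and return the most similar ones next.
    on_topic_rows = [ids.index(i) for i in on_topic_ids]
    sims = cosine_similarity(matrix, matrix[on_topic_rows]).max(axis=1)
    ranked = sorted((i for i in ids if i not in on_topic_ids),
                    key=lambda i: sims[ids.index(i)], reverse=True)
    print(ranked)  # ['d2', 'd3']: d2 shares vocabulary with the on-topic document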

simtir

Participants | Input | Summary | Appendix

  • Run ID: simtir
  • Participant: BUPT_PRIS
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: 03700f328782a9aeaaca772b51b2ec93
  • Run description: From the thousands of documents INDRI returns, we build a TF-IDF model to compute the similarity between each pair of documents. To begin, our system returns INDRI results in order. Using the feedback, our system returns documents similar to 'on_topic' ones and removes documents similar to 'off_topic' ones.

simtir20

Participants | Input | Summary | Appendix

  • Run ID: simtir20
  • Participant: BUPT_PRIS
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: 097dfa756e5ff3c1d7c2d7646b4ab6ec
  • Run description: From the thousands of documents INDRI returns, we build a TF-IDF model to compute the similarity between each pair of documents. To begin, our system returns INDRI results in order. Using the feedback, our system returns documents similar to 'on_topic' ones and removes documents similar to 'off_topic' ones. However, the top 20 INDRI results are always returned before the feedback is used.

subbaseline

Participants | Input | Summary | Appendix

  • Run ID: subbaseline
  • Participant: BUPT_PRIS
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/15/2015
  • Type: automatic
  • Task: judged
  • MD5: 4d706e000cbb07685a53d7f0d0e0069a
  • Run description: We build a separate INDRI index for each domain. Given a query, our system runs INDRI once and returns the top 1000 documents in order.

submultir

Participants | Input | Summary | Appendix

  • Run ID: submultir
  • Participant: BUPT_PRIS
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/15/2015
  • Type: automatic
  • Task: judged
  • MD5: 805864626e1852f878578899160820cf
  • Run description: To begin, our system returns INDRI results in order. From the 'on_topic' feedback, it extracts a new query and runs INDRI again to obtain more documents that are likely to be 'on_topic'. In addition, it uses a TF-IDF model to remove documents similar to 'off_topic' ones.

subsimti

Participants | Input | Summary | Appendix

  • Run ID: subsimti
  • Participant: BUPT_PRIS
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/15/2015
  • Type: automatic
  • Task: judged
  • MD5: 926bd589b9c0d962364e1da185657e68
  • Run description: From the thousands of documents INDRI returns, we build a TF-IDF model to compute the similarity between each pair of documents. To begin, our system returns INDRI results in order. Using the feedback, our system returns documents similar to 'on_topic' ones.

subsimtir

Participants | Input | Summary | Appendix

  • Run ID: subsimtir
  • Participant: BUPT_PRIS
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/15/2015
  • Type: automatic
  • Task: judged
  • MD5: 2950820d87bc0c4e3c47c5a2f2b751a4
  • Run description: From the thousands of documents INDRI returns, we build a TF-IDF model to compute the similarity between each pair of documents. To begin, our system returns INDRI results in order. Using the feedback, our system returns documents similar to 'on_topic' ones and removes documents similar to 'off_topic' ones.

subsimtir20

Participants | Input | Summary | Appendix

  • Run ID: subsimtir20
  • Participant: BUPT_PRIS
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/15/2015
  • Type: automatic
  • Task: judged
  • MD5: 27943ba1cb722af0eaf64766a671ff0b
  • Run description: From the thousands of documents INDRI returns, we build a TF-IDF model to compute the similarity between each pair of documents. To begin, our system returns INDRI results in order. Using the feedback, our system returns documents similar to 'on_topic' ones and removes documents similar to 'off_topic' ones. However, the top 20 INDRI results are always returned before the feedback is used.

tfidf

Participants | Input | Summary | Appendix

  • Run ID: tfidf
  • Participant: KonanU
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/1/2015
  • Type: automatic
  • Task: main
  • MD5: ac1e975e33229fb140ba7af2f3537f13
  • Run description: baseline tfidf

ul_combi_roc.2

Participants | Input | Summary | Appendix

  • Run ID: ul_combi_roc.2
  • Participant: LavalIVA
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: 13041624a1388eedf9959f2e228a8888
  • Run description: For this run we combined results from Solr, LDA, and k-means, with a weighting scheme controlling each algorithm's contribution to the final top 5 documents returned per turn. We used an inverse Rocchio algorithm to search in different areas when no relevant document was found in the first turn.
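
One reading of the "inverse Rocchio" step is sketched below in Python: when a turn returns no relevant documents, the query vector is pushed away from the centroid of those documents so the next turn searches a different region. The weights and the TF-IDF-style vector space are assumptions, not LavalIVA's exact setup.

    import numpy as np

    def inverse_rocchio(query_vec, nonrelevant_vecs, alpha=1.0, gamma=0.75):
        """Move the query away from the centroid of non-relevant documents."""
        centroid = np.mean(nonrelevant_vecs, axis=0)
        updated = alpha * query_vec - gamma * centroid
        updated = np.clip(updated, 0.0, None)   # keep non-negative term weights
        norm = np.linalg.norm(updated)
        return updated / norm if norm > 0 else query_vec

    q = np.array([0.6, 0.8, 0.0])                       # current query vector
    bad = np.array([[0.7, 0.7, 0.0], [0.9, 0.4, 0.1]])  # docs judged not relevant
    print(inverse_rocchio(q, bad))  # weight shifts toward unexplored dimensions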

ul_combi_roc_judged

Participants | Input | Summary | Appendix

  • Run ID: ul_combi_roc_judged
  • Participant: LavalIVA
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/18/2015
  • Type: automatic
  • Task: judged
  • MD5: d5fde8fe7af89d36f0b8ce9fb74d30ff
  • Run description: Combines Solr results with diversification via LDA and k-means. Applies named-entity recognition to the feedback, and uses an inverse Rocchio algorithm when there is no feedback.

ul_lda_roc.10

Participants | Input | Summary | Appendix

  • Run ID: ul_lda_roc.10
  • Participant: LavalIVA
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: 027951609b25c5de7497be23253b0190
  • Run description: This run uses Solr as the base search engine. We use LDA to find different subtopics in the top documents returned by Solr, and we process the feedback with a NER algorithm to expand the queries. If there is no feedback, we use an inverse Rocchio algorithm to search in a different area. We used at most 10 pages of results to explore more documents.

ul_lda_roc.2

Participants | Input | Summary | Appendix

  • Run ID: ul_lda_roc.2
  • Participant: LavalIVA
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: 5a828104fc318a5683a72814b1c6fca6
  • Run description: Search engine: Solr. Algorithms: LDA to find subtopics, Rocchio to search in other areas, and named-entity recognition on the feedback to reformulate the query.

ul_lda_roc.3

Participants | Input | Summary | Appendix

  • Run ID: ul_lda_roc.3
  • Participant: LavalIVA
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: b61f990a5e5db7ffa2237dd6d88dfcc7
  • Run description: This run uses Solr as the base search engine. We use LDA to find different subtopics in the top documents returned by Solr, and we process the feedback with a NER algorithm to expand the queries. If there is no feedback, we use an inverse Rocchio algorithm to search in a different area. We used 3 pages of results for this run.

uogTrEpsilonG

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: uogTrEpsilonG
  • Participant: uogTr
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: ac7176134623f36c873585b6d396562c
  • Run description: S1 (Ranking): Each iteration mixes documents from all indices (weighted by CORI resource ranking). The system becomes less risk-averse (trying more documents from low-scored resources) if a relevant document is not found quickly. S2 (Learning): None. S3 (Stopping Condition): First found.
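
One way to read the "less risk-averse" behavior is as an exploration rate that grows whenever an iteration finds nothing relevant; the Python sketch below implements that reading with illustrative CORI weights and an assumed schedule, not the actual uogTr configuration.

    import random

    def pick_domains(cori_weights, k, epsilon):
        """Choose k source domains for this iteration."""
        domains = list(cori_weights)
        picks = []
        for _ in range(k):
            if random.random() < epsilon:
                picks.append(random.choice(domains))              # explore a low-scored resource
            else:
                picks.append(max(domains, key=cori_weights.get))  # exploit the top resource
        return picks

    cori = {"local": 0.7, "regional": 0.2, "global": 0.1}  # toy CORI scores
    epsilon = 0.1
    for iteration in range(5):
        chosen = pick_domains(cori, k=5, epsilon=epsilon)
        found_relevant = False  # would come from jig feedback in the real run
        if not found_relevant:
            epsilon = min(1.0, epsilon + 0.2)  # become less risk-averse over time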

uogTrIL

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: uogTrIL
  • Participant: uogTr
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: f09bff4d9aebf2bebc64493f7588b072
  • Run description: S1 (Ranking): Each iteration mixes documents from all indices, treating each domain evenly. S2 (Learning): None. S3 (Stopping Condition): First found.

uogTrRR

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: uogTrRR
  • Participant: uogTr
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: 7c96fa7ac3f85828af4cd269b894f5e0
  • Run description: S1 (Ranking): Round robin in each iteration (5 documents from each domain), with domain ordering via CORI. S2 (Learning): None. S3 (Stopping Condition): First found.
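
A minimal Python sketch of CORI-ordered round-robin interleaving follows; the per-domain rankings and resource scores are placeholders rather than the run's actual data.

    # Order domains by a CORI-style resource score, then emit the first document
    # of each domain, the second of each, and so on.
    domain_rankings = {
        "ebola":   ["e1", "e2", "e3"],
        "polar":   ["p1", "p2", "p3"],
        "illicit": ["i1", "i2", "i3"],
    }
    cori_score = {"ebola": 0.6, "polar": 0.3, "illicit": 0.1}

    ordered = sorted(domain_rankings, key=cori_score.get, reverse=True)
    interleaved = [doc
                   for rank in zip(*(domain_rankings[d] for d in ordered))
                   for doc in rank]
    print(interleaved)  # ['e1', 'p1', 'i1', 'e2', 'p2', 'i2', 'e3', 'p3', 'i3']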

uogTrSI

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: uogTrSI
  • Participant: uogTr
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 8/31/2015
  • Type: automatic
  • Task: main
  • MD5: 9a060fd15d04ad39a25f2f5257032478
  • Run description: S1 (Ranking): Single index search; each iteration moves down the ranking. S2 (Learning): None. S3 (Stopping Condition): First found.

uogTrxQuADRR

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: uogTrxQuADRR
  • Participant: uogTr
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/1/2015
  • Type: automatic
  • Task: main
  • MD5: 426f6fd0da27a70a29528f5f0b827f7c
  • Run description: S1 (Ranking): Ranking with xQuAD, where pseudo-relevance feedback from the top results is used to generate potential intents. Round robin in each iteration (5 documents from each domain), with domain ordering via CORI. S2 (Learning): None. S3 (Stopping Condition): First found.
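
For reference, the Python sketch below shows the greedy xQuAD selection rule, which trades off relevance to the query against coverage of intents not yet covered by the selected set. The probabilities are toy numbers, and the pseudo-relevance-feedback step that would generate the intents is not shown.

    from math import prod

    def xquad_rerank(rel, intent_rel, intent_weight, k, lam=0.5):
        """rel[d] = P(d|q); intent_rel[d][i] = P(d|q_i); intent_weight[i] = P(q_i|q)."""
        selected, remaining = [], set(rel)
        while remaining and len(selected) < k:
            def score(d):
                diversity = sum(
                    intent_weight[i] * intent_rel[d].get(i, 0.0) *
                    # probability that intent i is NOT yet covered by `selected`
                    prod(1.0 - intent_rel[s].get(i, 0.0) for s in selected)
                    for i in intent_weight)
                return (1 - lam) * rel[d] + lam * diversity
            best = max(remaining, key=score)
            selected.append(best)
            remaining.remove(best)
        return selected

    rel = {"d1": 0.9, "d2": 0.85, "d3": 0.6}
    intent_rel = {"d1": {"i1": 0.9}, "d2": {"i1": 0.85}, "d3": {"i2": 0.9}}
    intent_weight = {"i1": 0.6, "i2": 0.4}
    print(xquad_rerank(rel, intent_rel, intent_weight, k=3))
    # ['d1', 'd3', 'd2']: d3 is promoted for covering intent i2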

yr_mixed_long

Participants | Input | Summary | Appendix

  • Run ID: yr_mixed_long
  • Participant: yr
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/18/2015
  • Type: automatic
  • Task: judged
  • MD5: cb8f8c447665238ebecba24d07856af7
  • Run description: Zoned, with confidence averaged between novelty and similarity. Long runs on each topic in an attempt to achieve recall on challenging topics.

yr_mixed_sim_nov

Participants | Input | Summary | Appendix

  • Run ID: yr_mixed_sim_nov
  • Participant: yr
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/18/2015
  • Type: automatic
  • Task: judged
  • MD5: 9579dcbd4e52b7523762299097550f91
  • Run description: Zoned, confidence averaged between tf-idf and novelty

yr_run_no_nov

Participants | Input | Summary | Appendix

  • Run ID: yr_run_no_nov
  • Participant: yr
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/18/2015
  • Type: automatic
  • Task: judged
  • MD5: a9460be9cfe364fb250f5218422b2081
  • Run description: Zoning, tf-idf, no novelty.

yr_run_with_nov

Participants | Input | Summary | Appendix

  • Run ID: yr_run_with_nov
  • Participant: yr
  • Track: Dynamic Domain
  • Year: 2015
  • Submission: 9/18/2015
  • Type: automatic
  • Task: judged
  • MD5: 8d7a86e0995efd134e2f1a30cae2cc2d
  • Run description: With zoning, novelty, and tf-idf