Runs - Tasks 2015

lsf

Participants | Input | Appendix

  • Run ID: lsf
  • Participant: oaqa
  • Track: Tasks
  • Year: 2015
  • Submission: 9/1/2015
  • Task: understanding
  • MD5: 5054a5ad297faa66365e2f9d1c0b5834
  • Run description: Used the pretrained model (described in Zi Yang and Eric Nyberg, "Leveraging Procedural Knowledge for Task-oriented Search"), with results sorted by sequence labeling scores first.
  • Code: http://github.com/ziy/pkb

lsfs

Participants | Input | Appendix

  • Run ID: lsfs
  • Participant: oaqa
  • Track: Tasks
  • Year: 2015
  • Submission: 9/1/2015
  • Task: understanding
  • MD5: 545ffc8580f3b4ce14f1f3d1a62e4eb4
  • Run description: Used the pretrained model (described in Zi Yang and Eric Nyberg, "Leveraging Procedural Knowledge for Task-oriented Search"), with results sorted by sequence labeling scores first. Stub articles from wikiHow are also included.
  • Code: http://github.com/ziy/pkb

MSRTasksQUrun3

Participants | Proceedings | Input | Appendix

  • Run ID: MSRTasksQUrun3
  • Participant: MSRTasks
  • Track: Tasks
  • Year: 2015
  • Submission: 9/2/2015
  • Task: understanding
  • MD5: 19317aa6f8ab6692d2ffb0666d3d57d7
  • Run description: Uses basic 'session'-count co-occurrence from the anchor-text graph of ClueWeb12, where a session is defined by the destination URL (only links pointing at documents within ClueWeb12 were kept). Queries are matched to seed queries by exact match, by exact match after removing function words from the original query only, and by being a superset of the original query after function-word removal. Any query co-occurring in a session is considered a candidate. Globally frequent queries are removed, as are queries whose cosine similarity with the original query (after function-word removal) is zero or whose length is more than 4x that of the original query (after function-word removal). The final ranking is by cosine similarity to the original query (function words removed), weighted by the number of sessions containing the query; a sketch of this filter-and-rank step follows below.
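
The description above amounts to a filter-then-rank pipeline over candidate queries mined from sessions. Below is a minimal Python sketch of that pipeline; the function-word list, the whitespace tokenizer, and the toy data are illustrative assumptions, not details of the MSRTasks submission.

    from collections import Counter
    from math import sqrt

    # Assumed (illustrative) function-word list; the run's actual list is not published here.
    FUNCTION_WORDS = {"how", "to", "a", "an", "the", "of", "for", "in", "on", "with"}

    def content_terms(query):
        """Lowercase, tokenize on whitespace, and drop function words."""
        return [t for t in query.lower().split() if t not in FUNCTION_WORDS]

    def cosine(a, b):
        """Cosine similarity between two bags of terms."""
        ca, cb = Counter(a), Counter(b)
        dot = sum(ca[t] * cb[t] for t in ca)
        na = sqrt(sum(v * v for v in ca.values()))
        nb = sqrt(sum(v * v for v in cb.values()))
        return dot / (na * nb) if na and nb else 0.0

    def rank_candidates(original, candidates):
        """candidates maps query -> number of sessions containing it.
        Returns (query, score) pairs, best first."""
        orig = content_terms(original)
        scored = []
        for q, sessions in candidates.items():
            terms = content_terms(q)
            sim = cosine(orig, terms)
            # Drop zero-similarity candidates and candidates longer than 4x the original.
            if sim == 0.0 or len(terms) > 4 * len(orig):
                continue
            scored.append((q, sim * sessions))  # similarity weighted by session count
        return sorted(scored, key=lambda p: p[1], reverse=True)

    print(rank_candidates("how to plan a trip",
                          {"plan a trip itinerary": 12,   # kept and scored
                           "cheap flights": 30}))         # cosine 0, filtered out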

NORM_RUN1

Participants | Input | Appendix

  • Run ID: NORM_RUN1
  • Participant: WHU_IRGroup
  • Track: Tasks
  • Year: 2015
  • Submission: 8/28/2015
  • Task: web
  • MD5: 280a557a7004acdf177e5cf4786ff971
  • Run description: First run of the ad hoc task.
  • Code: https://github.com/hyyc116/Tasks_TREC

NP_TU

Participants | Input | Appendix

  • Run ID: NP_TU
  • Participant: WHU_IRGroup
  • Track: Tasks
  • Year: 2015
  • Submission: 8/28/2015
  • Task: understanding
  • MD5: 84f5d0e10a6980dcf5b5fc123fa7a00e
  • Run description: The first run of task understanding.
  • Code: https://github.com/hyyc116/Tasks_TREC

rsf

Participants | Input | Appendix

  • Run ID: rsf
  • Participant: oaqa
  • Track: Tasks
  • Year: 2015
  • Submission: 9/1/2015
  • Task: understanding
  • MD5: f53a8099a9dc93eb1326af070a8a75ea
  • Run description: Used the pretrained model (described in Zi Yang and Eric Nyberg, "Leveraging Procedural Knowledge for Task-oriented Search"), with results sorted by task retrieval scores first.
  • Code: http://github.com/ziy/pkb

TOPIC_RUN2

Participants | Input | Appendix

  • Run ID: TOPIC_RUN2
  • Participant: WHU_IRGroup
  • Track: Tasks
  • Year: 2015
  • Submission: 9/1/2015
  • Task: understanding
  • MD5: bacdcb912df88c303cd3943bb95e8e80
  • Run description: The second run of task understanding.
  • Code: https://github.com/hyyc116/Tasks_TREC

TOPIC_RUN2_TC

Participants | Input | Appendix

  • Run ID: TOPIC_RUN2_TC
  • Participant: WHU_IRGroup
  • Track: Tasks
  • Year: 2015
  • Submission: 9/2/2015
  • Task: completion
  • MD5: 2ac98ea557df20a50ab159f377435b39
  • Run description: The second run of task completion.
  • Code: https://github.com/hyyc116/Tasks_TREC

TOPIC_RUN3

Participants | Input | Appendix

  • Run ID: TOPIC_RUN3
  • Participant: WHU_IRGroup
  • Track: Tasks
  • Year: 2015
  • Submission: 9/1/2015
  • Task: understanding
  • MD5: 95eea92d21c15a2d629e2a0825e8b8bf
  • Run description: The third run of task understanding.
  • Code: https://github.com/hyyc116/Tasks_TREC

TOPIC_RUN3_TC

Participants | Input | Appendix

  • Run ID: TOPIC_RUN3_TC
  • Participant: WHU_IRGroup
  • Track: Tasks
  • Year: 2015
  • Submission: 9/2/2015
  • Task: completion
  • MD5: 03b0a07807261f8ea83135122a37dc11
  • Run description: The third run of task completion.
  • Code: https://github.com/hyyc116/Tasks_TREC

udelRun1

Participants | Input | Appendix

  • Run ID: udelRun1
  • Participant: udel
  • Track: Tasks
  • Year: 2015
  • Submission: 9/1/2015
  • Task: understanding
  • MD5: dedb066526e1282f68cb4802cb2b3b82
  • Run description: Extracts phrases from the ClueWeb12 collection and ranks them by their information retrieval potential.

udelRun2

Participants | Input | Appendix

  • Run ID: udelRun2
  • Participant: udel
  • Track: Tasks
  • Year: 2015
  • Submission: 9/1/2015
  • Task: understanding
  • MD5: b7b6d9b84a91a13b96a661657cd551b5
  • Run description: Trims the original queries when necessary to retrieve better phrases from the ClueWeb12 collection, then ranks the phrases by their information retrieval potential.

udelRun2CSpam

Participants | Input | Appendix

  • Run ID: udelRun2CSpam
  • Participant: udel
  • Track: Tasks
  • Year: 2015
  • Submission: 9/1/2015
  • Task: completion
  • MD5: 0c7a843e7419fd22cc871eec1d40f3af
  • Run description: Uses the task understanding run (udelRun2) to retrieve the top 100 documents for every key-phrase. These documents are pooled and ranked by the number of times each was retrieved, on the assumption that a document retrieved by multiple key-phrases is more relevant to the topic. A sketch of this pooling step follows below.
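
A minimal sketch of this vote-counting step, where retrieve_top100 is a hypothetical stand-in for the actual ClueWeb12 retrieval backend and the document IDs are toy values:

    from collections import Counter

    def pool_and_rank(keyphrases, retrieve_top100):
        """For each key-phrase, fetch its top-100 documents, then rank the
        pooled documents by how many key-phrases retrieved them."""
        votes = Counter()
        for phrase in keyphrases:
            for doc_id in retrieve_top100(phrase):
                votes[doc_id] += 1
        return votes.most_common()  # [(doc_id, retrieval_count), ...]

    # Toy stand-in for a real search backend.
    fake_index = {
        "plan a trip": ["d1", "d2", "d3"],
        "trip itinerary": ["d2", "d3", "d4"],
        "travel checklist": ["d3", "d5"],
    }
    print(pool_and_rank(fake_index, lambda p: fake_index[p]))
    # d3 is retrieved by all three key-phrases, so it ranks first.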

udelTTTUAOL

Participants | Input | Appendix

  • Run ID: udelTTTUAOL
  • Participant: udel
  • Track: Tasks
  • Year: 2015
  • Submission: 9/2/2015
  • Task: understanding
  • MD5: bf28baa44561839502c1e3205d4ca5fe
  • Run description: We used AOL queries, split into sessions by a simple time-based method (a sketch of one such method follows below).
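
One plausible reading of "a simple time-based method" is an inactivity-gap split. A minimal sketch follows; the 30-minute threshold is an assumption, as the run description does not state the gap used.

    from datetime import datetime, timedelta

    GAP = timedelta(minutes=30)  # assumed session timeout, not stated in the run description

    def split_sessions(events):
        """events: (user_id, query, timestamp) tuples, sorted by user then time.
        A new session starts when the user changes or the time gap exceeds GAP."""
        sessions, current, prev = [], [], None
        for user, query, ts in events:
            if prev and (user != prev[0] or ts - prev[1] > GAP):
                sessions.append(current)
                current = []
            current.append(query)
            prev = (user, ts)
        if current:
            sessions.append(current)
        return sessions

    log = [
        ("u1", "plan a trip",    datetime(2006, 3, 1, 9, 0)),
        ("u1", "trip itinerary", datetime(2006, 3, 1, 9, 5)),
        ("u1", "weather paris",  datetime(2006, 3, 1, 14, 0)),  # gap > 30 min: new session
        ("u2", "cheap flights",  datetime(2006, 3, 1, 9, 1)),   # new user: new session
    ]
    print(split_sessions(log))
    # [['plan a trip', 'trip itinerary'], ['weather paris'], ['cheap flights']]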

webis1

Participants | Proceedings | Input | Appendix

  • Run ID: webis1
  • Participant: Webis
  • Track: Tasks
  • Year: 2015
  • Submission: 9/1/2015
  • Task: understanding
  • MD5: 850edbfc7d9903da33d617a2dc419724
  • Run description: Extract related queries using: googleSuggest, bing, aol, interestgraph, wiki, google, freebase, wikidata, netspeak, chatnoir.

webisA1

Participants | Proceedings | Input | Appendix

  • Run ID: webisA1
  • Participant: Webis
  • Track: Tasks
  • Year: 2015
  • Submission: 9/1/2015
  • Task: web
  • MD5: 1ea60d8a03b7da218934d73e0b6ce1ae
  • Run description: Axiomatic approach applied to the chatnoir2 baseline.

webisA2

Participants | Proceedings | Input | Appendix

  • Run ID: webisA2
  • Participant: Webis
  • Track: Tasks
  • Year: 2015
  • Submission: 9/1/2015
  • Task: web
  • MD5: 308099f85a291af1dfde2354f298c638
  • Run description: The chatnoir2 baseline, excluding the top 20 results of run 1.

webisA3

Participants | Proceedings | Input | Appendix

  • Run ID: webisA3
  • Participant: Webis
  • Track: Tasks
  • Year: 2015
  • Submission: 9/1/2015
  • Task: web
  • MD5: c716bc7b05192f2cbcab69ddae905125
  • Run description: A tf-idf baseline, excluding the top 20 results of runs 1 and 2.

webisC1

Participants | Proceedings | Input | Appendix

  • Run ID: webisC1
  • Participant: Webis
  • Track: Tasks
  • Year: 2015
  • Submission: 9/1/2015
  • Task: completion
  • MD5: c25cfde2eaf407ad98b524950f3b8c5b
  • Run description: An interleaved combination of the component runs' results (a sketch of one interleaving scheme follows below).
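
One common interleaving scheme is round-robin over the component runs' ranked lists. The sketch below shows that scheme; whether webisC1 uses exactly this method is an assumption (see the Webis proceedings paper for the actual approach).

    def interleave(*runs):
        """Round-robin: take one unseen document from each run in turn."""
        seen, merged = set(), []
        for rank in range(max(map(len, runs))):
            for run in runs:
                if rank < len(run) and run[rank] not in seen:
                    seen.add(run[rank])
                    merged.append(run[rank])
        return merged

    print(interleave(["d1", "d2", "d3"], ["d2", "d4"], ["d5", "d1"]))
    # ['d1', 'd2', 'd5', 'd4', 'd3']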

webisC2

Participants | Proceedings | Input | Appendix

  • Run ID: webisC2
  • Participant: Webis
  • Track: Tasks
  • Year: 2015
  • Submission: 9/1/2015
  • Task: completion
  • MD5: c92368563964c6fd0565e62dae22c22e
  • Run description: A combination based on the top 10 results for each query, excluding the top 20 from run 1.

webisC3

Participants | Proceedings | Input | Appendix

  • Run ID: webisC3
  • Participant: Webis
  • Track: Tasks
  • Year: 2015
  • Submission: 9/1/2015
  • Task: completion
  • MD5: 5825ca72195a2f85821644a7417d44ac
  • Run description: Interleaved results, excluding the top 20 of runs 1 and 2.