Runs - Crowdsourcing 2013

Hrbust123

  • Run ID: Hrbust123
  • Participant: Hrbust
  • Track: Crowdsourcing
  • Year: 2013
  • Submission: 9/7/2013
  • Task: basic
  • MD5: 6810dfd18bb1222089cfe82994ff3db1
  • Run description: For the TREC 2013 Crowdsourcing Track, we propose a solution strategy based on multiple communication platforms and multiple types of crowds. To bring together a wide range of participants to support and take part in the crowdsourcing evaluation task, we promoted the task widely through popular social channels, including our website, SNS social networks, microblogs, WeChat, and instant messaging tools. We divided the crowd into three groups by degree of confidence (an Expert Group, a Trustee Group, and a Volunteer Group) to judge the probability of relevance of different topics and web documents on a six-point scale. The Expert Group judged all 3,470 topic-document pairs from 10 topics and asked their friends for additional judgments, which we treated as the Trustee Group's results. After that, we posted messages on the social networking platforms mentioned above. To keep judgments evenly distributed across topic-document pairs, our system randomly selected pairs and recommended those with fewer judgments to volunteers recruited from these platforms. In the end, we collected 15 judgments for each topic-document pair and took the most frequent label as the final label. For each pair, we computed the ratio of the sum of the deviations between the final label and its 15 label values to the product of the maximum possible deviation among the 6 relevance labels and the number of judgments; 1 minus this ratio is the probability of relevance assigned to the final label (a sketch of this aggregation appears after this list). Through our website, we carried out the online evaluation of Hrbust-Crowdsourcing.
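The aggregation described in the Hrbust123 run above can be summarized in a few lines. This is a minimal sketch in Python, not the participants' code: the 0..5 label range, the function name, and the sample judgments are assumptions made only to illustrate the computation.

    from collections import Counter

    def aggregate_pair(judgments, num_levels=6):
        # One topic-document pair with its 15 six-point judgments.
        # Assumption: labels are integers 0..num_levels-1, so the maximum
        # possible deviation between any two labels is num_levels - 1.
        final = Counter(judgments).most_common(1)[0][0]      # most frequent label
        total_dev = sum(abs(final - j) for j in judgments)   # sum of deviations from the final label
        worst_case = (num_levels - 1) * len(judgments)       # max deviation times number of judgments
        # 1 minus the deviation ratio is taken as the probability of relevance.
        return final, 1.0 - total_dev / worst_case

For example, aggregate_pair([5, 5, 5, 4, 5, 3, 5, 5, 4, 5, 5, 5, 2, 5, 5]) yields the final label 5 with probability 1 - 7/75, approximately 0.91.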

NEUPivot1

  • Run ID: NEUPivot1
  • Participant: NEUIR
  • Track: Crowdsourcing
  • Year: 2013
  • Submission: 9/8/2013
  • Task: basic
  • MD5: 1690d6dfd23aa0a2543575f8d9939b39
  • Run description: This run has been judged by one judge (a graduate student) using preference judgments.

pris

  • Run ID: pris
  • Participant: PRIS
  • Track: Crowdsourcing
  • Year: 2013
  • Submission: 9/6/2013
  • Task: basic
  • MD5: e9a30ca4421e398860aa0b5d7fb19a99
  • Run description: Our team consisted of 3 people. We spent 3 weeks completing the task, using the AMT (Amazon Mechanical Turk) platform.

udelJudge1

  • Run ID: udelJudge1
  • Participant: udel
  • Track: Crowdsourcing
  • Year: 2013
  • Submission: 9/5/2013
  • Task: standard
  • MD5: 16bb92fcaa60935fbd50f1959fd86ba4
  • Run description: Fully automated run using 3 search engines on the pool of documents. Rules based on the position of a document for a topic in each of the 3 systems' rankings determine whether the document is relevant or not (a sketch of such a rule appears after this list). Each udelJudgeX run has rules slightly different from the other udelJudgeY runs.
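The udelJudge descriptions do not state the exact rules, so the following is only a hypothetical sketch of a position-based rule of the kind described: the rank cutoff, the voting threshold, the data layout, and the function name are all assumptions chosen to illustrate how a document's rank in each of the 3 systems could determine a binary relevance label.

    def judge(doc_id, topic_id, rankings, cutoff=20, min_votes=2):
        # rankings: a list of 3 dicts, one per search engine, mapping
        # (topic_id, doc_id) -> rank position (1 = top of the ranking).
        # Hypothetical rule: the document counts as relevant if at least
        # min_votes of the 3 engines rank it within the top `cutoff`.
        votes = sum(
            1 for ranking in rankings
            if ranking.get((topic_id, doc_id), float("inf")) <= cutoff
        )
        return 1 if votes >= min_votes else 0

Under this reading, each udelJudgeX run would correspond to a slightly different setting of the cutoffs or combination rule; the actual rules used by udel are not given in the descriptions.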

udelJudge2

  • Run ID: udelJudge2
  • Participant: udel
  • Track: Crowdsourcing
  • Year: 2013
  • Submission: 9/5/2013
  • Task: standard
  • MD5: fbd9cae29f5c1366288f4c19fef35b81
  • Run description: Fully automated run using 3 search engines on the pool of documents. Rules based on the position of a document for a topic in each of the 3 systems' rankings determine whether the document is relevant or not. Each udelJudgeX run has rules slightly different from the other udelJudgeY runs.

udelJudge3

  • Run ID: udelJudge3
  • Participant: udel
  • Track: Crowdsourcing
  • Year: 2013
  • Submission: 9/5/2013
  • Task: standard
  • MD5: 79bbdf750b2bed29e0944f6087d5b465
  • Run description: Fully automated run using 3 search engines on the pool of documents. Rules based on the position of a document for a topic in each of the 3 systems' rankings determine whether the document is relevant or not. Each udelJudgeX run has rules slightly different from the other udelJudgeY runs.

udelJudge4

  • Run ID: udelJudge4
  • Participant: udel
  • Track: Crowdsourcing
  • Year: 2013
  • Submission: 9/5/2013
  • Task: standard
  • MD5: d139cd6318ff0d37823e4933f1db11b2
  • Run description: Fully automated run using 3 search engines on the pool of documents. Rules based on the position of a document for a topic in each of the 3 systems' rankings determine whether the document is relevant or not. Each udelJudgeX run has rules slightly different from the other udelJudgeY runs.

udelJudge5

  • Run ID: udelJudge5
  • Participant: udel
  • Track: Crowdsourcing
  • Year: 2013
  • Submission: 9/5/2013
  • Task: standard
  • MD5: 79b1b10e9d37bbd252261763b09a3046
  • Run description: Fully automated run using 3 search engines on the pool of documents. Rules based on the position of a document for a topic in each of the 3 systems' rankings determine whether the document is relevant or not. Each udelJudgeX run has rules slightly different from the other udelJudgeY runs.

udelJudge6

  • Run ID: udelJudge6
  • Participant: udel
  • Track: Crowdsourcing
  • Year: 2013
  • Submission: 9/5/2013
  • Task: standard
  • MD5: 9c8c939fced6c7dbd314a7e9a5472559
  • Run description: Fully automated run using 3 search engines on the pool of documents. Rules based on the position of a document for a topic in each of the 3 systems' rankings determine whether the document is relevant or not. Each udelJudgeX run has rules slightly different from the other udelJudgeY runs.

udelJudge7

  • Run ID: udelJudge7
  • Participant: udel
  • Track: Crowdsourcing
  • Year: 2013
  • Submission: 9/5/2013
  • Task: standard
  • MD5: 6d49b1bdfb2c73cc57f984afe59d9402
  • Run description: Fully automated run using 3 search engines on the pool of documents. Rules based on the position of a document for a topic in each of the 3 systems' rankings determine whether the document is relevant or not. Each udelJudgeX run has rules slightly different from the other udelJudgeY runs.

udelJudge8

  • Run ID: udelJudge8
  • Participant: udel
  • Track: Crowdsourcing
  • Year: 2013
  • Submission: 9/5/2013
  • Task: standard
  • MD5: 63cce6e2a9fa788f3bc0485f05c4e90f
  • Run description: Fully automated run using 3 search engines on the pool of documents. Rules based on the position of a document for a topic in each of the 3 systems' rankings determine whether the document is relevant or not. Each udelJudgeX run has rules slightly different from the other udelJudgeY runs.