Runs - Million Query 2009

iiithAuEQ

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: iiithAuEQ
  • Participant: SIEL
  • Track: Million Query
  • Year: 2009
  • Submission: 8/19/2009
  • Type: automatic
  • MD5: 399ab6a8153328e6d2e6dadd04f63ed5
  • Run description: We built the base system on top of Lucene using Hadoop infrastructure. Salient features of this run, covering the first 10,657 queries, include: - Authoritative Boost: having identified high-authority domains, a special boost is given to the 'first' relevant hit from these domains. - De-duplication of results: closely similar pages are prevented from being listed in succession. - Distributed Search: parallel search over split indices using RMI-based distributed search.
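The de-duplication step is not specified beyond the description above; a minimal sketch of one way to keep closely similar pages from being listed in succession, using shingle-based Jaccard similarity (the helper names, the shingle size, and the threshold are assumptions, not the SIEL implementation):

```python
def shingles(text, k=3):
    """Word k-grams ('shingles') used to compare pages for near-duplication."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def dedup_ranking(docs, threshold=0.8):
    """Drop a result when it is near-identical to the previously kept result,
    so closely similar pages are not listed in succession."""
    kept, prev = [], None
    for doc in docs:
        s = shingles(doc)
        if prev is None or jaccard(s, prev) < threshold:
            kept.append(doc)
            prev = s
    return kept
```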

iiithAuthPN

  • Run ID: iiithAuthPN
  • Participant: SIEL
  • Track: Million Query
  • Year: 2009
  • Submission: 8/17/2009
  • Type: automatic
  • MD5: 400605149b91f77583f80df3edff38ed
  • Run description: We built the base system on top of Lucene using Hadoop infrastructure. Salient features of this run, covering the first 1000 queries, include: - Query Expansion: sub-queries containing named-entity terms from the original query are given a higher boost. - De-duplication of results: closely similar pages are prevented from being listed in succession. - Authoritative Boost: having identified high-authority domains, a special boost is given to the 'first' relevant hit from these domains. - Distributed Search: parallel search over split indices using RMI-based distributed search.
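The description does not say how the boosted sub-queries are constructed; a minimal sketch in Lucene's `term^boost` syntax, with a caller-supplied named-entity set (the function name and the boost value are hypothetical):

```python
def build_boosted_query(query_terms, named_entities, ne_boost=2.0):
    """Build a Lucene-style boolean query string in which terms that belong
    to a named entity receive a higher boost (term^boost syntax)."""
    parts = []
    for term in query_terms:
        if term in named_entities:
            parts.append(f"{term}^{ne_boost}")  # boosted named-entity clause
        else:
            parts.append(term)
    return " OR ".join(parts)
```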

iiithExpQry

  • Run ID: iiithExpQry
  • Participant: SIEL
  • Track: Million Query
  • Year: 2009
  • Submission: 8/19/2009
  • Type: automatic
  • MD5: 280bfba929def2c53f27f2ca29735178
  • Run description: We built the base system on top of Lucene using Hadoop infrastructure. Salient features of this run, covering the first 2451 queries, include: - Query Expansion: sub-queries containing named-entity terms from the original query are given a higher boost. - De-duplication of results: closely similar pages are prevented from being listed in succession; we tried a slightly more constrained de-duplication heuristic to avoid redundancies. - Distributed Search: parallel search over split indices using RMI-based distributed search.

irra1mqa

  • Run ID: irra1mqa
  • Participant: IRRA
  • Track: Million Query
  • Year: 2009
  • Submission: 8/14/2009
  • Type: automatic
  • MD5: 8ee8ee8ee217079b63b2ba37ec84a1dd
  • Run description: This is the base IRRA run.

irra1mqd

  • Run ID: irra1mqd
  • Participant: IRRA
  • Track: Million Query
  • Year: 2009
  • Submission: 8/14/2009
  • Type: automatic
  • MD5: 2b670435f567f90572eba1baae0e7386
  • Run description: This is a variation of the IRRA base run.

irra2mqa

  • Run ID: irra2mqa
  • Participant: IRRA
  • Track: Million Query
  • Year: 2009
  • Submission: 8/14/2009
  • Type: automatic
  • MD5: a80cab1a84410ba62c1ff02598263cc0
  • Run description: This is another variation of the base IRRA run.

irra2mqd

  • Run ID: irra2mqd
  • Participant: IRRA
  • Track: Million Query
  • Year: 2009
  • Submission: 8/14/2009
  • Type: automatic
  • MD5: c866ba2a0e1a2fe8db2db7200d5eaffa
  • Run description: This is another variation of the base IRRA run.

irra3mqd

  • Run ID: irra3mqd
  • Participant: IRRA
  • Track: Million Query
  • Year: 2009
  • Submission: 8/14/2009
  • Type: automatic
  • MD5: 5a6fc83245ad8da714780605aa66e9e2
  • Run description: This is another variation of the base IRRA run.

NeuSvmBase

  • Run ID: NeuSvmBase
  • Participant: NEU
  • Track: Million Query
  • Year: 2009
  • Submission: 8/18/2009
  • Type: automatic
  • MD5: 24fd0f9b993d71684e4dd6cb535f200d
  • Run description: 1. Run indri to retrieve 2000 documents. 2. Extract features. 3. Run svm-light (ranking mode), trained on MQ08 data.
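svm-light's ranking mode expects one line per document in the form `<target> qid:<qid> <feature>:<value> ...`; a small sketch of writing extracted features in that format (the helper name and feature layout are assumptions, not the NEU feature extractor):

```python
def svmlight_ranking_lines(qid, docs):
    """Format (relevance, feature_vector) pairs as svm-light ranking lines:
    '<target> qid:<qid> 1:<f1> 2:<f2> ...' with 1-indexed features."""
    lines = []
    for rel, feats in docs:
        feat_str = " ".join(f"{i}:{v}" for i, v in enumerate(feats, start=1))
        lines.append(f"{rel} qid:{qid} {feat_str}")
    return lines
```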

NeuSvmHE

  • Run ID: NeuSvmHE
  • Participant: NEU
  • Track: Million Query
  • Year: 2009
  • Submission: 8/18/2009
  • Type: automatic
  • MD5: b68640b19c8aa059682b014ad5dbafa1
  • Run description: 1. Run indri to retrieve 2000 documents. 2. Extract features. 3. Predict hard/easy using the Jensen-Shannon divergence among ranking features. 4. Train svm-light (ranking mode) on hard queries from MQ08 data; apply it to predicted hard queries. 5. Train svm-light (ranking mode) on easy queries from MQ08 data; apply it to predicted easy queries.
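Step 3 relies on the Jensen-Shannon divergence; a self-contained sketch over two discrete distributions (how NEU turned ranking features into distributions is not stated here):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence between discrete distributions (0*log 0 := 0)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetrized KL against the mixture,
    bounded above by ln 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```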

NeuSvmPR

  • Run ID: NeuSvmPR
  • Participant: NEU
  • Track: Million Query
  • Year: 2009
  • Submission: 8/18/2009
  • Type: automatic
  • MD5: b1b9a5c7766e9574aec20afa902fef53
  • Run description: 1. Run indri to retrieve 2000 documents. 2. Extract features. 3. Predict precision/recall for each query using an SVM classifier trained on MQ08 query precision/recall tags. 4. Train svm-light (ranking mode) on precision-oriented queries from MQ08 data; apply it to predicted precision-oriented queries. 5. Train svm-light (ranking mode) on recall-oriented queries from MQ08 data; apply it to predicted recall-oriented queries.

NeuSvmPRHE

  • Run ID: NeuSvmPRHE
  • Participant: NEU
  • Track: Million Query
  • Year: 2009
  • Submission: 8/18/2009
  • Type: automatic
  • MD5: 76b39437c4898ab700f1bad8693abf50
  • Run description: 1. Run indri to retrieve 2000 documents. 2. Extract features. 3. Predict hard/easy using an SVM classifier trained on MQ08 hard/easy tags. 4. Predict precision/recall using an SVM classifier trained on MQ08 precision/recall tags. 5. Train svm-light (ranking mode) on hard&precision, hard&recall, easy&precision, and easy&recall queries from MQ08 data; for each query, use the appropriate SVM model according to the predictions from steps 3 and 4.

NeuSvmStefan

  • Run ID: NeuSvmStefan
  • Participant: NEU
  • Track: Million Query
  • Year: 2009
  • Submission: 8/18/2009
  • Type: automatic
  • MD5: 4cefba5ee43fcf21b961e63d546d6a78
  • Run description: 1. Run indri to retrieve 2000 documents. 2. Extract features. 3. Rank documents with svm-light (ranking mode) trained on a different query log from a major web search engine, using the overlap between the top 100 documents retrieved by Microsoft Bing and the top 2000 documents retrieved by indri as the training signal.

Sab9mq1bf1

  • Run ID: Sab9mq1bf1
  • Participant: SABIR
  • Track: Million Query
  • Year: 2009
  • Submission: 8/14/2009
  • Type: automatic
  • MD5: 555cb170d7f7c19e283cec0a4bcb2139
  • Run description: Blind feedback. ltu.Lnu initial run; top 25 docs, add 20 terms. Rocchio with a,b,c weights of 32,64,128; all other docs considered non-relevant. Simple classification: anchormap for difficulty, ratio of similarity at rank 20 to similarity at rank 0 for precision. Sort topics, label top 25% and bottom 25% in each class.
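The Rocchio update with a,b,c weights of 32,64,128 follows q' = a*q + b*mean(rel) - c*mean(nonrel); a minimal sketch over term-weight dictionaries (the dictionary representation and the clipping of negative weights are assumptions, not SMART internals):

```python
def rocchio(query_vec, rel_vecs, nonrel_vecs, a=32, b=64, c=128):
    """Rocchio feedback: q' = a*q + b*mean(rel) - c*mean(nonrel).
    Vectors are term->weight dicts; negative weights are clipped to zero."""
    terms = set(query_vec)
    for v in rel_vecs + nonrel_vecs:
        terms |= set(v)
    new_q = {}
    for t in terms:
        w = a * query_vec.get(t, 0.0)
        if rel_vecs:
            w += b * sum(v.get(t, 0.0) for v in rel_vecs) / len(rel_vecs)
        if nonrel_vecs:
            w -= c * sum(v.get(t, 0.0) for v in nonrel_vecs) / len(nonrel_vecs)
        if w > 0:
            new_q[t] = w
    return new_q
```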

Sab9mq1bf4

  • Run ID: Sab9mq1bf4
  • Participant: SABIR
  • Track: Million Query
  • Year: 2009
  • Submission: 8/17/2009
  • Type: automatic
  • MD5: f1aba564ceae5e703122921714d7ed8e
  • Run description: Blind feedback. ltu.Lnu initial run; top 15 docs, add 5 terms. Rocchio with a,b,c weights of 32,64,128; all other docs considered non-relevant. Simple classification: anchormap for difficulty, ratio of similarity at rank 20 to similarity at rank 0 for precision. Sort topics, label top 25% and bottom 25% in each class.

Sab9mq2bf1

  • Run ID: Sab9mq2bf1
  • Participant: SABIR
  • Track: Million Query
  • Year: 2009
  • Submission: 8/14/2009
  • Type: automatic
  • MD5: 2fd81d636b4eae0bc91567244dd15c6e
  • Run description: Blind feedback. ltu.Lnu initial run; top 25 docs, add 20 terms. Rocchio with a,b,c weights of 32,8,0; relevant-document weights are ltu. Simple classification: anchormap for difficulty, ratio of similarity at rank 20 to similarity at rank 0 for precision. Sort topics, label top 20% and bottom 20% in each class.

Sab9mqBase1

  • Run ID: Sab9mqBase1
  • Participant: SABIR
  • Track: Million Query
  • Year: 2009
  • Submission: 8/14/2009
  • Type: automatic
  • MD5: e1ddfc77fb5d8c671351bd77e721db5e
  • Run description: Base-case SMART run on priority-1 queries (1000 topics), ltu.Lnu.

Sab9mqBase4

  • Run ID: Sab9mqBase4
  • Participant: SABIR
  • Track: Million Query
  • Year: 2009
  • Submission: 8/13/2009
  • Type: automatic
  • MD5: a365ab138872e40b89a4a2ca73dce896
  • Run description: Basic SMART ltu-Lnu run, no expansion

udelIndDM

  • Run ID: udelIndDM
  • Participant: UDel
  • Track: Million Query
  • Year: 2009
  • Submission: 8/17/2009
  • Type: automatic
  • MD5: 38d101ac5bf9b1315f2ff8f3f279b93d
  • Run description: indri run with Metzler & Croft dependence modeling. parameters trained in a quasi-semi-supervised fashion using last year's MQ data.
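Metzler & Croft's sequential dependence model is conventionally expressed as an Indri `#weight` query combining unigrams, exact bigrams, and unordered windows; a sketch of building such a query string (the 0.85/0.1/0.05 weights are the commonly cited defaults, not necessarily UDel's trained values):

```python
def sdm_query(terms, w_t=0.85, w_o=0.1, w_u=0.05):
    """Indri-style sequential dependence model query: a #weight combination
    of unigrams, exact bigrams (#1), and unordered windows (#uw8)."""
    uni = " ".join(terms)
    bigrams = " ".join(f"#1({a} {b})" for a, b in zip(terms, terms[1:]))
    windows = " ".join(f"#uw8({a} {b})" for a, b in zip(terms, terms[1:]))
    return (f"#weight( {w_t} #combine({uni}) "
            f"{w_o} #combine({bigrams}) "
            f"{w_u} #combine({windows}) )")
```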

udelIndPR

  • Run ID: udelIndPR
  • Participant: UDel
  • Track: Million Query
  • Year: 2009
  • Submission: 8/17/2009
  • Type: automatic
  • MD5: f1a7e984646bfd44b4b7eb6f45661419
  • Run description: basic indri run with a PageRank document prior.
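A document prior combines with the retrieval score multiplicatively, i.e. additively in log space; a minimal sketch of applying a PageRank prior this way (the `alpha` weight is illustrative, not a documented Indri parameter, and the PageRank values are assumed normalized to probabilities):

```python
import math

def score_with_prior(query_log_likelihood, pagerank, alpha=1.0):
    """Combine a retrieval log-probability with a document prior in log space:
    score = log P(q|d) + alpha * log P(d), with P(d) a normalized PageRank."""
    return query_log_likelihood + alpha * math.log(pagerank)
```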

udelIndri

  • Run ID: udelIndri
  • Participant: UDel
  • Track: Million Query
  • Year: 2009
  • Submission: 8/17/2009
  • Type: automatic
  • MD5: 45974b988c82d9373822a9b7a7730895
  • Run description: baseline indri run. parameters trained in a quasi-semi-supervised fashion using last year's MQ data.

udelIndRM

  • Run ID: udelIndRM
  • Participant: UDel
  • Track: Million Query
  • Year: 2009
  • Submission: 8/17/2009
  • Type: automatic
  • MD5: a9051ae217233a8ce18b047fec02120b
  • Run description: indri run with Lavrenko & Croft relevance models. parameters trained in a quasi-semi-supervised fashion using last year's MQ data.
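Lavrenko & Croft's relevance model (RM1) estimates P(w|R) by weighting each feedback document's language model by that document's query likelihood; a minimal dictionary-based sketch (normalization choice is an assumption):

```python
def relevance_model(doc_models, query_likelihoods):
    """RM1 estimate: P(w|R) proportional to sum over feedback docs of
    P(w|d) * P(q|d), normalized by the total query likelihood."""
    total = sum(query_likelihoods)
    rm = {}
    for model, ql in zip(doc_models, query_likelihoods):
        for w, p in model.items():
            rm[w] = rm.get(w, 0.0) + p * ql / total
    return rm
```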

udelIndSP

  • Run ID: udelIndSP
  • Participant: UDel
  • Track: Million Query
  • Year: 2009
  • Submission: 8/17/2009
  • Type: automatic
  • MD5: 1bd75edda92bccd826d62de803f4d744
  • Run description: basic indri run with a "domain trust" document prior. "domain trust" is based on the frequencies of occurrence of a domain on external, publicly-available URL and sendmail whitelists and blacklists.
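How list frequencies become a prior is not specified beyond the description above; one plausible, clearly hypothetical sketch is a Laplace-smoothed whitelist fraction per domain:

```python
def domain_trust_prior(domain, whitelist_hits, blacklist_hits, smoothing=1.0):
    """Hypothetical 'domain trust' prior: the Laplace-smoothed fraction of a
    domain's list occurrences that are on whitelists; unseen domains get 0.5."""
    w = whitelist_hits.get(domain, 0)
    b = blacklist_hits.get(domain, 0)
    return (w + smoothing) / (w + b + 2 * smoothing)
```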

UDMQAxBL

  • Run ID: UDMQAxBL
  • Participant: EceUdel
  • Track: Million Query
  • Year: 2009
  • Submission: 8/19/2009
  • Type: automatic
  • MD5: 3fb332a6966326b6831a83a0ceb34872
  • Run description: Axiomatic method.

UDMQAxBLlink

  • Run ID: UDMQAxBLlink
  • Participant: EceUdel
  • Track: Million Query
  • Year: 2009
  • Submission: 8/19/2009
  • Type: automatic
  • MD5: d5eb91470275a8bad081caea9bc1a7a1
  • Run description: Axiomatic method with anchor text.

UDMQAxQE

  • Run ID: UDMQAxQE
  • Participant: EceUdel
  • Track: Million Query
  • Year: 2009
  • Submission: 8/19/2009
  • Type: automatic
  • MD5: 3a119ddd7ecc819c89d9d76dbf32137e
  • Run description: Axiomatic method with query expansion

UDMQAxQEWeb

  • Run ID: UDMQAxQEWeb
  • Participant: EceUdel
  • Track: Million Query
  • Year: 2009
  • Submission: 8/19/2009
  • Type: automatic
  • MD5: 2661bf759def0155490b42a725812b27
  • Run description: Axiomatic method and query expansion with web data.

UDMQAxQEWP

  • Run ID: UDMQAxQEWP
  • Participant: EceUdel
  • Track: Million Query
  • Year: 2009
  • Submission: 8/19/2009
  • Type: automatic
  • MD5: 455f3abf8ff962c5bda33f55c6749237
  • Run description: Axiomatic method and query expansion with Wikipedia collection

uiuc09Adpt

  • Run ID: uiuc09Adpt
  • Participant: UIUC
  • Track: Million Query
  • Year: 2009
  • Submission: 8/20/2009
  • Type: automatic
  • MD5: 326979c28b953f850392fb1cf851c103
  • Run description: An adaptive retrieval model based on query classification results

uiuc09GProx

  • Run ID: uiuc09GProx
  • Participant: UIUC
  • Track: Million Query
  • Year: 2009
  • Submission: 8/20/2009
  • Type: automatic
  • MD5: 8cc34865407e7d4be6a88935ada0fd83
  • Run description: A generative proximity feedback model

uiuc09KL

  • Run ID: uiuc09KL
  • Participant: UIUC
  • Track: Million Query
  • Year: 2009
  • Submission: 8/20/2009
  • Type: automatic
  • MD5: 0318d7548edf759c1e30932a63e4a89e
  • Run description: Basic KL-divergence retrieval model
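With Dirichlet smoothing, the KL-divergence retrieval model is rank-equivalent to scoring each document by the query model's cross-entropy against the smoothed document language model; a minimal sketch (the `mu=2500` default and the handling of collection-unseen terms are assumptions):

```python
import math
from collections import Counter

def kl_score(query, doc, collection_tf, collection_len, mu=2500):
    """Rank-equivalent KL-divergence score: the query model's cross-entropy
    against a Dirichlet-smoothed document language model."""
    doc_tf, doc_len = Counter(doc), len(doc)
    q_tf, q_len = Counter(query), len(query)
    score = 0.0
    for term, qf in q_tf.items():
        p_c = collection_tf.get(term, 0) / collection_len  # collection LM
        if p_c == 0:
            continue  # term unseen in the collection
        p_d = (doc_tf.get(term, 0) + mu * p_c) / (doc_len + mu)  # Dirichlet
        score += (qf / q_len) * math.log(p_d)
    return score
```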

uiuc09MProx

  • Run ID: uiuc09MProx
  • Participant: UIUC
  • Track: Million Query
  • Year: 2009
  • Submission: 8/20/2009
  • Type: automatic
  • MD5: 23c490709d9e1c94840ca670c81f7dcd
  • Run description: A mixture proximity feedback model

uiuc09RegQL

  • Run ID: uiuc09RegQL
  • Participant: UIUC
  • Track: Million Query
  • Year: 2009
  • Submission: 8/20/2009
  • Type: automatic
  • MD5: d877c154d76f71b97a08ccf16d47967f
  • Run description: Regularized query likelihood for document weighting to improve the relevance model

uogTRMQdpA10

  • Run ID: uogTRMQdpA10
  • Participant: uogTr
  • Track: Million Query
  • Year: 2009
  • Submission: 8/20/2009
  • Type: automatic
  • MD5: 4eea95bf8b295427f04c4e3984c7c809
  • Run description: Parameter free DFR model with anchor text. 10k queries.

uogTRMQdph40

  • Run ID: uogTRMQdph40
  • Participant: uogTr
  • Track: Million Query
  • Year: 2009
  • Submission: 8/20/2009
  • Type: automatic
  • MD5: 9b43b1cf080131822319a2ae9511288b
  • Run description: Parameter free DFR model. 40k queries.