Runs - Session 2012

ACombSnip.RL1

Results | Participants | Input | Summary | Appendix

  • Run ID: ACombSnip.RL1
  • Participant: udel
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL1
  • MD5: 431473f4daae9a3f33096e1e21b02366
  • Run description: Sequential Dependency + Combining Related Queries + Query Expansion using Snippets and Clicked Snippets
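
As a rough illustration of the "Sequential Dependency" component named in this and the other udel run descriptions, the sketch below builds an Indri-style sequential dependence (SDM) query. The 0.85/0.10/0.05 weights, the 8-term unordered window, and the helper name sdm_query are illustrative assumptions, not details reported for this run.

    # Sketch: build an Indri-style sequential dependence model (SDM) query.
    # The 0.85/0.10/0.05 weights and the 8-term unordered window are common
    # defaults from the SDM literature, assumed here rather than taken from this run.
    def sdm_query(query, w_uni=0.85, w_ord=0.10, w_unord=0.05, window=8):
        terms = query.split()
        if len(terms) < 2:
            return f"#combine({query})"
        bigrams = list(zip(terms, terms[1:]))
        unigrams = " ".join(terms)
        ordered = " ".join(f"#1({a} {b})" for a, b in bigrams)
        unordered = " ".join(f"#uw{window}({a} {b})" for a, b in bigrams)
        return (f"#weight( {w_uni} #combine({unigrams}) "
                f"{w_ord} #combine({ordered}) "
                f"{w_unord} #combine({unordered}) )")

    print(sdm_query("hawaiian volcanic eruptions"))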

ACombSnip.RL2

Results | Participants | Input | Summary | Appendix

  • Run ID: ACombSnip.RL2
  • Participant: udel
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL2
  • MD5: 710fc8ed5534c73e8c3faaf0eb4f512c
  • Run description: Sequential Dependency + Combining Related Queries + Query Expansion using Snippets and Clicked Snippets

ACombSnip.RL3

Results | Participants | Input | Summary | Appendix

  • Run ID: ACombSnip.RL3
  • Participant: udel
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL3
  • MD5: b4a4a86b61a89daec3aef079cccfc6d6
  • Run description: Sequential Dependency + Combining Related Queries + Query Expansion using Snippets and Clicked Snippets

ACombSnip.RL4

Results | Participants | Input | Summary | Appendix

  • Run ID: ACombSnip.RL4
  • Participant: udel
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL4
  • MD5: 848adae37ad459e434085926580c9b3c
  • Run description: Sequential Dependency + Combining Related Queries + Query Expansion using Snippets and Clicked Snippets

BDocExpDoc.RL1

Results | Participants | Input | Summary | Appendix

  • Run ID: BDocExpDoc.RL1
  • Participant: udel
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL1
  • MD5: 9e44de3aa8da3a381f79f95c1d9d9e92
  • Run description: Sequential Dependency + Query Expansion from Previous Queries, Documents and Clicked Documents

BDocExpDoc.RL2

Results | Participants | Input | Summary | Appendix

  • Run ID: BDocExpDoc.RL2
  • Participant: udel
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL2
  • MD5: ae6b0611c4ec13cc39cf67cbee8826ff
  • Run description: Sequential Dependency + Query Expansion from Previous Queries, Documents and Clicked Documents

BDocExpDoc.RL3

Results | Participants | Input | Summary | Appendix

  • Run ID: BDocExpDoc.RL3
  • Participant: udel
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL3
  • MD5: 8b85596685108900c964028332c22965
  • Run description: Sequential Dependency + Query Expansion from Previous Queries, Documents and Clicked Documents

BDocExpDoc.RL4

Results | Participants | Input | Summary | Appendix

  • Run ID: BDocExpDoc.RL4
  • Participant: udel
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL4
  • MD5: 5dc8c621de33b168b7b38ec9088a443c
  • Run description: Sequential Dependency + Query Expansion from Previous Queries, Documents and Clicked Documents

CWIrun1.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CWIrun1.RL1
  • Participant: CWI
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL1
  • MD5: 36a347932fb21e6d46ca8596aaf10e10
  • Run description: This run tests a novel logic-probabilistic model: task RL1 uses an uninformative prior

CWIrun1.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CWIrun1.RL2
  • Participant: CWI
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL2
  • MD5: f85cab553ce7d1bde01b0e55e3b92fe3
  • Run description: This run tests a novel logic-probabilistic model: task RL2 uses an uninformative prior updated with a query event whose likelihood is estimated by means of a Kolmogorov-Smirnov test

CWIrun1.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CWIrun1.RL3
  • Participant: CWI
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL3
  • MD5: 20be2926572fbcfa31525bc6163e1468
  • Run description: This run tests a novel logic-probabilistic model: task RL3 uses a prior extended with result lists and updated with a query event whose likelihood is estimated by means of a Kolmogorov-Smirnov test.

CWIrun1.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CWIrun1.RL4
  • Participant: CWI
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL4
  • MD5: 20be2926572fbcfa31525bc6163e1468
  • Run description: This run tests a novel logic-probabilistic model: task RL4 uses a prior extended with result lists and updated with a query event whose likelihood is estimated by means of a Kolmogorov-Smirnov test.

CWIrun3.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CWIrun3.RL1
  • Participant: CWI
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL1
  • MD5: 74697df95fd01c63206efeb29f79d3f6
  • Run description: Logic-probabilistic model with an uninformative prior based on a vocabulary extracted from the top 3 documents retrieved in response to the topic descriptions

CWIrun3.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CWIrun3.RL2
  • Participant: CWI
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL2
  • MD5: 912b3458f8afe825e784af7119e152e0
  • Run description: Logic-probabilistic model with an uninformative prior based on a vocabulary extracted from the top 3 documents retrieved in response to the topic descriptions. In RL2 the prior is updated with the observed queries plus a likelihood function for the query event

CWIrun3.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CWIrun3.RL3
  • Participant: CWI
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL3
  • MD5: ac018369b015119d322056852b372786
  • Run description: Logic-probabilistic model with an uninformative prior based on a vocabulary extracted from the top 3 documents retrieved in response to the topic descriptions. In RL3 the prior is extended with the subtopics derived from the result lists

CWIrun3.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CWIrun3.RL4
  • Participant: CWI
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL4
  • MD5: f16d9c5c8bbfdf4886d3fe2b21a794e9
  • Run description: Logic-probabilistic model with an uninformative prior based on a vocabulary extracted from the top 3 documents retrieved in response to the topic descriptions. In RL4 the prior is extended with the subtopics derived from the click lists

essexSAnchor.RL1

Results | Participants | Input | Summary | Appendix

  • Run ID: essexSAnchor.RL1
  • Participant: UoE
  • Track: Session
  • Year: 2012
  • Submission: 8/24/2012
  • Type: automatic
  • Task: RL1
  • MD5: c71f77b8c8a437e1b648bf66ae193070
  • Run description: RL1 is simply generated by using the current query and the maximum likelihood language model. Waterloo spam rankings were used to filter spam as a post-retrieval process.

essexSAnchor.RL2

Results | Participants | Input | Summary | Appendix

  • Run ID: essexSAnchor.RL2
  • Participant: UoE
  • Track: Session
  • Year: 2012
  • Submission: 8/24/2012
  • Type: automatic
  • Task: RL2
  • MD5: 718cee4017ff0e73bbf6ff82b297df7c
  • Run description: RL2 was generated by expanding the current query with expansions inferred using all the queries in the session. Association rules were applied on query logs (simulated from anchor logs) to derive the expansions.

essexSAnchor.RL3

Results | Participants | Input | Summary | Appendix

  • Run ID: essexSAnchor.RL3
  • Participant: UoE
  • Track: Session
  • Year: 2012
  • Submission: 8/24/2012
  • Type: automatic
  • Task: RL3
  • MD5: 151b770848dba83fba465b755c18c526
  • Run description: RL3 was generated by expanding the current query with expansions inferred from the anchor text of documents displayed to the user throughout the session.

essexSAnchor.RL4

Results | Participants | Input | Summary | Appendix

  • Run ID: essexSAnchor.RL4
  • Participant: UoE
  • Track: Session
  • Year: 2012
  • Submission: 8/24/2012
  • Type: automatic
  • Task: RL4
  • MD5: 0401f3837d355b1fbe497073d6385991
  • Run description: RL4 was generated by expanding the current query with expansions inferred from the anchor text of documents clicked by the user throughout the session.

essexSWiki.RL1

Results | Participants | Input | Summary | Appendix

  • Run ID: essexSWiki.RL1
  • Participant: UoE
  • Track: Session
  • Year: 2012
  • Submission: 8/24/2012
  • Type: automatic
  • Task: RL1
  • MD5: 159016359d1b1c06df34ad2a9b9906a9
  • Run description: RL1 is simply generated by using the current query and the maximum likelihood language model. Waterloo spam rankings were used to filter spam as a post-retrieval process.

essexSWiki.RL2

Results | Participants | Input | Summary | Appendix

  • Run ID: essexSWiki.RL2
  • Participant: UoE
  • Track: Session
  • Year: 2012
  • Submission: 8/24/2012
  • Type: automatic
  • Task: RL2
  • MD5: e222504c5d7db02f262ba1c31e87831c
  • Run description: RL2 was generated by using the current query and the term 'wikipedia' as an expansion!

essexSWiki.RL3

Results | Participants | Input | Summary | Appendix

  • Run ID: essexSWiki.RL3
  • Participant: UoE
  • Track: Session
  • Year: 2012
  • Submission: 8/24/2012
  • Type: automatic
  • Task: RL3
  • MD5: e222504c5d7db02f262ba1c31e87831c
  • Run description: RL3 was generated by using the current query and the term 'wikipedia' as an expansion!

essexSWiki.RL4

Results | Participants | Input | Summary | Appendix

  • Run ID: essexSWiki.RL4
  • Participant: UoE
  • Track: Session
  • Year: 2012
  • Submission: 8/24/2012
  • Type: automatic
  • Task: RL4
  • MD5: e222504c5d7db02f262ba1c31e87831c
  • Run description: RL4 was generated by using the current query and the term 'wikipedia' as an expansion!

guphrase1.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: guphrase1.RL1
  • Participant: Georgetown
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL1
  • MD5: bc0bca90daadf71569a285bd00a4db9b
  • Run description: find phrases in the current query from feedback

guphrase1.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: guphrase1.RL2
  • Participant: Georgetown
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL2
  • MD5: 383220988e845d923cbc4b0a3ce59267
  • Run description: find phrases in the previous and current queries from feedback

guphrase1.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: guphrase1.RL3
  • Participant: Georgetown
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL3
  • MD5: 526909ccdbc89401e13e22da274154a4
  • Run description: select previous queries by discovering the user's intention

guphrase1.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: guphrase1.RL4
  • Participant: Georgetown
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL4
  • MD5: 526909ccdbc89401e13e22da274154a4
  • Run description: select previous queries by discovering the user's intention from click data

guphrase2.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: guphrase2.RL1
  • Participant: Georgetown
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL1
  • MD5: cd29bc35adf7ee2e8652c98eb505d221
  • Run description: group the current query into phrases based on the feedback

guphrase2.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: guphrase2.RL2
  • Participant: Georgetown
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL2
  • MD5: a378fae6d2660872e4d438e619a43a73
  • Run description: group the previous and current queries into phrases based on the feedback

guphrase2.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: guphrase2.RL3
  • Participant: Georgetown
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL3
  • MD5: 2bfedfe37056c17757a1f366baacbb6a
  • Run description: select the previous results by discovering the user's intention

guphrase2.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: guphrase2.RL4
  • Participant: Georgetown
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL4
  • MD5: 2bfedfe37056c17757a1f366baacbb6a
  • Run description: select the previous results by discovering the user's intention from the click data

gurelaxphr.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: gurelaxphr.RL1
  • Participant: Georgetown
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL1
  • MD5: a354aeff937251c37655a1f34031f720
  • Run description: group words in the current query into phrases by a relaxed rule from the feedback

gurelaxphr.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: gurelaxphr.RL2
  • Participant: Georgetown
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL2
  • MD5: 6fd20c84f40a4e20d578f0060ab131e3
  • Run description: group words in the previous and current queries into phrases by a relaxed rule from the feedback

gurelaxphr.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: gurelaxphr.RL3
  • Participant: Georgetown
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL3
  • MD5: 8e9f598f0d749f292553effa8aa329c9
  • Run description: choose the previous queries by discovering the user's intention

gurelaxphr.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: gurelaxphr.RL4
  • Participant: Georgetown
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL4
  • MD5: 57c8fb2013db4af681c64f13ff1f417e
  • Run description: choose the previous queries by discovering the user's intention from the click data

ICTNET12SER1.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICTNET12SER1.RL1
  • Participant: ICTNET
  • Track: Session
  • Year: 2012
  • Submission: 8/26/2012
  • Type: automatic
  • Task: RL1
  • MD5: 53c47b742135503ceda47c213ad588a9
  • Run description: results from our ad-hoc system

ICTNET12SER1.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICTNET12SER1.RL2
  • Participant: ICTNET
  • Track: Session
  • Year: 2012
  • Submission: 8/26/2012
  • Type: automatic
  • Task: RL2
  • MD5: 376753066933a50fd1cc5603433f4f2f
  • Run description: rerank ad-hoc results based on query expansion

ICTNET12SER1.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICTNET12SER1.RL3
  • Participant: ICTNET
  • Track: Session
  • Year: 2012
  • Submission: 8/26/2012
  • Type: automatic
  • Task: RL3
  • MD5: 07af76f877878863a5379455e1279b47
  • Run description: construct a virtual document for the query from the ranked list and rerank the ad-hoc results by cosine-similarity score
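
A minimal sketch of the reranking idea described above, assuming a simple bag-of-words representation: the texts of the previously ranked results are concatenated into a "virtual document" and the ad-hoc results are reordered by cosine similarity to it. The representation and helper names are assumptions; the run description only names the general approach.

    # Sketch: rerank ad-hoc results by cosine similarity to a "virtual document"
    # built from the texts of previously returned results (illustrative layout).
    from collections import Counter
    from math import sqrt

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = sqrt(sum(v * v for v in a.values()))
        nb = sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def rerank(adhoc_results, previous_result_texts):
        # adhoc_results: list of (docid, text); previous_result_texts: list of str
        virtual_doc = Counter(" ".join(previous_result_texts).lower().split())
        scored = [(cosine(Counter(text.lower().split()), virtual_doc), docid)
                  for docid, text in adhoc_results]
        return [docid for _, docid in sorted(scored, reverse=True)]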

ICTNET12SER1.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICTNET12SER1.RL4
  • Participant: ICTNET
  • Track: Session
  • Year: 2012
  • Submission: 8/26/2012
  • Type: automatic
  • Task: RL4
  • MD5: 0bbfb20bbcf76af4f988a8582817a0bb
  • Run description: using clicked documents (both content and dwell time) to rerank our ad-hoc results

ICTNET12SER2.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICTNET12SER2.RL1
  • Participant: ICTNET
  • Track: Session
  • Year: 2012
  • Submission: 8/26/2012
  • Type: automatic
  • Task: RL1
  • MD5: cc6b8a10be981d9b862986559877ca4a
  • Run description: using cosine similarity with Google search results as the description of the query

ICTNET12SER2.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICTNET12SER2.RL2
  • Participant: ICTNET
  • Track: Session
  • Year: 2012
  • Submission: 8/26/2012
  • Type: automatic
  • Task: RL2
  • MD5: 261a6d6d86e6eedd007aaefff3836bd4
  • Run description: using a learning-to-rank algorithm to rerank our ad-hoc results

ICTNET12SER2.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICTNET12SER2.RL3
  • Participant: ICTNET
  • Track: Session
  • Year: 2012
  • Submission: 8/26/2012
  • Type: automatic
  • Task: RL3
  • MD5: 97935c18a0a10aca6961675f95988322
  • Run description: using a learning-to-rank algorithm to rerank our ad-hoc results

ICTNET12SER2.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICTNET12SER2.RL4
  • Participant: ICTNET
  • Track: Session
  • Year: 2012
  • Submission: 8/26/2012
  • Type: automatic
  • Task: RL4
  • MD5: 9d54a49ce7dfa5f0000569376b734665
  • Run description: using a learning-to-rank algorithm to rerank our ad-hoc results

ICTNET12SER3.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICTNET12SER3.RL1
  • Participant: ICTNET
  • Track: Session
  • Year: 2012
  • Submission: 8/26/2012
  • Type: automatic
  • Task: RL1
  • MD5: 41530843d5bfe68b39be55cdc682d9fd
  • Run description: using a learning-to-rank algorithm to rerank our ad-hoc results, with a feature computing cosine similarity with Google search results as the description of the query

ICTNET12SER3.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICTNET12SER3.RL2
  • Participant: ICTNET
  • Track: Session
  • Year: 2012
  • Submission: 8/26/2012
  • Type: automatic
  • Task: RL2
  • MD5: 0a36edd065fefc71d674baa7a11fcd5d
  • Run description: using a learning-to-rank algorithm to rerank our ad-hoc results, with a feature computing cosine similarity with Google search results as the description of the query

ICTNET12SER3.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICTNET12SER3.RL3
  • Participant: ICTNET
  • Track: Session
  • Year: 2012
  • Submission: 8/26/2012
  • Type: automatic
  • Task: RL3
  • MD5: 2882914c0538993af9b376133d6884f1
  • Run description: using a learning-to-rank algorithm to rerank our ad-hoc results, with a feature computing cosine similarity with Google search results as the description of the query

ICTNET12SER3.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICTNET12SER3.RL4
  • Participant: ICTNET
  • Track: Session
  • Year: 2012
  • Submission: 8/26/2012
  • Type: automatic
  • Task: RL4
  • MD5: 49d249e59aa58e23a5587f0d809ebee0
  • Run description: using a learning-to-rank algorithm to rerank our ad-hoc results, with a feature computing cosine similarity with Google search results as the description of the query

PITTSHQM.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQM.RL1
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL1
  • MD5: b0e41c0543bcaedf0bd93cd6ed906a4d
  • Run description: Using last year's SHQM method (without SDM features).

PITTSHQM.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQM.RL2
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL2
  • MD5: 286fbd9f01a172b1b77853afe833f2be
  • Run description: Using last year's SHQM method (without SDM features).

PITTSHQM.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQM.RL3
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL3
  • MD5: 5c3117f518ed237455673c2c7cd91e0c
  • Run description: Using last year's SHQM method (without SDM features).

PITTSHQM.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQM.RL4
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL4
  • MD5: 83c8961d9a64bbd09dc378d604d235e3
  • Run description: Using last year's SHQM method (without SDM features).

PITTSHQMnov.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQMnov.RL1
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL1
  • MD5: 603114b0632831cf8e7efe0f735e3803
  • Run description: Using last year's SHQM method (without SDM features) and considering adaptive browsing novelty.

PITTSHQMnov.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQMnov.RL2
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL2
  • MD5: 3e0550dd4929b2bc26d5bf0e4ec5bd80
  • Run description: Using last year's SHQM method (without SDM features) and considering adaptive browsing novelty.

PITTSHQMnov.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQMnov.RL3
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL3
  • MD5: 8149c3a8fe49d7e7e377bb8eb6468bc8
  • Run description: Using last year's SHQM method (without SDM features) and considering adaptive browsing novelty.

PITTSHQMnov.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQMnov.RL4
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL4
  • MD5: f9678863ee6b3d3a6fa78fd7a8a7d2a1
  • Run description: Using last year's SHQM method (without SDM features) and considering adaptive browsing novelty.

PITTSHQMsdm.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQMsdm.RL1
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL1
  • MD5: a66059d8c2813cb703d15484ca524eb7
  • Run description: Using last year's SHQM method (including SDM features).

PITTSHQMsdm.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQMsdm.RL2
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL2
  • MD5: ff2a5184279f0deed9dfb04e8a3ae7b9
  • Run description: Using last year's SHQM method (including SDM features).

PITTSHQMsdm.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQMsdm.RL3
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL3
  • MD5: 58be4fd867edaa04039dfdd9de4911ff
  • Run description: Using last year's SHQM method (including SDM features).

PITTSHQMsdm.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQMsdm.RL4
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL4
  • MD5: 67a65cd9a74e4af2649fe6054d997fc6
  • Run description: Using last year's SHQM method (including SDM features).

PITTSHQMsnov.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQMsnov.RL1
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL1
  • MD5: f4d7d6b878790416c94cf8b3815ab985
  • Run description: Using last year's SHQM method (including SDM features) and considering adaptive browsing novelty.

PITTSHQMsnov.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQMsnov.RL2
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL2
  • MD5: 436c7e7ea23589e91847cb6561173ab8
  • Run description: Using last year's SHQM method (including SDM features) and considering adaptive browsing novelty.

PITTSHQMsnov.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQMsnov.RL3
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL3
  • MD5: 56413664309c1bd27eba020a3969a6ef
  • Run description: Using last year's SHQM method (including SDM features) and considering adaptive browsing novelty.

PITTSHQMsnov.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PITTSHQMsnov.RL4
  • Participant: PITT
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL4
  • MD5: 72d941c34dbeaed44cdb018fa31ac900
  • Run description: Using last year's SHQM method (including SDM features) and considering adaptive browsing novelty.

RutgersHu.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: RutgersHu.RL1
  • Participant: ruiiltrec2012
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: manual
  • Task: RL1
  • MD5: f115840d65e2204b47d2f7fcd4cd0c32
  • Run description: Run RutgersHu first manually classified the task type according to the task description, and then applied a task-specific model to predict document usefulness. Relevance feedback was conducted on the predicted useful documents.

RutgersHu.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: RutgersHu.RL2
  • Participant: ruiiltrec2012
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: manual
  • Task: RL2
  • MD5: 675b458e295fc8d156fa967b748d9a79
  • Run description: Run RutgersHu first manually classified the task type according to the task description, and then applied a task-specific model to predict document usefulness. Relevance feedback was conducted on the predicted useful documents.

RutgersHu.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: RutgersHu.RL3
  • Participant: ruiiltrec2012
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: manual
  • Task: RL3
  • MD5: 675b458e295fc8d156fa967b748d9a79
  • Run description: Run RutgersHu first manually classified the task type according to the task description, and then applied a task-specific model to predict document usefulness. Relevance feedback was conducted on the predicted useful documents.

RutgersHu.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: RutgersHu.RL4
  • Participant: ruiiltrec2012
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: manual
  • Task: RL4
  • MD5: 285be65252c79035e769851f3c90dfee
  • Run description: Run RutgersHu first manually classified the task type according to the task description, and then applied a task-specific model to predict document usefulness. Relevance feedback was conducted on the predicted useful documents.

RutgersM.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: RutgersM.RL1
  • Participant: ruiiltrec2012
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL1
  • MD5: 8d24bda4cd860755263f9d8bbfb09b20
  • Run description: Run RutgersM used behavioral measures in each session, applied a predictive model to classify the task type automatically, and then applied a task-specific model to predict document usefulness. Relevance feedback was conducted on the predicted useful documents.

RutgersM.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: RutgersM.RL2
  • Participant: ruiiltrec2012
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL2
  • MD5: f413e405ad1c81b9b07d4e98e2d94f4e
  • Run description: Run RutgersM used behavioral measures in each session, applied a predictive model to classify the task type automatically, and then applied a task-specific model to predict document usefulness. Relevance feedback was conducted on the predicted useful documents.

RutgersM.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: RutgersM.RL3
  • Participant: ruiiltrec2012
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL3
  • MD5: f413e405ad1c81b9b07d4e98e2d94f4e
  • Run description: Run RutgersM used behavioral measures in each session, applied a predictive model to classify the task type automatically, and then applied a task-specific model to predict document usefulness. Relevance feedback was conducted on the predicted useful documents.

RutgersM.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: RutgersM.RL4
  • Participant: ruiiltrec2012
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL4
  • MD5: b2b463cdad520d640b97aaa5c486fd73
  • Run description: Run RutgersM used behavioral measures in each session, applied a predictive model to classify the task type automatically, and then applied a task-specific model to predict document usefulness. Relevance feedback was conducted on the predicted useful documents.

TUDrun.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: TUDrun.RL1
  • Participant: CWI
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL1
  • MD5: a149a1363b7eeec074820c0ed54efdcd
  • Run description: Run developed in collaboration with TUD: baseline

TUDrun.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: TUDrun.RL2
  • Participant: CWI
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL2
  • MD5: 36812fb11010fabaf8ec731c650267d9
  • Run description: Run developed in collaboration with TUD: similarity based on a co-occurrence graph

TUDrun.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: TUDrun.RL3
  • Participant: CWI
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL3
  • MD5: 36812fb11010fabaf8ec731c650267d9
  • Run description: Run developed in collaboration with TUD: similarity based on a co-occurrence graph

TUDrun.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: TUDrun.RL4
  • Participant: CWI
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL4
  • MD5: 36812fb11010fabaf8ec731c650267d9
  • Run description: Run developed in collaboration with TUD: similarity based on a co-occurrence graph

UAlbany.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UAlbany.RL1
  • Participant: UAlbanySession
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: manual
  • Task: RL1
  • MD5: 8d05a75d627e7948950493703cb34a76
  • Run description: used pseudo-relevance feedback; limited symbols ("'", ".", "(", ")") in the queries were manually converted to base-64 format

UAlbany.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UAlbany.RL2
  • Participant: UAlbanySession
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: manual
  • Task: RL2
  • MD5: a45e3d8f2beaac99136ebad2ee10e66f
  • Run description: compared the two adjacent queries and picked the 'good' query; used the good queries to expand the 'current query'; limited symbols (like "'", "." in a URL, "(", ")") were manually converted to base-64 format; pseudo-relevance feedback was used

UAlbany.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UAlbany.RL3
  • Participant: UAlbanySession
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: manual
  • Task: RL3
  • MD5: db08d9d69ce6e4400547bbd28afe7d67
  • Run description: compared the two adjacent queries and picked the 'good' query; criteria were set to pick 'good' pages of the 'good' queries; used the good queries and the titles of the good pages to expand the 'current query'; limited symbols (like "'", "." in a URL, "(", ")") were manually converted to base-64 format; pseudo-relevance feedback was used

UAlbany.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UAlbany.RL4
  • Participant: UAlbanySession
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: manual
  • Task: RL4
  • MD5: c5a47cc4cc205efe46f496899dfac152
  • Run description: compared the two adjacent queries and picked the 'good' query; results in the clicked list of the good queries were judged and good pages were picked; if there was no click for an interaction, good pages were picked using the same criteria as in RL3; used the good queries and the titles of the good pages to expand the 'current query'; limited symbols (like "'", "." in a URL, "(", ")") were manually converted to base-64 format; pseudo-relevance feedback was used

webis12cnqe.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: webis12cnqe.RL1
  • Participant: webis
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL1
  • MD5: f133262ca47ca3bd25be6a8cc3f4db1e
  • Run description: This fully automatic run first decides whether query expansion might be useful or not (we identified new queries and acronyms as useful cases). Then we automatically extract key phrases from the previous queries (RL2) and from the titles, snippets and full document texts of the already shown results (RL3). In RL4 we use for key phrase extraction only those documents that were clicked by the user and thus seem to be especially helpful. The original queries are segmented, and Wikipedia articles whose titles exactly match one of the segments are added to the result list. Furthermore, in RL4 we add documents from automatically detected similar sessions (both in the TREC 2012 Session Track data and in a locally existing search log) that were clicked at least twice. In a last step the result list is filtered (duplicates, results with more than 7000 words, and already clicked documents) and Wikipedia articles in the top 100 are moved to the top. The difference from webis12indqe is that the ChatNoir search engine is used instead of Indri.
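
A minimal sketch of the final filtering and Wikipedia-boosting step described above. The 7000-word limit, the removal of duplicates and already clicked documents, and the top-100 Wikipedia boost come from the description; the data layout (dicts with docid, words, is_wikipedia fields) is an assumption for illustration.

    # Sketch of the final cleanup step named in the description: drop duplicates,
    # overly long results (> 7000 words) and already clicked documents, then move
    # Wikipedia articles found in the top 100 to the front of the list.
    def filter_and_boost(results, clicked_ids, max_words=7000):
        seen, kept = set(), []
        for r in results:  # r: dict with "docid", "words", "is_wikipedia" (assumed layout)
            if r["docid"] in seen or r["docid"] in clicked_ids or r["words"] > max_words:
                continue
            seen.add(r["docid"])
            kept.append(r)
        top, rest = kept[:100], kept[100:]
        wiki = [r for r in top if r["is_wikipedia"]]
        other = [r for r in top if not r["is_wikipedia"]]
        return wiki + other + rest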

webis12cnqe.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: webis12cnqe.RL2
  • Participant: webis
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL2
  • MD5: 9eea8c34fe168ec34bd3d9bcabf45f9c
  • Run description: This fully automatic run first decides whether query expansion might be useful or not (we identified new queries and acronyms as useful cases). Then we automatically extract key phrases from the previous queries (RL2) and from the titles, snippets and full document texts of the already shown results (RL3). In RL4 we use for key phrase extraction only those documents that were clicked by the user and thus seem to be especially helpful. The original queries are segmented, and Wikipedia articles whose titles exactly match one of the segments are added to the result list. Furthermore, in RL4 we add documents from automatically detected similar sessions (both in the TREC 2012 Session Track data and in a locally existing search log) that were clicked at least twice. In a last step the result list is filtered (duplicates, results with more than 7000 words, and already clicked documents) and Wikipedia articles in the top 100 are moved to the top. The difference from webis12indqe is that the ChatNoir search engine is used instead of Indri.

webis12cnqe.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: webis12cnqe.RL3
  • Participant: webis
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL3
  • MD5: d83ddc26f1d5ae237139a9ac27239c9c
  • Run description: This fully automatic run first decides whether query expansion might be useful or not (we identified new queries and acronyms as useful cases). Then we automatically extract key phrases from the previous queries (RL2) and from the titles, snippets and full document texts of the already shown results (RL3). In RL4 we use for key phrase extraction only those documents that were clicked by the user and thus seem to be especially helpful. The original queries are segmented, and Wikipedia articles whose titles exactly match one of the segments are added to the result list. Furthermore, in RL4 we add documents from automatically detected similar sessions (both in the TREC 2012 Session Track data and in a locally existing search log) that were clicked at least twice. In a last step the result list is filtered (duplicates, results with more than 7000 words, and already clicked documents) and Wikipedia articles in the top 100 are moved to the top. The difference from webis12indqe is that the ChatNoir search engine is used instead of Indri.

webis12cnqe.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: webis12cnqe.RL4
  • Participant: webis
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL4
  • MD5: fce7aaf91845d8daf8bfe70db69c1712
  • Run description: This fully automatic run first decides whether query expansion might be useful or not (we identified new queries and acronyms as useful cases). Then we automatically extract key phrases from the previous queries (RL2) and from the titles, snippets and full document texts of the already shown results (RL3). In RL4 we use for key phrase extraction only those documents that were clicked by the user and thus seem to be especially helpful. The original queries are segmented, and Wikipedia articles whose titles exactly match one of the segments are added to the result list. Furthermore, in RL4 we add documents from automatically detected similar sessions (both in the TREC 2012 Session Track data and in a locally existing search log) that were clicked at least twice. In a last step the result list is filtered (duplicates, results with more than 7000 words, and already clicked documents) and Wikipedia articles in the top 100 are moved to the top. The difference from webis12indqe is that the ChatNoir search engine is used instead of Indri.

webis12cnse.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: webis12cnse.RL1
  • Participant: webis
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: manual
  • Task: RL1
  • MD5: fcbba1b6671cc49d1217c78ada97204c
  • Run description: In this run we manually defined which sessions in the TREC 2012 Session Track dataset cover the same topic and expand the given query only with key phrases extracted from these similar sessions (from queries in RL2; from document titles, snippets and full texts in RL3). In RL4 we add results that were already clicked by users in other sessions, but this time we do not restrict this expansion to documents that were clicked at least twice. Thus, many more documents are added to the result list than in webis12cnqe and webis12indqe.

webis12cnse.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: webis12cnse.RL2
  • Participant: webis
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: manual
  • Task: RL2
  • MD5: 3b7274d94efd4bc59307cdc3dcda7fee
  • Run description: In this run we manually defined which sessions in the TREC 2012 Session Track dataset cover the same topic and expand the given query only with key phrases extracted from these similar sessions (from queries in RL2; from document titles, snippets and full texts in RL3). In RL4 we add results that were already clicked by users in other sessions, but this time we do not restrict this expansion to documents that were clicked at least twice. Thus, many more documents are added to the result list than in webis12cnqe and webis12indqe.

webis12cnse.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: webis12cnse.RL3
  • Participant: webis
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: manual
  • Task: RL3
  • MD5: 367954bd6f287909bfb2c569ba68c130
  • Run description: In this run we manually defined which sessions in the TREC 2012 Session Track dataset cover the same topic and expand the given query only with key phrases extracted from these similar sessions (from queries in RL2; from document titles, snippets and full texts in RL3). In RL4 we add results that were already clicked by users in other sessions, but this time we do not restrict this expansion to documents that were clicked at least twice. Thus, many more documents are added to the result list than in webis12cnqe and webis12indqe.

webis12cnse.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: webis12cnse.RL4
  • Participant: webis
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: manual
  • Task: RL4
  • MD5: ec8ceb7567fa6d70f0cd772052bcd151
  • Run description: In this run we manually defined which sessions in the TREC 2012 Session Track dataset cover the same topic and expand the given query only with key phrases extracted from these similar sessions (from queries in RL2; from document titles, snippets and full texts in RL3). In RL4 we add results that were already clicked by users in other sessions, but this time we do not restrict this expansion to documents that were clicked at least twice. Thus, many more documents are added to the result list than in webis12cnqe and webis12indqe.

webis12indqe.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: webis12indqe.RL1
  • Participant: webis
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL1
  • MD5: b26523e98e7b60142c65ba695e054112
  • Run description: This fully automatic run first decides whether query expansion might be useful or not (we identified new queries and acronyms as useful cases). Then we automatically extract key phrases from the previous queries (RL2) and from the titles, snippets and full document texts of the already shown results (RL3). In RL4 we use for key phrase extraction only those documents that were clicked by the user and thus seem to be especially helpful. The original queries are segmented, and Wikipedia articles whose titles exactly match one of the segments are added to the result list. Furthermore, in RL4 we add documents from automatically detected similar sessions (both in the TREC 2012 Session Track data and in a locally existing search log) that were clicked at least twice. In a last step the result list is filtered (spam, duplicates, results with more than 7000 words, and already clicked documents) and Wikipedia articles in the top 100 are moved to the top. The difference from webis12cnqe is that the Indri search engine is used instead of ChatNoir.

webis12indqe.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: webis12indqe.RL2
  • Participant: webis
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL2
  • MD5: dafa2a7e54391f998af4e83b01a22d5c
  • Run description: This fully automatic run first decides whether query expansion might be useful or not (we identified new queries and acronyms as useful cases). Then we automatically extract key phrases from the previous queries (RL2) and from the titles, snippets and full document texts of the already shown results (RL3). In RL4 we use for key phrase extraction only those documents that were clicked by the user and thus seem to be especially helpful. The original queries are segmented, and Wikipedia articles whose titles exactly match one of the segments are added to the result list. Furthermore, in RL4 we add documents from automatically detected similar sessions (both in the TREC 2012 Session Track data and in a locally existing search log) that were clicked at least twice. In a last step the result list is filtered (spam, duplicates, results with more than 7000 words, and already clicked documents) and Wikipedia articles in the top 100 are moved to the top. The difference from webis12cnqe is that the Indri search engine is used instead of ChatNoir.

webis12indqe.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: webis12indqe.RL3
  • Participant: webis
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL3
  • MD5: 03d70f4467c1286c5193d51b435a418c
  • Run description: This fully automatic run first decides whether query expansion might be useful or not (we identified new queries and acronyms as useful cases). Then we automatically extract key phrases from the previous queries (RL2) and from the titles, snippets and full document texts of the already shown results (RL3). In RL4 we use for key phrase extraction only those documents that were clicked by the user and thus seem to be especially helpful. The original queries are segmented, and Wikipedia articles whose titles exactly match one of the segments are added to the result list. Furthermore, in RL4 we add documents from automatically detected similar sessions (both in the TREC 2012 Session Track data and in a locally existing search log) that were clicked at least twice. In a last step the result list is filtered (spam, duplicates, results with more than 7000 words, and already clicked documents) and Wikipedia articles in the top 100 are moved to the top. The difference from webis12cnqe is that the Indri search engine is used instead of ChatNoir.

webis12indqe.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: webis12indqe.RL4
  • Participant: webis
  • Track: Session
  • Year: 2012
  • Submission: 8/29/2012
  • Type: automatic
  • Task: RL4
  • MD5: 1e0da4d51f4575513bd7c76ddd2f432d
  • Run description: This fully automatic run first decides whether query expansion might be useful or not (we identified new queries and acronyms as useful cases). Then we automatically extract key phrases from the previous queries (RL2) and from the titles, snippets and full document texts of the already shown results (RL3). In RL4 we use for key phrase extraction only those documents that were clicked by the user and thus seem to be especially helpful. The original queries are segmented, and Wikipedia articles whose titles exactly match one of the segments are added to the result list. Furthermore, in RL4 we add documents from automatically detected similar sessions (both in the TREC 2012 Session Track data and in a locally existing search log) that were clicked at least twice. In a last step the result list is filtered (spam, duplicates, results with more than 7000 words, and already clicked documents) and Wikipedia articles in the top 100 are moved to the top. The difference from webis12cnqe is that the Indri search engine is used instead of ChatNoir.

wildcat1.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: wildcat1.RL1
  • Participant: pris411
  • Track: Session
  • Year: 2012
  • Submission: 8/28/2012
  • Type: automatic
  • Task: RL1
  • MD5: d91297360999fdc93adfc658c55b25be
  • Run description: wildcat1.RL1 uses the current query to get the search results. Documents with a spam score of less than 40 are filtered out.
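
A minimal sketch of the spam-filtering step described above, assuming percentile-style spam scores in the "score docid" file layout of the Waterloo spam rankings used elsewhere in this track; the threshold of 40 comes from the description, the rest is illustrative.

    # Sketch: filter a ranked list using percentile-style spam scores, dropping
    # documents that score below 40 as stated in the description. The
    # "percentile docid" file layout follows the Waterloo spam rankings and is
    # an assumption; documents without a score are kept.
    def load_spam_scores(path):
        scores = {}
        with open(path) as f:
            for line in f:
                score, docid = line.split()
                scores[docid] = int(score)
        return scores

    def filter_spam(ranked_docids, scores, threshold=40):
        return [d for d in ranked_docids if scores.get(d, 100) >= threshold]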

wildcat1.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: wildcat1.RL2
  • Participant: pris411
  • Track: Session
  • Year: 2012
  • Submission: 8/28/2012
  • Type: automatic
  • Task: RL2
  • MD5: 14b2ed72397882c8e2a941c24235b6f0
  • Run description: wildcat1.RL2 uses a user behavior model to do the query expansion. The terms in the current query and in the history queries are given different weights.

wildcat1.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: wildcat1.RL3
  • Participant: pris411
  • Track: Session
  • Year: 2012
  • Submission: 8/28/2012
  • Type: automatic
  • Task: RL3
  • MD5: 5e6f60582f2d804845368e58809e4f11
  • Run description: wildcat1.RL3 uses the anchor texts provided by Twente to do the query expansion. This run extracts the anchor texts of the websites in the ranked lists and selects the 10 words with the highest frequency in the anchor texts. wildcat1.RL3 adds these top 10 words to the query to generate a new query. The terms in the current query have weight 0.7 and the expanded anchor-text terms have weight 0.3.
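
A minimal sketch of the weighted expansion described above: the ten most frequent anchor-text terms are added to the current query with weight 0.3 against 0.7 for the original query. The weights and term count come from the description; the Indri-style #weight syntax is an assumption.

    # Sketch: expand the current query with the 10 most frequent anchor-text terms,
    # weighting the original query 0.7 and the expansion terms 0.3 (weights and term
    # count from the run description; the Indri #weight syntax is an assumption).
    from collections import Counter

    def expand_with_anchors(query, anchor_texts, k=10, w_query=0.7, w_anchor=0.3):
        counts = Counter(t for text in anchor_texts for t in text.lower().split())
        top_terms = " ".join(t for t, _ in counts.most_common(k))
        return (f"#weight( {w_query} #combine({query}) "
                f"{w_anchor} #combine({top_terms}) )")

    print(expand_with_anchors("gun control", ["gun control laws debate", "firearm legislation"]))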

wildcat1.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: wildcat1.RL4
  • Participant: pris411
  • Track: Session
  • Year: 2012
  • Submission: 8/28/2012
  • Type: automatic
  • Task: RL4
  • MD5: 26acf6b7f161abc9c12201382b46f4e5
  • Run description: In this run, we use the results of wildcat1.RL3 and re-rank them according to their similarity to the clicked documents. The attention time on the clicked documents affects the re-ranking.

wildcat2.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: wildcat2.RL1
  • Participant: pris411
  • Track: Session
  • Year: 2012
  • Submission: 8/28/2012
  • Type: automatic
  • Task: RL1
  • MD5: f1541c5963fa97dc96de0c8fefa3ccc4
  • Run description: wildcat2.RL1 uses the current query to get the search results. The documents are re-ranked by PageRank score.

wildcat2.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: wildcat2.RL2
  • Participant: pris411
  • Track: Session
  • Year: 2012
  • Submission: 8/28/2012
  • Type: automatic
  • Task: RL2
  • MD5: 92d6a1af65d83f4a2528b4d355d3d60b
  • Run description: wildcat2.RL2 uses the current query to get the search results. The documents are re-ranked by PageRank score and Indri score.

wildcat2.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: wildcat2.RL3
  • Participant: pris411
  • Track: Session
  • Year: 2012
  • Submission: 8/28/2012
  • Type: automatic
  • Task: RL3
  • MD5: a4f55645ecba3bd1689930822b2c4780
  • Run description: In this run, we use the rank information of the ranked lists for history queries to re-rank the search results of wildcat1.RL2.

wildcat2.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: wildcat2.RL4
  • Participant: pris411
  • Track: Session
  • Year: 2012
  • Submission: 8/28/2012
  • Type: automatic
  • Task: RL4
  • MD5: 2fe49aacb6a3dd99efd6323fd1d2be78
  • Run description: In this run, we use the titles and snippets of the clicked documents to expand the query. This run uses the expanded query to get the search results and the spam documents are filtered out.

wildcat3.RL1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: wildcat3.RL1
  • Participant: pris411
  • Track: Session
  • Year: 2012
  • Submission: 8/28/2012
  • Type: automatic
  • Task: RL1
  • MD5: a71faf6997552e0fbf4685fe79b7cc6f
  • Run description: wildcat3.RL1 uses the current query to get the search results. This run then calculates the similarity between the returned search results and the current query and re-ranks the results according to the similarity score.

wildcat3.RL2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: wildcat3.RL2
  • Participant: pris411
  • Track: Session
  • Year: 2012
  • Submission: 8/28/2012
  • Type: automatic
  • Task: RL2
  • MD5: 26bf4082c2082dd777ff1a35abe91101
  • Run description: This run uses a user behavior model to do the query expansion. After the results are returned, it calculates the similarity between the returned documents and the expanded query, then re-ranks the results according to the similarity score.

wildcat3.RL3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: wildcat3.RL3
  • Participant: pris411
  • Track: Session
  • Year: 2012
  • Submission: 8/28/2012
  • Type: automatic
  • Task: RL3
  • MD5: d23110edb2ce649121e972e411240794
  • Run description: wildcat3.RL3 uses the meta tags in the documents of the ranked lists for past queries to do the query expansion. The terms in the current query have weight 0.5 and the top 10 terms extracted from the meta tags have weight 0.5.

wildcat3.RL4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: wildcat3.RL4
  • Participant: pris411
  • Track: Session
  • Year: 2012
  • Submission: 8/28/2012
  • Type: automatic
  • Task: RL4
  • MD5: cb9dbb1eb1cdd5c70088e7d2c40dd6d4
  • Run description: wildcat3.RL4 uses the click order to do the query expansion. It extracts terms from the clicked titles and gives them different weights according to the click sequence. Then this run combines the current query with the weighted terms of the clicked titles to generate the new query.

WQExpFqDSnip.RL1

Results | Participants | Input | Summary | Appendix

  • Run ID: WQExpFqDSnip.RL1
  • Participant: udel
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL1
  • MD5: 4d263dd08180d38065e8c7c972ab4751
  • Run description: Sequential Dependency + Query Expansion from Previous Queries, Snippets and Clicked Snippets

WQExpFqDSnip.RL2

Results | Participants | Input | Summary | Appendix

  • Run ID: WQExpFqDSnip.RL2
  • Participant: udel
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL2
  • MD5: 5058c66f686319002bb2a8417207564d
  • Run description: Sequential Dependency + Query Expansion from Previous Queries, Snippets and Clicked Snippets

WQExpFqDSnip.RL3

Results | Participants | Input | Summary | Appendix

  • Run ID: WQExpFqDSnip.RL3
  • Participant: udel
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL3
  • MD5: b4892b9d375c6b1b5614b741fc64defe
  • Run description: Sequential Dependency + Query Expansion from Previous Queries, Snippets and Clicked Snippets

WQExpFqDSnip.RL4

Results | Participants | Input | Summary | Appendix

  • Run ID: WQExpFqDSnip.RL4
  • Participant: udel
  • Track: Session
  • Year: 2012
  • Submission: 8/30/2012
  • Type: automatic
  • Task: RL4
  • MD5: 79c84b4039ce4a349a7978c9cd77b48c
  • Run description: Sequential Dependency + Query Expansion from Previous Queries, Snippets and Clicked Snippets