Runs - Session 2010

bpacad10s1.RL1

Participants | Input | Summary

  • Run ID: bpacad10s1.RL1
  • Participant: budapest_acad
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: 763826a7d3f831fb54a0138b3c68aa23
  • Run description: Simple AND query made from the original topics, ranked with Okapi BM25.
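
The Okapi BM25 ranking used in these bpacad runs can be sketched as follows. This is a minimal, self-contained illustration; the toy corpus, tokenization, and parameter values (k1 = 1.2, b = 0.75) are assumptions, not the participants' actual settings:

```python
import math

def bm25_score(query_terms, doc, docs, k1=1.2, b=0.75):
    """Okapi BM25 score of one document (a token list) for a query."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N  # average document length
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in docs if term in d)  # document frequency
        if df == 0:
            continue
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
        tf = doc.count(term)
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

# Toy corpus: each document is a list of tokens (hypothetical data).
docs = [["diabetes", "education", "books"],
        ["diabetes", "treatment"],
        ["cooking", "books"]]
scores = [bm25_score(["diabetes", "education"], d, docs) for d in docs]
```

An AND query, as in these runs, would additionally restrict scoring to documents containing every query term.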

bpacad10s1.RL2

Participants | Input | Summary

  • Run ID: bpacad10s1.RL2
  • Participant: budapest_acad
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: 6a3523694035d56628edd0d69bdd6a9b
  • Run description: Simple AND queries are made from the RL2 topics, ranked with Okapi BM25.

bpacad10s1.RL3

Participants | Input | Summary

  • Run ID: bpacad10s1.RL3
  • Participant: budapest_acad
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 1a4ba757838bdbc17d561d3c7dd70203
  • Run description: Documents in the RL2 list were re-ranked based on the question type and whether they occurred in RL1 or not. Question type was determined from the surface form of the question.

bpacad10s2.RL3

Participants | Input | Summary

  • Run ID: bpacad10s2.RL3
  • Participant: budapest_acad
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 047ea13b03e63a7aac3bbb03e43413ba
  • Run description: Weighted union of the RL1 and RL2 result lists.
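
The weighted union described above can be sketched as a score-level fusion of the two result lists. A minimal sketch; the weights (0.3 / 0.7) and the score representation are assumptions:

```python
def weighted_union(rl1, rl2, w1=0.3, w2=0.7):
    """Fuse two ranked lists (doc_id -> score dicts) by a weighted sum of scores."""
    combined = {}
    for doc, score in rl1.items():
        combined[doc] = combined.get(doc, 0.0) + w1 * score
    for doc, score in rl2.items():
        combined[doc] = combined.get(doc, 0.0) + w2 * score
    # Return doc ids ordered by fused score, best first.
    return sorted(combined, key=combined.get, reverse=True)

# Hypothetical retrieval scores for the two lists.
rl1 = {"d1": 2.0, "d2": 1.0}
rl2 = {"d2": 3.0, "d3": 1.5}
merged = weighted_union(rl1, rl2)  # ["d2", "d3", "d1"]
```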

CARDBNG.RL1

Participants | Input | Summary

  • Run ID: CARDBNG.RL1
  • Participant: UALR_Srini
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: b806c9c9cf3a97117a2d0c5886ca4576
  • Run description: Ran the input queries as-is.

CARDBNG.RL2

Participants | Input | Summary

  • Run ID: CARDBNG.RL2
  • Participant: UALR_Srini
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: c8d64608e69f627301f79467637d4d0f
  • Run description: Ran the input queries as-is.

CARDBNG.RL3

Participants | Input | Summary

  • Run ID: CARDBNG.RL3
  • Participant: UALR_Srini
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: fe5de71259c2453f8e450954711dff16
  • Run description: Uses the Bing search engine to categorize the query1-to-query2 transition type (generic/specific/drifting, etc.) and applies query expansion according to that categorization.

CARDWIKI.RL1

Participants | Input | Summary

  • Run ID: CARDWIKI.RL1
  • Participant: UALR_Srini
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: 013d9b6a1873856f74d74b3d202a5df2
  • Run description: Ran given queries as-is

CARDWIKI.RL2

Participants | Input | Summary

  • Run ID: CARDWIKI.RL2
  • Participant: UALR_Srini
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: 53a7322f83775e7f17df9a312c5bea25
  • Run description: Ran given queries as-is

CARDWIKI.RL3

Participants | Input | Summary

  • Run ID: CARDWIKI.RL3
  • Participant: UALR_Srini
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: dec32e68ef2e804a0ee045b661a972c2
  • Run description: Uses the Wikipedia search engine to categorize the query1-to-query2 transition type (generic/specific/drifting, etc.), then applies query expansion according to that categorization.

CARDWNET.RL1

Participants | Input | Summary

  • Run ID: CARDWNET.RL1
  • Participant: UALR_Srini
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: 91cfad4b79b4281578d204b7f74db789
  • Run description: Ran given queries as-is

CARDWNET.RL2

Participants | Input | Summary

  • Run ID: CARDWNET.RL2
  • Participant: UALR_Srini
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: 69f9efa63655549a5ef59e5df5625415
  • Run description: Ran given queries as-is

CARDWNET.RL3

Participants | Input | Summary

  • Run ID: CARDWNET.RL3
  • Participant: UALR_Srini
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: e4fdf69e105151e3a05717f402300058
  • Run description: Uses the WordNet dictionary for query expansion of (query2 intersection query1) and (query2 union query1).
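
The intersection/union expansion described above can be sketched as below. The synonym table is a tiny hypothetical stand-in for a real WordNet lookup:

```python
# Hypothetical stand-in for a WordNet synonym lookup (not real WordNet output).
SYNONYMS = {"car": ["automobile"], "price": ["cost"]}

def expand(terms):
    """Append known synonyms of each term to the term list."""
    out = list(terms)
    for t in terms:
        out.extend(SYNONYMS.get(t, []))
    return out

q1 = {"used", "car"}           # original query terms
q2 = {"car", "price"}          # reformulated query terms
shared  = expand(sorted(q1 & q2))   # expansion of the intersection
overall = expand(sorted(q1 | q2))   # expansion of the union
```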

CengageS10R1.RL1

Participants | Proceedings | Input | Summary

  • Run ID: CengageS10R1.RL1
  • Participant: GALE
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: 72ea012f4b6131ff78ca021e4cec70b3
  • Run description: Query Term Weighting, Corpus terms collocation expansion.

CengageS10R1.RL2

Participants | Proceedings | Input | Summary

  • Run ID: CengageS10R1.RL2
  • Participant: GALE
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: a67a9c5f7062736314c10b1fc54c8ccc
  • Run description: Query Term Weighting, Corpus terms collocation expansion

CengageS10R1.RL3

Participants | Proceedings | Input | Summary

  • Run ID: CengageS10R1.RL3
  • Participant: GALE
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: af288792977f404a2db11ffa7c905e57
  • Run description: Query Term Weighting, Corpus terms collocation expansion

CengageS10R2.RL1

Participants | Proceedings | Input | Summary

  • Run ID: CengageS10R2.RL1
  • Participant: GALE
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: 4692f53e2b987febf21edf3b6f525e07
  • Run description: Term weighting, Usage-log expansion, Corpus collocation expansion, Pseudo-relevance expansion

CengageS10R2.RL2

Participants | Proceedings | Input | Summary

  • Run ID: CengageS10R2.RL2
  • Participant: GALE
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: d6eaca51a2ceb78939790277b983877a
  • Run description: Term weighting, Usage-log expansion, Corpus collocation expansion, Pseudo-relevance expansion

CengageS10R2.RL3

Participants | Proceedings | Input | Summary

  • Run ID: CengageS10R2.RL3
  • Participant: GALE
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 97e5bd81e111259876e628e8e667eaaa
  • Run description: Term weighting, Usage-log expansion, Corpus collocation expansion, Pseudo-relevance expansion

CengageS10R3.RL1

Participants | Proceedings | Input | Summary

  • Run ID: CengageS10R3.RL1
  • Participant: GALE
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: f541fb67a8f6aae5f82c2a9fa9d2362f
  • Run description: WordNet expansion, Category re-ranking

CengageS10R3.RL2

Participants | Proceedings | Input | Summary

  • Run ID: CengageS10R3.RL2
  • Participant: GALE
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: 1512d6022a6dd225867d2bf2a036f200
  • Run description: WordNet expansion, Category re-ranking

CengageS10R3.RL3

Participants | Proceedings | Input | Summary

  • Run ID: CengageS10R3.RL3
  • Participant: GALE
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 46c9a6463c553fa1c2f96e92567fadda
  • Run description: WordNet expansion, Category re-ranking

essex1.RL1

Participants | Proceedings | Input | Summary

  • Run ID: essex1.RL1
  • Participant: EssexUni
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: 42e38f14eae430a972dffc950967f96c
  • Run description: In all our runs (essex1, essex2, essex3) we used the publicly available service to search the ClueWeb09 data set provided by the Indri search engine. For Ranked Lists RL1,RL2 we submit the queries as they are to the search engine which in turn uses the Query Likelihood model to retrieve a ranked list of documents. We also used the Waterloo Spam Rankings for the ClueWeb09 Dataset to filter the spam documents from the ranked lists.
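
Spam filtering with the Waterloo Spam Rankings, as used throughout the essex runs, amounts to dropping documents whose spam percentile falls below a threshold (a higher percentile means less spam-like). A minimal sketch; the threshold of 70 and the score table are assumptions:

```python
def filter_spam(ranked_docs, spam_percentile, threshold=70):
    """Keep only documents at or above the spam-percentile threshold."""
    return [d for d in ranked_docs if spam_percentile.get(d, 0) >= threshold]

# Hypothetical per-document spam percentiles.
spam = {"d1": 95, "d2": 10, "d3": 80}
clean = filter_spam(["d1", "d2", "d3"], spam)  # ["d1", "d3"]
```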

essex1.RL2

Participants | Proceedings | Input | Summary

  • Run ID: essex1.RL2
  • Participant: EssexUni
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: dc9738dc30d09dfa8d521bbeea809364
  • Run description: In all our runs (essex1, essex2, essex3) we used the publicly available service to search the ClueWeb09 data set provided by the Indri search engine. For Ranked Lists RL1,RL2 we submit the queries as they are to the search engine which in turn uses the Query Likelihood model to retrieve a ranked list of documents. We also used the Waterloo Spam Rankings for the ClueWeb09 Dataset to filter the spam documents from the ranked lists.

essex1.RL3

Participants | Proceedings | Input | Summary

  • Run ID: essex1.RL3
  • Participant: EssexUni
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 948d5af4261123dbd017eb7abc9acba7
  • Run description: This run is our first baseline method to retrieve ranked list RL3. We take a very simple approach to generate the list by submitting a new query consisting of both queries in the session to the Indri search engine. As in other runs, we also used the Waterloo Spam Rankings for the ClueWeb09 Dataset to filter the spam documents from the ranked list.

essex2.RL1

Participants | Proceedings | Input | Summary

  • Run ID: essex2.RL1
  • Participant: EssexUni
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: c99d4d8b943a80b5d42356fd320cbc7a
  • Run description: In all our runs (essex1, essex2, essex3) we used the publicly available service to search the ClueWeb09 data set provided by the Indri search engine. For Ranked Lists RL1,RL2 we submit the queries as they are to the search engine which in turn uses the Query Likelihood model to retrieve a ranked list of documents. We also used the Waterloo Spam Rankings for the ClueWeb09 Dataset to filter the spam documents from the ranked lists.

essex2.RL2

Participants | Proceedings | Input | Summary

  • Run ID: essex2.RL2
  • Participant: EssexUni
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: 1f0f970e30f11b97506038c9629b63af
  • Run description: In all our runs (essex1, essex2, essex3) we used the publicly available service to search the ClueWeb09 data set provided by the Indri search engine. For Ranked Lists RL1,RL2 we submit the queries as they are to the search engine which in turn uses the Query Likelihood model to retrieve a ranked list of documents. We also used the Waterloo Spam Rankings for the ClueWeb09 Dataset to filter the spam documents from the ranked lists.

essex2.RL3

Participants | Proceedings | Input | Summary

  • Run ID: essex2.RL3
  • Participant: EssexUni
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 4f7e1c5921ad04589fce0170f0acf5ea
  • Run description: This run is our second baseline method to retrieve ranked list RL3. It reflects on the assumption that the user is not satisfied with the first set of results and that is why she reformulated her original query. In this baseline we use a naive way to utilise the original query by filtering the retrieved documents for the reformulated query in the session. The filtering works simply by eliminating the documents which appear in the first ranked list. As in other runs, we also used the Waterloo Spam Rankings for the ClueWeb09 Dataset to filter the spam documents from the ranked list.
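
The filtering step in this baseline (eliminating from RL2 any document that already appeared in RL1) can be sketched as:

```python
def filter_seen(rl2, rl1):
    """Drop documents from RL2 that already appeared in RL1."""
    seen = set(rl1)
    return [doc for doc in rl2 if doc not in seen]

# Hypothetical ranked lists of document ids.
rl1 = ["d1", "d2", "d3"]
rl2 = ["d2", "d4", "d1", "d5"]
rl3 = filter_seen(rl2, rl1)  # ["d4", "d5"]
```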

essex3.RL1

Participants | Proceedings | Input | Summary

  • Run ID: essex3.RL1
  • Participant: EssexUni
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: a6fe5d26c712d4c218cb7a0ff657819c
  • Run description: In all our runs (essex1, essex2, essex3) we used the publicly available service to search the ClueWeb09 data set provided by the Indri search engine. For Ranked Lists RL1,RL2 we submit the queries as they are to the search engine which in turn uses the Query Likelihood model to retrieve a ranked list of documents. We also used the Waterloo Spam Rankings for the ClueWeb09 Dataset to filter the spam documents from the ranked lists.

essex3.RL2

Participants | Proceedings | Input | Summary

  • Run ID: essex3.RL2
  • Participant: EssexUni
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: 60893c533df6cef718339c846f1d46e3
  • Run description: In all our runs (essex1, essex2, essex3) we used the publicly available service to search the ClueWeb09 data set provided by the Indri search engine. For Ranked Lists RL1,RL2 we submit the queries as they are to the search engine which in turn uses the Query Likelihood model to retrieve a ranked list of documents. We also used the Waterloo Spam Rankings for the ClueWeb09 Dataset to filter the spam documents from the ranked lists.

essex3.RL3

Participants | Proceedings | Input | Summary

  • Run ID: essex3.RL3
  • Participant: EssexUni
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 146eabd6f37ef48b4572a71a1dab9f05
  • Run description: In this run we developed a method for extracting useful terms and phrases to expand the reformulated query in the session. Our method stems from previous work on using query logs to extract related queries and from our work in the Autoadapt project on learning domain models from query logs. Due to the lack of availability of query logs, we instead used an anchor log constructed from the same dataset (the ClueWeb09 category B dataset) to simulate a query log. The anchor log was extracted and made publicly available by the University of Twente. For brevity, we summarize our method as follows: using the anchor log, we extract the top common related queries for both queries in the session; we then expand the reformulated query with the extracted phrases or terms and the original query. As in other runs, we also used the Waterloo Spam Rankings for the ClueWeb09 Dataset to filter the spam documents from the ranked list.

RMITBase.RL1

Participants | Proceedings | Input | Summary

  • Run ID: RMITBase.RL1
  • Participant: RMIT
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: 9b8a8bb5b6868a6458ab5b60297c7774
  • Run description: Apply the Dirichlet model (from the Lemur Toolkit) with the original query.

RMITBase.RL2

Participants | Proceedings | Input | Summary

  • Run ID: RMITBase.RL2
  • Participant: RMIT
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: 3ed44f654a252807ac727a7cc47707cf
  • Run description: Apply the Dirichlet model (from the Lemur Toolkit) with the reformulated query.

RMITBase.RL3

Participants | Proceedings | Input | Summary

  • Run ID: RMITBase.RL3
  • Participant: RMIT
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 9eb9aa2ab19e6b521bcd867f866c5c9a
  • Run description: Combine the original query and the reformulated query, and apply the Dirichlet model (from the Lemur Toolkit) to the combined query.
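
Dirichlet-smoothed query likelihood, the model behind the RMITBase runs, can be sketched as below. The toy corpus and the smoothing parameter mu = 10 (kept small to suit the tiny example; Lemur's default is far larger) are assumptions:

```python
import math

def dirichlet_score(query_terms, doc, collection, mu=2500):
    """Query-likelihood score (log p(q|d)) with Dirichlet smoothing."""
    coll_len = sum(len(d) for d in collection)
    score = 0.0
    for term in query_terms:
        cf = sum(d.count(term) for d in collection)  # collection frequency
        if cf == 0:
            continue  # ignore terms unseen in the collection
        p_wc = cf / coll_len  # collection language model probability
        score += math.log((doc.count(term) + mu * p_wc) / (len(doc) + mu))
    return score

# Toy session: original and reformulated query terms concatenated.
combined_query = ["toyota", "mileage"]
docs = [["gas", "mileage", "toyota"], ["toyota", "prices"], ["weather"]]
# Small mu because the toy corpus is tiny.
scores = [dirichlet_score(combined_query, d, docs, mu=10) for d in docs]
```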

RMITExp.RL1

Participants | Proceedings | Input | Summary

  • Run ID: RMITExp.RL1
  • Participant: RMIT
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: 5ce8187b0eeae6075f10ceb910f61935
  • Run description: Expand the original query with Google-suggested queries and apply the Dirichlet language model (from the Lemur Toolkit).

RMITExp.RL2

Participants | Proceedings | Input | Summary

  • Run ID: RMITExp.RL2
  • Participant: RMIT
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: a18c752e4175dab5c1ba17dcb206e907
  • Run description: Expand the reformulated query with Google-suggested queries and apply the Dirichlet language model (from the Lemur Toolkit).

RMITExp.RL3

Participants | Proceedings | Input | Summary

  • Run ID: RMITExp.RL3
  • Participant: RMIT
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 614f8c4cc718ceef13ef0511fcf4bb41
  • Run description: Expand the combined query (original query plus reformulated query) with Google-suggested queries and apply the Dirichlet language model (from the Lemur Toolkit).

udelIndriA.RL1

Participants | Input | Summary

  • Run ID: udelIndriA.RL1
  • Participant: udel
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: efcd7caf6d1ea052cd6e10433343467a
  • Run description: category A, title query

udelIndriA.RL2

Participants | Input | Summary

  • Run ID: udelIndriA.RL2
  • Participant: udel
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: a23020d4960ca60edeee09f9d5b8c8f6
  • Run description: category A, title query

udelIndriA.RL3

Participants | Input | Summary

  • Run ID: udelIndriA.RL3
  • Participant: udel
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: b8e47d1873a3bf0314fd990681a4af10
  • Run description: category A, title query, push down dups of docs in RL1
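
Pushing down duplicates from RL1, as in this RL3 run, can be sketched as a stable re-partition of RL2 (the data is hypothetical):

```python
def push_down_dups(rl2, rl1):
    """Re-rank RL2 so documents already seen in RL1 move to the bottom,
    preserving the relative order within each group."""
    seen = set(rl1)
    fresh = [d for d in rl2 if d not in seen]
    dups  = [d for d in rl2 if d in seen]
    return fresh + dups

rl1 = ["d1", "d2"]
rl2 = ["d2", "d3", "d1", "d4"]
rl3 = push_down_dups(rl2, rl1)  # ["d3", "d4", "d2", "d1"]
```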

udelIndriASF.RL1

Participants | Input | Summary

  • Run ID: udelIndriASF.RL1
  • Participant: udel
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: 97fc1f0d00ef6f32dab6925a69a97d2c
  • Run description: category A, title query, waterloo spam docs filtered

udelIndriASF.RL2

Participants | Input | Summary

  • Run ID: udelIndriASF.RL2
  • Participant: udel
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: 255f7f39d4d8e149c6a821afa837f731
  • Run description: category A, title query, waterloo spam docs filtered

udelIndriASF.RL3

Participants | Input | Summary

  • Run ID: udelIndriASF.RL3
  • Participant: udel
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 6f37869978818c89f73970b49fec800c
  • Run description: category A, title query, waterloo spam docs filtered, dups from RL1 pushed down

udelIndriB.RL1

Participants | Input | Summary

  • Run ID: udelIndriB.RL1
  • Participant: udel
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: 4fe2b9ba78077f9e519d2aebc7388132
  • Run description: category B, title query

udelIndriB.RL2

Participants | Input | Summary

  • Run ID: udelIndriB.RL2
  • Participant: udel
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: 39299e434044ab7a53552db52cb4b5b8
  • Run description: category B, DM query

udelIndriB.RL3

Participants | Input | Summary

  • Run ID: udelIndriB.RL3
  • Participant: udel
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: fd5af02f01e33e98312b13541f44eef5
  • Run description: category B, DM query, dup docs in RL1 pushed down

UM10SibmA.RL1

Participants | Input | Summary

  • Run ID: UM10SibmA.RL1
  • Participant: unimelb
  • Track: Session
  • Year: 2010
  • Submission: 8/26/2010
  • Type: automatic
  • Task: RL1
  • MD5: 41455f2d95a2787621eb8a04abc93076
  • Run description: impact-BM25, plus anchor and link, method A

UM10SibmA.RL2

Participants | Input | Summary

  • Run ID: UM10SibmA.RL2
  • Participant: unimelb
  • Track: Session
  • Year: 2010
  • Submission: 8/26/2010
  • Type: automatic
  • Task: RL2
  • MD5: 05cdecf661ab6762b3a7eb2f4507d47a
  • Run description: impact-BM25, plus anchor and link, method A

UM10SibmA.RL3

Participants | Input | Summary

  • Run ID: UM10SibmA.RL3
  • Participant: unimelb
  • Track: Session
  • Year: 2010
  • Submission: 8/26/2010
  • Type: automatic
  • Task: RL3
  • MD5: 1ed81df6cc276c81a6b02ea6de8ababc
  • Run description: impact-BM25, plus anchor and link, method A

UM10SibmbB.RL1

Participants | Input | Summary

  • Run ID: UM10SibmbB.RL1
  • Participant: unimelb
  • Track: Session
  • Year: 2010
  • Submission: 8/26/2010
  • Type: automatic
  • Task: RL1
  • MD5: aa12de29d4d71a3b916a930ca6751a65
  • Run description: impact-BM25, plus anchor and link, method B

UM10SibmbB.RL2

Participants | Input | Summary

  • Run ID: UM10SibmbB.RL2
  • Participant: unimelb
  • Track: Session
  • Year: 2010
  • Submission: 8/26/2010
  • Type: automatic
  • Task: RL2
  • MD5: 51a8bdc037361186f3fb495f263216b3
  • Run description: impact-BM25, plus anchor and link, method B

UM10SibmbB.RL3

Participants | Input | Summary

  • Run ID: UM10SibmbB.RL3
  • Participant: unimelb
  • Track: Session
  • Year: 2010
  • Submission: 8/26/2010
  • Type: automatic
  • Task: RL3
  • MD5: 54f5f24ebbcd198891a3deffe7aa50eb
  • Run description: impact-BM25, plus anchor and link, method B

UM10SimpA.RL1

Participants | Input | Summary

  • Run ID: UM10SimpA.RL1
  • Participant: unimelb
  • Track: Session
  • Year: 2010
  • Submission: 8/26/2010
  • Type: automatic
  • Task: RL1
  • MD5: 71e4b48c0959f2ac4d8da4116d16cfd0
  • Run description: impact, no anchor or link, method A

UM10SimpA.RL2

Participants | Input | Summary

  • Run ID: UM10SimpA.RL2
  • Participant: unimelb
  • Track: Session
  • Year: 2010
  • Submission: 8/26/2010
  • Type: automatic
  • Task: RL2
  • MD5: 3877bc52905c88ffdd0bab9e069fcd21
  • Run description: impact, no anchor or link, method A

UM10SimpA.RL3

Participants | Input | Summary

  • Run ID: UM10SimpA.RL3
  • Participant: unimelb
  • Track: Session
  • Year: 2010
  • Submission: 8/26/2010
  • Type: automatic
  • Task: RL3
  • MD5: b25688da36ebbef37d16ee32ee6f222e
  • Run description: impact, no anchor or link, method A

USIML052010.RL1

Participants | Proceedings | Input | Summary

  • Run ID: USIML052010.RL1
  • Participant: ULugano
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: 66020ec6c2ed6929ab92e030e23279ce
  • Run description: RL1 is generated using Terrier's BM25 implementation.

USIML052010.RL2

Participants | Proceedings | Input | Summary

  • Run ID: USIML052010.RL2
  • Participant: ULugano
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: 690814dc21fd3dac455f2141919240a2
  • Run description: RL2 is generated using Terrier's BM25 implementation.

USIML052010.RL3

Participants | Proceedings | Input | Summary

  • Run ID: USIML052010.RL3
  • Participant: ULugano
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 4e02b9928c28903353ab514e4a5821dc
  • Run description: RL3 is generated from an RL2-based query relevance model, weighted by its divergence from the RL1-based query model, with a smoothing parameter of 0.5.

USIML092010.RL1

Participants | Proceedings | Input | Summary

  • Run ID: USIML092010.RL1
  • Participant: ULugano
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: 497bc120b0f67c51385701e4e3339029
  • Run description: RL1 is generated using Terrier's BM25 implementation.

USIML092010.RL2

Participants | Proceedings | Input | Summary

  • Run ID: USIML092010.RL2
  • Participant: ULugano
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: 717e356307a721e92376aa2147544c07
  • Run description: RL2 is generated using Terrier's BM25 implementation.

USIML092010.RL3

Participants | Proceedings | Input | Summary

  • Run ID: USIML092010.RL3
  • Participant: ULugano
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 8942d8fce10045bae72479c0743463f5
  • Run description: RL3 is generated from an RL2-based query relevance model, weighted by its divergence from the RL1-based query model, with a smoothing parameter of 0.9.

USIRR2010.RL1

Participants | Proceedings | Input | Summary

  • Run ID: USIRR2010.RL1
  • Participant: ULugano
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: b0b3629ee4fb9ecc6798af86fe7376e6
  • Run description: RL1 is generated using Terrier's BM25 implementation.

USIRR2010.RL2

Participants | Proceedings | Input | Summary

  • Run ID: USIRR2010.RL2
  • Participant: ULugano
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: 496e3efb170d7752be35b471cecd9511
  • Run description: RL2 is generated using Terrier's BM25 implementation.

USIRR2010.RL3

Participants | Proceedings | Input | Summary

  • Run ID: USIRR2010.RL3
  • Participant: ULugano
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 9c3a4ca644ab7f2b6b4d2c8827a556bc
  • Run description: RL3 is generated by scoring documents according to the weighted summation of reciprocal ranks for documents in RL1 and RL2, where the weight given to documents from RL1 is negative and RL2 is positive.
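
The weighted reciprocal-rank summation described above can be sketched as follows; the particular weights (-0.3 for RL1, +1.0 for RL2) are assumptions for illustration:

```python
def rr_fusion(rl1, rl2, w1=-0.3, w2=1.0):
    """Score documents by a weighted sum of their reciprocal ranks in RL1 and RL2.
    A negative w1 demotes documents the user already saw in RL1."""
    scores = {}
    for rl, w in ((rl1, w1), (rl2, w2)):
        for rank, doc in enumerate(rl, start=1):
            scores[doc] = scores.get(doc, 0.0) + w / rank
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ranked lists of document ids.
rl1 = ["d1", "d2"]
rl2 = ["d1", "d3", "d2"]
fused = rr_fusion(rl1, rl2)  # ["d1", "d3", "d2"]
```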

uvaExt1.RL1

Participants | Proceedings | Input | Summary

  • Run ID: uvaExt1.RL1
  • Participant: UAms
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: c087197f3fa9a125002e0b6dd2e1dadb
  • Run description: Baseline: standard web retrieval run. Retrieval using Indri language modeling and phrases, then diversification based on LDA clustering.

uvaExt1.RL2

Participants | Proceedings | Input | Summary

  • Run ID: uvaExt1.RL2
  • Participant: UAms
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: d20e08759873204b428ecbfb614a8246
  • Run description: Baseline: standard retrieval run using Indri language modeling and phrases, no diversification.

uvaExt1.RL3

Participants | Proceedings | Input | Summary

  • Run ID: uvaExt1.RL3
  • Participant: UAms
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 3ff8485b9957db99a2a978040dfca07b
  • Run description: Bias results towards the original query using blind relevance feedback. The follow-up query is expanded with the relevance-feedback terms generated from the first query.

uvaExt2.RL1

Participants | Proceedings | Input | Summary

  • Run ID: uvaExt2.RL1
  • Participant: UAms
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: 3f3c5df2774a5e2e0e9287aed55f0c29
  • Run description: Same as the uvaExt1.RL1 baseline: standard web retrieval run. Retrieval using Indri language modeling and phrases, then diversification based on LDA clustering.

uvaExt2.RL2

Participants | Proceedings | Input | Summary

  • Run ID: uvaExt2.RL2
  • Participant: UAms
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: f7aeedaf1d1ed5a1751308e3a0da9c32
  • Run description: Retrieval using Indri language modeling and phrases plus blind relevance feedback.

uvaExt2.RL3

Participants | Proceedings | Input | Summary

  • Run ID: uvaExt2.RL3
  • Participant: UAms
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 18b0b1dc63745f63d4b41c4ff2583d55
  • Run description: Expand the follow-up query using blind relevance feedback, but remove relevance-feedback terms that were generated by the original query.

uvaExt3.RL1

Participants | Proceedings | Input | Summary

  • Run ID: uvaExt3.RL1
  • Participant: UAms
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: 8d3a98dc35046c1a7a9508257ecada2f
  • Run description: Same as the uvaExt1.RL1 baseline: standard web retrieval run. Retrieval using Indri language modeling and phrases, then diversification based on LDA clustering.

uvaExt3.RL2

Participants | Proceedings | Input | Summary

  • Run ID: uvaExt3.RL2
  • Participant: UAms
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: 828becb97b7d1fbbc3c7c47eed783f14
  • Run description: Retrieval using Indri language modeling and phrases, then diversification based on LDA clustering.

uvaExt3.RL3

Participants | Proceedings | Input | Summary

  • Run ID: uvaExt3.RL3
  • Participant: UAms
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: d5623a225ea2912b080146c5c30d6668
  • Run description: Expand the follow-up query using blind relevance feedback for both the original and the follow-up query.

webis2010.RL1

Participants | Proceedings | Input | Summary

  • Run ID: webis2010.RL1
  • Participant: Webis
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL1
  • MD5: 7f9d2233795d9b7c8216312a4515d757
  • Run description: Query segmentation and maximum query, against a third-party search engine interface. Additional resources: Google N-gram corpus (for query segmentation)

webis2010.RL2

Participants | Proceedings | Input | Summary

  • Run ID: webis2010.RL2
  • Participant: Webis
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL2
  • MD5: 2114014e59875bc0efdae3ac02019458
  • Run description: Query segmentation and maximum query, against a third-party search engine interface. Additional resources: Google N-gram corpus (for query segmentation)

webis2010.RL3

Participants | Proceedings | Input | Summary

  • Run ID: webis2010.RL3
  • Participant: Webis
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: caeb103b5f5f5e47aa4a04c74bde230e
  • Run description: Query segmentation and maximum query, against a third-party search engine interface. Additional resources: Google N-gram corpus (for query segmentation)

webis2010w.RL3

Participants | Proceedings | Input | Summary

  • Run ID: webis2010w.RL3
  • Participant: Webis
  • Track: Session
  • Year: 2010
  • Submission: 8/25/2010
  • Type: automatic
  • Task: RL3
  • MD5: 3ab2ee9af4780b7ab81c7e856764376c
  • Run description: Query segmentation and maximum query, against a third-party search engine interface, with additional weighting applied. Additional resources: Google N-gram corpus (for query segmentation)