
Runs - Federated Web Search 2014

basedef

Participants | Input | Appendix

  • Run ID: basedef
  • Participant: ECNUCS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/15/2014
  • Task: merging
  • MD5: 9404e8a37b1b0dce1eef4d82475d01b3
  • Run description: This is a simple baseline that uses the default rank of documents in each search engine.

drexelRS1

Participants | Proceedings | Input | Appendix

  • Run ID: drexelRS1
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: resource
  • MD5: 66dee396efc6e8695ac5f50ab1bcb3e8
  • Run description: Index: Central Sample Index over all the sampled docs using Indri 5.5 (Krovetz stemmer, stop words not removed). Retrieval: plain query terms with a language model (Dirichlet smoothing, mu=1350). Resource selection is based on the CRCSExp algorithm as implemented in the LiDR toolkit by Ilya Markov.
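
The CRCS(exp) family scores a resource by summing exponentially decayed contributions from the positions of its sampled documents in the central-sample-index ranking. A minimal sketch, where alpha, beta, and top_n are illustrative defaults, not this run's actual settings:

```python
import math

def crcs_exp(csi_ranking, alpha=1.2, beta=0.28, top_n=50):
    """CRCS(exp)-style resource scoring: the sampled document at rank i
    of the central-sample-index ranking adds alpha * exp(-beta * i) to
    the score of the resource it was sampled from. Parameter values are
    assumptions for illustration."""
    scores = {}
    for i, (doc_id, resource) in enumerate(csi_ranking[:top_n]):
        scores[resource] = scores.get(resource, 0.0) + alpha * math.exp(-beta * i)
    # resources with higher accumulated scores are selected first
    return sorted(scores, key=scores.get, reverse=True)
```

Resources whose sampled documents cluster near the top of the CSI ranking dominate, since the exponential decay makes low ranks contribute almost nothing.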

drexelRS1mR

Participants | Proceedings | Input | Appendix

  • Run ID: drexelRS1mR
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/16/2014
  • Task: merging
  • MD5: a8497d14418ec4e2f46ec0c4e50af76d
  • Run description: Snippets are ranked by their reciprocal rank score (k=60) multiplied by the reciprocal rank of their corresponding resource (based on the RS run result) as the weight. The corresponding RS run is drexelRS1 (CRCSExp).
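
The merging rule described above, reciprocal rank with k=60 weighted by the resource's reciprocal rank, can be sketched as follows (function and variable names are hypothetical):

```python
def merge_rrf(result_lists, resource_ranks, k=60):
    """Merge per-engine snippet lists. Each snippet scores
    1 / (k + rank), multiplied by 1 / resource_rank of its engine,
    where resource_rank comes from the resource-selection (RS) run."""
    scored = []
    for engine, snippets in result_lists.items():
        weight = 1.0 / resource_ranks[engine]
        for rank, snippet in enumerate(snippets, start=1):
            scored.append((snippet, weight / (k + rank)))
    # highest combined score first
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [snippet for snippet, _ in scored]
```

With k=60 the per-list rank differences are small, so the resource weight largely decides which engine's results float to the top.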

drexelRS2

Participants | Proceedings | Input | Appendix

  • Run ID: drexelRS2
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: resource
  • MD5: d0556fb98ad162cd20f0bbdaeadf1db7
  • Run description: Index: Central Sample Index over all the sampled docs using Indri 5.5 (Krovetz stemmer, stop words not removed). Retrieval: plain query terms with a language model (Dirichlet smoothing, mu=1350). Resource selection is based on the ReDDE algorithm as implemented in the LiDR toolkit by Ilya Markov.

drexelRS3

Participants | Proceedings | Input | Appendix

  • Run ID: drexelRS3
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: resource
  • MD5: 9ae9e3dcac67af7a062a95603a6ee0a4
  • Run description: Index: Central Sample Index over all the sampled docs using Indri 5.5 (Krovetz stemmer, stop words not removed). Retrieval: plain query terms with a language model (Dirichlet smoothing, mu=1350). Resource selection is based on the CiSSApprox algorithm as implemented in the LiDR toolkit by Ilya Markov.

drexelRS4

Participants | Proceedings | Input | Appendix

  • Run ID: drexelRS4
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: resource
  • MD5: ec996e458d822576689e7f9950646796
  • Run description: Index: Central Sample Index over all the sampled docs using Indri 5.5 (Krovetz stemmer, stop words not removed). Retrieval: plain query terms with a language model (Dirichlet smoothing, mu=1350). Resource selection is based on the CiSS algorithm as implemented in the LiDR toolkit by Ilya Markov.

drexelRS4mW

Participants | Proceedings | Input | Appendix

  • Run ID: drexelRS4mW
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/16/2014
  • Task: merging
  • MD5: 9e99350039a93cce7aaa8c112816fccc
  • Run description: Snippets are ranked by their reciprocal rank score (k=60) multiplied by their corresponding resource score (based on the RS run result) as the weight. The corresponding RS run is drexelRS4 (CiSS).

drexelRS5

Participants | Proceedings | Input | Appendix

  • Run ID: drexelRS5
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: resource
  • MD5: ee01f89c0a0f6562ab5c86ac41ae6ec4
  • Run description: Index: Central Sample Index over all the sampled docs using Indri 5.5 (Krovetz stemmer, stop words not removed). Retrieval: plain query terms with the BM25 retrieval model (k=1.2, b=0.75). Resource selection is based on the CRCSLinear algorithm as implemented in the LiDR toolkit by Ilya Markov.

drexelRS6

Participants | Proceedings | Input | Appendix

  • Run ID: drexelRS6
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: resource
  • MD5: d6b9a73ffcff2e1d74b8361b575881e0
  • Run description: Index: Central Sample Index over all the sampled docs using Indri 5.5 (Krovetz stemmer, stop words not removed). Retrieval: Markov Random Field retrieval model with sequential dependence of query terms (Dirichlet smoothing, mu=1350). Resource selection is based on the ReDDETop algorithm as implemented in the LiDR toolkit by Ilya Markov.

drexelRS6mR

Participants | Proceedings | Input | Appendix

  • Run ID: drexelRS6mR
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/16/2014
  • Task: merging
  • MD5: 665ebae0d02a67aac54f7de729a0daff
  • Run description: Snippets are ranked by their reciprocal rank score (k=60) multiplied by the reciprocal rank of their corresponding resource (based on the RS run result) as the weight. The corresponding RS run is drexelRS6 (ReDDETop).

drexelRS6mW

Participants | Proceedings | Input | Appendix

  • Run ID: drexelRS6mW
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/16/2014
  • Task: merging
  • MD5: 2f01a48a736032652a586cc5716d7477
  • Run description: Snippets are ranked by their reciprocal rank score (k=60) multiplied by their corresponding resource score (based on the RS run result) as the weight. The corresponding RS run is drexelRS6 (ReDDETop).

drexelRS7

Participants | Proceedings | Input | Appendix

  • Run ID: drexelRS7
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: resource
  • MD5: e3b76b5a930935eb18472c4b0b055426
  • Run description: Index: Central Sample Index over all the sampled docs using Indri 5.5 (Krovetz stemmer, stop words not removed). Retrieval: Markov Random Field retrieval model with sequential dependence of query terms (Dirichlet smoothing, mu=1350). Resource selection is based on the SUSHI algorithm as implemented in the LiDR toolkit by Ilya Markov.

drexelRS7mW

Participants | Proceedings | Input | Appendix

  • Run ID: drexelRS7mW
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/16/2014
  • Task: merging
  • MD5: 9b4da0a6af7901f59edbec3d9eab65ff
  • Run description: Snippets are ranked by their reciprocal rank score (k=60) multiplied by their corresponding resource score (based on the RS run result) as the weight. The corresponding RS run is drexelRS7 (SUSHI).

drexelVS1

Participants | Proceedings | Input | Appendix

  • Run ID: drexelVS1
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: 5bc219d8a17805efd03fef99bec52866
  • Run description: Index: Central Sample Index over all the sampled docs using Indri 5.5 (Krovetz stemmer, stop words not removed). Retrieval: plain query terms with a language model (Dirichlet smoothing, mu=1350). Vertical selection is done by treating a vertical as a single resource with its constituent engines combined, and then selecting verticals using the CRCSExp resource selection algorithm as implemented in the LiDR toolkit by Ilya Markov. An additional truncation step stops selecting verticals when the discounted gain of selecting a vertical falls below a certain threshold.

drexelVS2

Participants | Proceedings | Input | Appendix

  • Run ID: drexelVS2
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: 4ed424c627b38c7ab93475b9d9159748
  • Run description: Index: Central Sample Index over all the sampled docs using Indri 5.5 (Krovetz stemmer, stop words not removed). Retrieval: plain query terms with a language model (Dirichlet smoothing, mu=1350). Vertical selection is done by treating a vertical as a single resource with its constituent engines combined, and then selecting verticals using the ReDDE resource selection algorithm as implemented in the LiDR toolkit by Ilya Markov. An additional truncation step stops selecting verticals when the discounted gain of selecting a vertical falls below a certain threshold.

drexelVS3

Participants | Proceedings | Input | Appendix

  • Run ID: drexelVS3
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: 7dedc0bf63114d3c5773a82365e81324
  • Run description: Index: Central Sample Index over all the sampled docs using Indri 5.5 (Krovetz stemmer, stop words not removed). Retrieval: plain query terms with a language model (Dirichlet smoothing, mu=1350). Vertical selection is done by treating a vertical as a single resource with its constituent engines combined, and then selecting verticals using the CiSSApprox resource selection algorithm as implemented in the LiDR toolkit by Ilya Markov. An additional truncation step stops selecting verticals when the discounted gain of selecting a vertical falls below a certain threshold.

drexelVS4

Participants | Proceedings | Input | Appendix

  • Run ID: drexelVS4
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: c8202d1ae50b8d8e4dc033107c6c4a7a
  • Run description: Index: Central Sample Index over all the sampled docs using Indri 5.5 (Krovetz stemmer, stop words not removed). Retrieval: plain query terms with a language model (Dirichlet smoothing, mu=1350). Vertical selection is done by treating a vertical as a single resource with its constituent engines combined, and then selecting verticals using the CiSS resource selection algorithm as implemented in the LiDR toolkit by Ilya Markov. An additional truncation step stops selecting verticals when the discounted gain of selecting a vertical falls below a certain threshold.

drexelVS5

Participants | Proceedings | Input | Appendix

  • Run ID: drexelVS5
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: 481e8ae8ecf032c39cbdf348dd80dde8
  • Run description: Index: Central Sample Index over all the sampled docs using Indri 5.5 (Krovetz stemmer, stop words not removed). Retrieval: plain query terms with the BM25 retrieval model (k=1.2, b=0.75). Vertical selection is done by treating a vertical as a single resource with its constituent engines combined, and then selecting verticals using the CRCSLinear resource selection algorithm as implemented in the LiDR toolkit by Ilya Markov. An additional truncation step stops selecting verticals when the discounted gain of selecting a vertical falls below a certain threshold.

drexelVS6

Participants | Proceedings | Input | Appendix

  • Run ID: drexelVS6
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: 346a031770f0ba921bc6d6d7b8e71112
  • Run description: Index: Central Sample Index over all the sampled docs using Indri 5.5 (Krovetz stemmer, stop words not removed). Retrieval: Markov Random Field retrieval model with sequential dependence of query terms (Dirichlet smoothing, mu=1350). Vertical selection is done by treating a vertical as a single resource with its constituent engines combined, and then selecting verticals using the ReDDETop resource selection algorithm as implemented in the LiDR toolkit by Ilya Markov. An additional truncation step stops selecting verticals when the discounted gain of selecting a vertical falls below a certain threshold.

drexelVS7

Participants | Proceedings | Input | Appendix

  • Run ID: drexelVS7
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: 0bb1a168433804f2995e90af5999d852
  • Run description: Index: Central Sample Index over all the sampled docs using Indri 5.5 (Krovetz stemmer, stop words not removed). Retrieval: Markov Random Field retrieval model with sequential dependence of query terms (Dirichlet smoothing, mu=1350). Vertical selection is done by treating a vertical as a single resource with its constituent engines combined, and then selecting verticals using the SUSHI resource selection algorithm as implemented in the LiDR toolkit by Ilya Markov. An additional truncation step stops selecting verticals when the discounted gain of selecting a vertical falls below a certain threshold.

ecomsv

Participants | Input | Appendix

  • Run ID: ecomsv
  • Participant: ECNUCS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: 6e1c7c7f819fc7ab75c331d779f5ef2b
  • Run description: In this method, we use seif and vertical selection to calculate the result.

ecomsvt

Participants | Input | Appendix

  • Run ID: ecomsvt
  • Participant: ECNUCS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: 42eeae6fcbc4239d47347a90754648ee
  • Run description: In this method, we use seif, vertical selection, and tf-idf to calculate the result.

ecomsvz

Participants | Input | Appendix

  • Run ID: ecomsvz
  • Participant: ECNUCS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: fb8dece0cd30aaf7f213910eaee84800
  • Run description: In this method, we use seif, vertical selection, and different similarity measures to calculate the result.

ekwma

Participants | Input | Appendix

  • Run ID: ekwma
  • Participant: ECNUCS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: vertical
  • MD5: ba2f268612733007cc1d61131e30e0b0
  • Run description: In this method, we use WordNet for query expansion and map a query to a category when they share the same keyword.

eseif

Participants | Input | Appendix

  • Run ID: eseif
  • Participant: ECNUCS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: d1dd1d22b4272bc1a323ab1561869917
  • Run description: In this method, we use seif, estimated from the TREC 2013 dataset, to measure the importance of search engines and produce a query-independent ranking.

esevs

Participants | Input | Appendix

  • Run ID: esevs
  • Participant: ECNUCS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: vertical
  • MD5: 40abb0262be30cb389c2858444c22d2a
  • Run description: This result is derived from the resource selection ranking. We label the query using the labels of the search engines in its resource selection ranking.

esevsru

Participants | Input | Appendix

  • Run ID: esevsru
  • Participant: ECNUCS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: vertical
  • MD5: df684541d684845a7d1cf9b9761e45d4
  • Run description: On the basis of the classification returned by the resource selection ranking, we also add some rules to improve the result.

esmimax

Participants | Input | Appendix

  • Run ID: esmimax
  • Participant: ECNUCS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: c68b48ecf0d51dc932764187f6006c60
  • Run description: In this method, we use different algorithms, such as Jaccard, Dice, and cosine, to calculate the similarity between queries and search engines. We also apply LSA.

esvru

Participants | Input | Appendix

  • Run ID: esvru
  • Participant: ECNUCS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: vertical
  • MD5: 3f1695d2d027e70f9fe70d4e133f3114
  • Run description: On the basis of the SVM algorithm, we also add some rules to do the classification.

etfidf

Participants | Input | Appendix

  • Run ID: etfidf
  • Participant: ECNUCS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: 0a9eb6c2903b2cde95d9acb4c5828520
  • Run description: In this method, we simply use tf-idf to calculate the similarity between queries and search engines.
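
A bare-bones version of this tf-idf matching, treating each engine's sampled text as one document and ranking engines by cosine similarity to the query, might look like this (tokenization and the idf variant are assumptions):

```python
import math
from collections import Counter

def tfidf_rank(query, engine_docs):
    """Rank engines by tf-idf cosine similarity between the query and
    each engine's concatenated sample text. Lowercased whitespace
    tokenization and smoothed idf are illustrative choices."""
    names = list(engine_docs)
    tokens = {n: engine_docs[n].lower().split() for n in names}
    df = Counter()
    for toks in tokens.values():
        df.update(set(toks))  # document frequency over engines
    idf = {t: math.log(len(names) / df[t]) + 1.0 for t in df}

    def vector(toks):
        tf = Counter(toks)
        return {t: tf[t] * idf.get(t, 1.0) for t in tf}

    def cosine(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    qvec = vector(query.lower().split())
    return sorted(names, key=lambda n: cosine(qvec, vector(tokens[n])),
                  reverse=True)
```

Engines whose sampled vocabulary overlaps the query score higher; engines sharing no query terms get similarity zero.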

FW14basemR

Participants | Proceedings | Input | Appendix

  • Run ID: FW14basemR
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/16/2014
  • Task: merging
  • MD5: b2a43bb0746d6a306b39eeb7cf6f8c0d
  • Run description: Snippets are ranked by their reciprocal rank score (k=60) multiplied by the reciprocal rank of their corresponding resource (based on the RS run result) as the weight. The corresponding RS run is FW14base.

FW14basemW

Participants | Proceedings | Input | Appendix

  • Run ID: FW14basemW
  • Participant: dragon
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/16/2014
  • Task: merging
  • MD5: f50565337fec84c9b92a61b4b3dc1a33
  • Run description: Snippets are ranked by their reciprocal rank score (k=60) multiplied by their corresponding resource score (based on the RS run result) as the weight. The corresponding RS run is FW14base.

FW14Docs100

Participants | Proceedings | Input | Appendix

  • Run ID: FW14Docs100
  • Participant: info_ruc
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: b298e70c63b73b6a4eb93b64cb83517b
  • Run description: Run LDA topic analysis over the whole collection of sampled documents. The topic distribution of each resource is calculated as the average of the topic distributions of all the sampled documents in that resource. Each query is expanded by Google, and its topic distribution is then inferred using the trained topic models. Finally, the resources are ranked according to the similarities of their topic distribution vectors to that of the query. In this run, the number of topics used in training the LDA topic models is set to 100.
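
The ranking step described above, averaging per-document topic distributions into a resource profile and comparing it with the query's topic distribution, can be sketched as follows (the LDA training and inference themselves are assumed to be done beforehand with an external topic-modeling library):

```python
import math

def rank_resources(doc_topics, query_topics):
    """Given per-document topic distributions grouped by resource,
    average them into one profile vector per resource, then rank
    resources by cosine similarity to the query's topic distribution."""
    def mean(vectors):
        k = len(vectors[0])
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(k)]

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    profiles = {r: mean(vs) for r, vs in doc_topics.items()}
    return sorted(profiles,
                  key=lambda r: cosine(profiles[r], query_topics),
                  reverse=True)
```

Averaging makes a resource's profile robust to individual off-topic samples, at the cost of blurring multi-topic resources.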

FW14Docs50

Participants | Proceedings | Input | Appendix

  • Run ID: FW14Docs50
  • Participant: info_ruc
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: 037b6396f3e2c4c0c8ae34ab006396f4
  • Run description: Run LDA topic analysis over the whole collection of sampled documents. The topic distribution of each resource is calculated as the average of the topic distributions of all the sampled documents in that resource. Each query is expanded by Google, and its topic distribution is then inferred using the trained topic models. Finally, the resources are ranked according to the similarities of their topic distribution vectors to that of the query. In this run, the number of topics used in training the LDA topic models is set to 50.

FW14Docs75

Participants | Proceedings | Input | Appendix

  • Run ID: FW14Docs75
  • Participant: info_ruc
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: 371746dcf873368524a08837f20a8f24
  • Run description: Run LDA topic analysis over the whole collection of sampled documents. The topic distribution of each resource is calculated as the average of the topic distributions of all the sampled documents in that resource. Each query is expanded by Google, and its topic distribution is then inferred using the trained topic models. Finally, the resources are ranked according to the similarities of their topic distribution vectors to that of the query. In this run, the number of topics used in training the LDA topic models is set to 75.

FW14Search100

Participants | Proceedings | Input | Appendix

  • Run ID: FW14Search100
  • Participant: info_ruc
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: d1b933612445499241920f361db0f155
  • Run description: Concatenate all the sampled snippets for each resource into one large document, and run LDA topic analysis over the collection of these large documents. The topic distribution of each resource is then that of its corresponding large document. Each query is expanded by Google, and its topic distribution is then inferred using the trained topic models. Finally, the resources are ranked according to the similarities of their topic distribution vectors to that of the query. In this run, the number of topics used in training the LDA topic models is set to 100.

FW14Search50

Participants | Proceedings | Input | Appendix

  • Run ID: FW14Search50
  • Participant: info_ruc
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: c0f6bfe8a23930d34353112dd18ef3fa
  • Run description: Concatenate all the sampled snippets for each resource into one large document, and run LDA topic analysis over the collection of these large documents. The topic distribution of each resource is then that of its corresponding large document. Each query is expanded by Google, and its topic distribution is then inferred using the trained topic models. Finally, the resources are ranked according to the similarities of their topic distribution vectors to that of the query. In this run, the number of topics used in training the LDA topic models is set to 50.

FW14Search75

Participants | Proceedings | Input | Appendix

  • Run ID: FW14Search75
  • Participant: info_ruc
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: 0b78b0e5e1895b5895dc2175fc7d1a0b
  • Run description: Concatenate all the sampled snippets for each resource into one large document, and run LDA topic analysis over the collection of these large documents. The topic distribution of each resource is then that of its corresponding large document. Each query is expanded by Google, and its topic distribution is then inferred using the trained topic models. Finally, the resources are ranked according to the similarities of their topic distribution vectors to that of the query. In this run, the number of topics used in training the LDA topic models is set to 75.

googTermWise7

Participants | Proceedings | Input | Appendix

  • Run ID: googTermWise7
  • Participant: CMU_LTI
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/15/2014
  • Task: merging
  • MD5: e7c97c3c49f351c88aee6f9fcb92d42e
  • Run description: A more aggressive expansion strategy is used, in which we add 3 terms within a threshold for each query term.

googUniform7

Participants | Proceedings | Input | Appendix

  • Run ID: googUniform7
  • Participant: CMU_LTI
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/15/2014
  • Task: merging
  • MD5: 1ed74835d73d282a6addfe6ca4c216fe
  • Run description: Simple query expansion that finds the 3 closest terms to the query within a threshold and adds them to the query.

ICTNETRM01

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETRM01
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/13/2014
  • Task: merging
  • MD5: d489045c553f47d428659b9e2a2b1971
  • Run description: information retrieval method with duplicated results retained

ICTNETRM02

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETRM02
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/13/2014
  • Task: merging
  • MD5: 4d684f80158bf2beed7592d81083d440
  • Run description: information retrieval method with duplicated results removed according to URL

ICTNETRM03

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETRM03
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/13/2014
  • Task: merging
  • MD5: 57cf333f150e95f214685f38f009a449
  • Run description: Similar to the PR algorithm: use the Google API to expand the query, remove duplicated results based on URL, and calculate similarity using an LSI model.

ICTNETRM04

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETRM04
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/13/2014
  • Task: merging
  • MD5: 16288459fefc2f0d66458c4398521c31
  • Run description: Remove duplicated results based on URL, and calculate similarity using an LSI model with 5 topics.

ICTNETRM05

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETRM05
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/13/2014
  • Task: merging
  • MD5: 6a34e90aea1f109ccab08aa859525218
  • Run description: fusion of the LSI model, the IR method, and the PR-similar algorithm, without duplication removal.

ICTNETRM06

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETRM06
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/13/2014
  • Task: merging
  • MD5: f56b2dc22feaadfe6627cf1d2f3445c6
  • Run description: fusion of the LSI model, the IR method, and the PR-similar algorithm, with duplication removal based on URL.

ICTNETRM07

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETRM07
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/13/2014
  • Task: merging
  • MD5: 47414b33c48d5f695fbd0f4d3043445e
  • Run description: fusion of the IR method and the PR-similar algorithm, with duplication removal based on URL.

ICTNETRS01

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETRS01
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/17/2014
  • Task: resource
  • MD5: 8eed1d6fa537a260b7552a52c000e96b
  • Run description: search result with Lucene

ICTNETRS02

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETRS02
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/17/2014
  • Task: resource
  • MD5: 55b098ff827cf4b7453c61d3e740d54c
  • Run description: result of two-pass filtering based on the IR result

ICTNETRS03

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETRS03
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/17/2014
  • Task: resource
  • MD5: f20a3380af45e23f187689705145157f
  • Run description: result of two-pass filtering based on the classification result

ICTNETRS04

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETRS04
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/17/2014
  • Task: resource
  • MD5: cb1bceacc641a2c79c568cdd59cd64d8
  • Run description: LSI model with 100 topics and PR*2

ICTNETRS05

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETRS05
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/17/2014
  • Task: resource
  • MD5: 42381dd503b2a782d57083dafae0e908
  • Run description: two-pass filtering of the LSI model with 100 topics and PR*2

ICTNETRS06

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETRS06
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/17/2014
  • Task: resource
  • MD5: 5f55c956f62f62c76db1cf04f30df846
  • Run description: two-pass filtering of two files: the LSI model with 100 topics and PR*2

ICTNETRS07

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETRS07
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/17/2014
  • Task: resource
  • MD5: e4c5532d5e07e5ad2bd217dc5473f97d
  • Run description: IR result and two-pass filtering without PR

ICTNETVS02

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETVS02
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/17/2014
  • Task: vertical
  • MD5: 506ac8a8f0a1867908d779a82a0f1623
  • Run description: sampling from the given vertical data, then calculating the classification probability based on the Google API

ICTNETVS03

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETVS03
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/17/2014
  • Task: vertical
  • MD5: cec0aa2ae8b0bf8a99fc023224a2684a
  • Run description: merged result of vertical selection based on the Google API and a query LSI model

ICTNETVS04

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETVS04
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/17/2014
  • Task: vertical
  • MD5: e8edcef1950c2cc5dd2bafd8cbcc2308
  • Run description: merged result of several LSI models and a classification model

ICTNETVS05

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETVS05
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/17/2014
  • Task: vertical
  • MD5: 4bed86aa1290baced1d1cb2c6c4c1f23
  • Run description: merged result of several LSI models, a classification model, and other models

ICTNETVS06

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETVS06
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/17/2014
  • Task: vertical
  • MD5: e028142cd2c6cabaf0721d825d4a2928
  • Run description: voting classification for sampled given vertical data

ICTNETVS07

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETVS07
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/17/2014
  • Task: vertical
  • MD5: b419580603598192019ae3b3eea1c833
  • Run description: Merged result of A, B, and C. A: query (without stopwords) classification based on vertical features (term tf-idf). B: query features (co-occurrence terms without stopwords) classification based on vertical features (term tf-idf). C: query results over the URL, TITLE, H1, and CONTENT fields.

ICTNETVS1

Participants | Proceedings | Input | Appendix

  • Run ID: ICTNETVS1
  • Participant: ICTNET
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/8/2014
  • Task: vertical
  • MD5: a6360bb75323e4366647c87c26dafbe0
  • Run description: based on search results over the URL, TITLE, H1, and CONTENT fields in a stemmed data index.

NTNUiSrs1

Participants | Proceedings | Input | Appendix

  • Run ID: NTNUiSrs1
  • Participant: NTNUiS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: resource
  • MD5: 2aa1376af0768641e42b518c9e64abbe
  • Run description: Baseline using document text only

NTNUiSrs2

Participants | Proceedings | Input | Appendix

  • Run ID: NTNUiSrs2
  • Participant: NTNUiS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: resource
  • MD5: 597ffb6f45015ef43bf64f4faacdbf8c
  • Run description: Learning to rank approach trained on FedWeb'13 data

NTNUiSrs3

Participants | Proceedings | Input | Appendix

  • Run ID: NTNUiSrs3
  • Participant: NTNUiS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: resource
  • MD5: be8fb106e7e93c807ab48dd6ee891b54
  • Run description: Learning to rank approach trained on FedWeb'12 + '13 data

NTNUiSvs2

Participants | Proceedings | Input | Appendix

  • Run ID: NTNUiSvs2
  • Participant: NTNUiS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: 08f04c7bb29579af71f83bd295c5103f
  • Run description: Based on resource selection run NTNUiSrs2

NTNUiSvs3

Participants | Proceedings | Input | Appendix

  • Run ID: NTNUiSvs3
  • Participant: NTNUiS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: 4a07aa4e6c3c6ef98686997e5760d068
  • Run description: Based on resource selection run NTNUiSrs3

plain

Participants | Proceedings | Input | Appendix

  • Run ID: plain
  • Participant: CMU_LTI
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/15/2014
  • Task: merging
  • MD5: ae4dd177789a17a47853a8d3f6eb2a33
  • Run description: The query terms are combined using the standard Indri #combine operator, and the documents are ordered by this score.

SCUTKapok1

Participants | Input | Appendix

  • Run ID: SCUTKapok1
  • Participant: SCUTKapok
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/13/2014
  • Task: merging
  • MD5: 00d1bca5748e432284de216b09de2532
  • Run description: Using the resource and vertical weights to do the result merging. The formula is score = lambda * score + (1 - lambda) * (vertical weight * resource weight).
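
The interpolation in this formula, mixing a snippet's own retrieval score with the product of its vertical and resource weights, could be implemented as follows (the mixing parameter lam and all names are illustrative, not the run's tuned values):

```python
def interpolated_score(score, vertical_weight, resource_weight, lam=0.6):
    """Linear interpolation of a snippet's retrieval score with the
    product of its vertical and resource weights; lam controls how
    much the snippet's own score dominates."""
    return lam * score + (1.0 - lam) * (vertical_weight * resource_weight)
```

Snippets from the same engine share the second term, so lam effectively trades off within-engine ordering against cross-engine prior importance.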

SCUTKapok2

Participants | Input | Appendix

  • Run ID: SCUTKapok2
  • Participant: SCUTKapok
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/14/2014
  • Task: merging
  • MD5: 0fbc550b4198a00caff8dc32a7c09f2d
  • Run description: We re-compute the resource score for the baseline here.

SCUTKapok3

Participants | Input | Appendix

  • Run ID: SCUTKapok3
  • Participant: SCUTKapok
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/15/2014
  • Task: merging
  • MD5: 1a84a85a67269dfba7cd9261107920cc
  • Run description: Uses the resource and vertical weights to perform result merging. The formula is totalScore = λ·(score·resourceWeight) + (1−λ)·verticalWeight.

SCUTKapok4

Participants | Input | Appendix

  • Run ID: SCUTKapok4
  • Participant: SCUTKapok
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/15/2014
  • Task: merging
  • MD5: 33c66f42bbbce5e4c9f52e9f6c63e471
  • Run description: Uses another formula to calculate the score, raising the vertical weight.

SCUTKapok5

Participants | Input | Appendix

  • Run ID: SCUTKapok5
  • Participant: SCUTKapok
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/15/2014
  • Task: merging
  • MD5: f442d4dc83fd035b3e0721788b00c687
  • Run description: Uses another formula to calculate the score, raising the vertical weight.

SCUTKapok6

Participants | Input | Appendix

  • Run ID: SCUTKapok6
  • Participant: SCUTKapok
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/15/2014
  • Task: merging
  • MD5: b10400b3f73e2450db4f5eb5a30685f3
  • Run description: This run calculates the score with the formula a·score + b·resourceWeight + c·verticalWeight, taking the weights of both resources and verticals into account.

SCUTKapok7

Participants | Input | Appendix

  • Run ID: SCUTKapok7
  • Participant: SCUTKapok
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/15/2014
  • Task: merging
  • MD5: 342a02b7763e338bd0921617ad97898a
  • Run description: This run calculates the scores with the formula a·score + b·resourceWeight + c·verticalWeight, using the resource and vertical weights, with duplicates removed.

sdm5

Participants | Proceedings | Input | Appendix

  • Run ID: sdm5
  • Participant: CMU_LTI
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/15/2014
  • Task: merging
  • MD5: e97496305c2b90c095ac0fdeca916b39
  • Run description: Use the sequential dependency model with uniform weights for the query terms and the bigram and window components.
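
A sequential dependence model query of this kind is conventionally expressed in Indri as a `#weight` over three `#combine` clauses: unigrams, exact bigrams (`#1`), and unordered windows (`#uw8`). The sketch below generates such a query string with the uniform weights the description mentions; the window size of 8 is a common default and an assumption here, not a detail from the run.

```python
def sdm_query(terms, w=(0.33, 0.33, 0.34)):
    """Build an Indri-style sequential dependence model query string.
    Uniform-ish weights over unigram, exact-bigram (#1), and
    unordered-window (#uw8) components; window size 8 is assumed."""
    unigrams = " ".join(terms)
    bigrams = " ".join(f"#1({a} {b})" for a, b in zip(terms, terms[1:]))
    windows = " ".join(f"#uw8({a} {b})" for a, b in zip(terms, terms[1:]))
    return (f"#weight( {w[0]} #combine({unigrams}) "
            f"{w[1]} #combine({bigrams}) "
            f"{w[2]} #combine({windows}) )")
```

For example, `sdm_query(["federated", "web", "search"])` produces a query containing `#1(federated web)` and `#uw8(web search)` clauses alongside the plain terms.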

svmtrain

Participants | Input | Appendix

  • Run ID: svmtrain
  • Participant: ECNUCS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: vertical
  • MD5: 257c2488eebea1060f8a4b4cc55b8c25
  • Run description: In this method, we use Google search for query expansion and an SVM classifier to perform the vertical classification.

udelftrsbs

Participants | Proceedings | Input | Appendix

  • Run ID: udelftrsbs
  • Participant: udel
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: resource
  • MD5: 419c6263ea4922533117f30dfcd4fd19
  • Run description: Top 100 documents selected from a query likelihood run.

udelftrssn

Participants | Proceedings | Input | Appendix

  • Run ID: udelftrssn
  • Participant: udel
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: resource
  • MD5: 7d9b973b76f0c756b37ed3a8ac98bfac
  • Run description: Top 100 snippets were selected from a snippet index.

udelftvql

Participants | Proceedings | Input | Appendix

  • Run ID: udelftvql
  • Participant: udel
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: vertical
  • MD5: 7702c56109f5ca7c8505c7276b8db9a2
  • Run description: Top 100 documents retrieved by query likelihood method were grouped by their verticals and their verticals were ranked.
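
The grouping step described above can be sketched as follows. This is a hypothetical illustration: aggregating the query-likelihood scores by summation is an assumption, since the description does not say how grouped documents are combined into a vertical ranking.

```python
from collections import defaultdict

def rank_verticals(results):
    """results: list of (doc_id, vertical, score) tuples from a
    query-likelihood run. Groups documents by vertical, aggregates
    scores by sum (an assumption), and returns verticals ranked
    by aggregate score, highest first."""
    agg = defaultdict(float)
    for _doc_id, vertical, score in results:
        agg[vertical] += score
    return sorted(agg, key=agg.get, reverse=True)
```

A vertical appearing often with high scores in the top 100 would thus rank ahead of one with a single moderate hit.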

udelftvqlR

Participants | Proceedings | Input | Appendix

  • Run ID: udelftvqlR
  • Participant: udel
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: f2695cd351ce8441b6fd3a29c6b7c867
  • Run description: We re-ranked our baseline run using a set of rules.

uiucGSLISf1

Participants | Proceedings | Input | Appendix

  • Run ID: uiucGSLISf1
  • Participant: uiucGSLIS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/17/2014
  • Task: resource
  • MD5: 781108d3baf4e82ca8eebd6acef66b1e
  • Run description: Resources were ranked according to query clarity score.

uiucGSLISf2

Participants | Proceedings | Input | Appendix

  • Run ID: uiucGSLISf2
  • Participant: uiucGSLIS
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/17/2014
  • Task: resource
  • MD5: c47b4974c3f250471040d24504706a6e
  • Run description: Resources were ranked by the TF-IDF score of the query in the resource.

ULuganoCL2V

Participants | Proceedings | Input | Appendix

  • Run ID: ULuganoCL2V
  • Participant: ULugano
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: vertical
  • MD5: 3f9c07425b2fd18ed4f0ab263975353b
  • Run description: Vertical selection based on resources ranked by relevance and opinion for a given topic, at the collection level.

ULuganoColL2

Participants | Proceedings | Input | Appendix

  • Run ID: ULuganoColL2
  • Participant: ULugano
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: b51854cbd0e0df367cb980762634796c
  • Run description: This run ranks the resources by relevance and opinion for a given topic, at the collection level.

ULuganoDFR

Participants | Proceedings | Input | Appendix

  • Run ID: ULuganoDFR
  • Participant: ULugano
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: ad3bf1c66dbfcab741762dc4729e9489
  • Run description: This run is the baseline for the resource selection. The run uses the DFR_BM25 retrieval algorithm to rank the resources.

ULuganoDFRV

Participants | Proceedings | Input | Appendix

  • Run ID: ULuganoDFRV
  • Participant: ULugano
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: vertical
  • MD5: c7b28893936e760b0f3057bec8a25e96
  • Run description: This run is the baseline for the vertical selection. The run uses the ranked resources returned by the DFR_BM25 retrieval algorithm to select the verticals.

ULuganoDL2V

Participants | Proceedings | Input | Appendix

  • Run ID: ULuganoDL2V
  • Participant: ULugano
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: vertical
  • MD5: a986930fc1ffa8feaf460c11d2634427
  • Run description: Vertical selection based on resources ranked by relevance and opinion for a given topic, at the document level.

ULuganoDocL2

Participants | Proceedings | Input | Appendix

  • Run ID: ULuganoDocL2
  • Participant: ULugano
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: c43cb265c8f9b6035497d6771819d6af
  • Run description: The run uses the ranked list of documents returned by the baseline to rank the resources by relevance and opinion at the document level.

ULugDFRNoOp

Participants | Proceedings | Input | Appendix

  • Run ID: ULugDFRNoOp
  • Participant: ULugano
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/15/2014
  • Task: merging
  • MD5: a37bc3dd8b8b6742b942217076a0578a
  • Run description: This run uses the resource selection run based on the DFR_BM25 method and the result snippets.

ULugDFROp

Participants | Proceedings | Input | Appendix

  • Run ID: ULugDFROp
  • Participant: ULugano
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/15/2014
  • Task: merging
  • MD5: c23cc904049a3b19240f303e5d113025
  • Run description: This run uses the resource selection based on the DFR_BM25 method and the result snippets. The snippets in the result list are diversified by opinion (positive, negative, neutral).

ULugFWBsNoOp

Participants | Proceedings | Input | Appendix

  • Run ID: ULugFWBsNoOp
  • Participant: ULugano
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/15/2014
  • Task: merging
  • MD5: 2b3a3111c7371045246986b5cea212fb
  • Run description: This run used the baseline resource selection and the search result snippets to produce the ranking of the snippets.

ULugFWBsOp

Participants | Proceedings | Input | Appendix

  • Run ID: ULugFWBsOp
  • Participant: ULugano
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 9/15/2014
  • Task: merging
  • MD5: 72ee574517ab883eafbae5a22293d099
  • Run description: This run used the baseline resource selection and the results snippets. The snippets in the result list were diversified by opinion (positive, negative, neutral).

UPDFW14r1ksm

Participants | Proceedings | Input | Appendix

  • Run ID: UPDFW14r1ksm
  • Participant: UPD
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: resource
  • MD5: aea7d5456a595c85c0505015ca6beac5
  • Run description: TWF-IRF weighting scheme with Krovetz stemmer, stop list, AND-OR cascade for query term occurrence. The IDF is computed as 1 plus the ratio.

UPDFW14tiknm

Participants | Proceedings | Input | Appendix

  • Run ID: UPDFW14tiknm
  • Participant: UPD
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: 9526cc58fb9876b2278f1df21a6d8771
  • Run description: TWF-IRF weighting scheme with Krovetz stemmer, no stop list, AND-OR cascade for query term occurrences.

UPDFW14tiksm

Participants | Proceedings | Input | Appendix

  • Run ID: UPDFW14tiksm
  • Participant: UPD
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: 50b0c43a093dd37fb9624ce56fbd8ce3
  • Run description: TWF-IRF weighting scheme with Krovetz stemmer, stop list, AND-OR cascade for query term occurrences.

UPDFW14tinnm

Participants | Proceedings | Input | Appendix

  • Run ID: UPDFW14tinnm
  • Participant: UPD
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: 201ea643490d9f237d6404c6bff2e4dd
  • Run description: TWF-IRF weighting scheme with no stemmer, no stop-set and cascade of AND and OR among query terms.

UPDFW14tinsm

Participants | Proceedings | Input | Appendix

  • Run ID: UPDFW14tinsm
  • Participant: UPD
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: d2cc53e9686d775f42b78336227cc1cc
  • Run description: TWF-IRF weighting scheme with no stemming, stop list, AND-OR cascade for query term occurrences.

UPDFW14tipnm

Participants | Proceedings | Input | Appendix

  • Run ID: UPDFW14tipnm
  • Participant: UPD
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: 349c7c9395849f58d1698e3a9cddc611
  • Run description: TWF-IRF weighting scheme with Porter stemming, no stop words, AND-OR cascade for occurrence of query terms.

UPDFW14tipsm

Participants | Proceedings | Input | Appendix

  • Run ID: UPDFW14tipsm
  • Participant: UPD
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: 2e0865417de652d0fde59c1fc5dcaa85
  • Run description: TWF-IRF weighting scheme with Porter stemming, stop-set file, AND-OR cascade of query terms.

UPDFW14v0knm

Participants | Proceedings | Input | Appendix

  • Run ID: UPDFW14v0knm
  • Participant: UPD
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: 400901e86588bcf9cb8f8437de7e851e
  • Run description: TWF weighting scheme with Krovetz stemming, no stop list, AND-OR cascade for query term occurrence and IDF computed as in the original formulation.

UPDFW14v0nnm

Participants | Proceedings | Input | Appendix

  • Run ID: UPDFW14v0nnm
  • Participant: UPD
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: bf20bd48aaacbb25d61ab7dee63cf429
  • Run description: TWF weighting scheme with no stemming, no stop list, AND-OR cascade for query term occurrence and IDF computed as in the original formulation.

UPDFW14v0pnm

Participants | Proceedings | Input | Appendix

  • Run ID: UPDFW14v0pnm
  • Participant: UPD
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: 16425ad988c928a83ce880578b9f198e
  • Run description: TWF weighting scheme with Porter stemming, no stop list, with AND-OR cascade. IDF is implemented as in the original formulation.

UPDFW14v1knm

Participants | Proceedings | Input | Appendix

  • Run ID: UPDFW14v1knm
  • Participant: UPD
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: e0d09ecc5fa7aa6afd02f3535e52d453
  • Run description: TWF weighting scheme with Krovetz stemmer, no stop list, AND-OR cascade for query term occurrences. The IDF was computed by 1 plus the ratio.

UPDFW14v1nnm

Participants | Proceedings | Input | Appendix

  • Run ID: UPDFW14v1nnm
  • Participant: UPD
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: 246ff24287b55c143f5c0c7aa76be5f2
  • Run description: TWF weighting scheme with no stemming, no stop list, AND-OR cascade for query term occurrence and IDF computed as 1 plus the ratio.

UPDFW14v1pnm

Participants | Proceedings | Input | Appendix

  • Run ID: UPDFW14v1pnm
  • Participant: UPD
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/19/2014
  • Task: vertical
  • MD5: b817ada65f33fd57f125b5696d1d19e8
  • Run description: TWF weighting scheme with Porter stemmer, no stop list, AND-OR cascade for query term occurrences. The IDF was computed by 1 plus the ratio.

UTTailyG2000

Participants | Proceedings | Input | Appendix

  • Run ID: UTTailyG2000
  • Participant: ut
  • Track: Federated Web Search
  • Year: 2014
  • Submission: 8/18/2014
  • Task: resource
  • MD5: 0e1bf1a04e3b264e30b083cdef88e006
  • Run description: Taily algorithm with sample sizes according to Dong.