Runs - Relevance Feedback 2009¶
CMIC.1¶
- Run ID: CMIC.1
- Participant: CMIC
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5: 4ad23af2e88a7e359b04ae1e24de6930
- Run description: Results from web search, filtered to Wikipedia pages only. Topic 40 produced no results, so its results were added from CMIC.2.
CMIC.2¶
- Run ID: CMIC.2
- Participant: CMIC
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5: bf495f5b9e7585427f0387be283f2576
- Run description: Top 100 results from MSRC are clustered using DBSCAN with minpts = 3 and eps = 0.65.
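The clustering step described above can be pictured with a short sketch. The following is a minimal illustration, not the CMIC code: the TF-IDF document vectors and cosine distance are assumptions, while the stated parameters (minpts = 3, eps = 0.65) are reused via scikit-learn's `DBSCAN`.

```python
# Minimal sketch (not the CMIC code): cluster the top-100 results with
# DBSCAN, minpts = 3 and eps = 0.65 as stated in the run description.
# TF-IDF vectors and cosine distance are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

def cluster_top_results(doc_texts):
    """Return one cluster label per document; -1 marks DBSCAN noise."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(doc_texts)
    # eps bounds a *distance*, so use cosine distance = 1 - cosine similarity.
    return DBSCAN(eps=0.65, min_samples=3, metric="cosine").fit_predict(vectors)
```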
CMIC.base¶
- Run ID: CMIC.base
- Participant: CMIC
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: e3d0b36f633a0e2f3e53cd5582675980
- Run description: Base run -- BM25
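CMIC.base is described only as a BM25 run. For reference, here is a minimal sketch of the standard Okapi BM25 score; the `k1` and `b` values below are common textbook defaults, not parameters reported for this run.

```python
import math

def bm25(query_terms, doc_terms, df, n_docs, avgdl, k1=1.2, b=0.75):
    """Okapi BM25: sum of IDF times a saturated term-frequency component.

    df: document frequency per term; n_docs: collection size;
    avgdl: average document length in the collection.
    """
    score, dl = 0.0, len(doc_terms)
    for t in set(query_terms):
        tf = doc_terms.count(t)
        if tf == 0 or df.get(t, 0) == 0:
            continue
        idf = math.log((n_docs - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * dl / avgdl))
    return score
```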
CMIC.CMIC.1¶
- Run ID: CMIC.CMIC.1
- Participant: CMIC
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 76e171483a2971bce6e9dbd7d704e151
- Run description: Wikipedia CMIC.1
CMIC.CMIC.2¶
- Run ID: CMIC.CMIC.2
- Participant: CMIC
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 4eb63a1d7aa392bd4586c87afbb81826
- Run description: Diversification CMIC.2
CMIC.ilps.1¶
- Run ID: CMIC.ilps.1
- Participant: CMIC
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: a644cf3b3e89eb5b3a42b9c80b3bf383
- Run description: ilps.2 Acc2
CMIC.MSRC.1¶
- Run ID: CMIC.MSRC.1
- Participant: CMIC
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 83b2f5e942bb0a037cb877b750a0880b
- Run description: MSRC.1 Acc2
CMIC.udel.1¶
- Run ID: CMIC.udel.1
- Participant: CMIC
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: a0248129a199a58dd1d34434ce01ec7a
- Run description: udel.1 Acc2
CMIC.udel.2¶
- Run ID: CMIC.udel.2
- Participant: CMIC
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 375f3d5319bbd654b1edc9b3b1abe847
- Run description: udel.2 Acc2
CMIC.ugTr.2¶
- Run ID: CMIC.ugTr.2
- Participant: CMIC
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: d7c84b4815b2727310bcd26880f5e78a
- Run description: ugTr.2 Acc2
CMIC.UMas.2¶
- Run ID: CMIC.UMas.2
- Participant: CMIC
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 4020e985614b5f202c0571cd41c30c57
- Run description: UMas.2 Acc2
CMU.1¶
- Run ID: CMU.1
- Participant: CMU_LIRA
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/26/2009
- Type: automatic
- Task: phase1
- MD5: 7563383316d89ad0861cd24e2aa20b62
- Run description: This is the submission for the first phase of the Relevance Feedback track. The file contains 5 documents per query that we would like to get feedback on. The file format follows the TREC submission guidelines.
CMU.base¶
- Run ID: CMU.base
- Participant: CMU_LIRA
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/12/2009
- Type: automatic
- Task: phase2
- MD5: 9de27926f241d209c9154531c4efc27f
- Run description: Baseline run: Indri with the Dependence Model/MRF retrieval model.
CMU.CMIC.1¶
- Run ID: CMU.CMIC.1
- Participant: CMU_LIRA
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/12/2009
- Type: automatic
- Task: phase2
- MD5: ee1ed8f6d6a3f3859125b9655d7bd210
- Run description: Baseline Indri with the Dependence Model/MRF retrieval model + CMIC.1 judgements, using pairwise logistic regression-based re-ranking.
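Several CMU_LIRA runs share this recipe. As a rough illustration of pairwise logistic regression-based re-ranking (not the actual CMU_LIRA implementation), one can train on feature-vector differences between judged relevant and non-relevant documents and then rank the remaining candidates by the learned linear score; the feature representation and pair construction below are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pairwise_rerank(rel_vecs, nonrel_vecs, candidate_vecs):
    """Train on (relevant - non-relevant) feature differences, then rank
    candidates by the learned linear score, best first."""
    pairs, labels = [], []
    for r in rel_vecs:
        for n in nonrel_vecs:
            pairs.append(r - n); labels.append(1)  # r should outrank n
            pairs.append(n - r); labels.append(0)
    model = LogisticRegression().fit(np.array(pairs), np.array(labels))
    scores = model.decision_function(np.array(candidate_vecs))
    return np.argsort(-scores)  # candidate indices, best first
```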
CMU.CMIC.2¶
- Run ID: CMU.CMIC.2
- Participant: CMU_LIRA
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/12/2009
- Type: automatic
- Task: phase2
- MD5: 3dbefdd45818e68107c2f4337a2e1f52
- Run description: Baseline Indri with the Dependence Model/MRF retrieval model + CMIC.2 judgements, using pairwise logistic regression-based re-ranking.
CMU.CMU.1¶
- Run ID: CMU.CMU.1
- Participant: CMU_LIRA
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/12/2009
- Type: automatic
- Task: phase2
- MD5: f7aec16347fef83297f666a9b6ede77a
- Run description: Baseline Indri with the Dependence Model/MRF retrieval model + CMU.1 judgements, using pairwise logistic regression-based re-ranking.
CMU.FDU.1¶
- Run ID: CMU.FDU.1
- Participant: CMU_LIRA
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/12/2009
- Type: automatic
- Task: phase2
- MD5: 32fe1a8732862cdee3abc3f630aaeca0
- Run description: Baseline Indri with the Dependence Model/MRF retrieval model + FDU.1 judgements, using pairwise logistic regression-based re-ranking.
CMU.MSRC.1¶
- Run ID: CMU.MSRC.1
- Participant: CMU_LIRA
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/12/2009
- Type: automatic
- Task: phase2
- MD5: 3a29e3bd7c60737bfd4f0db2aff04f7a
- Run description: Baseline Indri with the Dependence Model/MRF retrieval model + MSRC.1 judgements, using pairwise logistic regression-based re-ranking.
CMU.UMas.2¶
- Run ID: CMU.UMas.2
- Participant: CMU_LIRA
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/12/2009
- Type: automatic
- Task: phase2
- MD5: f20d4ccd6594dca617eaa2e95c46f0d0
- Run description: Baseline Indri with the Dependence Model/MRF retrieval model + UMas.1 judgements, using pairwise logistic regression-based re-ranking.
CMU.YUIR.2¶
- Run ID: CMU.YUIR.2
- Participant: CMU_LIRA
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/12/2009
- Type: automatic
- Task: phase2
- MD5: 287e7c9a7812680ef91d3d3de4a365bf
- Run description: Baseline Indri with the Dependence Model/MRF retrieval model + YUIR.1 judgements, using pairwise logistic regression-based re-ranking.
FDU.1¶
- Run ID: FDU.1
- Participant: FDU_MEDLAB
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/20/2009
- Type: automatic
- Task: phase1
- MD5: 82929f24a200160c5ad211365a72ccfb
- Run description: This is an automatic baseline run, with no technique applied to enhance retrieval performance; it was produced by the language model in the Indri toolkit.
FDURFN.base¶
- Run ID: FDURFN.base
- Participant: FDU_MEDLAB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5: 1624a68d79fdcd0d259359c7cfb82b65
- Run description: This is a baseline run without any query expansion approach used.
FDURFN.FDU.1¶
- Run ID: FDURFN.FDU.1
- Participant: FDU_MEDLAB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5: 6a1d98f1a86b090d246cc478dfe52635
- Run description: We generated this run using the Lemur/Indri toolkit. First, we used the 5 relevance feedback files to perform constrained clustering on the pseudo-relevant documents (the top 100 documents from the baseline). Then we extracted expansion terms from the pseudo-relevant document set and used these words to reformulate the query. We also extracted named entities from the relevant documents as additional expansion terms. The run was generated using the reformulated query.
FDURFN.fub.1¶
- Run ID: FDURFN.fub.1
- Participant: FDU_MEDLAB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5: a8bb6101ee4e2981d3d6893037a4c2f0
- Run description: We generated this run using the Lemur/Indri toolkit. First, we used the 5 relevance feedback files to perform constrained clustering on the pseudo-relevant documents (the top 100 documents from the baseline). Then we extracted expansion terms from the pseudo-relevant document set and used these words to reformulate the query. We also extracted named entities from the relevant documents as additional expansion terms. The run was generated using the reformulated query.
FDURFN.PRIS.1¶
- Run ID: FDURFN.PRIS.1
- Participant: FDU_MEDLAB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5: 31c45211d442fd330ce89ebf8bb02b9a
- Run description: We generated this run using the Lemur/Indri toolkit. First, we used the 5 relevance feedback files to perform constrained clustering on the pseudo-relevant documents (the top 100 documents from the baseline). Then we extracted expansion terms from the pseudo-relevant document set and used these words to reformulate the query. We also extracted named entities from the relevant documents as additional expansion terms. The run was generated using the reformulated query.
FDURFN.QUT.1¶
- Run ID: FDURFN.QUT.1
- Participant: FDU_MEDLAB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5: f79135d4849b0ff80778e0e8ebf1ed9f
- Run description: We generated this run using the Lemur/Indri toolkit. First, we used the 5 relevance feedback files to perform constrained clustering on the pseudo-relevant documents (the top 100 documents from the baseline). Then we extracted expansion terms from the pseudo-relevant document set and used these words to reformulate the query. We also extracted named entities from the relevant documents as additional expansion terms. The run was generated using the reformulated query.
FDURFN.twen.2¶
- Run ID: FDURFN.twen.2
- Participant: FDU_MEDLAB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5: 1c04bf6be4b09699f04c9b2cc3120758
- Run description: We generated this run using the Lemur/Indri toolkit. First, we used the 5 relevance feedback files to perform constrained clustering on the pseudo-relevant documents (the top 100 documents from the baseline). Then we extracted expansion terms from the pseudo-relevant document set and used these words to reformulate the query. We also extracted named entities from the relevant documents as additional expansion terms. The run was generated using the reformulated query.
FDURFN.UMas.1¶
- Run ID: FDURFN.UMas.1
- Participant: FDU_MEDLAB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5: fead165b2e35d8a42910ffd249c7a224
- Run description: We generated this run using the Lemur/Indri toolkit. First, we used the 5 relevance feedback files to perform constrained clustering on the pseudo-relevant documents (the top 100 documents from the baseline). Then we extracted expansion terms from the pseudo-relevant document set and used these words to reformulate the query. We also extracted named entities from the relevant documents as additional expansion terms. The run was generated using the reformulated query.
FDURFN.WatS.1¶
- Run ID: FDURFN.WatS.1
- Participant: FDU_MEDLAB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5: 90dc326527429228fdf12ea9e0ce5b34
- Run description: We generated this run using the Lemur/Indri toolkit. First, we used the 5 relevance feedback files to perform constrained clustering on the pseudo-relevant documents (the top 100 documents from the baseline). Then we extracted expansion terms from the pseudo-relevant document set and used these words to reformulate the query. We also extracted named entities from the relevant documents as additional expansion terms. The run was generated using the reformulated query.
fub.1¶
- Run ID: fub.1
- Participant: FUB
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/26/2009
- Type: automatic
- Task: phase1
- MD5: edb05890a0a1962891b060dc09549fb2
- Run description: The results for each topic were selected to maximize the diversity among the retrieved documents.
FUB9RF.base¶
- Run ID: FUB9RF.base
- Participant: FUB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: dc9656189842907c78812f24c5484831
- Run description: The results were obtained to maximize the similarity between documents retrieved and relevant documents.
FUB9RF.CMU.1¶
- Run ID: FUB9RF.CMU.1
- Participant: FUB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: a37b4ca6ba713fac4938e439e0173e7a
- Run description: The results were obtained to maximize the similarity between documents retrieved and relevant documents.
FUB9RF.fub.1¶
- Run ID: FUB9RF.fub.1
- Participant: FUB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 9cb0a7baff9d8864b264357a65ac401f
- Run description: The results were obtained to maximize the similarity between documents retrieved and relevant documents.
FUB9RF.ilps.2¶
- Run ID: FUB9RF.ilps.2
- Participant: FUB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: a175c4f165ee691ba0548927d08e98aa
- Run description: The results were obtained to maximize the similarity between documents retrieved and relevant documents.
FUB9RF.PRIS.1¶
- Run ID: FUB9RF.PRIS.1
- Participant: FUB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: b8f629cbc60537e342ea452867b421dc
- Run description: The results were obtained to maximize the similarity between documents retrieved and relevant documents.
FUB9RF.QUT.1¶
- Run ID: FUB9RF.QUT.1
- Participant: FUB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 605831e1e1ee3905926e768226c69035
- Run description: The results were obtained to maximize the similarity between documents retrieved and relevant documents.
FUB9RF.twen.2¶
- Run ID: FUB9RF.twen.2
- Participant: FUB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: c2758ab2d623ee8f96e07e1f8a0f4d68
- Run description: The results were obtained to maximize the similarity between documents retrieved and relevant documents.
FUB9RF.UMas.1¶
- Run ID: FUB9RF.UMas.1
- Participant: FUB
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 33f4230ac94f5cc2ce2cef2ddc71a143
- Run description: The results were obtained to maximize the similarity between documents retrieved and relevant documents.
hit2.1¶
- Run ID: hit2.1
- Participant: hit2
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/25/2009
- Type: automatic
- Task: phase1
- MD5: 3d5f97eaf7ed50b1e2d0e3e0ddf7b906
- Run description: The top n retrieved documents for each query are clustered into five groups, and the documents at the cluster centers are selected to be judged. The clustering method is k-medoids, with J-divergence as the distance measure.
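Below is a sketch of the J-divergence distance mentioned above, i.e. the symmetrized KL divergence between two unigram document language models; the epsilon smoothing is an assumption. A k-medoids (PAM) clusterer with k = 5 would consume the resulting pairwise distance matrix.

```python
import numpy as np

def j_divergence(p, q, eps=1e-10):
    """J-divergence between two unigram document models:
    KL(p || q) + KL(q || p). A small epsilon stands in for proper
    smoothing (an assumption) so no probability is exactly zero."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```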
hit2.2¶
- Run ID: hit2.2
- Participant: hit2
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/25/2009
- Type: automatic
- Task: phase1
- MD5: 0142b745e09cbf3e5891a5337a78fadf
- Run description: The clustering method is the same as in run hit2.1. The documents selected to be judged are those with the highest retrieval scores within the clusters.
hit2.hit2.1¶
- Run ID: hit2.hit2.1
- Participant: hit2
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 00a3d06efed6fed5a28ef4fe770299ac
- Run description: A document's score is increased if it is similar to the judged relevant documents, and decreased if it is similar to the judged irrelevant documents.
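A toy sketch of this kind of score adjustment, assuming cosine similarity over document vectors; the weights `alpha` and `beta` are illustrative, not values from the hit2 runs.

```python
import numpy as np

def adjusted_score(base, doc, rel_docs, nonrel_docs, alpha=0.5, beta=0.5):
    """Raise the score by similarity to judged relevant documents and
    lower it by similarity to judged irrelevant ones. doc and the judged
    documents are vectors; alpha and beta are illustrative weights."""
    def max_cos(v, others):
        return max((float(v @ o) / (np.linalg.norm(v) * np.linalg.norm(o))
                    for o in others), default=0.0)
    return base + alpha * max_cos(doc, rel_docs) - beta * max_cos(doc, nonrel_docs)
```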
hit2.hit2.2¶
- Run ID: hit2.hit2.2
- Participant: hit2
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 409a74c70eaf39e3625a5df97f516a96
- Run description: A document's score is increased if it is similar to the judged relevant documents, and decreased if it is similar to the judged irrelevant documents.
ilps.1¶
- Run ID: ilps.1
- Participant: UAms
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5: 6aee6d2b04b274e27a5c226208059b9d
- Run description: cluster reranking
ilps.2¶
- Run ID: ilps.2
- Participant: UAms
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5: 1fc48906124fd1029e41856885c30808
- Run description: Dependence + relevance models
IlpsRF.base¶
- Run ID: IlpsRF.base
- Participant: UAms
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: cdee11f12424f867d59afd921fd84328
- Run description: MRF
IlpsRF.fub.1¶
- Run ID: IlpsRF.fub.1
- Participant: UAms
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 5bd59cdf06507ebfd281df3e41ba319d
- Run description: Result list merging of expanded queries based on judged docs
IlpsRF.ilps.1¶
- Run ID: IlpsRF.ilps.1
- Participant: UAms
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 882b73c5d60c2fdb980a552b563a57be
- Run description: Result list merging of expanded queries based on judged docs
IlpsRF.ilps.2¶
- Run ID: IlpsRF.ilps.2
- Participant: UAms
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 3b1984eaad35cb1fa6321351d7a4c4f6
- Run description: Result list merging of expanded queries based on judged docs
IlpsRF.QUT.1¶
- Run ID: IlpsRF.QUT.1
- Participant: UAms
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: a18f6c244f5ef18b00ba1207fb2777e0
- Run description: Result list merging of expanded queries based on judged docs
IlpsRF.Sab.1¶
- Run ID: IlpsRF.Sab.1
- Participant: UAms
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 292fd30af3c74a08399ff929b23d74e7
- Run description: Result list merging of expanded queries based on judged docs
IlpsRF.twen.1¶
- Run ID: IlpsRF.twen.1
- Participant: UAms
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: d250879b8fa14631b98007b39dcea83d
- Run description: Result list merging of expanded queries based on judged docs
IlpsRF.twen.2¶
- Run ID: IlpsRF.twen.2
- Participant: UAms
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 548c364081a3b5136dee0014d8d9249b
- Run description: Result list merging of expanded queries based on judged docs
IlpsRF.WatS.1¶
- Run ID: IlpsRF.WatS.1
- Participant: UAms
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 5a0ce180e538392ea31aa089c9a82c54
- Run description: Result list merging of expanded queries based on judged docs
MSRC.1¶
- Run ID: MSRC.1
- Participant: msrc
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5: 876f1e77556098556c77bc885d143d5f
- Run description: Given a few top candidate documents in terms of relevance, documents are picked based on how different they are from those already picked.
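This selection strategy resembles greedy maximal-marginal-relevance (MMR) style picking. A sketch under that assumption follows; the trade-off parameter `lam` and cosine similarity are illustrative, not details from the msrc run.

```python
import numpy as np

def pick_diverse(vecs, scores, k=5, lam=0.7):
    """Greedy MMR-style selection: each pick trades relevance against
    the maximum cosine similarity to already-picked documents."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    picked, remaining = [], list(range(len(vecs)))
    while remaining and len(picked) < k:
        def mmr(i):
            redundancy = max((cos(vecs[i], vecs[j]) for j in picked), default=0.0)
            return lam * scores[i] - (1 - lam) * redundancy
        best = max(remaining, key=mmr)
        picked.append(best)
        remaining.remove(best)
    return picked  # indices into vecs, in pick order
```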
MSRC.2¶
- Run ID: MSRC.2
- Participant: msrc
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5: d25e8333ab4705832585773d8d1c1b9a
- Run description: The top documents according to BM25.
MSRC.CMU.1¶
- Run ID: MSRC.CMU.1
- Participant: msrc
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5: 35e5e59b01177755551f12e896eab50e
- Run description: We get the top k most important terms and compute the similarity between unjudged and judged documents using these terms.
PRIS.1¶
- Run ID: PRIS.1
- Participant: buptpris___2009
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5: 75b540374cbfe4e09b1cb9d2237b3525
- Run description: In this phase, we use Indri as the search platform. Then, from the top 4000 documents, we find 5 cluster centers using k-means.
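A minimal sketch of this selection step, under the assumption of TF-IDF vectors: cluster the top-retrieved documents with k-means (k = 5) and return the document nearest each centroid.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin

def cluster_center_docs(doc_texts, k=5):
    """Cluster the top-retrieved documents with k-means and return the
    index of the document closest to each of the k centroids."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(doc_texts)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors)
    return pairwise_distances_argmin(km.cluster_centers_, vectors)
```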
PRIS.base¶
- Run ID: PRIS.base
- Participant: buptpris___2009
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: 45abdc9bd71ec34bd0c3e728fd187c1e
- Run description: This is the baseline result without feedback.
PRIS.hit2.2¶
- Run ID: PRIS.hit2.2
- Participant: buptpris___2009
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: 26cd2e18ba52d893e1751af353a44dbb
- Run description: This result is based on hit2.2, using a language model.
PRIS.ilps.1¶
- Run ID: PRIS.ilps.1
- Participant: buptpris___2009
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: d446353ab782057668e6b6191bba33ce
- Run description: This result is based on ilps.1, using a language model.
PRIS.PRIS.1¶
- Run ID: PRIS.PRIS.1
- Participant: buptpris___2009
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: f90372de097f9a280056a4ff79d1c6b7
- Run description: This result is based on PRIS.1, using a language model.
PRIS.Sab.1¶
- Run ID: PRIS.Sab.1
- Participant: buptpris___2009
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: 4b9380d66cfa70a681d208457be33757
- Run description: This result is based on Sab.1, using a language model.
PRIS.SIEL.1¶
- Run ID: PRIS.SIEL.1
- Participant: buptpris___2009
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: 05485b729a92d7c6f50b5c9141da94e7
- Run description: This result is based on SIEL.1, using a language model.
PRIS.twen.1¶
- Run ID: PRIS.twen.1
- Participant: buptpris___2009
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: 1a4a756739b0539ef072853f7942f7b7
- Run description: This result is based on twen.1, using a language model.
PRIS.UCSC.1¶
- Run ID: PRIS.UCSC.1
- Participant: buptpris___2009
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: 1f669d52983b5f5c1ff4de15180fee45
- Run description: This result is based on UCSC.1, using a language model.
QUT.1¶
- Run ID: QUT.1
- Participant: QUT_ED_Lab
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/30/2009
- Type: automatic
- Task: phase1
- MD5: fa37400355d848a6f8ab19a6ec7fe0d0
- Run description: The process includes two phases: a training phase and a filtering phase. In the training phase, for a given topic, the topic-relevant subjects are extracted from a knowledge base constructed from the Library of Congress Subject Headings (LCSH). The query is then expanded by adding terms from the relevant subjects and from the QUT (Queensland University of Technology) library catalogues that reference these relevant subjects. In the filtering phase, for a given topic, the pseudo-relevance feedback process consists of two steps: (A) select a set of documents (in this run, those with the top 30 ranking scores) using the expanded query against document titles in ClueWeb09 Category B; (B) choose the top five of the selected documents using the same query against the full text (title and body) of these documents.
QUT.base¶
- Run ID: QUT.base
- Participant: QUT_ED_Lab
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/21/2009
- Type: automatic
- Task: phase2
- MD5: d9560b24b49081966b08f3658801faf2
- Run description: We used a two-stage model in Phase 2: (1) retrieve a candidate set from ClueWeb09 Category B as the base run; (2) filter the base-run results using the user feedback documents. In the first stage, we extended the Phase 1 model with commonsense knowledge acquired from the Web (using the Google API), to compensate for the limited world knowledge available from the LCSH and the QUT library catalogue. A set of 150 keywords generated from the acquired knowledge was used to expand the given query, which was then used to retrieve an initial set of candidates from ClueWeb09 Category B based on their titles only. The initial set was also re-sorted, using the expanded query and the contents of the documents, to generate the base run. In the second stage, we re-ranked the initial set of candidates using the pattern taxonomy model (PTM), which used both positive and negative relevance feedback for each set of assigned judgments. To use negative relevance feedback, the model selected some constructive negative examples based on the positive feedback and then revised the features extracted from the positive documents. For topics with positive feedback only (we treated judgment values 1 and 2 as positive), the final set of keywords was generated by the PTM model directly. For topics with both positive and negative feedback, an initial set of keywords was first generated by PTM and a revision model was then used to update it based on the negative feedback. For topics with no positive feedback, the top 100 keywords selected in the first stage were treated as keywords discovered in a positive document, and the revision model was then used to obtain the final set of keywords.
QUT.CMIC.1¶
- Run ID: QUT.CMIC.1
- Participant: QUT_ED_Lab
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: cd384a446e36c91ecc61e2a1c5647df0
- Run description: We used a two-stage model in Phase 2: (1) retrieve a candidate set from ClueWeb09 Category B as the base run; (2) filter the base-run results using the user feedback documents. In the first stage, we extended the Phase 1 model with commonsense knowledge acquired from the Web (using the Google API), to compensate for the limited world knowledge available from the LCSH and the QUT library catalogue. A set of 150 keywords generated from the acquired knowledge was used to expand the given query, which was then used to retrieve an initial set of candidates from ClueWeb09 Category B based on their titles only. The initial set was also re-sorted, using the expanded query and the contents of the documents, to generate the base run. In the second stage, we re-ranked the initial set of candidates using the pattern taxonomy model (PTM), which used both positive and negative relevance feedback for each set of assigned judgments. To use negative relevance feedback, the model selected some constructive negative examples based on the positive feedback and then revised the features extracted from the positive documents. For topics with positive feedback only (we treated judgment values 1 and 2 as positive), the final set of keywords was generated by the PTM model directly. For topics with both positive and negative feedback, an initial set of keywords was first generated by PTM and a revision model was then used to update it based on the negative feedback. For topics with no positive feedback, the top 100 keywords selected in the first stage were treated as keywords discovered in a positive document, and the revision model was then used to obtain the final set of keywords.
QUT.hit2.1¶
- Run ID: QUT.hit2.1
- Participant: QUT_ED_Lab
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: 5a02af275208282d9585f27e1d82ef96
- Run description: We used a two-stage model in Phase 2: (1) retrieve a candidate set from ClueWeb09 Category B as the base run; (2) filter the base-run results using the user feedback documents. In the first stage, we extended the Phase 1 model with commonsense knowledge acquired from the Web (using the Google API), to compensate for the limited world knowledge available from the LCSH and the QUT library catalogue. A set of 150 keywords generated from the acquired knowledge was used to expand the given query, which was then used to retrieve an initial set of candidates from ClueWeb09 Category B based on their titles only. The initial set was also re-sorted, using the expanded query and the contents of the documents, to generate the base run. In the second stage, we re-ranked the initial set of candidates using the pattern taxonomy model (PTM), which used both positive and negative relevance feedback for each set of assigned judgments. To use negative relevance feedback, the model selected some constructive negative examples based on the positive feedback and then revised the features extracted from the positive documents. For topics with positive feedback only (we treated judgment values 1 and 2 as positive), the final set of keywords was generated by the PTM model directly. For topics with both positive and negative feedback, an initial set of keywords was first generated by PTM and a revision model was then used to update it based on the negative feedback. For topics with no positive feedback, the top 100 keywords selected in the first stage were treated as keywords discovered in a positive document, and the revision model was then used to obtain the final set of keywords.
QUT.MSRC.2¶
- Run ID: QUT.MSRC.2
- Participant: QUT_ED_Lab
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: b6df7162a11604a11a65a6bd06df9856
- Run description: We used a two-stage model in Phase 2: (1) retrieve a candidate set from ClueWeb09 Category B as the base run; (2) filter the base-run results using the user feedback documents. In the first stage, we extended the Phase 1 model with commonsense knowledge acquired from the Web (using the Google API), to compensate for the limited world knowledge available from the LCSH and the QUT library catalogue. A set of 150 keywords generated from the acquired knowledge was used to expand the given query, which was then used to retrieve an initial set of candidates from ClueWeb09 Category B based on their titles only. The initial set was also re-sorted, using the expanded query and the contents of the documents, to generate the base run. In the second stage, we re-ranked the initial set of candidates using the pattern taxonomy model (PTM), which used both positive and negative relevance feedback for each set of assigned judgments. To use negative relevance feedback, the model selected some constructive negative examples based on the positive feedback and then revised the features extracted from the positive documents. For topics with positive feedback only (we treated judgment values 1 and 2 as positive), the final set of keywords was generated by the PTM model directly. For topics with both positive and negative feedback, an initial set of keywords was first generated by PTM and a revision model was then used to update it based on the negative feedback. For topics with no positive feedback, the top 100 keywords selected in the first stage were treated as keywords discovered in a positive document, and the revision model was then used to obtain the final set of keywords.
QUT.QUT.1¶
- Run ID: QUT.QUT.1
- Participant: QUT_ED_Lab
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: 96c0865118e4aaea9bc449063a136516
- Run description: We used a two-stage model in Phase 2: (1) retrieve a candidate set from ClueWeb09 Category B as the base run; (2) filter the base-run results using the user feedback documents. In the first stage, we extended the Phase 1 model with commonsense knowledge acquired from the Web (using the Google API), to compensate for the limited world knowledge available from the LCSH and the QUT library catalogue. A set of 150 keywords generated from the acquired knowledge was used to expand the given query, which was then used to retrieve an initial set of candidates from ClueWeb09 Category B based on their titles only. The initial set was also re-sorted, using the expanded query and the contents of the documents, to generate the base run. In the second stage, we re-ranked the initial set of candidates using the pattern taxonomy model (PTM), which used both positive and negative relevance feedback for each set of assigned judgments. To use negative relevance feedback, the model selected some constructive negative examples based on the positive feedback and then revised the features extracted from the positive documents. For topics with positive feedback only (we treated judgment values 1 and 2 as positive), the final set of keywords was generated by the PTM model directly. For topics with both positive and negative feedback, an initial set of keywords was first generated by PTM and a revision model was then used to update it based on the negative feedback. For topics with no positive feedback, the top 100 keywords selected in the first stage were treated as keywords discovered in a positive document, and the revision model was then used to obtain the final set of keywords.
QUT.SIEL.1¶
- Run ID: QUT.SIEL.1
- Participant: QUT_ED_Lab
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: 7cbe4670b6cef02e84ee4f7cd710d8a5
- Run description: We used a two-stage model in Phase 2: (1) retrieve a candidate set from ClueWeb09 Category B as the base run; (2) filter the base-run results using the user feedback documents. In the first stage, we extended the Phase 1 model with commonsense knowledge acquired from the Web (using the Google API), to compensate for the limited world knowledge available from the LCSH and the QUT library catalogue. A set of 150 keywords generated from the acquired knowledge was used to expand the given query, which was then used to retrieve an initial set of candidates from ClueWeb09 Category B based on their titles only. The initial set was also re-sorted, using the expanded query and the contents of the documents, to generate the base run. In the second stage, we re-ranked the initial set of candidates using the pattern taxonomy model (PTM), which used both positive and negative relevance feedback for each set of assigned judgments. To use negative relevance feedback, the model selected some constructive negative examples based on the positive feedback and then revised the features extracted from the positive documents. For topics with positive feedback only (we treated judgment values 1 and 2 as positive), the final set of keywords was generated by the PTM model directly. For topics with both positive and negative feedback, an initial set of keywords was first generated by PTM and a revision model was then used to update it based on the negative feedback. For topics with no positive feedback, the top 100 keywords selected in the first stage were treated as keywords discovered in a positive document, and the revision model was then used to obtain the final set of keywords.
QUT.UPD.1¶
- Run ID: QUT.UPD.1
- Participant: QUT_ED_Lab
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: 0278ac36f35bca1e8dcb10859421fda4
- Run description: We used a two-stage model in Phase 2: (1) retrieve a candidate set from ClueWeb09 Category B as the base run; (2) filter the base-run results using the user feedback documents. In the first stage, we extended the Phase 1 model with commonsense knowledge acquired from the Web (using the Google API), to compensate for the limited world knowledge available from the LCSH and the QUT library catalogue. A set of 150 keywords generated from the acquired knowledge was used to expand the given query, which was then used to retrieve an initial set of candidates from ClueWeb09 Category B based on their titles only. The initial set was also re-sorted, using the expanded query and the contents of the documents, to generate the base run. In the second stage, we re-ranked the initial set of candidates using the pattern taxonomy model (PTM), which used both positive and negative relevance feedback for each set of assigned judgments. To use negative relevance feedback, the model selected some constructive negative examples based on the positive feedback and then revised the features extracted from the positive documents. For topics with positive feedback only (we treated judgment values 1 and 2 as positive), the final set of keywords was generated by the PTM model directly. For topics with both positive and negative feedback, an initial set of keywords was first generated by PTM and a revision model was then used to update it based on the negative feedback. For topics with no positive feedback, the top 100 keywords selected in the first stage were treated as keywords discovered in a positive document, and the revision model was then used to obtain the final set of keywords.
QUT.WatS.1¶
- Run ID: QUT.WatS.1
- Participant: QUT_ED_Lab
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: b59d14134ac86a81180b1a5a3c0121a3
- Run description: We used a two-stage model in Phase 2: (1) retrieve a candidate set from ClueWeb09 Category B as the base run; (2) filter the base-run results using the user feedback documents. In the first stage, we extended the Phase 1 model with commonsense knowledge acquired from the Web (using the Google API), to compensate for the limited world knowledge available from the LCSH and the QUT library catalogue. A set of 150 keywords generated from the acquired knowledge was used to expand the given query, which was then used to retrieve an initial set of candidates from ClueWeb09 Category B based on their titles only. The initial set was also re-sorted, using the expanded query and the contents of the documents, to generate the base run. In the second stage, we re-ranked the initial set of candidates using the pattern taxonomy model (PTM), which used both positive and negative relevance feedback for each set of assigned judgments. To use negative relevance feedback, the model selected some constructive negative examples based on the positive feedback and then revised the features extracted from the positive documents. For topics with positive feedback only (we treated judgment values 1 and 2 as positive), the final set of keywords was generated by the PTM model directly. For topics with both positive and negative feedback, an initial set of keywords was first generated by PTM and a revision model was then used to update it based on the negative feedback. For topics with no positive feedback, the top 100 keywords selected in the first stage were treated as keywords discovered in a positive document, and the revision model was then used to obtain the final set of keywords.
QUT.YUIR.2¶
- Run ID: QUT.YUIR.2
- Participant: QUT_ED_Lab
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5: 35b727976e5179af7ad60d9be1bf0e52
- Run description: We used a two-stage model in Phase 2: (1) retrieve a candidate set from ClueWeb09 Category B as the base run; (2) filter the base-run results using the user feedback documents. In the first stage, we extended the Phase 1 model with commonsense knowledge acquired from the Web (using the Google API), to compensate for the limited world knowledge available from the LCSH and the QUT library catalogue. A set of 150 keywords generated from the acquired knowledge was used to expand the given query, which was then used to retrieve an initial set of candidates from ClueWeb09 Category B based on their titles only. The initial set was also re-sorted, using the expanded query and the contents of the documents, to generate the base run. In the second stage, we re-ranked the initial set of candidates using the pattern taxonomy model (PTM), which used both positive and negative relevance feedback for each set of assigned judgments. To use negative relevance feedback, the model selected some constructive negative examples based on the positive feedback and then revised the features extracted from the positive documents. For topics with positive feedback only (we treated judgment values 1 and 2 as positive), the final set of keywords was generated by the PTM model directly. For topics with both positive and negative feedback, an initial set of keywords was first generated by PTM and a revision model was then used to update it based on the negative feedback. For topics with no positive feedback, the top 100 keywords selected in the first stage were treated as keywords discovered in a positive document, and the revision model was then used to obtain the final set of keywords.
Sab.1¶
- Run ID: Sab.1
- Participant: SABIR
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/30/2009
- Type: automatic
- Task: phase1
- MD5: c28b5b8326f77840048bfcd7c94348f1
- Run description: Basic SMART run (Lnu document weighting, ltu query weighting), top 5 documents.
Sab9RF.base¶
- Run ID: Sab9RF.base
- Participant: SABIR
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: dba267b43d34a277be3478ac39b66e16
- Run description: Base SMART ltu.Lnu run, full collection
Sab9RF.CMIC.2¶
- Run ID: Sab9RF.CMIC.2
- Participant: SABIR
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 03b09bf41d7807928c5eedc52cc1e0cf
- Run description: Rocchio-feedback SMART ltu.Lnu run, full collection. Expand by 50 terms. Rocchio a, b, c = 16, 16, 32. Non-relevant weights are the average term weights in the full collection. Document weighting: Lnu.
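The Sab9RF feedback runs all use this Rocchio configuration. Below is a sketch in vector form, reusing the stated a, b, c = 16, 16, 32 and the 50-term expansion; the clipping of negative weights and the term-selection details are assumptions, not the SMART implementation.

```python
import numpy as np

def rocchio_expand(q0, rel_docs, coll_avg, a=16.0, b=16.0, c=32.0, n_expand=50):
    """Rocchio feedback as the run description states it:
    q' = a*q0 + b*mean(relevant docs) - c*coll_avg, where coll_avg holds
    the average term weights over the full collection (standing in for
    non-relevant documents). Keeps the original query terms plus the 50
    highest-weighted new terms; clipping negative weights is an assumption."""
    q = a * q0 + b * np.mean(rel_docs, axis=0) - c * coll_avg
    q = np.maximum(q, 0.0)
    new_terms = np.where(q0 == 0)[0]                    # candidate expansion terms
    keep = new_terms[np.argsort(-q[new_terms])][:n_expand]
    mask = q0 > 0
    mask[keep] = True
    return np.where(mask, q, 0.0)
```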
Sab9RF.hit2.2¶
- Run ID: Sab9RF.hit2.2
- Participant: SABIR
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 6fd6da03e78a2b3ee30b705a5a965251
- Run description: Rocchio-feedback SMART ltu.Lnu run, full collection. Expand by 50 terms. Rocchio a, b, c = 16, 16, 32. Non-relevant weights are the average term weights in the full collection. Document weighting: Lnu.
Sab9RF.MSRC.2¶
- Run ID: Sab9RF.MSRC.2
- Participant: SABIR
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 5fd8a3b2a6a2379076d2c9525e2b7852
- Run description: Rocchio-feedback SMART ltu.Lnu run, full collection. Expand by 50 terms. Rocchio a, b, c = 16, 16, 32. Non-relevant weights are the average term weights in the full collection. Document weighting: Lnu.
Sab9RF.Sab.1¶
- Run ID: Sab9RF.Sab.1
- Participant: SABIR
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 433ccda2a9aba94535d3f8ad2ae87082
- Run description: Rocchio-feedback SMART ltu.Lnu run, full collection. Expand by 50 terms. Rocchio a, b, c = 16, 16, 32. Non-relevant weights are the average term weights in the full collection. Document weighting: Lnu.
Sab9RF.UCSC.2¶
- Run ID: Sab9RF.UCSC.2
- Participant: SABIR
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 11b51bd0bb0c5fe8f4ca14198c84f5b3
- Run description: Rocchio-feedback SMART ltu.Lnu run, full collection. Expand by 50 terms. Rocchio a, b, c = 16, 16, 32. Non-relevant weights are the average term weights in the full collection. Document weighting: Lnu.
Sab9RF.udel.1¶
- Run ID: Sab9RF.udel.1
- Participant: SABIR
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: ff29f1aae9f94b585f497a7a84078be1
- Run description: Rocchio-feedback SMART ltu.Lnu run, full collection. Expand by 50 terms. Rocchio a, b, c = 16, 16, 32. Non-relevant weights are the average term weights in the full collection. Document weighting: Lnu.
Sab9RF.WatS.2¶
- Run ID: Sab9RF.WatS.2
- Participant: SABIR
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 4899906c347ed5bcad7a9319472aac59
- Run description: Rocchio-feedback SMART ltu.Lnu run, full collection. Expand by 50 terms. Rocchio a, b, c = 16, 16, 32. Non-relevant weights are the average term weights in the full collection. Document weighting: Lnu.
Sab9RF.YUIR.1¶
- Run ID: Sab9RF.YUIR.1
- Participant: SABIR
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 62d54c0d66762acc05faae87acfdfb65
- Run description: Rocchio-feedback SMART ltu.Lnu run, full collection. Expand by 50 terms. Rocchio a, b, c = 16, 16, 32. Non-relevant weights are the average term weights in the full collection. Document weighting: Lnu.
SIEL.1¶
- Run ID: SIEL.1
- Participant: SIEL
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5: 2fe84c5b8a03387cdb052abc2518ccd6
- Run description: This is an automatic run over 50 topics on the ClueWeb09 Category B dataset. The documents are the top 5 relevant documents retrieved by our system; they will be judged by users as relevant or non-relevant and used for relevance feedback.
twen.1¶
- Run ID: twen.1
- Participant: utwente
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/27/2009
- Type: automatic
- Task: phase1
- MD5: 8104493d797928b9bddbae5467300948
- Run description: Language modeling with a combination of keyword-, phrase-based, and window-based retrieval. The top retrieved documents were filtered by domain: documents whose domain had already appeared at a higher rank were removed. The top 5 documents of this filtered list were used for phase 1.
twen.2¶
- Run ID: twen.2
- Participant: utwente
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/27/2009
- Type: automatic
- Task: phase1
- MD5: be15478b2267d9db8fd02553840443d0
- Run description: Language modeling. The corpus was neither stopped nor stemmed. The top retrieved documents per query were used for phase 1.
twen.base¶
- Run ID: twen.base
- Participant: utwente
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 468085485bbd60d42830321966333a95
- Run description: Baseline run: language modeling and spam detection.
twen.FDU.1¶
- Run ID: twen.FDU.1
- Participant: utwente
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: fcb50e4f5c416bc189c8dce60ae3b38a
- Run description: The numbers of feedback terms and documents are set depending on the available relevance judgments; the baseline setting is language modeling; spam detection is applied as a post-processing step.
twen.ilps.1¶
- Run ID: twen.ilps.1
- Participant: utwente
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: fd063b4b52bb575d732379cb5fe1b9a5
- Run description: The numbers of feedback documents and terms depend on the given qrels; language modeling on the full document; spam detection as a post-processing step.
twen.twen.1¶
- Run ID: twen.twen.1
- Participant: utwente
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: a4cd8011f123fe0f12d80af72c27b5a3
- Run description: qrel dependent setting of feedback terms and documents; language modeling with document length prior; spam detection
twen.twen.2¶
- Run ID: twen.twen.2
- Participant: utwente
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5: ffc5b8dc3c3848ef48da04fddf0a80f5
- Run description: spam filtered run; varying number of feedback terms and documents.
twen.UCSC.2¶
- Run ID: twen.UCSC.2
- Participant: utwente
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 11467de666b6fae6d2c90c8efcea641d
- Run description: qrel dependent setting of feedback terms and documents; document length prior; spam detection
twen.ugTr.1¶
- Run ID: twen.ugTr.1
- Participant: utwente
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: efcf8922a981b67f9fb8b39e91eb6afa
- Run description: Approach with a document length prior; otherwise the same as the previous runs.
twen.YUIR.2¶
- Run ID: twen.YUIR.2
- Participant: utwente
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 642b359ec3bbfbfb0f7708d5f431ba12
- Run description: qrel dependent setting of feedback terms and documents, Language Modeling, spam detection
UCSC.1¶
- Run ID: UCSC.1
- Participant: UCSCIRKM
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/30/2009
- Type: automatic
- Task: phase1
- MD5: ee9549e835a338c9bf8fb9f7eea1a40c
- Run description: This run uses the transductive experimental design (TED) active-learning method.
UCSC.2¶
- Run ID: UCSC.2
- Participant: UCSCIRKM
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/30/2009
- Type: automatic
- Task: phase1
- MD5: 3847a7dbf442ca56f5358a34a8301d44
- Run description: This run uses the k-means clustering method.
UCSC.base¶
- Run ID: UCSC.base
- Participant: UCSCIRKM
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 8ca8349f9f0b7475e55fbd12887ac8d3
- Run description: Result of the baseline run.
UCSC.CMIC.1¶
- Run ID: UCSC.CMIC.1
- Participant: UCSCIRKM
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 0312f34228d4d3763a4f4491ea86d808
- Run description: Result of the run using the results of CMIC.1.
UCSC.FDU.1¶
- Run ID: UCSC.FDU.1
- Participant: UCSCIRKM
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 284e3543a22077b8b4867a57e08bfdc6
- Run description: Result of the run using the results of FDU.1.
UCSC.MSRC.1¶
- Run ID: UCSC.MSRC.1
- Participant: UCSCIRKM
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: ca5f9a307e6b450772d3aafd06374299
- Run description: Result of the run using the results of MSRC.1.
UCSC.UCSC.1¶
- Run ID: UCSC.UCSC.1
- Participant: UCSCIRKM
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: c7316ce1f7696387503caba2eea65a24
- Run description: Result of the run using the results of UCSC.1.
UCSC.UCSC.2¶
- Run ID: UCSC.UCSC.2
- Participant: UCSCIRKM
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5: 70252a664e691a11e27a788d49d4eb31
- Run description: Result of the run using the results of UCSC.2.
UCSC.udel.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UCSC.udel.2
- Participant: UCSCIRKM
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
e1bc485677fb9efc7452652cc9117a38
- Run description: result of run with results of udel.2
UCSC.ugTr.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UCSC.ugTr.2
- Participant: UCSCIRKM
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
26800566e0f0f0fc5d881120562676fb
- Run description: result of run with results of ugTr.2
UCSC.UMas.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UCSC.UMas.2
- Participant: UCSCIRKM
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
4d9411661f222a8971adf58b1606d27d
- Run description: result of run with results of UMas.2
udel.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: udel.1
- Participant: UDel
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5:
982ef7fa1a50c3f042b1c04ae3a1d206
- Run description: used MTC to pick documents out of a set of runs formed using different retrieval methods and query formats
udel.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: udel.2
- Participant: UDel
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5:
ded508ae75cc27b2481ed87bada1729d
- Run description: pruned documents from ranked list using doc-doc similarities in order to get a more diverse set
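The udel.2 description sketches a diversity heuristic: walk down the ranked list and drop documents that are too similar to ones already kept. A minimal sketch of that idea, assuming cosine similarity over L2-normalized term vectors and an illustrative 0.7 threshold (neither is confirmed by the run description):

```python
import numpy as np

def prune_for_diversity(ranked_ids, doc_vectors, sim_threshold=0.7):
    """Greedy pruning: keep a document only if its cosine similarity to
    every already-kept document stays below the threshold.
    Assumes doc_vectors maps doc id -> L2-normalized numpy vector."""
    kept = []
    for doc_id in ranked_ids:
        v = doc_vectors[doc_id]
        if all(np.dot(v, doc_vectors[k]) < sim_threshold for k in kept):
            kept.append(doc_id)
    return kept
```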
udel2.base¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: udel2.base
- Participant: UDel
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5:
ff4439d9f1a13d261dc211150bd0f940
- Run description: baseline Indri run
udel2.fub.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: udel2.fub.1
- Participant: UDel
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5:
6f842d69dfd5546d59ea52a9f9b47385
- Run description: used judgments on fub.1 docs to predict the relevance of unjudged docs (among all docs retrieved by the systems that contributed to the udel.1 set), then used the relevance predictions to do relevance feedback with Lavrenko & Croft relevance models
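All of the udel2.* feedback runs below repeat this recipe with a different feedback set. The Lavrenko & Croft relevance model (RM1) they cite estimates P(w|R) by weighting each feedback document's language model by its query likelihood. A minimal sketch under bag-of-words and Dirichlet-smoothing assumptions (mu = 2500 is illustrative, and every query term is assumed to occur somewhere in the collection):

```python
from collections import Counter, defaultdict
import math

def relevance_model(query_terms, feedback_docs, coll_tf, coll_len, mu=2500):
    """Lavrenko & Croft RM1: P(w|R) proportional to sum_D P(w|D) * P(Q|D).

    feedback_docs: list of token lists (the docs predicted relevant).
    coll_tf/coll_len: collection term counts and total collection length,
    used for Dirichlet smoothing (mu is an assumed value)."""
    def p_w_d(w, tf, dlen):
        return (tf.get(w, 0) + mu * coll_tf.get(w, 0) / coll_len) / (dlen + mu)

    p_w_r = defaultdict(float)
    for doc in feedback_docs:
        tf, dlen = Counter(doc), len(doc)
        # query likelihood P(Q|D) under the smoothed document model
        q_lik = math.exp(sum(math.log(p_w_d(q, tf, dlen)) for q in query_terms))
        for w in tf:
            p_w_r[w] += p_w_d(w, tf, dlen) * q_lik
    total = sum(p_w_r.values())
    return {w: p / total for w, p in p_w_r.items()}
```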
udel2.Sab.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: udel2.Sab.1
- Participant: UDel
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5:
8bad2419610d61b26ab6a97f6fe7bd12
- Run description: used judgments on Sab.1 docs to predict the relevance of unjudged docs (among all docs retrieved by the systems that contributed to the udel.1 set), then used the relevance predictions to do relevance feedback with Lavrenko & Croft relevance models
udel2.SIEL.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: udel2.SIEL.1
- Participant: UDel
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
fe3dc2b4d23c1150ef3c095fa3aa867b
- Run description: used judgments on SIEL.1 docs to predict the relevance of unjudged docs (among all docs retrieved by the systems that contributed to the udel.1 set), then used the relevance predictions to do relevance feedback with Lavrenko & Croft relevance models
udel2.twen.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: udel2.twen.1
- Participant: UDel
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5:
ec0fee346b3732d07eaee55bdc8afb0b
- Run description: used judgments on twen.1 docs to predict the relevance of unjudged docs (among all docs retrieved by the systems that contributed to the udel.1 set), then used the relevance predictions to do relevance feedback with Lavrenko & Croft relevance models
udel2.UCSC.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: udel2.UCSC.1
- Participant: UDel
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5:
64923bee1b2f6bdb49c1e1f5784a3212
- Run description: used judgments on UCSC.1 docs to predict the relevance of unjudged docs (among all docs retrieved by the systems that contributed to the udel.1 set), then used the relevance predictions to do relevance feedback with Lavrenko & Croft relevance models
udel2.udel.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: udel2.udel.1
- Participant: UDel
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5:
a445a0cb85f284a40fe0c585e1cdb842
- Run description: used judgments on udel.1 docs to predict the relevance of unjudged docs (among all docs retrieved by the systems that contributed to the udel.1 set), then used the relevance predictions to do relevance feedback with Lavrenko & Croft relevance models
udel2.udel.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: udel2.udel.2
- Participant: UDel
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5:
e17762ba9a718966b7c2dfa7340fe8af
- Run description: used judgments on udel.2 docs to predict the relevance of unjudged docs (among all docs retrieved by the systems that contributed to the udel.1 set), then used the relevance predictions to do relevance feedback with Lavrenko & Croft relevance models
udel2.WatS.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: udel2.WatS.1
- Participant: UDel
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/25/2009
- Type: automatic
- Task: phase2
- MD5:
bd8e3d564216f89b034e3c3945b5800c
- Run description: used judgments on WatS.1 docs to predict the relevance of unjudged docs (among all docs retrieved by the systems that contributed to the udel.1 set), then used the relevance predictions to do relevance feedback with Lavrenko & Croft relevance models
ugTr.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ugTr.1
- Participant: uogTr
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5:
4b675a91eb6df682df64a13cc72bc46d
- Run description: Parameter-free DFR model.
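The run description names only a "parameter-free DFR model" without identifying it. One widely used parameter-free Divergence from Randomness model is DPH; the sketch below shows its per-term scoring formula as an assumed, representative example, not necessarily the model uogTr actually used. A document's score is the sum of this quantity over the query terms it contains:

```python
import math

def dph_score(tf, doc_len, avg_doc_len, n_docs, coll_tf):
    """DPH: a hypergeometric DFR weighting model with no tunable parameters
    (an assumed example of a parameter-free DFR model).

    tf: term frequency in the document (assumed 0 < tf < doc_len);
    coll_tf: frequency of the term in the whole collection."""
    f = tf / doc_len
    norm = (1.0 - f) ** 2 / (tf + 1.0)
    return norm * (
        tf * math.log2((tf * avg_doc_len / doc_len) * (n_docs / coll_tf))
        + 0.5 * math.log2(2.0 * math.pi * tf * (1.0 - f))
    )
```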
ugTr.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ugTr.2
- Participant: uogTr
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5:
7435989c41712d31678c97217aad11da
- Run description: Another parameter-free DFR model.
ugTr.base¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ugTr.base
- Participant: uogTr
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
d8fb276af4da2968591f25cddc491bc0
- Run description: Baseline without relevance feedback, parameter-free Divergence from Randomness model.
ugTr.CMU.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ugTr.CMU.1
- Participant: uogTr
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
97cda330853fb31068108e520b415659
- Run description: Baseline with relevance feedback, parameter-free Divergence from Randomness model. Feedback set CMU.1.
ugTr.hit2.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ugTr.hit2.1
- Participant: uogTr
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
168e908df258845cb41a8485cbd43a45
- Run description: Baseline with relevance feedback, parameter-free Divergence from Randomness model. Feedback set hit2.1.
ugTr.ilps.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ugTr.ilps.2
- Participant: uogTr
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
08ff42391191ebeb2217dc47c6962a2e
- Run description: Baseline with relevance feedback, parameter-free Divergence from Randomness model. Feedback set ilps.2.
ugTr.ugTr.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ugTr.ugTr.1
- Participant: uogTr
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
a01bdb13c53d8714b4d2c33c14749ea7
- Run description: Relevance feedback, parameter-free Divergence from Randomness model. Feedback set ugTr.1.
ugTr.ugTr.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ugTr.ugTr.2
- Participant: uogTr
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
043c96cf5f585d3eeca87858b199fd0f
- Run description: Relevance feedback, parameter-free Divergence from Randomness model. Feedback set ugTr.2.
ugTr.UMas.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ugTr.UMas.1
- Participant: uogTr
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
88940b13edca2785565e0e95f8d98021
- Run description: Relevance feedback, parameter-free Divergence from Randomness model. Feedback set UMas.1.
ugTr.UPD.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ugTr.UPD.1
- Participant: uogTr
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
3fafecdec09d86bc474b831cf0c0fa93
- Run description: Relevance feedback, parameter-free Divergence from Randomness model. Feedback set UPD.1.
ugTr.YUIR.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ugTr.YUIR.1
- Participant: uogTr
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
07fc964239072ebfaa25eb163e0fdaff
- Run description: Relevance feedback, parameter-free Divergence from Randomness model. Feedback set YUIR.1.
UMa9RF.base¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMa9RF.base
- Participant: UMass
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
398c0566983a9e36c2ee187c2830eba5
- Run description: Term dependency model by Markov random field + Pseudo Relevance Feedback by Relevance Model
UMa9RF.ilps.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMa9RF.ilps.1
- Participant: UMass
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
f308f4e7d38382e3c9344c3e374822b2
- Run description: Term dependency model by Markov random field + Supervised Term Weighting for Relevance Feedback
UMa9RF.PRIS.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMa9RF.PRIS.1
- Participant: UMass
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
c2ba16fb11ae433c4a6d564cebec606d
- Run description: Term dependency model by Markov random field + Supervised Term Weighting for Relevance Feedback
UMa9RF.UCSC.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMa9RF.UCSC.2
- Participant: UMass
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
b76b0339f35ca7e8fb3cc3ea98ddb2c7
- Run description: Term dependency model by Markov random field + Supervised Term Weighting for Relevance Feedback
UMa9RF.ugTr.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMa9RF.ugTr.1
- Participant: UMass
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
5470e6434fcdc1bd7128655ec036a6a2
- Run description: Term dependency model by Markov random field + Supervised Term Weighting for Relevance Feedback
UMa9RF.UMas.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMa9RF.UMas.1
- Participant: UMass
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
97794c9bb38044914401d657c487b78b
- Run description: Term dependency model by Markov random field + Supervised Term Weighting for Relevance Feedback
UMa9RF.UMas.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMa9RF.UMas.2
- Participant: UMass
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
f9c66a32ff54db6f5f81a104044afcf4
- Run description: Term dependency model by Markov random field + Supervised Term Weighting for Relevance Feedback
UMa9RF.YUIR.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMa9RF.YUIR.2
- Participant: UMass
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
09c2244dc31d65e8e96c5b801b7320ef
- Run description: Term dependency model by Markov random field + Supervised Term Weighting for Relevance Feedback
UMas.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMas.1
- Participant: UMass
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/30/2009
- Type: automatic
- Task: phase1
- MD5:
d8b500a5125060914511ea4f5dd19031
- Run description: Run constructed by automatically comparing the ranked lists retrieved by a query likelihood model and by the dependency model defined in Don Metzler's thesis. Both runs were conducted using Indri.
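Metzler's dependency model is conventionally expressed in Indri's query language as a weighted combination of unigram, exact-phrase (#1), and unordered-window (#uwN) evidence. A sketch that builds such a sequential-dependence query (the 0.85/0.10/0.05 weights and window of 8 are the commonly cited defaults, assumed here rather than taken from the run description):

```python
def sdm_query(terms, w_t=0.85, w_o=0.10, w_u=0.05, window=8):
    """Build an Indri sequential-dependence-model query: unigrams,
    exact bigrams (#1), and unordered windows (#uwN) over adjacent
    term pairs. Weights/window are assumed textbook defaults."""
    if len(terms) < 2:  # no pairs to form; fall back to plain unigrams
        return "#combine(" + " ".join(terms) + ")"
    pairs = list(zip(terms, terms[1:]))
    uni = " ".join(terms)
    ordered = " ".join(f"#1({a} {b})" for a, b in pairs)
    unordered = " ".join(f"#uw{window}({a} {b})" for a, b in pairs)
    return (f"#weight( {w_t} #combine({uni}) "
            f"{w_o} #combine({ordered}) "
            f"{w_u} #combine({unordered}) )")

print(sdm_query(["obama", "family", "tree"]))
```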
UMas.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMas.2
- Participant: UMass
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/30/2009
- Type: automatic
- Task: phase1
- MD5:
8f5d2a301ffc911ad55e8aac6a4936d3
- Run description: Run constructed by automatically comparing the ranked lists retrieved by a query likelihood model and by the dependency model defined in Don Metzler's thesis; however, this second run allows for further investigation into the differences between the two models. Both runs were conducted using Indri.
UPD.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UPD.1
- Participant: UPD
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/28/2009
- Type: automatic
- Task: phase1
- MD5:
e7cf484c19961e28fd8ef7ad722414c3
- Run description: The documents in the collection were ranked using the BM25 weighting scheme. Keywords extracted from the title, the content, and the META tag "keywords" were used. No stemming was applied at indexing or query time. Each query was divided into its constituent terms, and the OR clause was used to group query terms. The top 10 retrieved results were re-ranked according to the presence of query terms in the URL.
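The final re-ranking step of UPD.1 is simple enough to sketch. The function below re-orders the top 10 results by how many query terms appear in the URL, keeping the original BM25 order as a tie-breaker (the tie-breaking rule is an assumption; the run description does not specify it):

```python
def rerank_top10_by_url(ranked, query_terms):
    """ranked: list of (doc_id, url, bm25_score) tuples, best first.
    Re-order the top 10 by the number of query terms present in the
    URL; Python's stable sort preserves BM25 order among ties."""
    top, rest = ranked[:10], ranked[10:]

    def url_hits(entry):
        url = entry[1].lower()
        return sum(t.lower() in url for t in query_terms)

    top.sort(key=url_hits, reverse=True)
    return top + rest
```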
UPD9RF.base¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UPD9RF.base
- Participant: UPD
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
cd2f446da29459c75cd397bfa379f93e
- Run description: In this run (the baseline run) the documents are ranked by the BM25 weighting scheme, and the top 10 retrieved documents are then re-ranked according to the number of query keywords present in the URL.
UPD9RF.CMU.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UPD9RF.CMU.1
- Participant: UPD
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
accf1255f7b47b7e0c5a696d07badb9a
- Run description: In this run the top 5 weighted keywords are extracted from the top 2 documents assessed as relevant in the CMU.1 results file; if only one relevant document is available, only that document is used as evidence. A co-occurrence matrix of the selected keywords is then computed over contiguous text windows of size 11. The co-occurrence matrix is decomposed by SVD, and the principal eigenvector is used to re-rank the documents according to their distance from the subspace it spans. Each document is represented as a vector of TF-IDF weights. The top 2500 documents retrieved by the baseline are re-ranked. If no relevant documents are available for a topic, the baseline results are returned.
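The UPD9RF.* descriptions repeat this pipeline for each feedback set; its core can be condensed as follows. Keyword selection and TF-IDF weighting are omitted, and numpy's SVD stands in for whatever decomposition the participants used, so this is a sketch of the technique rather than their implementation:

```python
import numpy as np

def principal_eigvec(keywords, docs_tokens, window=11):
    """Co-occurrence matrix of the selected keywords over contiguous
    text windows of size 11, then SVD; returns the principal left
    singular vector, which spans the re-ranking subspace."""
    idx = {k: i for i, k in enumerate(keywords)}
    C = np.zeros((len(keywords), len(keywords)))
    for tokens in docs_tokens:
        for s in range(max(1, len(tokens) - window + 1)):
            present = {idx[t] for t in tokens[s:s + window] if t in idx}
            for i in present:
                for j in present:
                    if i != j:
                        C[i, j] += 1
    U, _, _ = np.linalg.svd(C)
    return U[:, 0]

def rerank_by_subspace(doc_vecs, u):
    """Re-rank documents (TF-IDF vectors restricted to the keyword
    dimensions) by distance from the line spanned by u: smaller
    residual norm means closer to the subspace, hence ranked higher."""
    u = u / np.linalg.norm(u)
    dist = [np.linalg.norm(v - np.dot(v, u) * u) for v in doc_vecs]
    return np.argsort(dist)
```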
UPD9RF.hit2.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UPD9RF.hit2.1
- Participant: UPD
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
365f71a7baf9fb754616bac1a5ab6041
- Run description: In this run the top 5 weighted keywords are extracted from the top 2 documents assessed as relevant in the hit2.1 results file; if only one relevant document is available, only that document is used as evidence. A co-occurrence matrix of the selected keywords is then computed over contiguous text windows of size 11. The co-occurrence matrix is decomposed by SVD, and the principal eigenvector is used to re-rank the documents according to their distance from the subspace it spans. Each document is represented as a vector of TF-IDF weights. The top 2500 documents retrieved by the baseline are re-ranked. If no relevant documents are available for a topic, the baseline results are returned.
UPD9RF.ilps.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UPD9RF.ilps.2
- Participant: UPD
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
999c857ff256d01784a4a0f67c23bb10
- Run description: In this run the top 5 weighted keywords are extracted from the top 2 documents assessed as relevant in the ilps.2 results file; if only one relevant document is available, only that document is used as evidence. A co-occurrence matrix of the selected keywords is then computed over contiguous text windows of size 11. The co-occurrence matrix is decomposed by SVD, and the principal eigenvector is used to re-rank the documents according to their distance from the subspace it spans. Each document is represented as a vector of TF-IDF weights. The top 2500 documents retrieved by the baseline are re-ranked. If no relevant documents are available for a topic, the baseline results are returned.
UPD9RF.PRIS.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UPD9RF.PRIS.1
- Participant: UPD
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
3c669b7e7098131dc3699b08d7c37a14
- Run description: In this run the top 5 weighted keywords are extracted from the top 2 documents assessed as relevant in the PRIS.1 results file; if only one relevant document is available, only that document is used as evidence. A co-occurrence matrix of the selected keywords is then computed over contiguous text windows of size 11. The co-occurrence matrix is decomposed by SVD, and the principal eigenvector is used to re-rank the documents according to their distance from the subspace it spans. Each document is represented as a vector of TF-IDF weights. The top 2500 documents retrieved by the baseline are re-ranked. If no relevant documents are available for a topic, the baseline results are returned.
UPD9RF.QUT.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UPD9RF.QUT.1
- Participant: UPD
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
08d97dcd9d795eaaca6e1b023d432895
- Run description: In this run the top 5 weighted keywords are extracted from the top 2 documents assessed as relevant in the QUT.1 results file; if only one relevant document is available, only that document is used as evidence. A co-occurrence matrix of the selected keywords is then computed over contiguous text windows of size 11. The co-occurrence matrix is decomposed by SVD, and the principal eigenvector is used to re-rank the documents according to their distance from the subspace it spans. Each document is represented as a vector of TF-IDF weights. The top 2500 documents retrieved by the baseline are re-ranked. If no relevant documents are available for a topic, the baseline results are returned.
UPD9RF.UMas.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UPD9RF.UMas.1
- Participant: UPD
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
049753f3533b2ca7092e514044e6d860
- Run description: In this run the top 5 weighted keywords are extracted from the top 2 documents assessed as relevant in the UMas.1 results file; if only one relevant document is available, only that document is used as evidence. A co-occurrence matrix of the selected keywords is then computed over contiguous text windows of size 11. The co-occurrence matrix is decomposed by SVD, and the principal eigenvector is used to re-rank the documents according to their distance from the subspace it spans. Each document is represented as a vector of TF-IDF weights. The top 2500 documents retrieved by the baseline are re-ranked. If no relevant documents are available for a topic, the baseline results are returned.
UPD9RF.UPD.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UPD9RF.UPD.1
- Participant: UPD
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
b2a315618a6599fbdc67ed4eb208aff9
- Run description: In this run the top 5 weighted keywords are extracted from the top 2 documents assessed as relevant in the UPD.1 results file; if only one relevant document is available, only that document is used as evidence. A co-occurrence matrix of the selected keywords is then computed over contiguous text windows of size 11. The co-occurrence matrix is decomposed by SVD, and the principal eigenvector is used to re-rank the documents according to their distance from the subspace it spans. Each document is represented as a vector of TF-IDF weights. The top 2500 documents retrieved by the baseline are re-ranked. If no relevant documents are available for a topic, the baseline results are returned.
WAT2.base¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WAT2.base
- Participant: Waterloo
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/17/2009
- Type: automatic
- Task: phase2
- MD5:
57425895044be4e591ce580bac37df22
- Run description: All runs consisted of three passes over progressively smaller subsets of the collection. (a) Online logistic regression over the English ClueWeb09 collection, using all substrings of the query as binary features (alphabetic only, case insensitive); only the first 35K bytes of each page were used, and all 50 topics were processed in a single pass. (b) Same as (a), but over only the enwp (Wikipedia) documents. (c) Naive Bayes classifier, using binary byte 4-grams as features (no preprocessing at all, except for selection of the first 35K bytes of each page); each topic was processed separately. Training data for the base run: very relevant = first-ranked from (b); relevant = second-ranked from (b); not relevant = 6,000 pages selected at random from the full English collection. Training data for the relevance feedback runs: very relevant and relevant as per the qrels; not relevant = 6,000 pages selected at random from the full English collection. Note: very relevant examples were given double weight (trained twice). Validation data: none; this is an automatic run, although we did compose 67 of our own queries that we used for pilot experiments. "Test" data: the classifier was run on the top 10K documents from (a) plus the top 10K documents from (b), and the top-scored 1000 documents overall were submitted to NIST. P.S. Yes, indeed, we used spam filtering methods. The logistic regression was modified for speed and to process 50 topics simultaneously. The Naive Bayes was an unmodified spam filter, run using the TREC spam filter toolkit. We knew from previous experiments that Naive Bayes was more robust to training noise than logistic regression, and this seemed to be confirmed in our pilot experiments; that is why we used it for the relevance feedback pass.
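This description recurs verbatim for every WAT2.* run; only the feedback qrels differ. Pass (c), a spam-filter-style Naive Bayes over binary byte 4-grams, might be sketched as below. The add-one smoothing and log-odds scoring are assumptions; the actual run used an unmodified classifier from the TREC spam filter toolkit:

```python
from collections import Counter
import math

def byte_4grams(page_bytes):
    """Binary byte 4-gram features over the first 35K bytes of a page."""
    b = page_bytes[:35000]
    return {b[i:i + 4] for i in range(len(b) - 3)}

def train(rel_pages, nonrel_pages):
    """Count in how many pages of each class a 4-gram appears.
    'Very relevant' pages can simply be listed twice to double their weight."""
    rel, non = Counter(), Counter()
    for p in rel_pages:
        rel.update(byte_4grams(p))
    for p in nonrel_pages:
        non.update(byte_4grams(p))
    return rel, non, len(rel_pages), len(nonrel_pages)

def score(page_bytes, model):
    """Log-odds of relevance with add-one smoothing (an assumption)."""
    rel, non, n_rel, n_non = model
    s = 0.0
    for g in byte_4grams(page_bytes):
        s += math.log((rel[g] + 1) / (n_rel + 2))
        s -= math.log((non[g] + 1) / (n_non + 2))
    return s
```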
WAT2.CMIC.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WAT2.CMIC.2
- Participant: Waterloo
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/17/2009
- Type: automatic
- Task: phase2
- MD5:
447e189eb4a92ee796944dcedb161af9
- Run description: All runs consisted of three passes over progressively smaller subsets of the collection. (a) Online logistic regression over the English ClueWeb09 collection, using all substrings of the query as binary features (alphabetic only, case insensitive); only the first 35K bytes of each page were used, and all 50 topics were processed in a single pass. (b) Same as (a), but over only the enwp (Wikipedia) documents. (c) Naive Bayes classifier, using binary byte 4-grams as features (no preprocessing at all, except for selection of the first 35K bytes of each page); each topic was processed separately. Training data for the base run: very relevant = first-ranked from (b); relevant = second-ranked from (b); not relevant = 6,000 pages selected at random from the full English collection. Training data for the relevance feedback runs: very relevant and relevant as per the qrels; not relevant = 6,000 pages selected at random from the full English collection. Note: very relevant examples were given double weight (trained twice). Validation data: none; this is an automatic run, although we did compose 67 of our own queries that we used for pilot experiments. "Test" data: the classifier was run on the top 10K documents from (a) plus the top 10K documents from (b), and the top-scored 1000 documents overall were submitted to NIST. P.S. Yes, indeed, we used spam filtering methods. The logistic regression was modified for speed and to process 50 topics simultaneously. The Naive Bayes was an unmodified spam filter, run using the TREC spam filter toolkit. We knew from previous experiments that Naive Bayes was more robust to training noise than logistic regression, and this seemed to be confirmed in our pilot experiments; that is why we used it for the relevance feedback pass.
WAT2.hit2.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WAT2.hit2.2
- Participant: Waterloo
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/17/2009
- Type: automatic
- Task: phase2
- MD5:
841e59f96b2d382435bf516d9cb4f3f3
- Run description: All runs consisted of three passes over progressively smaller subsets of the collection. (a) Online logistic regression over the English ClueWeb09 collection, using all substrings of the query as binary features (alphabetic only, case insensitive); only the first 35K bytes of each page were used, and all 50 topics were processed in a single pass. (b) Same as (a), but over only the enwp (Wikipedia) documents. (c) Naive Bayes classifier, using binary byte 4-grams as features (no preprocessing at all, except for selection of the first 35K bytes of each page); each topic was processed separately. Training data for the base run: very relevant = first-ranked from (b); relevant = second-ranked from (b); not relevant = 6,000 pages selected at random from the full English collection. Training data for the relevance feedback runs: very relevant and relevant as per the qrels; not relevant = 6,000 pages selected at random from the full English collection. Note: very relevant examples were given double weight (trained twice). Validation data: none; this is an automatic run, although we did compose 67 of our own queries that we used for pilot experiments. "Test" data: the classifier was run on the top 10K documents from (a) plus the top 10K documents from (b), and the top-scored 1000 documents overall were submitted to NIST. P.S. Yes, indeed, we used spam filtering methods. The logistic regression was modified for speed and to process 50 topics simultaneously. The Naive Bayes was an unmodified spam filter, run using the TREC spam filter toolkit. We knew from previous experiments that Naive Bayes was more robust to training noise than logistic regression, and this seemed to be confirmed in our pilot experiments; that is why we used it for the relevance feedback pass.
WAT2.MSRC.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WAT2.MSRC.2
- Participant: Waterloo
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/17/2009
- Type: automatic
- Task: phase2
- MD5:
c541d817174211758dffbe86410b0d2a
- Run description: All runs consisted of three passes over progressively smaller subsets of the collection. (a) Online logistic regression over the English ClueWeb09 collection, using all substrings of the query as binary features (alphabetic only, case insensitive); only the first 35K bytes of each page were used, and all 50 topics were processed in a single pass. (b) Same as (a), but over only the enwp (Wikipedia) documents. (c) Naive Bayes classifier, using binary byte 4-grams as features (no preprocessing at all, except for selection of the first 35K bytes of each page); each topic was processed separately. Training data for the base run: very relevant = first-ranked from (b); relevant = second-ranked from (b); not relevant = 6,000 pages selected at random from the full English collection. Training data for the relevance feedback runs: very relevant and relevant as per the qrels; not relevant = 6,000 pages selected at random from the full English collection. Note: very relevant examples were given double weight (trained twice). Validation data: none; this is an automatic run, although we did compose 67 of our own queries that we used for pilot experiments. "Test" data: the classifier was run on the top 10K documents from (a) plus the top 10K documents from (b), and the top-scored 1000 documents overall were submitted to NIST. P.S. Yes, indeed, we used spam filtering methods. The logistic regression was modified for speed and to process 50 topics simultaneously. The Naive Bayes was an unmodified spam filter, run using the TREC spam filter toolkit. We knew from previous experiments that Naive Bayes was more robust to training noise than logistic regression, and this seemed to be confirmed in our pilot experiments; that is why we used it for the relevance feedback pass.
WAT2.UCSC.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WAT2.UCSC.1
- Participant: Waterloo
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/17/2009
- Type: automatic
- Task: phase2
- MD5:
01993d6449e8d7b71c441cb1b6690dd2
- Run description: All runs consisted of three passes over progressively smaller subsets of the collection. (a) Online logistic regression over the English ClueWeb09 collection, using all substrings of the query as binary features (alphabetic only, case insensitive); only the first 35K bytes of each page were used, and all 50 topics were processed in a single pass. (b) Same as (a), but over only the enwp (Wikipedia) documents. (c) Naive Bayes classifier, using binary byte 4-grams as features (no preprocessing at all, except for selection of the first 35K bytes of each page); each topic was processed separately. Training data for the base run: very relevant = first-ranked from (b); relevant = second-ranked from (b); not relevant = 6,000 pages selected at random from the full English collection. Training data for the relevance feedback runs: very relevant and relevant as per the qrels; not relevant = 6,000 pages selected at random from the full English collection. Note: very relevant examples were given double weight (trained twice). Validation data: none; this is an automatic run, although we did compose 67 of our own queries that we used for pilot experiments. "Test" data: the classifier was run on the top 10K documents from (a) plus the top 10K documents from (b), and the top-scored 1000 documents overall were submitted to NIST. P.S. Yes, indeed, we used spam filtering methods. The logistic regression was modified for speed and to process 50 topics simultaneously. The Naive Bayes was an unmodified spam filter, run using the TREC spam filter toolkit. We knew from previous experiments that Naive Bayes was more robust to training noise than logistic regression, and this seemed to be confirmed in our pilot experiments; that is why we used it for the relevance feedback pass.
WAT2.udel.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WAT2.udel.1
- Participant: Waterloo
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/17/2009
- Type: automatic
- Task: phase2
- MD5:
12672dcad3889cbb459ae00f42dcecb4
- Run description: All runs consisted of three passes over progressively smaller subsets of the collection. (a) Online logistic regression over the English ClueWeb09 collection, using all substrings of the query as binary features (alphabetic only, case insensitive); only the first 35K bytes of each page were used, and all 50 topics were processed in a single pass. (b) Same as (a), but over only the enwp (Wikipedia) documents. (c) Naive Bayes classifier, using binary byte 4-grams as features (no preprocessing at all, except for selection of the first 35K bytes of each page); each topic was processed separately. Training data for the base run: very relevant = first-ranked from (b); relevant = second-ranked from (b); not relevant = 6,000 pages selected at random from the full English collection. Training data for the relevance feedback runs: very relevant and relevant as per the qrels; not relevant = 6,000 pages selected at random from the full English collection. Note: very relevant examples were given double weight (trained twice). Validation data: none; this is an automatic run, although we did compose 67 of our own queries that we used for pilot experiments. "Test" data: the classifier was run on the top 10K documents from (a) plus the top 10K documents from (b), and the top-scored 1000 documents overall were submitted to NIST. P.S. Yes, indeed, we used spam filtering methods. The logistic regression was modified for speed and to process 50 topics simultaneously. The Naive Bayes was an unmodified spam filter, run using the TREC spam filter toolkit. We knew from previous experiments that Naive Bayes was more robust to training noise than logistic regression, and this seemed to be confirmed in our pilot experiments; that is why we used it for the relevance feedback pass.
WAT2.UPD.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WAT2.UPD.1
- Participant: Waterloo
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/17/2009
- Type: automatic
- Task: phase2
- MD5:
bd8f48be915c51861db5a88888b94dd9
- Run description: All runs consisted of three passes over progressively smaller subsets of the collection. (a) Online logistic regression over the English ClueWeb09 collection, using all substrings of the query as binary features (alphabetic only, case insensitive); only the first 35K bytes of each page were used, and all 50 topics were processed in a single pass. (b) Same as (a), but over only the enwp (Wikipedia) documents. (c) Naive Bayes classifier, using binary byte 4-grams as features (no preprocessing at all, except for selection of the first 35K bytes of each page); each topic was processed separately. Training data for the base run: very relevant = first-ranked from (b); relevant = second-ranked from (b); not relevant = 6,000 pages selected at random from the full English collection. Training data for the relevance feedback runs: very relevant and relevant as per the qrels; not relevant = 6,000 pages selected at random from the full English collection. Note: very relevant examples were given double weight (trained twice). Validation data: none; this is an automatic run, although we did compose 67 of our own queries that we used for pilot experiments. "Test" data: the classifier was run on the top 10K documents from (a) plus the top 10K documents from (b), and the top-scored 1000 documents overall were submitted to NIST. P.S. Yes, indeed, we used spam filtering methods. The logistic regression was modified for speed and to process 50 topics simultaneously. The Naive Bayes was an unmodified spam filter, run using the TREC spam filter toolkit. We knew from previous experiments that Naive Bayes was more robust to training noise than logistic regression, and this seemed to be confirmed in our pilot experiments; that is why we used it for the relevance feedback pass.
WAT2.WatS.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WAT2.WatS.2
- Participant: Waterloo
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/17/2009
- Type: automatic
- Task: phase2
- MD5:
a233deffa3e75016f1548762ef9c3f1e
- Run description: All runs consisted of three passes over progressively smaller subsets of the collection. (a) Online logistic regression over the English ClueWeb09 collection, using all substrings of the query as binary features (alphabetic only, case insensitive); only the first 35K bytes of each page were used, and all 50 topics were processed in a single pass. (b) Same as (a), but over only the enwp (Wikipedia) documents. (c) Naive Bayes classifier, using binary byte 4-grams as features (no preprocessing at all, except for selection of the first 35K bytes of each page); each topic was processed separately. Training data for the base run: very relevant = first-ranked from (b); relevant = second-ranked from (b); not relevant = 6,000 pages selected at random from the full English collection. Training data for the relevance feedback runs: very relevant and relevant as per the qrels; not relevant = 6,000 pages selected at random from the full English collection. Note: very relevant examples were given double weight (trained twice). Validation data: none; this is an automatic run, although we did compose 67 of our own queries that we used for pilot experiments. "Test" data: the classifier was run on the top 10K documents from (a) plus the top 10K documents from (b), and the top-scored 1000 documents overall were submitted to NIST. P.S. Yes, indeed, we used spam filtering methods. The logistic regression was modified for speed and to process 50 topics simultaneously. The Naive Bayes was an unmodified spam filter, run using the TREC spam filter toolkit. We knew from previous experiments that Naive Bayes was more robust to training noise than logistic regression, and this seemed to be confirmed in our pilot experiments; that is why we used it for the relevance feedback pass.
WAT2.YUIR.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WAT2.YUIR.1
- Participant: Waterloo
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/17/2009
- Type: automatic
- Task: phase2
- MD5:
d0c7cb9e705f7201cd97139f2c46c5a8
- Run description: All runs consisted of three passes over progressively smaller subsets of the collection. (a) Online logistic regression over the English ClueWeb09 collection, using all substrings of the query as binary features (alphabetic only, case insensitive); only the first 35K bytes of each page were used, and all 50 topics were processed in a single pass. (b) Same as (a), but over only the enwp (Wikipedia) documents. (c) Naive Bayes classifier, using binary byte 4-grams as features (no preprocessing at all, except for selection of the first 35K bytes of each page); each topic was processed separately. Training data for the base run: very relevant = first-ranked from (b); relevant = second-ranked from (b); not relevant = 6,000 pages selected at random from the full English collection. Training data for the relevance feedback runs: very relevant and relevant as per the qrels; not relevant = 6,000 pages selected at random from the full English collection. Note: very relevant examples were given double weight (trained twice). Validation data: none; this is an automatic run, although we did compose 67 of our own queries that we used for pilot experiments. "Test" data: the classifier was run on the top 10K documents from (a) plus the top 10K documents from (b), and the top-scored 1000 documents overall were submitted to NIST. P.S. Yes, indeed, we used spam filtering methods. The logistic regression was modified for speed and to process 50 topics simultaneously. The Naive Bayes was an unmodified spam filter, run using the TREC spam filter toolkit. We knew from previous experiments that Naive Bayes was more robust to training noise than logistic regression, and this seemed to be confirmed in our pilot experiments; that is why we used it for the relevance feedback pass.
WatS.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WatS.1
- Participant: UWaterlooMDS
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5:
6d272c59825d3f01de76d9ef75c6a59a
- Run description: Dependence model mixed with web search engine query expansion.
WatS.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WatS.2
- Participant: UWaterlooMDS
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5:
1cb7e0551fc3dcd10d9d5b90902f8582
- Run description: Dependence model mixed with web search engine query expansion (this run consists of the results ranked 5-10 of the same run that produced WatS.1, which has ranks 1-5).
WatS.base¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WatS.base
- Participant: UWaterlooMDS
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5:
09efe963296c73cebe8481bd9bee6546
- Run description: Dependence models are used to get the top 10 docs, from which we build a relevance model; from it we take the 100 most frequent terms, select the top 25 by point-wise KL divergence, and perform blind feedback in combination with the original dependence model. Krovetz stemming, a 418-word stoplist, and Dirichlet prior smoothing.
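The term-selection step in WatS.base (and in the WatS.* feedback runs that reference it) can be sketched directly. The point-wise KL contribution of a term is assumed here to be p(w|R) * log(p(w|R)/p(w|C)); the run description does not spell out the exact formulation:

```python
import math

def select_expansion_terms(rel_model, coll_prob, n_frequent=100, n_keep=25):
    """rel_model: P(w|R) from the relevance model; coll_prob: P(w|C).
    Take the n_frequent highest-probability terms, then keep the n_keep
    with the largest point-wise KL contribution (assumed formulation)."""
    frequent = sorted(rel_model, key=rel_model.get, reverse=True)[:n_frequent]

    def pkl(w):
        return rel_model[w] * math.log(rel_model[w] / coll_prob.get(w, 1e-12))

    return sorted(frequent, key=pkl, reverse=True)[:n_keep]
```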
WatS.fub.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WatS.fub.1
- Participant: UWaterlooMDS
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5:
2969bbb5d01974513b90ae4b51e1aa7b
- Run description: From the relevant documents, we build a relevance model; from it we take the 100 most frequent terms, select the top 25 by point-wise KL divergence, and perform feedback in combination with the original dependence model (see the WatS.base description). If there are no relevant documents, dependence models are used to get the top 10 docs, which we treat as relevant (blind feedback; see WatS.base). Krovetz stemming, a 418-word stoplist, and Dirichlet prior smoothing.
WatS.Sab.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WatS.Sab.1
- Participant: UWaterlooMDS
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5:
9c3b451d15728b0847debd2fdbb318d8
- Run description: From the relevant documents, we build a relevance model; from it we take the 100 most frequent terms, select the top 25 by point-wise KL divergence, and perform feedback in combination with the original dependence model (see the WatS.base description). If there are no relevant documents, dependence models are used to get the top 10 docs, which we treat as relevant (blind feedback; see WatS.base). Krovetz stemming, a 418-word stoplist, and Dirichlet prior smoothing.
WatS.SIEL.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WatS.SIEL.1
- Participant: UWaterlooMDS
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5:
0d42728cceeb58cf5215dcbff69edaf8
- Run description: From the relevant documents, we build a relevance model; from it we take the 100 most frequent terms, select the top 25 by point-wise KL divergence, and perform feedback in combination with the original dependence model (see the WatS.base description). If there are no relevant documents, dependence models are used to get the top 10 docs, which we treat as relevant (blind feedback; see WatS.base). Krovetz stemming, a 418-word stoplist, and Dirichlet prior smoothing.
WatS.twen.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WatS.twen.1
- Participant: UWaterlooMDS
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5:
edc88878ae26000a4646c8c97959009b
- Run description: From the relevant documents, we build a relevance model; from it we take the 100 most frequent terms, select the top 25 by point-wise KL divergence, and perform feedback in combination with the original dependence model (see the WatS.base description). If there are no relevant documents, dependence models are used to get the top 10 docs, which we treat as relevant (blind feedback; see WatS.base). Krovetz stemming, a 418-word stoplist, and Dirichlet prior smoothing.
WatS.twen.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WatS.twen.2
- Participant: UWaterlooMDS
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5:
497bc68c13332e3a599c4949614061fa
- Run description: From the relevant documents, we build a relevance model; from it we take the 100 most frequent terms, select the top 25 by point-wise KL divergence, and perform feedback in combination with the original dependence model (see the WatS.base description). If there are no relevant documents, dependence models are used to get the top 10 docs, which we treat as relevant (blind feedback; see WatS.base). Krovetz stemming, a 418-word stoplist, and Dirichlet prior smoothing.
WatS.UCSC.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WatS.UCSC.1
- Participant: UWaterlooMDS
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5:
15ba10e52a4a0d4f59dddaaee88922da
- Run description: From the relevant documents, we build a relevance model; from it we take the 100 most frequent terms, select the top 25 by point-wise KL divergence, and perform feedback in combination with the original dependence model (see the WatS.base description). If there are no relevant documents, dependence models are used to get the top 10 docs, which we treat as relevant (blind feedback; see WatS.base). Krovetz stemming, a 418-word stoplist, and Dirichlet prior smoothing.
WatS.WatS.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WatS.WatS.1
- Participant: UWaterlooMDS
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5:
168d363b3b1466e8e25bf909d58ed3a6
- Run description: From the relevant documents, we build a relevance model; from it we take the 100 most frequent terms, select the top 25 by point-wise KL divergence, and perform feedback in combination with the original dependence model (see the WatS.base description). If there are no relevant documents, dependence models are used to get the top 10 docs, which we treat as relevant (blind feedback; see WatS.base). Krovetz stemming, a 418-word stoplist, and Dirichlet prior smoothing.
WatS.WatS.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: WatS.WatS.2
- Participant: UWaterlooMDS
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/23/2009
- Type: automatic
- Task: phase2
- MD5:
0897623a6a80b0b0cd1ee46d8834cd00
- Run description: From the relevant documents, we build a relevance model; from it we take the 100 most frequent terms, select the top 25 by point-wise KL divergence, and perform feedback in combination with the original dependence model (see the WatS.base description). If there are no relevant documents, dependence models are used to get the top 10 docs, which we treat as relevant (blind feedback; see WatS.base). Krovetz stemming, a 418-word stoplist, and Dirichlet prior smoothing.
YUIR.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: YUIR.1
- Participant: york09
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5:
ea7905c5b2c33a8bc86b3465225eea83
- Run description: extract the title and content fields from all the web pages, rank each field separately with a DFR model, and then combine the two scores linearly.
YUIR.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: YUIR.2
- Participant: york09
- Track: Relevance Feedback
- Year: 2009
- Submission: 6/29/2009
- Type: automatic
- Task: phase1
- MD5:
598dc62a1cdab158bccbf18f74bd263e
- Run description: extract the title and content fields from all the web pages, rank each field separately with the BM25 model using default parameters, and then combine the two scores linearly.
YUIR.base¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: YUIR.base
- Participant: york09
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
a015d60ffdddc7b6a53d3e64a1aff1fc
- Run description: Pseudo relevance feedback with top-ranked documents from first-pass retrieval. BM25 for term weighting and KL divergence for query expansion.
YUIR.CMIC.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: YUIR.CMIC.1
- Participant: york09
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
cdae66dfea95aae3187e963807b877ef
- Run description: BM25 is used as a basic term weighting model. A query context-based model is used for relevance feedback.
YUIR.FDU.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: YUIR.FDU.1
- Participant: york09
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
b54099b33eb6e56b0fb545e2e98a576d
- Run description: BM25 for term weighting and KL divergence for Rocchio's relevance feedback.
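Several YUIR runs pair BM25 term weights with KL-divergence term selection inside Rocchio's feedback formula. A generic Rocchio update is sketched below; the alpha/beta/gamma values are textbook defaults, not york09's settings:

```python
from collections import defaultdict

def rocchio(query_vec, rel_vecs, nonrel_vecs, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio: q' = a*q + b*centroid(rel) - g*centroid(nonrel).
    Vectors are dicts mapping term -> weight (e.g., BM25 term weights)."""
    new_q = defaultdict(float)
    for t, w in query_vec.items():
        new_q[t] += alpha * w
    for vecs, coef in ((rel_vecs, beta), (nonrel_vecs, -gamma)):
        if not vecs:
            continue
        for v in vecs:
            for t, w in v.items():
                new_q[t] += coef * w / len(vecs)
    # negative weights are conventionally dropped
    return {t: w for t, w in new_q.items() if w > 0}
```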
YUIR.UCSC.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: YUIR.UCSC.2
- Participant: york09
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
f4ed909caa453bd064dbb9c710cd29e6
- Run description: BM25 is used as a basic term weighting model. A query context-based model is used for relevance feedback.
YUIR.ugTr.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: YUIR.ugTr.1
- Participant: york09
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
e59f421fddd95fef0b4b77b756804559
- Run description: BM25 for term weighting and KL divergence for Rocchio's relevance feedback.
YUIR.UMas.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: YUIR.UMas.2
- Participant: york09
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
f3041245c548b3781ea9f92dd270a475
- Run description: BM25 for term weighting and KL divergence for Rocchio's relevance feedback.
YUIR.YUIR.1¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: YUIR.YUIR.1
- Participant: york09
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
5e233ebd563f021dcd2da190c6a1ac96
- Run description: BM25 for term weighting and KL divergence for Rocchio's relevance feedback.
YUIR.YUIR.2¶
Results
| Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: YUIR.YUIR.2
- Participant: york09
- Track: Relevance Feedback
- Year: 2009
- Submission: 8/24/2009
- Type: automatic
- Task: phase2
- MD5:
43478015953b3747d8d259cc56bff795
- Run description: BM25 is used as a basic term weighting model. A query context-based model is used for relevance feedback.