Overview - Relevance Feedback 2008
There were three main goals for this track:

1. Evaluate and compare just the RF algorithm itself: all groups work with exactly the same relevance judgments (for the most part). (Next year, the relevance evidence groups can use will expand.) The hope is to compare both statistical and NLP-intensive uses of relevance information (what makes a document relevant).
2. Establish good baseline RF results for multiple amounts of relevance information.
3. Try to establish, for these runs, how much improvement is possible with more relevance information.
Track coordinator(s):
- C. Buckley, Sabir Research
- S. Robertson, Microsoft
Tasks:
- A: no relevance info (baseline retrieval)
- B: 1 relevant document
- C: 3 relevant documents and 3 non-relevant documents
- D: 10 judged documents
- E: large amounts of judged documents
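To make the conditions concrete, the sketch below shows Rocchio-style feedback, a classic baseline RF algorithm (not necessarily what any participating group ran): the query vector is moved toward judged-relevant documents and away from judged-non-relevant ones, so conditions B through E simply supply more vectors to the update. All weights and vectors here are illustrative.

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Return an updated query vector (all vectors are term -> weight dicts).

    alpha/beta/gamma are the usual Rocchio mixing weights; the values
    here are common illustrative defaults, not track-prescribed settings.
    """
    terms = set(query)
    for doc in relevant + nonrelevant:
        terms |= set(doc)

    def centroid(docs, term):
        # Mean weight of `term` over the judged documents (0 if none judged).
        return sum(d.get(term, 0.0) for d in docs) / len(docs) if docs else 0.0

    updated = {}
    for t in terms:
        w = (alpha * query.get(t, 0.0)
             + beta * centroid(relevant, t)
             - gamma * centroid(nonrelevant, t))
        if w > 0:  # negative weights are conventionally clipped to zero
            updated[t] = w
    return updated

# Condition B above corresponds to a single judged-relevant document:
q = {"feedback": 1.0}
rel = [{"feedback": 0.5, "relevance": 0.5}]
print(rocchio(q, rel, []))  # original term boosted, new term "relevance" added
```

With more judged documents (conditions C through E) the centroids are simply averaged over larger sets, which is what makes the "amount of relevance info" the controlled variable across runs.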
Track Web Page: https://trec.nist.gov/data/relevance.feedback08.html