Proceedings - Lateral Reading 2024
Monster Ranking
Charles L. A. Clarke (University of Waterloo), Siqing Huo (University of Waterloo), Negar Arabzadeh (University of Waterloo)
- Participant: WaterlooClarke
- Paper: https://trec.nist.gov/pubs/trec33/papers/WaterlooClarke.lateral.rag.pdf
- Runs: uwclarke_auto | uwclarke_auto_summarized | UWClarke_rerank
Abstract
Participating as the UWClarke group, we focused on the RAG Track; we also submitted runs for the Lateral Reading Track. For the retrieval task (R) of the RAG Track, we attempted what we have come to call “monster ranking”. Largely ignoring cost and computational resources, monster ranking attempts to determine the best possible ranked list for a query by whatever means possible, including explicit LLM-based relevance judgments, both pointwise and pairwise. While a monster ranker could never be deployed in a production environment, its output may be valuable for evaluating cheaper and faster rankers. For the full retrieval augmented generation (RAG) task we explored two general approaches, depending on whether generation happens before or after retrieval: 1) Generate an Answer and support with Retrieved Evidence (GARE); 2) Retrieve And Generate with Evidence (RAGE).
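The abstract's idea of ranking by explicit pairwise LLM judgments can be sketched as a comparison sort whose comparator asks a judge which of two documents is more relevant. The snippet below is a minimal illustration, not the authors' implementation: the `pairwise_prefer` judge is a hypothetical stand-in (simple term overlap) where a real monster ranker would prompt an LLM.

```python
from functools import cmp_to_key

def pairwise_prefer(query: str, doc_a: str, doc_b: str) -> int:
    """Hypothetical pairwise relevance judge.

    A real monster ranker would ask an LLM which of doc_a or doc_b
    better answers the query; here we approximate with query-term
    overlap so the sketch runs without any model.
    Returns negative if doc_a is preferred, positive if doc_b is.
    """
    terms = set(query.lower().split())
    overlap = lambda d: len(terms & set(d.lower().split()))
    return overlap(doc_b) - overlap(doc_a)

def monster_rank(query: str, docs: list[str]) -> list[str]:
    # Order candidates by repeated pairwise preference calls.
    return sorted(docs, key=cmp_to_key(lambda a, b: pairwise_prefer(query, a, b)))

docs = [
    "a guide to baking bread",
    "retrieval augmented generation with evidence",
    "monster ranking with pairwise LLM judgments",
]
ranked = monster_rank("pairwise ranking judgments", docs)
```

With an actual LLM comparator the number of judgment calls grows as O(n log n) per query, which is why the abstract notes such a ranker is too costly for production but useful as a reference for evaluating cheaper rankers.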
Bibtex
@inproceedings{WaterlooClarke-trec2024-papers-proc-1,
author = {Charles L. A. Clarke (University of Waterloo) and Siqing Huo (University of Waterloo) and Negar Arabzadeh (University of Waterloo)},
title = {Monster Ranking},
booktitle = {The Thirty-Third Text REtrieval Conference Proceedings (TREC 2024), Gaithersburg, MD, USA, November 15-18, 2024},
series = {NIST Special Publication},
volume = {xxx-xxx},
publisher = {National Institute of Standards and Technology (NIST)},
year = {2024},
trec_org = {WaterlooClarke},
trec_runs = {uwclarke_auto, uwclarke_auto_summarized, UWCrag, UWCrag_stepbystep, UWCgarag, monster, uwc1, uwc2, uwc0, uwcCQAR, uwcCQA, uwcCQR, uwcCQ, uwcBA, uwcBQ, UWClarke_rerank},
trec_tracks = {lateral.rag},
url = {https://trec.nist.gov/pubs/trec33/papers/WaterlooClarke.lateral.rag.pdf}
}