Run description: Aggretriever is a dense retriever that combines semantic and lexical matching. We initialize with coCondenser and train on the official MS MARCO training queries (with BM25 hard negatives) with a batch size of 64 for 3 epochs on a single GPU. Details are described in https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00556/116046/Aggretriever-A-Simple-Approach-to-Aggregate
Run description: First stage is an ensemble of BM25 + SPLADE++SD + SPLADE++ED + SLIM + AGGRetriever. Second stage is an ensemble of 5 rerankers: naver/trecdl22-crossencoder-albert naver/trecdl22-crossencoder-debertav2 naver/trecdl22-crossencoder-debertav3 naver/trecdl22-crossencoder-electra naver/trecdl22-crossencoder-rankT53b-repro Third step is an ensemble of 3 duo rankers over the top50: duoT5 PRP-FlanT5-3b PRP-FlanT5-UL2 Fourth step is RankGPT4 over the top30, which is then ensembled with the third step.
Run description: First stage is BM25, second stage is an ensemble of 5 rerankers: naver/trecdl22-crossencoder-albert naver/trecdl22-crossencoder-debertav2 naver/trecdl22-crossencoder-debertav3 naver/trecdl22-crossencoder-electra naver/trecdl22-crossencoder-rankT53b-repro
Run description: First stage is BM25+SPLADE++ED+SPLADE++SD, second stage is an ensemble of 5 rerankers: naver/trecdl22-crossencoder-albert naver/trecdl22-crossencoder-debertav2 naver/trecdl22-crossencoder-debertav3 naver/trecdl22-crossencoder-electra naver/trecdl22-crossencoder-rankT53b-repro
Run description: First stage is a very large ensemble of BM25+DOCT5+SPLADEPP+SPLADESD+AGG+SLIM, second stage is an ensemble of 5 rerankers: naver/trecdl22-crossencoder-albert naver/trecdl22-crossencoder-debertav2 naver/trecdl22-crossencoder-debertav3 naver/trecdl22-crossencoder-electra naver/trecdl22-crossencoder-rankT53b-repro We then do a third step with an ensemble of 3 duo rankers over the top50: duoT5, PRP-FlanT5-3b, and PRP-FlanT5-UL2. We finish by applying RankGPT4 over the top30 and ensembling with the previous step.
Run description: First stage is a very large ensemble of BM25+DOCT5+SPLADEPP+SPLADESD+AGG+SLIM, second stage is an ensemble of 5 rerankers: naver/trecdl22-crossencoder-albert naver/trecdl22-crossencoder-debertav2 naver/trecdl22-crossencoder-debertav3 naver/trecdl22-crossencoder-electra naver/trecdl22-crossencoder-rankT53b-repro We then do a third step with an ensemble of 3 duo rankers over the top50: duoT5, PRP-FlanT5-3b, and PRP-FlanT5-UL2. We finish by applying RankGPT4 over the top30.
Run description: First stage is BM25, second stage is an ensemble of 5 rerankers: naver/trecdl22-crossencoder-albert naver/trecdl22-crossencoder-debertav2 naver/trecdl22-crossencoder-debertav3 naver/trecdl22-crossencoder-electra naver/trecdl22-crossencoder-rankT53b-repro
Run description: First stage is an ensemble of BM25 + SPLADE++SD + SPLADE++ED. Second stage is an ensemble of 5 rerankers: naver/trecdl22-crossencoder-albert naver/trecdl22-crossencoder-debertav2 naver/trecdl22-crossencoder-debertav3 naver/trecdl22-crossencoder-electra naver/trecdl22-crossencoder-rankT53b-repro
Run description: First stage is an ensemble of BM25 + SPLADE++SD + SPLADE++ED + SLIM + AGGRetriever. Second stage is an ensemble of 5 rerankers: naver/trecdl22-crossencoder-albert naver/trecdl22-crossencoder-debertav2 naver/trecdl22-crossencoder-debertav3 naver/trecdl22-crossencoder-electra naver/trecdl22-crossencoder-rankT53b-repro
Run description: First stage is an ensemble of BM25 + SPLADE++SD + SPLADE++ED + SLIM + AGGRetriever. Second stage is an ensemble of 5 rerankers: naver/trecdl22-crossencoder-albert naver/trecdl22-crossencoder-debertav2 naver/trecdl22-crossencoder-debertav3 naver/trecdl22-crossencoder-electra naver/trecdl22-crossencoder-rankT53b-repro Third step is an ensemble of 3 duo rankers over the top50: duoT5 PRP-FlanT5-3b PRP-FlanT5-UL2
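The runs above fuse several first-stage retrievers, and several rerankers, into a single ranking. The descriptions do not say which fusion method was used; one common choice for combining ranked lists is reciprocal rank fusion (RRF). A minimal sketch (the doc ids and runs are made up):

```python
def reciprocal_rank_fusion(runs, k=60):
    """Fuse several ranked lists of doc ids into one ranking.

    runs: list of rankings, each an ordered list of doc ids (best first).
    k: RRF smoothing constant (60 in the original RRF paper).
    """
    scores = {}
    for run in runs:
        for rank, doc_id in enumerate(run, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Example: three hypothetical component runs for one query.
bm25_run = ["d1", "d2", "d3"]
splade_run = ["d2", "d1", "d4"]
dense_run = ["d2", "d3", "d1"]
fused = reciprocal_rank_fusion([bm25_run, splade_run, dense_run])
```

Because RRF only uses ranks, it needs no score normalization across systems with very different score scales (BM25 vs. cross-encoder logits), which is one reason it is a frequent default for this kind of ensemble.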
Run description: BM25 over entire msmarco-passage-v2 inverted index, generative relevance feedback using google/flant5-xxl (8-bit quantized), reranking using crystina-z/monoELECTRA_LCE_nneg31
Run description: Generative query expansion using google/flant5-xxl (8-bit quantized), BM25 over entire msmarco-passage-v2 inverted index, adaptive reranking using crystina-z/monoELECTRA_LCE_nneg31 with BM25 Graph
Run description: Generative query expansion using google/flant5-xxl (8-bit quantized), BM25 over entire msmarco-passage-v2 inverted index, reranking using crystina-z/monoELECTRA_LCE_nneg31
Run description: Generative query expansion using google/flant5-xxl (8-bit quantized), BM25 over entire msmarco-passage-v2 inverted index, adaptive reranking using crystina-z/monoELECTRA_LCE_nneg31 with BM25 Graph
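The generative query expansion step prompts FLAN-T5-XXL for text related to the query and feeds an expanded query to BM25. The exact recipe is not given in the descriptions; one common scheme (as in query2doc) repeats the original query several times before appending the generated pseudo-passage, so the original terms keep enough BM25 weight. A minimal sketch with a made-up query and generated passage:

```python
def expand_query(query, generated_passage, query_weight=5):
    """Build an expanded BM25 query string.

    The original query is repeated query_weight times so its terms are
    not drowned out by the (much longer) LLM-generated pseudo-passage.
    """
    return " ".join([query] * query_weight + [generated_passage])

# Hypothetical query and LLM output; a real run would generate the
# passage with google/flant5-xxl.
query = "how do solar panels work"
pseudo_passage = "Solar panels convert sunlight into electricity using photovoltaic cells"
expanded = expand_query(query, pseudo_passage)
```

The expanded string is then issued as an ordinary BM25 query against the msmarco-passage-v2 inverted index.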
Run description: SPLADE retrieval using naver/splade-cocondenser-ensembledistil, adaptive reranking using crystina-z/monoELECTRA_LCE_nneg31 with BM25 Graph
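Adaptive reranking with a BM25 graph (GAR-style) alternates between scoring documents from the first-stage ranking and scoring corpus-graph neighbours of the best documents found so far, so relevant documents the first stage missed can still reach the reranker. A simplified sketch, with a toy scoring function standing in for the monoELECTRA cross-encoder:

```python
def adaptive_rerank(initial_ranking, graph, score_fn, budget=6, batch=2):
    """GAR-style adaptive reranking sketch.

    Alternately scores a small batch from the first-stage ranking and a
    batch of corpus-graph neighbours of the highest-scoring document
    found so far, until the scoring budget is spent.

    graph: dict doc_id -> list of neighbouring doc_ids (the 'BM25 graph').
    score_fn: stand-in for the cross-encoder; higher = more relevant.
    """
    scored, graph_frontier = {}, []
    ranking_frontier = list(initial_ranking)
    use_graph = False
    while len(scored) < budget and (ranking_frontier or graph_frontier):
        frontier = graph_frontier if use_graph and graph_frontier else ranking_frontier
        for doc in [frontier.pop(0) for _ in range(min(batch, len(frontier)))]:
            if doc not in scored:
                scored[doc] = score_fn(doc)
        # Pull neighbours of the current best document into the graph frontier.
        best = max(scored, key=scored.get)
        graph_frontier += [d for d in graph.get(best, [])
                           if d not in scored and d not in graph_frontier]
        use_graph = not use_graph
    return sorted(scored, key=scored.get, reverse=True)

# Toy corpus: "d" is highly relevant but absent from the first stage;
# it is only reachable through the graph neighbourhood of "b".
toy_scores = {"a": 1, "b": 5, "c": 2, "d": 9, "e": 3, "f": 0}
ranked = adaptive_rerank(
    initial_ranking=["a", "b", "c"],
    graph={"b": ["d"], "d": ["e", "f"]},
    score_fn=toy_scores.get,
    budget=5,
)
```

In the toy example, "d" surfaces at rank 1 even though the first-stage ranking never retrieved it, which is the point of the graph traversal.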
Run description: This run uses multiple LLMs to judge candidate document pairs in a pairwise approach, and finally aggregates the judgments of all models to produce the final ranking. This is a zero-shot approach.
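The description does not specify how the pairwise judgments are aggregated; one simple scheme (an assumption, not necessarily what this run did) is to rank documents by their total number of pairwise wins across all LLM judges, a Copeland-style count:

```python
from collections import defaultdict

def aggregate_pairwise(judgments, docs):
    """Aggregate pairwise LLM judgments into a single ranking.

    judgments: one dict per LLM judge, mapping an unordered pair
               frozenset({a, b}) to the doc id the judge preferred.
    docs: the candidate doc ids to rank.
    Returns docs sorted by total pairwise wins (most wins first).
    """
    wins = defaultdict(int)
    for judge in judgments:
        for _pair, winner in judge.items():
            wins[winner] += 1
    return sorted(docs, key=lambda d: wins[d], reverse=True)

# Two hypothetical judges over three candidates.
judge1 = {frozenset({"a", "b"}): "a",
          frozenset({"a", "c"}): "a",
          frozenset({"b", "c"}): "b"}
judge2 = {frozenset({"a", "b"}): "a",
          frozenset({"a", "c"}): "c",
          frozenset({"b", "c"}): "b"}
final = aggregate_pairwise([judge1, judge2], ["a", "b", "c"])
```

Ties (equal win counts) would need a tie-break rule in practice, e.g. falling back to the first-stage order.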
Run description: This run uses the GPT-3.5 model to generate the re-ranking results via a listwise approach. This is a zero-shot approach.
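Listwise LLM reranking over more candidates than fit in one prompt (as in RankGPT) is typically done with a back-to-front sliding window: overlapping chunks of the candidate list are reordered by the model, so strong documents bubble toward the top. A sketch with a stub ordering function in place of the GPT-3.5 call:

```python
def sliding_window_rerank(docs, order_fn, window=4, step=2):
    """Listwise rerank a long candidate list with a sliding window.

    Windows are processed back-to-front with overlap of (window - step),
    so a strong document deep in the list can climb across windows.
    order_fn: reorders one window best-first (stand-in for an LLM call).
    """
    docs = list(docs)
    start = max(len(docs) - window, 0)
    while True:
        docs[start:start + window] = order_fn(docs[start:start + window])
        if start == 0:
            break
        start = max(start - step, 0)
    return docs

# Stub listwise "model": sorts by a relevance score we already know;
# a real run would prompt GPT-3.5 with the window's passages instead.
true_scores = {"d%d" % i: i for i in range(8)}
ranked = sliding_window_rerank(
    ["d3", "d0", "d7", "d1", "d5", "d2", "d6", "d4"],
    order_fn=lambda ds: sorted(ds, key=true_scores.get, reverse=True),
)
```

With window=4 and step=2, the best candidates ("d7", "d6") reach the top even though they started deep in the list, while no single model call ever sees more than four documents.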