
Runs - Fair Ranking 2022

0mt5

Participants | Input | Summary | Appendix

  • Run ID: 0mt5
  • Participant: h2oloo
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/6/2022
  • Task: coordinators
  • MD5: d8947118a8b0a04ac55287c1b42704a0
  • Run description: monoT5-3B-10K

0mt5_e

Participants | Input | Summary | Appendix

  • Run ID: 0mt5_e
  • Participant: h2oloo
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/6/2022
  • Task: editors
  • MD5: b94c89733266ee6d964c0d94c68011f0
  • Run description: monoT5-3B-10K

0mt5_p

Participants | Input | Summary | Appendix

  • Run ID: 0mt5_p
  • Participant: h2oloo
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/6/2022
  • Task: coordinators
  • MD5: eb4e674e2b459874d0a8d1b17bea8dd1
  • Run description: monoT5-3B-10K with post-processing

0mt5_p_e

Participants | Input | Summary | Appendix

  • Run ID: 0mt5_p_e
  • Participant: h2oloo
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/6/2022
  • Task: editors
  • MD5: 01de573380030f8282737bc53642196c
  • Run description: monoT5-3B-10K with post-processing

ans_bm25

Participants | Input | Summary | Appendix

  • Run ID: ans_bm25
  • Participant: h2oloo
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/6/2022
  • Task: coordinators
  • MD5: bfab18f1673e56980533277acfcab8b8
  • Run description: Anserini/Pyserini BM25

ans_bm25_e

Participants | Input | Summary | Appendix

  • Run ID: ans_bm25_e
  • Participant: h2oloo
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/6/2022
  • Task: editors
  • MD5: 8110d8958536037ab180e0f23330583a
  • Run description: Anserini/Pyserini BM25

bm25_p

Participants | Input | Summary | Appendix

  • Run ID: bm25_p
  • Participant: h2oloo
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/6/2022
  • Task: coordinators
  • MD5: 8576a3cffb2659e0ff355108de681db1
  • Run description: Anserini/Pyserini BM25 with post-processing

bm25_p_e

Participants | Input | Summary | Appendix

  • Run ID: bm25_p_e
  • Participant: h2oloo
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/6/2022
  • Task: editors
  • MD5: dfdf8d5f55067e7b2602b46aa5b248f7
  • Run description: Anserini/Pyserini BM25 with post-processing

FRT_attention

Participants | Input | Summary | Appendix

  • Run ID: FRT_attention
  • Participant: V-Ryerson
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/3/2022
  • Task: coordinators
  • MD5: c3e8a598aed4172fda89f09899440c3e
  • Run description: This run uses fairness categories including article quality. We extracted 'article-text', 'hyperlinks', and 'categories' from the text corpus to build a BM25 corpus. We selected the top 1000 relevant wikipages based on these BM25 scores and a semantic score from BERT embeddings. We then re-ranked these 1000 wikipages based on the category distribution, which we computed from the top 5000 relevant wikipages by BM25 and BERT scores. We used the inverse distribution (1 - category distribution) as a weight and re-ranked wikipages so that minority groups receive more attention.
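The inverse-distribution reweighting described above can be sketched as follows. This is a minimal illustration with made-up category shares and relevance scores; the run itself estimated the distribution from the top 5000 pages and combined BM25 and BERT scores.

```python
def inverse_weights(category_dist):
    """Weight each category by 1 - its share, so minority categories get more weight."""
    return {c: 1.0 - share for c, share in category_dist.items()}

def rerank(pages, scores, page_category, category_dist):
    """Re-rank pages by relevance score scaled by the inverse-distribution weight."""
    w = inverse_weights(category_dist)
    return sorted(pages, key=lambda p: scores[p] * w[page_category[p]], reverse=True)
```

With a category distribution of 80%/20%, a minority-group page scoring 0.9 outranks a majority-group page scoring 1.0, since the weights become 0.8 and 0.2 respectively.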

FRT_constraint

Participants | Input | Summary | Appendix

  • Run ID: FRT_constraint
  • Participant: V-Ryerson
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/3/2022
  • Task: coordinators
  • MD5: cbb4e5344e9c3fe91c329424792219bf
  • Run description: This run uses fairness categories including article quality. We extracted 'article-text', 'hyperlinks', and 'categories' from the text corpus to build a BM25 corpus. We selected the top 1000 relevant wikipages based on these BM25 scores and a semantic score from BERT embeddings. We then re-ranked these 1000 wikipages based on the category distribution, which we computed from the top 5000 relevant wikipages by BM25 and BERT scores. We used the difference between the target category distribution and the ranked list's category distribution as a weight, and re-ranked wikipages so that the category distribution is enforced at each rank.

FRT_diversity

Participants | Input | Summary | Appendix

  • Run ID: FRT_diversity
  • Participant: V-Ryerson
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/3/2022
  • Task: coordinators
  • MD5: fdaeda6ac64e564ec858a13b05541307
  • Run description: This run uses fairness categories including article quality. We extracted 'article-text', 'hyperlinks', and 'categories' from the text corpus to build a BM25 corpus. We selected the top 1000 relevant wikipages based on these BM25 scores and a semantic score from BERT embeddings. We re-ranked these 1000 wikipages based on category diversity, converting the fairness categories into vectors and computing the distances between them.

rmit_cidda_ir_1

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: rmit_cidda_ir_1
  • Participant: rmit_cidda_ir
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/2/2022
  • Task: coordinators
  • MD5: 87ffbfde8a895b7f0f13aba1f7f15ebf
  • Run description: Ad hoc retrieval using BM25 as provided by Pyserini with default parameters (k1=0.9, b=0.4) over a text-only collection.
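The scoring behind this baseline is standard Okapi BM25; Pyserini's defaults (k1=0.9, b=0.4) are stated in the description. A self-contained sketch of the scoring function, using a common variant of the IDF formula, would look like:

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freqs, n_docs, avg_len, k1=0.9, b=0.4):
    """Score one document against a query with Okapi BM25.

    doc_freqs maps each term to the number of documents containing it;
    avg_len is the average document length in the collection.
    """
    tf = Counter(doc_terms)
    dlen = len(doc_terms)
    score = 0.0
    for t in query_terms:
        if t not in tf:
            continue
        idf = math.log(1 + (n_docs - doc_freqs[t] + 0.5) / (doc_freqs[t] + 0.5))
        # Term-frequency saturation (k1) and length normalisation (b).
        score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * dlen / avg_len))
    return score
```

In practice the run would have used Pyserini's Lucene-backed searcher over the prebuilt index rather than hand-rolled scoring; this sketch only shows what the k1 and b parameters control.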

rmit_cidda_ir_2

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: rmit_cidda_ir_2
  • Participant: rmit_cidda_ir
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/2/2022
  • Task: coordinators
  • MD5: 2b9252d4aceec415c839b47ef43d4151
  • Run description: Ad hoc retrieval using BM25 as provided by Pyserini with default parameters (k1=0.9, b=0.4) over a text-only collection; re-ranking using explicit search result diversification using PM-2 with two attributes (num_sitelinks and category); ranking fusion using Reciprocal Ranking Fusion (RRF) to obtain the final ranking.
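Reciprocal Rank Fusion, used here to merge the BM25 and diversified rankings, has a simple closed form: each list contributes 1/(k + rank) to a document's fused score. A minimal sketch (k=60 is the value from the original RRF formulation; the run may have used a different constant or, in later runs, per-list weights):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document IDs into one ranking."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Documents that appear near the top of several input lists accumulate the largest fused scores, which is why RRF is robust to score-scale differences between the fused systems.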

rmit_cidda_ir_3

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: rmit_cidda_ir_3
  • Participant: rmit_cidda_ir
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/2/2022
  • Task: coordinators
  • MD5: 3e5bc8e423512b856fc18b152e9c7b03
  • Run description: Ad hoc retrieval using BM25 as provided by Pyserini with default parameters (k1=0.9, b=0.4) over a text-only collection; re-ranking using explicit search result diversification using PM-2 with all attributes; ranking fusion with an adapted version of Reciprocal Ranking Fusion (RRF) that uses a weighting schema to obtain the final rank. The weighting schema is obtained using heuristics in an Analytical Hierarchical Process (AHP).
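The PM-2 diversification used in these runs allocates ranking positions to attribute values like parliamentary seats. A simplified sketch, assuming binary-to-fractional coverage scores and target proportions per aspect (the interpolation weight `lam` and the seat-update rule follow the common PM-2 formulation; the run's exact parameterisation is not given):

```python
def pm2(docs, coverage, aspect_props, depth, lam=0.5):
    """PM-2 diversification sketch.

    coverage[d][a] in [0, 1] says how well doc d covers aspect a;
    aspect_props[a] are target proportions for the aspects.
    """
    seats = {a: 0.0 for a in aspect_props}
    remaining = list(docs)
    ranked = []
    while remaining and len(ranked) < depth:
        # Sainte-Lague-style quotients: aspects behind their target get priority.
        qt = {a: aspect_props[a] / (2 * seats[a] + 1) for a in aspect_props}
        target = max(qt, key=qt.get)
        def score(d):
            return (lam * qt[target] * coverage[d].get(target, 0.0)
                    + (1 - lam) * sum(qt[a] * coverage[d].get(a, 0.0)
                                      for a in qt if a != target))
        best = max(remaining, key=score)
        ranked.append(best)
        remaining.remove(best)
        # Distribute the position's "seat" over the aspects the doc covers.
        total = sum(coverage[best].values()) or 1.0
        for a in seats:
            seats[a] += coverage[best].get(a, 0.0) / total
    return ranked
```

With two equally weighted aspects, the selection alternates between them rather than exhausting one aspect's documents first.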

rmit_cidda_ir_4

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: rmit_cidda_ir_4
  • Participant: rmit_cidda_ir
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/2/2022
  • Task: coordinators
  • MD5: da3c7fd94fba75353662e5fe354224ea
  • Run description: Ad hoc retrieval using BM25 as provided by Pyserini with default parameters (k1=0.9, b=0.4) over a text-only collection; re-ranking using explicit search result diversification using PM-2 with two attributes (years and category); ranking fusion using Reciprocal Ranking Fusion (RRF) to obtain the final ranking.

rmit_cidda_ir_5

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: rmit_cidda_ir_5
  • Participant: rmit_cidda_ir
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/2/2022
  • Task: coordinators
  • MD5: 1e8e8c03720f39b4e9c012b33cc70650
  • Run description: Ad hoc retrieval using BM25 as provided by Pyserini with default parameters (k1=0.9, b=0.4) over a text-only collection; re-ranking using explicit search result diversification using PM-2 with two attributes (gender and category); ranking fusion using Reciprocal Ranking Fusion (RRF) to obtain the final ranking.

rmit_cidda_ir_6

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: rmit_cidda_ir_6
  • Participant: rmit_cidda_ir
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/2/2022
  • Task: coordinators
  • MD5: 62b20b6506506223a45845b8cbd54440
  • Run description: Ad hoc retrieval using BM25 as provided by Pyserini with default parameters (k1=0.9, b=0.4) over a text-only collection; re-ranking using explicit search result diversification using PM-2 with all attributes; ranking fusion with an adapted version of Reciprocal Ranking Fusion (RRF) that uses a weighting schema to obtain the final rank. The weighting schema is obtained using heuristics in an Analytical Hierarchical Process (AHP).

rmit_cidda_ir_7

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: rmit_cidda_ir_7
  • Participant: rmit_cidda_ir
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/2/2022
  • Task: coordinators
  • MD5: 4f981ee76f5c05e85c36765228f01afe
  • Run description: Ad hoc retrieval using BM25 as provided by Pyserini with default parameters (k1=0.9, b=0.4) over a text-only collection; re-ranking using explicit search result diversification using PM-2 with all attributes; ranking fusion with an adapted version of Reciprocal Ranking Fusion (RRF) that uses a weighting schema to obtain the final rank. The weighting schema is obtained using heuristics in an Analytical Hierarchical Process (AHP).

rmit_cidda_ir_8

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: rmit_cidda_ir_8
  • Participant: rmit_cidda_ir
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/2/2022
  • Task: coordinators
  • MD5: 2ac0e675f38f17a1a02ab95e421daa5f
  • Run description: Ad hoc retrieval using BM25 as provided by Pyserini with default parameters (k1=0.9, b=0.4) over a text-only collection; re-ranking using explicit search result diversification using PM-2 with all attributes; ranking fusion with an adapted version of Reciprocal Ranking Fusion (RRF) that uses a weighting schema to obtain the final rank. The weighting schema is obtained using heuristics in an Analytical Hierarchical Process (AHP).

tmt5

Participants | Input | Summary | Appendix

  • Run ID: tmt5
  • Participant: h2oloo
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/6/2022
  • Task: coordinators
  • MD5: 6bb6c4986c29c050635def1abbe328a7
  • Run description: monoT5-3B-10K trained for 5K steps on TREC-F21+22

tmt5_e

Participants | Input | Summary | Appendix

  • Run ID: tmt5_e
  • Participant: h2oloo
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/6/2022
  • Task: editors
  • MD5: 46477be95c38ca13b3d91f587498f825
  • Run description: monoT5-3B-10K trained for 5K steps on TREC-F21+22

tmt5_p

Participants | Input | Summary | Appendix

  • Run ID: tmt5_p
  • Participant: h2oloo
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/6/2022
  • Task: coordinators
  • MD5: 34742b41b8b10f870f022ecffba4e105
  • Run description: monoT5-3B-10K trained for 5K steps on TREC-F21+22 with post-processing

tmt5_p_e

Participants | Input | Summary | Appendix

  • Run ID: tmt5_p_e
  • Participant: h2oloo
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/6/2022
  • Task: editors
  • MD5: 472a85502db833818aac7d457c934be5
  • Run description: monoT5-3B-10K trained for 5K steps on TREC-F21+22 with post-processing

UDInfo_F_bm25

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UDInfo_F_bm25
  • Participant: udel_fang
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/1/2022
  • Task: coordinators
  • MD5: a22c2319fd23a6869bc9f4aa5ee9fa46
  • Run description: This is a BM25 base run before any fairness-aware re-ranking. The ranking is by non-increasing BM25 (keyword-document) scores only.

UDInfo_F_lgbm2

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UDInfo_F_lgbm2
  • Participant: udel_fang
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/1/2022
  • Task: coordinators
  • MD5: 11e415880599fd7d2b9cacf9e0c1b8f7
  • Run description: This is a static method without quality scores in ranking. We use Sentence-BERT to embed documents, queries, and fairness annotations (e.g., for gender, the fairness annotations are the sentence "male female non-binary"). Then, we compute similarity scores between these embeddings as our contextual features for training a GBDT-based model. We encode fairness by modifying the ground truth label. The ground truth we used was a weighted sum of both relevance and point-wise AWRF using log decay. Last, for this specific run, we use the trained GBDT-based model to re-rank every 20 documents in the BM25 base run.
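The fairness-encoded training label described above — a weighted sum of relevance and a point-wise AWRF term with log decay — can be sketched as follows. The blend weight `alpha`, the point-wise fairness term, and the discount function are illustrative assumptions; the run description does not give the exact formulation.

```python
import math

def log_decay_weight(rank):
    """Log-discounted position weight, as commonly used in AWRF-style exposure models."""
    return 1.0 / math.log2(rank + 1)

def combined_label(relevance, group_exposure, target_exposure, alpha=0.5):
    """Hypothetical training label: blend relevance with a fairness term that
    rewards documents whose group exposure is close to the target."""
    fairness = 1.0 - abs(group_exposure - target_exposure)
    return alpha * relevance + (1 - alpha) * fairness
```

A GBDT or MLP regressor trained on such labels learns to trade relevance against exposure balance, which is then applied by re-ranking fixed-size windows of the BM25 base run.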

UDInfo_F_lgbm4

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UDInfo_F_lgbm4
  • Participant: udel_fang
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/1/2022
  • Task: coordinators
  • MD5: 244861c7eb00dfa8798fadd5962b100f
  • Run description: This is a static method without quality scores in ranking. We use Sentence-BERT to embed documents, queries, and fairness annotations (e.g., for gender, the fairness annotations are the sentence "male female non-binary"). Then, we compute similarity scores between these embeddings as our contextual features for training a GBDT-based model. We encode fairness by modifying the ground truth label. The ground truth we used was a weighted sum of both relevance and point-wise AWRF using log decay. Last, for this specific run, we use the trained GBDT-based model to re-rank every 40 documents in the BM25 base run.

UDInfo_F_mlp2

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UDInfo_F_mlp2
  • Participant: udel_fang
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/1/2022
  • Task: coordinators
  • MD5: 0f9187d959a0c9335c98e0d11e3dcb1d
  • Run description: This is a static method without quality scores in ranking. We use Sentence-BERT to embed documents, queries, and fairness annotations (e.g., for gender, the fairness annotations are the sentence "male female non-binary"). Then, we compute similarity scores between these embeddings as our contextual features for training an MLP model. We encode fairness by modifying the ground truth label. The ground truth we used was a weighted sum of both relevance and point-wise AWRF using log decay. Last, for this specific run, we use the trained MLP model to re-rank every 20 documents in the BM25 base run.

UDInfo_F_mlp4

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UDInfo_F_mlp4
  • Participant: udel_fang
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/1/2022
  • Task: coordinators
  • MD5: 4f6688c2ab7e32b9983bd0265e87eb02
  • Run description: This is a static method without quality scores in ranking. We use Sentence-BERT to embed documents, queries, and fairness annotations (e.g., for gender, the fairness annotations are the sentence "male female non-binary"). Then, we compute similarity scores between these embeddings as our contextual features for training an MLP model. We encode fairness by modifying the ground truth label. The ground truth we used was a weighted sum of both relevance and point-wise AWRF using log decay. Last, for this specific run, we use the trained MLP model to re-rank every 40 documents in the BM25 base run.

UoGRelvOnlyT1

Participants | Input | Summary | Appendix

  • Run ID: UoGRelvOnlyT1
  • Participant: UoGTr
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/2/2022
  • Task: coordinators
  • MD5: da57e057a2cf6d2ab6fcd1bbd84cd643
  • Run description: A relevance-only baseline with no fairness intervention for Task 1. The retrieval was done with ColBERT.

UogTRelvOnlyT2

Participants | Input | Summary | Appendix

  • Run ID: UogTRelvOnlyT2
  • Participant: UoGTr
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/2/2022
  • Task: editors
  • MD5: 9f67fabaa4cee02a933be43418dc2cac
  • Run description: A sequence that repeats the same relevance-only ranking for every query. There are no fairness interventions in this run. The retrieval was done with ColBERT. This run serves as a baseline.

UoGTrExpE1

Participants | Input | Summary | Appendix

  • Run ID: UoGTrExpE1
  • Participant: UoGTr
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/1/2022
  • Task: coordinators
  • MD5: d5f601422fc433cc1ac79fc11f58ca46
  • Run description: UoGTrExpE1 uses a heuristic approach to re-rank an initial retrieval set using expected exposure targets. The re-ranking adapts techniques from diversification, such as PM-2. The target exposures in these runs don't follow the metric but instead try to distribute exposure evenly across attributes.

UoGTrExpE2

Participants | Input | Summary | Appendix

  • Run ID: UoGTrExpE2
  • Participant: UoGTr
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/1/2022
  • Task: coordinators
  • MD5: e57a17af1589e2309946ec63dfe523bd
  • Run description: Similar to UoGTrExpE1, this run uses a heuristic approach to re-rank an initial retrieval set using expected exposure targets, adapting traditional diversification techniques. The initial retrieval was done with ColBERT. The expected exposure targets were created partly with the help of the released metric and the relevance estimates from our initial retrieval.

UoGTrMabSAED

Participants | Input | Summary | Appendix

  • Run ID: UoGTrMabSAED
  • Participant: UoGTr
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/1/2022
  • Task: editors
  • MD5: 7f8a32d921cfd6d44e7cb4412b254c34
  • Run description: UoGTrMabSAED uses a Multi-Armed Bandit approach. An agent tries to find the optimal strategy when adding rankings to the sequence. The rankings are selected from a pool of rankings, where each ranking is optimised to be fair to only one individual attribute. This approach uses a variation of an epsilon-decay strategy. No weights are used in the creation of the pool of rankings.
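The epsilon-decay selection at the heart of this run can be sketched as follows. The arm set here is the pool of single-attribute-fair rankings; the decay schedule and reward bookkeeping are illustrative assumptions, not the run's actual parameters.

```python
import random

def epsilon_decay_select(pool_scores, step, epsilon0=1.0, decay=0.99, rng=random):
    """Pick a ranking (arm) from the pool.

    With probability epsilon (decaying geometrically per step) explore a
    random arm; otherwise exploit the arm with the best observed score.
    """
    epsilon = epsilon0 * (decay ** step)
    if rng.random() < epsilon:
        return rng.choice(list(pool_scores))      # explore: random arm
    return max(pool_scores, key=pool_scores.get)  # exploit: best arm so far
```

Early in the sequence the agent samples rankings from the pool almost uniformly; as epsilon decays it increasingly repeats whichever per-attribute ranking has yielded the best observed reward.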

UoGTrMabSaNR

Participants | Input | Summary | Appendix

  • Run ID: UoGTrMabSaNR
  • Participant: UoGTr
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/1/2022
  • Task: editors
  • MD5: f85fa340632d8e359e26d3ad51ebf9bc
  • Run description: UoGTrMabSaNR uses a Multi-Armed Bandit approach. An agent tries to find the optimal strategy when adding rankings to the sequence. The rankings are selected from a pool of rankings, where each ranking is optimised to be fair to only one individual attribute. The initial retrieval was done with ColBERT as implemented in PyTerrier. In contrast to our other MAB approaches, this approach uses no randomisation and no weighting of the rankings.

UoGTrMabSaWR

Participants | Input | Summary | Appendix

  • Run ID: UoGTrMabSaWR
  • Participant: UoGTr
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/1/2022
  • Task: editors
  • MD5: cfc46e537e859938805b4304f33a3b74
  • Run description: UoGTrMabSaWR uses a Multi-Armed Bandit approach. An agent tries to find the optimal strategy when adding rankings to the sequence. The rankings are selected from a pool of rankings, where each ranking is optimised to be fair to only one individual attribute. The initial retrieval was done with ColBERT as implemented in PyTerrier. This run uses no weighting, and uses randomisation in the exploration phase of the agent.

UoGTrMabWeSA

Participants | Input | Summary | Appendix

  • Run ID: UoGTrMabWeSA
  • Participant: UoGTr
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/1/2022
  • Task: editors
  • MD5: 87f581979c12d3286d813e88099d3f3d
  • Run description: UoGTrMabWeSA uses a Multi-Armed Bandit approach. An agent tries to find the optimal strategy when adding rankings to the sequence. The rankings are selected from a pool of rankings. For every protected attribute, there are three different rankings with different fairness-relevance relationships. The initial retrieval was done with ColBERT as implemented in PyTerrier.

UoGTrQE

Participants | Input | Summary | Appendix

  • Run ID: UoGTrQE
  • Participant: UoGTr
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/1/2022
  • Task: coordinators
  • MD5: 616d268b48730f98f4f73a8782fc0d08
  • Run description: UoGTrQE uses a traditional query expansion method to expand queries, with the goal of producing a fairer distribution of documents.

UoGTrT1ColPRF

Participants | Input | Summary | Appendix

  • Run ID: UoGTrT1ColPRF
  • Participant: UoGTr
  • Track: Fair Ranking
  • Year: 2022
  • Submission: 9/1/2022
  • Task: coordinators
  • MD5: b80f0c255910894927924b57bed06df0
  • Run description: UoGTrT1ColPRF is an adaptation of ColBERT-PRF to the fairness task.