Runs - Fair Ranking 2019

fair_LambdaMART

  • Run ID: fair_LambdaMART
  • Participant: IR-Cologne
  • Track: Fair Ranking
  • Year: 2019
  • Submission: 9/2/2019
  • Task: main
  • MD5: 56418f16c9df3597e04162ef549790cd
  • Run description: This submission serves as a baseline learning-to-rank model using the LambdaMART algorithm. Following the organizers' instructions, the training files were cleaned of entries with missing document IDs. The following 10 features were constructed: the length of the query; BM25 scores computed separately for the title, abstract, list of entities, venue, journal, and authors' names; and the year and the numbers of out- and in-citations of the publication. Each query sequence and query number was treated independently of the others.
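
As an illustration of how such a baseline might be assembled, the following sketch trains a LambdaMART ranker with LightGBM's lambdarank objective over the ten features listed above. The bm25_score() helper, the field names, and the toy data are placeholders, not the participants' actual pipeline.

    # Sketch only: a LambdaMART baseline over the 10 features described above,
    # trained with LightGBM's lambdarank objective. bm25_score() is a crude
    # stand-in; a real run would use corpus-level BM25 statistics.
    import numpy as np
    import lightgbm as lgb

    FIELDS = ["title", "abstract", "entities", "venue", "journal", "author_names"]

    def bm25_score(query, text):
        # Placeholder scoring: term-overlap count instead of true BM25.
        return float(len(set(query.split()) & set(str(text).split())))

    def features(query, doc):
        # 1 query-length feature + 6 per-field BM25 scores + year + citations = 10.
        return ([len(query.split())]
                + [bm25_score(query, doc.get(f, "")) for f in FIELDS]
                + [doc.get("year", 0), doc.get("out_citations", 0),
                   doc.get("in_citations", 0)])

    # Toy training data: two queries, two candidate documents each, binary labels.
    train = {
        "deep learning": [({"title": "deep learning survey", "year": 2017}, 1),
                          ({"title": "relational databases", "year": 2001}, 0)],
        "fair ranking": [({"title": "fair ranking methods", "year": 2019}, 1),
                         ({"title": "query parsing", "year": 2010}, 0)],
    }

    X, y, group = [], [], []
    for query, docs in train.items():
        for doc, rel in docs:
            X.append(features(query, doc))
            y.append(rel)
        group.append(len(docs))  # lambdarank needs per-query group sizes

    ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=50,
                            min_child_samples=1)
    ranker.fit(np.array(X), np.array(y), group=group)
    print(ranker.predict(np.array(X)))  # higher score = ranked higher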

fair_random

  • Run ID: fair_random
  • Participant: IR-Cologne
  • Track: Fair Ranking
  • Year: 2019
  • Submission: 9/2/2019
  • Task: main
  • MD5: ebfd60d3d47e7b4bb4ada5bf42165cc5
  • Run description: This submission presents a naive baseline that was created by randomly re-ranking the documents for each query number. Each query number and sequence was treated independently.
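
The baseline reduces to an independent shuffle per query instance; a minimal sketch (the optional seed is an assumption, added for reproducibility):

    # Sketch: randomly re-rank the candidates for each (sequence, query number),
    # treating every instance independently.
    import random

    def random_rerank(candidates, seed=None):
        ranking = list(candidates)
        random.Random(seed).shuffle(ranking)
        return ranking

    print(random_rerank(["d1", "d2", "d3"], seed=0))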

first

  • Run ID: first
  • Participant: ICTNET
  • Track: Fair Ranking
  • Year: 2019
  • Submission: 8/31/2019
  • Task: main
  • MD5: 23d1ae3577727676207052bcfddac545
  • Run description: bert_knrm

MacEwanBase

  • Run ID: MacEwanBase
  • Participant: MacEwanSoB
  • Track: Fair Ranking
  • Year: 2019
  • Submission: 9/2/2019
  • Task: main
  • MD5: 14b2058b247fae61d9fa6f662a80a57f
  • Run description: Three rankings are generated over search results for different fields. These rankings are merged using adjustable weights, which are adjusted as the system traverses the sequence. Different starting and ending weights are used for different sequences.
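
The description leaves the fusion method unspecified; the sketch below assumes a weighted reciprocal-rank merge of three per-field rankings, with each weight interpolated linearly from a start value to an end value as the sequence progresses.

    # Sketch: merge three field-specific rankings under weights that drift
    # from a start value to an end value over the query sequence. The field
    # names, reciprocal-rank scoring, and linear schedule are assumptions.
    def merge_rankings(rankings, weights):
        scores = {}
        for field, ranking in rankings.items():
            for pos, doc in enumerate(ranking):
                scores[doc] = scores.get(doc, 0.0) + weights[field] / (pos + 1)
        return sorted(scores, key=scores.get, reverse=True)

    def weight_at(start, end, step, total_steps):
        frac = step / max(total_steps - 1, 1)
        return start + frac * (end - start)

    rankings = {"title": ["a", "b", "c"],
                "abstract": ["b", "c", "a"],
                "authors": ["c", "a", "b"]}
    for step in range(3):  # traverse a 3-query sequence
        weights = {f: weight_at(s, e, step, 3)
                   for f, (s, e) in {"title": (1.0, 0.2),
                                     "abstract": (0.5, 0.5),
                                     "authors": (0.2, 1.0)}.items()}
        print(step, merge_rankings(rankings, weights))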

QUARTZ-e0.00001

  • Run ID: QUARTZ-e0.00001
  • Participant: QUARTZ_ITN
  • Track: Fair Ranking
  • Year: 2019
  • Submission: 9/2/2019
  • Task: main
  • MD5: 9399afefac5ae9ef0d6d6d65ea3380d3
  • Run description: A document's rank is considered fair if the difference between the likelihood of fairness and the likelihood of non-fairness of the document's author-group frequency distribution is less than 0.00001.
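
All six QUARTZ_ITN runs share this rule and differ only in the threshold epsilon (0.00001 up to 0.01000). A minimal sketch of the test follows; the likelihood estimators themselves are not described on this page and are stubbed out here.

    # Sketch: accept a document's rank as fair when the likelihoods of
    # fairness and non-fairness of its author-group frequency distribution
    # differ by less than epsilon. The likelihood values are placeholders.
    def is_fair(likelihood_fair, likelihood_unfair, epsilon=0.00001):
        return abs(likelihood_fair - likelihood_unfair) < epsilon

    print(is_fair(0.500004, 0.500001))        # True at the tightest epsilon
    print(is_fair(0.52, 0.48, epsilon=0.01))  # False even at the loosest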

QUARTZ-e0.00010

  • Run ID: QUARTZ-e0.00010
  • Participant: QUARTZ_ITN
  • Track: Fair Ranking
  • Year: 2019
  • Submission: 9/2/2019
  • Task: main
  • MD5: dc5728ad8424a7a29eda304ace66d0a6
  • Run description: A document's rank is considered fair if the difference between the likelihood of fairness and the likelihood of non-fairness of the document's author-group frequency distribution is less than 0.00010.

QUARTZ-e0.00100

  • Run ID: QUARTZ-e0.00100
  • Participant: QUARTZ_ITN
  • Track: Fair Ranking
  • Year: 2019
  • Submission: 9/2/2019
  • Task: main
  • MD5: 7213a8c1a415ab862388e08cbc38aa76
  • Run description: A document's rank is considered fair if the difference between the likelihood of fairness and the likelihood of non-fairness of the document's author-group frequency distribution is less than 0.00100.

QUARTZ-e0.00200

  • Run ID: QUARTZ-e0.00200
  • Participant: QUARTZ_ITN
  • Track: Fair Ranking
  • Year: 2019
  • Submission: 9/2/2019
  • Task: main
  • MD5: 0a659ccb5db71be654502eb614e0447d
  • Run description: A document's rank is considered fair if the difference between the likelihood of fairness and the likelihood of non-fairness of the document's author-group frequency distribution is less than 0.00200.

QUARTZ-e0.00500

  • Run ID: QUARTZ-e0.00500
  • Participant: QUARTZ_ITN
  • Track: Fair Ranking
  • Year: 2019
  • Submission: 9/2/2019
  • Task: main
  • MD5: 2b7ab08726ff4237e9cefc65dd754fca
  • Run description: A document's rank is considered fair if the difference between the likelihood of fairness and the likelihood of non-fairness of the document's author-group frequency distribution is less than 0.00500.

QUARTZ-e0.01000

  • Run ID: QUARTZ-e0.01000
  • Participant: QUARTZ_ITN
  • Track: Fair Ranking
  • Year: 2019
  • Submission: 9/2/2019
  • Task: main
  • MD5: f54fd03a2fcee1e34a20864c20789e44
  • Run description: A document's rank is considered fair if the difference between the likelihood of fairness and the likelihood of non-fairness of the document's author-group frequency distribution is less than 0.01000.

uognleDivAAsp

  • Run ID: uognleDivAAsp
  • Participant: uogTr
  • Track: Fair Ranking
  • Year: 2019
  • Submission: 9/2/2019
  • Task: main
  • MD5: 90934e8f1bc211b82b54a7f6ae671b79
  • Run description: This run builds upon the Terrier.org IR platform; a Divergence from Randomness weighting model is used to rank documents by relevance before a diversification approach is applied as a fairness component. The fairness component diversifies over the authors in the ranking, treated as singleton groups, each scored by their total number of citations. The relevance/diversification (fairness) trade-off is varied as queries are repeated.
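
A plausible reading of the fairness component is an xQuAD-style greedy re-ranking in which each author is a singleton aspect weighted by citations; the sketch below follows that reading with toy scores, and lambda_ plays the role of the varied trade-off.

    # Sketch: greedy diversification over authors as singleton aspects on top
    # of a relevance ranking (the run obtains the latter from a DFR model in
    # Terrier). Author citation shares act as aspect weights; all values toy.
    def diversify(docs, rel, doc_authors, author_weight, lambda_, k):
        selected, covered, remaining = [], set(), set(docs)
        while remaining and len(selected) < k:
            def gain(d):
                novelty = sum(author_weight.get(a, 0.0)
                              for a in doc_authors[d] if a not in covered)
                return (1 - lambda_) * rel[d] + lambda_ * novelty
            best = max(remaining, key=gain)
            selected.append(best)
            covered |= doc_authors[best]
            remaining.remove(best)
        return selected

    rel = {"d1": 3.0, "d2": 2.5, "d3": 2.4}
    doc_authors = {"d1": {"alice"}, "d2": {"alice"}, "d3": {"bob"}}
    author_weight = {"alice": 0.7, "bob": 0.3}  # e.g., citation shares
    for lam in (0.0, 0.8):  # trade-off varied across repeats of the query
        print(lam, diversify(["d1", "d2", "d3"], rel, doc_authors,
                             author_weight, lam, 3))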

uognleDivAJc

  • Run ID: uognleDivAJc
  • Participant: uogTr
  • Track: Fair Ranking
  • Year: 2019
  • Submission: 9/2/2019
  • Task: main
  • MD5: 4b04ee701686a7bcff4598961445da16
  • Run description: This run builds upon the Terrier.org IR platform; a Divergence from Randomness weighting model is used to rank documents by relevance before a diversification approach is applied as a fairness component. The fairness component diversifies over multiple aspects of the documents in the ranking, namely (1) mean paper citation count and (2) journal exposure within the collection. The relevance/diversification (fairness) trade-off is varied for repeated queries.

uognleMaxUtil

  • Run ID: uognleMaxUtil
  • Participant: uogTr
  • Track: Fair Ranking
  • Year: 2019
  • Submission: 9/2/2019
  • Task: main
  • MD5: 58ce834d6c2813ba3bbeba65400b07f9
  • Run description: This run simply consists of ranking, for each instance of a query in the sequence, the documents according to their relevance to the query. No fairness is explicitly enforced. The ranking strategy is a late fusion of a DFR model (including query expansion) and a simple LM model (with Dirichlet smoothing).
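
The page does not give the fusion formula; the sketch below assumes min-max normalization of each model's scores followed by a weighted sum.

    # Sketch: late (score-level) fusion of a DFR ranker and a Dirichlet-
    # smoothed LM ranker. Normalization and the alpha weight are assumptions.
    def minmax(scores):
        lo, hi = min(scores.values()), max(scores.values())
        return {d: (s - lo) / (hi - lo) if hi > lo else 0.0
                for d, s in scores.items()}

    def late_fusion(dfr_scores, lm_scores, alpha=0.5):
        a, b = minmax(dfr_scores), minmax(lm_scores)
        fused = {d: alpha * a.get(d, 0.0) + (1 - alpha) * b.get(d, 0.0)
                 for d in set(a) | set(b)}
        return sorted(fused, key=fused.get, reverse=True)

    dfr = {"d1": 12.1, "d2": 10.4, "d3": 9.8}  # e.g., DFR retrieval scores
    lm = {"d1": -4.2, "d2": -3.9, "d3": -5.0}  # e.g., LM log-probabilities
    print(late_fusion(dfr, lm))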

uognleSgbrFair

  • Run ID: uognleSgbrFair
  • Participant: uogTr
  • Track: Fair Ranking
  • Year: 2019
  • Submission: 9/2/2019
  • Task: main
  • MD5: c70e7b3184f33f30511628a13b7fc172
  • Run description: This run extends our base ranker (run uognleMaxUtil, purely based on relevance) by introducing an extra fairness-oriented component. It is based on a single-query greedy brute-force ranking approach that enforces individual fairness (i.e., for every author). For each instance of a query in the sequence, the documents are initially pre-ordered by a score function that combines (1) the estimated relevance of the document and (2) the discrepancy in deserved exposure for the authors of the document. This discrepancy results from the rankings output for previous occurrences of the same query. Given that the rankings are computed in a greedy/online way -- i.e., the rankings output for previous query instances in the sequence are kept fixed -- this pre-ordering helps compensate for the exposure granted in previous query instances. The approach then operates in a brute-force fashion on the top documents: it scores every ranking that permutes the top n documents of the pre-ordered list while keeping the remaining documents in place. The scoring is based on an estimate of the track's official measure, which combines utility (here derived from estimated relevance scores) and unfairness (here considered at the individual, author level). The ranking that obtains the best score is retained and output for the query; its computed utility and unfairness are stored for amortization over future instances of this query in the sequence. This run emphasizes fairness (as opposed to utility) in its ranking scoring scheme.
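
The description maps onto a small amount of code: pre-order, enumerate permutations of the top n, score each permutation, keep the best. The sketch below follows that outline; the exposure model (1/log2(rank+1)), the linear utility/unfairness combination, and theta (the emphasis that distinguishes the Fair and Util variants) are illustrative assumptions, not the track's official measure.

    # Sketch of the greedy brute-force step: permute the top-n documents of a
    # pre-ordered list, score each permutation by a utility/unfairness
    # combination, keep the best. theta < 0.5 emphasizes fairness (this run);
    # theta > 0.5 would emphasize utility (run uognleSgbrUtil).
    import itertools, math

    def exposure(rank):  # position bias at a 1-based rank
        return 1.0 / math.log2(rank + 1)

    def score(ranking, rel, authors, deficit, theta):
        utility = sum(exposure(i + 1) * rel[d] for i, d in enumerate(ranking))
        # Penalize exposure granted to authors already over-exposed in
        # earlier instances of this query (negative deficit = over-exposed).
        unfairness = sum(exposure(i + 1) * max(-deficit.get(a, 0.0), 0.0)
                         for i, d in enumerate(ranking) for a in authors[d])
        return theta * utility - (1 - theta) * unfairness

    def best_permutation(preordered, n, rel, authors, deficit, theta):
        head, tail = preordered[:n], preordered[n:]
        return max((list(p) + tail for p in itertools.permutations(head)),
                   key=lambda r: score(r, rel, authors, deficit, theta))

    rel = {"d1": 0.9, "d2": 0.8, "d3": 0.3}
    authors = {"d1": ["alice"], "d2": ["bob"], "d3": ["carol"]}
    deficit = {"alice": -0.5, "bob": 0.2}  # alice over-exposed so far
    print(best_permutation(["d1", "d2", "d3"], 2, rel, authors, deficit,
                           theta=0.3))  # fairness-emphasizing setting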

uognleSgbrUtil

  • Run ID: uognleSgbrUtil
  • Participant: uogTr
  • Track: Fair Ranking
  • Year: 2019
  • Submission: 9/2/2019
  • Task: main
  • MD5: c70e7b3184f33f30511628a13b7fc172
  • Run description: This run extends our base ranker (run uognleMaxUtil, purely based on relevance) by introducing an extra fairness-oriented component. It is based on a single-query greedy brute-force ranking approach that enforces individual fairness (i.e., for every author). For each instance of a query in the sequence, the documents are initially pre-ordered by a score function that combines (1) the estimated relevance of the document and (2) the discrepancy in deserved exposure for the authors of the document. This discrepancy results from the rankings output for previous occurrences of the same query. Given that the rankings are computed in a greedy/online way -- i.e., the rankings output for previous query instances in the sequence are kept fixed -- this pre-ordering helps compensate for the exposure granted in previous query instances. The approach then operates in a brute-force fashion on the top documents: it scores every ranking that permutes the top n documents of the pre-ordered list while keeping the remaining documents in place. The scoring is based on an estimate of the track's official measure, which combines utility (here derived from estimated relevance scores) and unfairness (here considered at the individual, author level). The ranking that obtains the best score is retained and output for the query; its computed utility and unfairness are stored for amortization over future instances of this query in the sequence. This run emphasizes utility (as opposed to fairness) in its ranking scoring scheme.