Runs - NeuCLIR 2022

CFDA_CLIP_dq

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CFDA_CLIP_dq
  • Participant: CFDA_CLIP
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: 79f8c15cd852e6e20c1c243f7b030384
  • Run description: query translation, monoT5 with dual-query reranking.

CFDA_CLIP_fas_clf

  • Run ID: CFDA_CLIP_fas_clf
  • Participant: CFDA_CLIP
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: 7c043f32a810b1e59efe9290212e06b4
  • Run description: sparse retrieval, query translation, crosslingual fine-tuned reranker

CFDA_CLIP_fas_L

  • Run ID: CFDA_CLIP_fas_L
  • Participant: CFDA_CLIP
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: a3351448a8dc1b21a16b90f547cc9682
  • Run description: query translation, sparse retrieval, T5-large reranking

CFDA_CLIP_rus_clf

  • Run ID: CFDA_CLIP_rus_clf
  • Participant: CFDA_CLIP
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 2c57b98cc9fcf111d523a5cf03af8768
  • Run description: sparse retrieval, query translation, crosslingual fine-tuned reranker

CFDA_CLIP_rus_dq

  • Run ID: CFDA_CLIP_rus_dq
  • Participant: CFDA_CLIP
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: a3873cc1ec2cfcef4a8dd7ce77069f0f
  • Run description: sparse retrieval, query translation, reranking with dual query

CFDA_CLIP_rus_L

  • Run ID: CFDA_CLIP_rus_L
  • Participant: CFDA_CLIP
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: cc9de1495f2fbac49937ced28e16f62b
  • Run description: query translation, sparse retrieval, T5-large reranker

CFDA_CLIP_zho_clf

  • Run ID: CFDA_CLIP_zho_clf
  • Participant: CFDA_CLIP
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: zho
  • MD5: 7016abbbfc3f5ddbc473a443bd0af7a1
  • Run description: sparse retrieval, query translation, crosslingual fine-tuned reranker

CFDA_CLIP_zho_dq

  • Run ID: CFDA_CLIP_zho_dq
  • Participant: CFDA_CLIP
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: zho
  • MD5: 8cf04582800d30030774c4944dd4a72c
  • Run description: sparse retrieval, query translation, reranking with dual query

CFDA_CLIP_zho_L

  • Run ID: CFDA_CLIP_zho_L
  • Participant: CFDA_CLIP
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: zho
  • MD5: f33fe29faa2221acc2723e8dbd1aa649
  • Run description: sparse retrieval, query translation, T5-large reranking

coe22-bm25-d-dt-fas

  • Run ID: coe22-bm25-d-dt-fas
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: fas
  • MD5: 5a46e279f1c9f9cabac110543c7d2e4d
  • Run description: English sparse retrieval was performed with BM25+RM3, using the descriptions written in English as queries. Default values were used for both BM25 and RM3. spaCy was used for tokenization and stemming.
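Many of these runs score documents with BM25 under default parameters. As a rough sketch of the per-term Okapi BM25 score (not the exact implementation or parameter values used in these runs; k1 and b here are illustrative defaults):

```python
import math

def bm25_term_score(tf, df, doc_len, avg_doc_len, n_docs, k1=0.9, b=0.4):
    """Per-term Okapi BM25 contribution.

    tf: term frequency in the document; df: document frequency of the term;
    n_docs: collection size. k1 and b are assumed defaults, not the
    track's actual configuration.
    """
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    tf_norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * tf_norm
```

All else equal, a document containing a query term twice scores higher than one containing it once, with diminishing returns controlled by k1 and length normalization by b.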

coe22-bm25-d-dt-rus

  • Run ID: coe22-bm25-d-dt-rus
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: rus
  • MD5: f58e1eff406db059d8888ae11ca015cf
  • Run description: English sparse retrieval was performed with BM25+RM3, using the descriptions written in English as queries. Default values were used for both BM25 and RM3. spaCy was used for tokenization and stemming.

coe22-bm25-d-dt-zho

  • Run ID: coe22-bm25-d-dt-zho
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: zho
  • MD5: b6af9db407853d5730fdb8270cc6e14c
  • Run description: English sparse retrieval was performed with BM25+RM3, using the descriptions written in English as queries. Default values were used for both BM25 and RM3. spaCy was used for tokenization and stemming.

coe22-bm25-d-ht-fas

  • Run ID: coe22-bm25-d-ht-fas
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: fas
  • MD5: 3c7a58db3e49b0a18275f22865e2df93
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and human-written queries were tokenized with spaCy and stemmed with Parsivar.

coe22-bm25-d-ht-rus

  • Run ID: coe22-bm25-d-ht-rus
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: rus
  • MD5: 8306058a2f6fd074ebf2044ee80730dd
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and human-written queries were tokenized and stemmed with spaCy.

coe22-bm25-d-ht-zho

  • Run ID: coe22-bm25-d-ht-zho
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: zho
  • MD5: 4749f2bcdb6ab7d8c84760643a20b161
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and human-written queries were tokenized with spaCy.

coe22-bm25-d-mt-fas

  • Run ID: coe22-bm25-d-mt-fas
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: fas
  • MD5: 9a29045884b78273197882db2a4796dc
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and machine-translated queries were tokenized with spaCy and stemmed with Parsivar.

coe22-bm25-d-mt-rus

  • Run ID: coe22-bm25-d-mt-rus
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: rus
  • MD5: 4fde2b708fe275222729967aad2c7d09
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and machine-translated queries were tokenized and stemmed with spaCy.

coe22-bm25-d-mt-zho

  • Run ID: coe22-bm25-d-mt-zho
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: zho
  • MD5: 641d11a1bbdb44e65913572c3f9e411d
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and machine-translated queries were tokenized with spaCy.

coe22-bm25-t-dt-fas

  • Run ID: coe22-bm25-t-dt-fas
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: fas
  • MD5: 5cfef65213a464e58ed43df8f24fdf53
  • Run description: English sparse retrieval was performed with BM25+RM3, using the titles written in English as queries. Default values were used for both BM25 and RM3. spaCy was used for tokenization and stemming.

coe22-bm25-t-dt-rus

  • Run ID: coe22-bm25-t-dt-rus
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: rus
  • MD5: 71244665b17a985740828b09b5ef56a9
  • Run description: English sparse retrieval was performed with BM25+RM3, using the titles written in English as queries. Default values were used for both BM25 and RM3. spaCy was used for tokenization and stemming.

coe22-bm25-t-dt-zho

  • Run ID: coe22-bm25-t-dt-zho
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: zho
  • MD5: 4feaf86a83fce222d17a699e54161b66
  • Run description: English sparse retrieval was performed with BM25+RM3, using the titles written in English as queries. Default values were used for both BM25 and RM3. spaCy was used for tokenization and stemming.

coe22-bm25-t-ht-rus

  • Run ID: coe22-bm25-t-ht-rus
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: rus
  • MD5: 9999fad6c0ca37dae5b0ca37e8ae0a8a
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and human-written queries were tokenized and stemmed with spaCy.

coe22-bm25-t-ht-zho

  • Run ID: coe22-bm25-t-ht-zho
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: zho
  • MD5: 982f5ba7d5f015e43767108dd94b78e3
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and human-written queries were tokenized with spaCy.

coe22-bm25-t-mt-fas

  • Run ID: coe22-bm25-t-mt-fas
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: fas
  • MD5: 557148fca161e97914f007ab7f01f0ee
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and machine-translated queries were tokenized with spaCy and stemmed with Parsivar.

coe22-bm25-t-mt-rus

  • Run ID: coe22-bm25-t-mt-rus
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: rus
  • MD5: 297b4ce6f11419a6cbef47eb5d29d907
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and machine-translated queries were tokenized and stemmed with spaCy.

coe22-bm25-t-mt-zho

  • Run ID: coe22-bm25-t-mt-zho
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: zho
  • MD5: 0aef26ba6ef4575c27049c50dfe5448b
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and machine-translated queries were tokenized with spaCy. Topic 128 retrieves no documents.

coe22-bm25-td-dt-fas

  • Run ID: coe22-bm25-td-dt-fas
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: fas
  • MD5: 554e4519f30f0526a6aa120d62738f57
  • Run description: English sparse retrieval was performed with BM25+RM3, using the titles and descriptions written in English as queries. This is the track-provided baseline run.

coe22-bm25-td-dt-rus

  • Run ID: coe22-bm25-td-dt-rus
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: rus
  • MD5: 3ea5866dad44efd5d4809e72ecc94e08
  • Run description: English sparse retrieval was performed with BM25+RM3, using the titles and descriptions written in English as queries. This is the track-provided baseline run.

coe22-bm25-td-dt-zho

  • Run ID: coe22-bm25-td-dt-zho
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: zho
  • MD5: ae24a8f32d13e81de9f3a403c403fdc5
  • Run description: English sparse retrieval was performed with BM25+RM3, using the titles and descriptions written in English as queries. This is the track-provided baseline run.

coe22-bm25-td-ht-fas

  • Run ID: coe22-bm25-td-ht-fas
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: fas
  • MD5: 1eca72902c565fe92b591a4366d116ce
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and human-written queries were tokenized with spaCy and stemmed with Parsivar.

coe22-bm25-td-ht-rus

  • Run ID: coe22-bm25-td-ht-rus
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: rus
  • MD5: 0d44be39d68505b9681f675439d7479f
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and human-written queries were tokenized and stemmed with spaCy.

coe22-bm25-td-ht-zho

  • Run ID: coe22-bm25-td-ht-zho
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: zho
  • MD5: 07f6b701d573d0d8e162ad4e8c0ad013
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and human-written queries were tokenized with spaCy.

coe22-bm25-td-mt-fas

  • Run ID: coe22-bm25-td-mt-fas
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: fas
  • MD5: 59ad900812170592f590d1fa975f6bca
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and machine-translated queries were tokenized with spaCy and stemmed with Parsivar.

coe22-bm25-td-mt-rus

  • Run ID: coe22-bm25-td-mt-rus
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: rus
  • MD5: fb830ffcac9399fc3fa810ba96d3b70c
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and machine-translated queries were tokenized and stemmed with spaCy.

coe22-bm25-td-mt-zho

  • Run ID: coe22-bm25-td-mt-zho
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: zho
  • MD5: ecef2f58bcc5d7062037d4e0f5cd3b1c
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and machine-translated queries were tokenized with spaCy.

coe22-man-fas

  • Run ID: coe22-man-fas
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: fas
  • MD5: 493ae978188cb0a031fe504738887c0b
  • Run description: Monolingual sparse retrieval was performed with BM25. Top-ranked documents were those the annotator marked as relevant. If annotators identified at least one relevant document, they could use HiCAL to recommend more documents to judge. Lists were augmented with unexamined documents using a weighted round-robin approach based on the number of relevant documents the annotator discovered with that query.

coe22-man-rus

  • Run ID: coe22-man-rus
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: rus
  • MD5: ed758ff25ee0892cf012cf67ba3842f6
  • Run description: Monolingual sparse retrieval was performed with BM25. Top-ranked documents were those the annotator marked as relevant. If annotators identified at least one relevant document, they could use HiCAL to recommend more documents to judge. Lists were augmented with unexamined documents using a weighted round-robin approach based on the number of relevant documents the annotator discovered with that query.

coe22-man-zho

  • Run ID: coe22-man-zho
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: zho
  • MD5: 340c127b6479b1c1c985df816bd106d9
  • Run description: Monolingual sparse retrieval was performed with BM25. Top-ranked documents were those the annotator marked as relevant. If annotators identified at least one relevant document, they could use HiCAL to recommend more documents to judge. Lists were augmented with unexamined documents using a weighted round-robin approach based on the number of relevant documents the annotator discovered with that query.

coe22-mhq-fas_colxtt

  • Run ID: coe22-mhq-fas_colxtt
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: fas
  • MD5: c79f82f0ef7049d9d4a421ab5ce62eb5
  • Run description: ColBERT-X with translate-train, searching with queries manually created by the COE annotators. If multiple queries in the original manual search bring up relevant documents, the ranked lists from those queries are fused.

coe22-mhq-rus_colxtt

  • Run ID: coe22-mhq-rus_colxtt
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: rus
  • MD5: ebf684a56c1f869c04704fea242c25ea
  • Run description: ColBERT-X with translate-train, searching with queries manually created by the COE annotators. If multiple queries in the original manual search bring up relevant documents, the ranked lists from those queries are fused.

coe22-mhq-zho_colxtt

  • Run ID: coe22-mhq-zho_colxtt
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: zho
  • MD5: 1504f4714ad302f9eb12d0f27befe9bb
  • Run description: ColBERT-X with translate-train, searching with queries manually created by the COE annotators. If multiple queries in the original manual search bring up relevant documents, the ranked lists from those queries are fused.

coe22-tdq-fas_colxmtt

  • Run ID: coe22-tdq-fas_colxmtt
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: fas
  • MD5: 68c107ed2ded3750dfd9f0bae74e980d
  • Run description: ColBERT-X with multilingual translate-train that searches using title+description queries.

coe22-tdq-fas_colxtt

  • Run ID: coe22-tdq-fas_colxtt
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: fas
  • MD5: 1eef6146af4026f59ffeb35e2605f869
  • Run description: ColBERT-X with translate-train that searches using title+description queries.

coe22-tdq-rus_colxmtt

  • Run ID: coe22-tdq-rus_colxmtt
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: rus
  • MD5: 57a1c7379a243b7253fbd79c3a10859a
  • Run description: ColBERT-X with multilingual translate-train that searches using title+description queries.

coe22-tdq-rus_colxtt

  • Run ID: coe22-tdq-rus_colxtt
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: rus
  • MD5: 7ff89dfca1b00a31f9850f87128f6897
  • Run description: ColBERT-X with translate-train that searches using title+description queries.

coe22-tdq-zho_colxmtt

  • Run ID: coe22-tdq-zho_colxmtt
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: zho
  • MD5: bfc2aec76626b5fec3af1bc89bf77747
  • Run description: ColBERT-X with multilingual translate-train that searches using title+description queries.

coe22-tdq-zho_colxtt

  • Run ID: coe22-tdq-zho_colxtt
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: zho
  • MD5: 668dc9c8d21e1d74e1e9491a637f0fbf
  • Run description: ColBERT-X with translate-train that searches using title+description queries.

coe22-tq-fas_colxmtt

  • Run ID: coe22-tq-fas_colxmtt
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: fas
  • MD5: b24e93afb5646b7ddd8530b8b3304443
  • Run description: ColBERT-X with multilingual translate-train that searches using title queries.

coe22-tq-fas_colxtt

  • Run ID: coe22-tq-fas_colxtt
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: fas
  • MD5: e8d939274928dd2ed5007074578b6481
  • Run description: ColBERT-X with translate-train that searches using title queries.

coe22-tq-rus_colxmtt

  • Run ID: coe22-tq-rus_colxmtt
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: rus
  • MD5: 70859436e8c823e458449f8dc52150dd
  • Run description: ColBERT-X with multilingual translate-train that searches using title queries.

coe22-tq-rus_colxtt

  • Run ID: coe22-tq-rus_colxtt
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: rus
  • MD5: f2ab35eaa138da9e1a2cb7b6ea79fe01
  • Run description: ColBERT-X with translate-train that searches using title queries.

coe22-tq-zho_colxmtt

  • Run ID: coe22-tq-zho_colxmtt
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: zho
  • MD5: f2232afc3af6ef6f4653e86165d80fc5
  • Run description: ColBERT-X with multilingual translate-train that searches using title queries.

coe22-tq-zho_colxtt

  • Run ID: coe22-tq-zho_colxtt
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: manual
  • Task: zho
  • MD5: 365beaf27650f40e5e7921d7b53e9c37
  • Run description: ColBERT-X with translate-train that searches using title queries.

F4-PyTerrierPL2

  • Run ID: F4-PyTerrierPL2
  • Participant: F4
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: manual
  • Task: fas
  • MD5: fb6c47ec31b434d0c38236e783c4e975
  • Run description: A traditional information retrieval method: after pre-processing the topic and document data, search results are generated with the built-in PL2 weighting model on the PyTerrier platform.

F4-PyTerrierPL2-ru

  • Run ID: F4-PyTerrierPL2-ru
  • Participant: F4
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: manual
  • Task: rus
  • MD5: 9866d82e51bdd9966e2ba5e8f7f47883
  • Run description: A traditional information retrieval method: after pre-processing the topic and document data, search results are generated with the built-in PL2 weighting model on the PyTerrier platform.

F4-PyTerrierPL2-zh

  • Run ID: F4-PyTerrierPL2-zh
  • Participant: F4
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: manual
  • Task: zho
  • MD5: 2f3d528d34f55d5ff0e76efe27f70d8a
  • Run description: A traditional information retrieval method: after pre-processing the topic and document data, search results are generated with the built-in PL2 weighting model on the PyTerrier platform.

fa_2t

  • Run ID: fa_2t
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: fc5fd503639f43666233f1f7573a081d
  • Run description: Sparse BM25

fa_2tr

  • Run ID: fa_2tr
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: abb0fb13f523ae8a98448a144644e531
  • Run description: Sparse BM25 + Rocchio (Pseudo Relevance Feedback)

fa_3rrf

  • Run ID: fa_3rrf
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: 82c5aebf46fbfd43780884f2eba4389c
  • Run description: Sparse BM25

fa_3rrf2

  • Run ID: fa_3rrf2
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: 7723c71113c8ad8d9d10f596f5535145
  • Run description: Sparse BM25 + Rocchio (Pseudo Relevance Feedback)

fa_3rrfprf

  • Run ID: fa_3rrfprf
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: 165a09fb665c8681ba2e4e773956528d
  • Run description: Sparse BM25 + Rocchio (Pseudo Relevance Feedback)

fa_dense-rrf.BM25.SPLADE

  • Run ID: fa_dense-rrf.BM25.SPLADE
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: 75cd64258ef7143642bab42e315e2914
  • Run description: For Fa: we RRF the runfiles with tags dense-rrf.prf, the BM25 baseline run fa_3rrfprf, and the SPLADE run rocchio.fa.official_ht.dt, keeping the top 1k. For Ru: we RRF the runfiles with tags dense-rrf.prf, the BM25 baseline run ru_2rrfprf, and the SPLADE run rocchio.ru.official_ht.dt, keeping the top 1k. For Zh: we RRF the runfiles with tags dense-rrf.prf and the BM25 baseline run zh_4rrfprf, keeping the top 1k.

fa_dense-rrf.prf

  • Run ID: fa_dense-rrf.prf
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: d0cc79f1bc2bf68e50ce59196ed70c9a
  • Run description: We RRF the runfiles with tags xdpr.msmarco.official_ht.d.prf, xdpr.xor-hn-mmarco.EN-q.d.prf, and xdpr.msmarco.2rrf-mt-q.all.prf, and keep the top 1k.
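Reciprocal rank fusion (RRF), used throughout these runs to combine runfiles, can be sketched as follows. This is a minimal sketch: the document IDs and the conventional smoothing constant k=60 are illustrative assumptions, not details of these submissions.

```python
from collections import defaultdict

def reciprocal_rank_fusion(runs, k=60, depth=1000):
    """Fuse ranked lists: score(d) = sum over runs of 1 / (k + rank(d))."""
    scores = defaultdict(float)
    for run in runs:
        for rank, doc_id in enumerate(run, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Sort by fused score, descending, and keep the top `depth` documents.
    return sorted(scores, key=scores.get, reverse=True)[:depth]

run_a = ["d1", "d2", "d3"]
run_b = ["d3", "d1", "d4"]
print(reciprocal_rank_fusion([run_a, run_b]))  # → ['d1', 'd3', 'd2', 'd4']
```

Because RRF uses only ranks, not raw scores, it needs no score normalization across heterogeneous systems (BM25, SPLADE, dense retrieval), which is why it is a common choice for combining such runs.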

fa_dt

  • Run ID: fa_dt
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: d657190e39bcbd8bd1cf6d07f9d0e75c
  • Run description: Sparse BM25

fa_dtr

  • Run ID: fa_dtr
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: 6e4936f689fdf6e79e80fea4963577d5
  • Run description: Sparse BM25 + Rocchio (Pseudo Relevance Feedback)

fa_qt

  • Run ID: fa_qt
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: 6c33cc91828b9145981aebae84245193
  • Run description: Sparse BM25

fa_qtr

  • Run ID: fa_qtr
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: 54da87d62891fe5bca6e2bc39a76ffdf
  • Run description: Sparse BM25 + Rocchio (Pseudo Relevance Feedback)

fa_xdpr.mm.2rrf-mtQ.all.R

  • Run ID: fa_xdpr.mm.2rrf-mtQ.all.R
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: b849f661904b3d28f8fe85e3f3615872
  • Run description: We use the dense retrieval model DPR, initialized with XLM-R and fine-tuned on the MS MARCO dataset for 40 epochs. The model is then applied to the NeuCLIR dataset in a zero-shot manner. No sparse model is involved. At inference we obtain {2,4} different runfiles using different versions of the translated queries, then RRF all runfiles and keep the top 1k. We used the Rocchio implementation provided by Pyserini (config: --prf-depth 5 --rocchio-topk 5 --rocchio-alpha 0.4 --rocchio-beta 0.6)
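The Rocchio pseudo-relevance feedback step with the stated configuration (alpha 0.4, beta 0.6, feedback depth 5) amounts to blending the query embedding with the centroid of the top-retrieved document embeddings. This is a sketch of the underlying formula only, not Pyserini's actual implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def rocchio_prf(query_vec, doc_vecs, prf_depth=5, alpha=0.4, beta=0.6):
    """Rocchio PRF on dense vectors: q' = alpha * q + beta * mean(top-k docs).

    doc_vecs is assumed to be ordered by retrieval rank, so the first
    prf_depth rows are the pseudo-relevant feedback documents.
    """
    feedback = np.asarray(doc_vecs)[:prf_depth].mean(axis=0)
    return alpha * np.asarray(query_vec) + beta * feedback
```

The updated query vector is then used for a second dense retrieval pass; alpha/beta control how far the query drifts toward the feedback centroid.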

fa_xdpr.ms.oht.d.R

  • Run ID: fa_xdpr.ms.oht.d.R
  • Participant: huaweimtl
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: 116a1c331c0c34560a0c135a9503456f
  • Run description: We use the dense retrieval model DPR, initialized with XLM-R and fine-tuned on the MS MARCO dataset for 40 epochs. The model is then applied to the NeuCLIR dataset in a zero-shot manner. No sparse model is involved. We used the Rocchio implementation provided by Pyserini (config: --prf-depth 5 --rocchio-topk 5 --rocchio-alpha 0.4 --rocchio-beta 0.6)

fa_xdpr.xorHn-mm.EN.d.R

  • Run ID: fa_xdpr.xorHn-mm.EN.d.R
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: 9810acbe031970d2cde183b2b3782443
  • Run description: We use the dense retrieval model DPR, initialized with XLM-R. The model is first trained on XOR-TyDi data, involving all languages, then fine-tuned on the mMARCO dataset. We use the official small training set of mMARCO, but we map the query and document ids into different languages, e.g., the query in Chinese and the document in English. Note that we use all the languages in mMARCO, so a query-document pair may involve languages that are neither the target language nor English (e.g., Arabic). The model is then applied to the NeuCLIR dataset in a zero-shot manner, scoring English queries directly against target-language documents. No sparse model is involved.

hltcoe22tht

  • Run ID: hltcoe22tht
  • Participant: hltcoe-jhu
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/22/2022
  • Type: manual
  • Task: fas
  • MD5: 879f2f49b44f8de4edf69aa44faae517
  • Run description: Sparse retrieval with BM25 and RM3, both with default settings. Documents and queries were tokenized with spaCy and stemmed with Parsivar.

huaweimtl-fa-c-hybrid2

  • Run ID: huaweimtl-fa-c-hybrid2
  • Participant: huaweimtl
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: 1452b8343664a3fa4b85dfe10777b380
  • Run description: Hybrid model that combines dense retrieval with several sparse models

huaweimtl-fa-c-hybrid3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: huaweimtl-fa-c-hybrid3
  • Participant: huaweimtl
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: 155874b726146f7e337b0564acd2b8a8
  • Run description: Hybrid model that combines dense retrieval with several sparse models

huaweimtl-fa-m-hybrid1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: huaweimtl-fa-m-hybrid1
  • Participant: huaweimtl
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: 2618f4af0f17d88399e546053b0c0f61
  • Run description: Hybrid model that combines dense retrieval with several sparse models

huaweimtl-ru-c-hybrid2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: huaweimtl-ru-c-hybrid2
  • Participant: huaweimtl
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: d36bb1138b685aa510f253ecd9ff6c07
  • Run description: Hybrid model that combines dense retrieval with several sparse models

huaweimtl-ru-c-hybrid3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: huaweimtl-ru-c-hybrid3
  • Participant: huaweimtl
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: c8534be4b716bf7b3edb6d06c4c8bd06
  • Run description: Hybrid model that combines dense retrieval with several sparse models

huaweimtl-ru-m-hybrid1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: huaweimtl-ru-m-hybrid1
  • Participant: huaweimtl
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: 3aaf0c2ba505195193ddad328047abc8
  • Run description: Hybrid model that combines dense retrieval with several sparse models

huaweimtl-zh-c-hybrid2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: huaweimtl-zh-c-hybrid2
  • Participant: huaweimtl
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: b8b134babb43b4ab6c921561635a6d95
  • Run description: Hybrid model that combines dense retrieval with several sparse models

huaweimtl-zh-c-hybrid3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: huaweimtl-zh-c-hybrid3
  • Participant: huaweimtl
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: 21b70594d063f85a51dd0421955f8698
  • Run description: Hybrid model that combines dense retrieval with several sparse models

huaweimtl-zh-m-hybrid1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: huaweimtl-zh-m-hybrid1
  • Participant: huaweimtl
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: de1b10b0618da9697461b0837e7269bd
  • Run description: Hybrid model that combines dense retrieval with several sparse models

IDACCS-baseline

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-baseline
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: fas
  • MD5: c7aa37f832ae41a62a2f680f0c588634
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO; normalized

IDACCS-baseline_raranking

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-baseline_raranking
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: fas
  • MD5: 10691abb9600c2e14865ae501b521a37
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO; normalized

IDACCS-baseline_rrank_rus

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-baseline_rrank_rus
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: rus
  • MD5: 295a7def807ad026323e36e9b4f4522d
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO; normalized

IDACCS-baseline_rrank_zho

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-baseline_rrank_zho
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: zho
  • MD5: e64a5881112de2dd5542c7751e8ec9e2
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO; normalized

IDACCS-baseline_rus

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-baseline_rus
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: rus
  • MD5: 170aeadcd78f586b90fe9b42e70677cf
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO; normalized

IDACCS-baseline_zho

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-baseline_zho
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: zho
  • MD5: a3ce95842fff14c157587da8016aefb0
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO; normalized

IDACCS-run1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-run1
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: fas
  • MD5: 80d687f53f1b2dfd09a15e3c9f641826
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO; normalized

IDACCS-run1_reranking

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-run1_reranking
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: fas
  • MD5: 38bc95c1ea6684abd0cd1cf4e754f452
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO; normalized

IDACCS-run1_rrank_rus

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-run1_rrank_rus
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: rus
  • MD5: 394805a1bd190c1b812ebf578f14072d
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO; normalized

IDACCS-run1_rrank_zho

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-run1_rrank_zho
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: zho
  • MD5: 42ee18bce62f21d56f408c6e34ce3d55
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO; normalized

IDACCS-run1_rus

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-run1_rus
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: rus
  • MD5: c740241c5b9395b826963a036cc0bdb4
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO; normalized

IDACCS-run1_zho

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-run1_zho
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: zho
  • MD5: 07185af85ed24aaefd9ddaad309c58ec
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO; normalized

IDACCS-run2_fas

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-run2_fas
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: fas
  • MD5: 857052b92079d24e351c92ce33711173
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO

IDACCS-run2_rrank_fas

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-run2_rrank_fas
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: fas
  • MD5: 9007c7cfde1993724382484d424ec22d
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO

IDACCS-run2_rrank_rus

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-run2_rrank_rus
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: rus
  • MD5: 29b7109d604a3d937b455f20e86d9ae0
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO

IDACCS-run2_rrank_zho

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-run2_rrank_zho
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: zho
  • MD5: a07355017e459ecdbdc1fb97a407cab4
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO

IDACCS-run2_rus

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-run2_rus
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: rus
  • MD5: 1bf7b9faf30c1741a6b7a5af69ac4a41
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO

IDACCS-run2_zho

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IDACCS-run2_zho
  • Participant: IDACCS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: zho
  • MD5: 691c6cf2620684dbbe5cfce7a427879d
  • Run description: mvlearn fine-tuned LaBSE model trained on MS MARCO

jhumc.fa4.td.rf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: jhumc.fa4.td.rf
  • Participant: jhu.mcnamee
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: 00171e049dd736bd693c28abbb4b54d7
  • Run description: Document Translation. Non-neural, language model IR. Character 4-grams. Relevance feedback.

jhumc.fa5.td.ce.rf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: jhumc.fa5.td.ce.rf
  • Participant: jhu.mcnamee
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/29/2022
  • Type: manual
  • Task: fas
  • MD5: 439d7648abd902fdebd374b9b823226b
  • Run description: Document Translation. Non-neural, language model IR. Character 5-grams. Relevance feedback with Collection Enrichment.

jhumc.fa5.td.rf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: jhumc.fa5.td.rf
  • Participant: jhu.mcnamee
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: 4d161cc2ec52b5517f5fc7ddd78a43a6
  • Run description: Document Translation. Non-neural, language model IR. Character 5-grams. Relevance feedback.

jhumc.fawords.td.rf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: jhumc.fawords.td.rf
  • Participant: jhu.mcnamee
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: 7d705dccfc7f9022bf3e4f5711ffbe41
  • Run description: Document Translation. Non-neural, language model IR. Words. Relevance feedback.

jhumc.ru4.td.rf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: jhumc.ru4.td.rf
  • Participant: jhu.mcnamee
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: 39933ea7d44d62148bde8223b83b3f4e
  • Run description: Document Translation. Non-neural, language model IR. Character 4-grams. Relevance feedback.

jhumc.ru5.td.ce.rf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: jhumc.ru5.td.ce.rf
  • Participant: jhu.mcnamee
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: cb8e9b1718cba74a23a2be787161198e
  • Run description: Document Translation. Non-neural, language model IR. Character 5-grams. Relevance feedback with Collection Enrichment.

jhumc.ru5.td.rf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: jhumc.ru5.td.rf
  • Participant: jhu.mcnamee
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: ee0a79e4a5be0f6c1ac3a9ea0d2ec2b0
  • Run description: Document Translation. Non-neural, language model IR. Character 5-grams. Relevance feedback.

jhumc.ruwords.td.rf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: jhumc.ruwords.td.rf
  • Participant: jhu.mcnamee
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: 657c69f017c4e69a67a7ca2bcbc5d6fa
  • Run description: Document Translation. Non-neural, language model IR. Words. Relevance feedback.

jhumc.zh4.td.rf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: jhumc.zh4.td.rf
  • Participant: jhu.mcnamee
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: c3e870a9ffab82f409d149015da847c7
  • Run description: Document Translation. Non-neural, language model IR. Character 4-grams. Relevance feedback.

jhumc.zh5.td.ce.rf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: jhumc.zh5.td.ce.rf
  • Participant: jhu.mcnamee
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: 8c592b08f2e5ba467c71daa9f2173112
  • Run description: Document Translation. Non-neural, language model IR. Character 5-grams. Relevance feedback with Collection Enrichment.

jhumc.zh5.td.rf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: jhumc.zh5.td.rf
  • Participant: jhu.mcnamee
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: e8aae0f218d6c6fdad604cd9bb334bc4
  • Run description: Document Translation. Non-neural, language model IR. Character 5-grams. Relevance feedback.

jhumc.zhwords.td.rf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: jhumc.zhwords.td.rf
  • Participant: jhu.mcnamee
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: 7fea921004b94f10f02504841adc4ac0
  • Run description: Document Translation. Non-neural, language model IR. Words. Relevance feedback.

KASYS-run

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: KASYS-run
  • Participant: KASYS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: fas
  • MD5: 65e814ffc01ba9a3ed9f21c1345a27d7
  • Run description: Dense neural retrieval using XLM-RoBERTa and FAISS, trained on English MS MARCO passage data. We tried to reproduce ColBERT-X on the test collection.

KASYS-run-rus

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: KASYS-run-rus
  • Participant: KASYS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: rus
  • MD5: 099a893c6d5f3ff777c2c1d94ca2f0f5
  • Run description: Dense neural retrieval using XLM-RoBERTa and FAISS, trained on English MS MARCO passage data. We tried to reproduce ColBERT-X on the test collection.

KASYS-run-zho

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: KASYS-run-zho
  • Participant: KASYS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/25/2022
  • Type: auto
  • Task: zho
  • MD5: e45c950f11177a49c14b41afccfc4995
  • Run description: Dense neural retrieval using XLM-RoBERTa and FAISS, trained on English MS MARCO passage data. We tried to reproduce ColBERT-X on the test collection.

KASYS_one_model-fas

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: KASYS_one_model-fas
  • Participant: KASYS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: 825c1a0baea0e9fb91ca6fd8a8aea3d7
  • Run description: Dense neural retrieval using XLM-RoBERTa and FAISS, trained on the three neuMARCO languages and English MS MARCO. We add a language tag to each query.

KASYS_one_model-rus

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: KASYS_one_model-rus
  • Participant: KASYS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 29d5f9551fa6e1b83baf87e6bbf5067a
  • Run description: Dense neural retrieval using XLM-RoBERTa and FAISS, trained on the three neuMARCO languages and English MS MARCO. We add a language tag to each query.

KASYS_one_model-zho

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: KASYS_one_model-zho
  • Participant: KASYS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: 066708404b421d095f94cc44378e88e2
  • Run description: Dense neural retrieval using XLM-RoBERTa and FAISS, trained on the three neuMARCO languages and English MS MARCO. We add a language tag to each query.

KASYS_onemodel-rerank-fas

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: KASYS_onemodel-rerank-fas
  • Participant: KASYS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: 987df17cf3346bf61dbd5fce502b5f4d
  • Run description: Dense neural retrieval using XLM-RoBERTa and FAISS, trained on the three neuMARCO languages and English MS MARCO. We add a language tag to each query and re-rank the baselines provided by the track coordinators.

KASYS_onemodel-rerank-rus

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: KASYS_onemodel-rerank-rus
  • Participant: KASYS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 6f83f709cd6f6359061ba333ebf06813
  • Run description: Dense neural retrieval using XLM-RoBERTa and FAISS, trained on the three neuMARCO languages and English MS MARCO. We add a language tag to each query and re-rank the baselines provided by the track coordinators.

KASYS_onemodel-rerank-zho

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: KASYS_onemodel-rerank-zho
  • Participant: KASYS
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: 2f03488068a0985965e559939a58da43
  • Run description: Dense neural retrieval using XLM-RoBERTa and FAISS, trained on the three neuMARCO languages and English MS MARCO. We add a language tag to each query and re-rank the baselines provided by the track coordinators.

NLE_fa_adhoc

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLE_fa_adhoc
  • Participant: NLE
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: f94ce8674a7f3fe88597f3920d38b277
  • Run description: Hybrid model without reranking. Ensemble of BM25 (sparse lexical) + SPLADE (sparse neural) + ColBERT (dense neural multi-representation)
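Ensembling heterogeneous retrievers like the BM25/SPLADE/ColBERT combination above requires putting their incomparable raw scores on a common scale. A minimal, hypothetical sketch (the actual NLE fusion method and weights are not specified in the description) using min-max normalization followed by a weighted sum:

```python
def minmax_fuse(runs, weights=None):
    """Fuse results from several retrievers by min-max normalizing
    each system's scores to [0, 1] and summing (optionally weighted).
    `runs` maps system name -> {doc_id: raw score}."""
    weights = weights or {name: 1.0 for name in runs}
    fused = {}
    for name, scores in runs.items():
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # guard against a degenerate run
        for doc, s in scores.items():
            fused[doc] = fused.get(doc, 0.0) + weights[name] * (s - lo) / span
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Toy scores: BM25 scores and SPLADE scores live on different scales.
runs = {
    "bm25":   {"d1": 12.0, "d2": 9.0, "d3": 3.0},
    "splade": {"d2": 55.0, "d1": 40.0, "d4": 20.0},
}
print(minmax_fuse(runs))
```

Documents found by several systems accumulate score from each, which is the usual source of the hybrid gain over any single retriever.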

NLE_fa_adhoc_rr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLE_fa_adhoc_rr
  • Participant: NLE
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: 3823fbbb940808571f2beca249b18389
  • Run description: Hybrid model with reranking. 1st stage: ensemble of BM25 (sparse lexical) + SPLADE on MT queries (sparse neural) + SPLADE on MT docs (sparse neural) + ColBERT (dense neural multi-representation). 2nd stage: ensemble of two castorini/monoT5-3b rerankers on English queries and MT docs (one reranking the ensemble, the other reranking only the SPLADE MT-docs run)

NLE_fa_mono

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLE_fa_mono
  • Participant: NLE
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: 70b09dc8065cec976cc0edc87eed9dd1
  • Run description: Hybrid model without reranking. Ensemble of BM25 (sparse lexical) + SPLADE (sparse neural) + ColBERT (dense neural multi-representation)

NLE_fa_mono_rr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLE_fa_mono_rr
  • Participant: NLE
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: 46450de7a654c28317913ed6b98e1216
  • Run description: Hybrid model. 1st stage: ensemble of BM25 (sparse lexical) + SPLADE (sparse neural) + ColBERT (dense neural multi-representation). 2nd stage: reranking with XLMINFO and XLM-RoBERTa. Final: ensemble of SPLADE + ColBERT + XLMINFO + XLM-RoBERTa

NLE_ru_adhoc

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLE_ru_adhoc
  • Participant: NLE
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 131241c1d943bd2f10236af01c255d44
  • Run description: Hybrid model without reranking. 1st stage: ensemble of BM25 (sparse lexical) + SPLADE on MT queries (sparse neural) + ColBERT (dense neural multi-representation)

NLE_ru_adhoc_rr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLE_ru_adhoc_rr
  • Participant: NLE
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 323d4eb8c4f4c42fa770fdbfb47c3f4a
  • Run description: Hybrid model with reranking. 1st stage: ensemble of BM25 (sparse lexical) + SPLADE on MT queries (sparse neural) + SPLADE on MT docs (sparse neural) + ColBERT (dense neural multi-representation). 2nd stage: castorini/monoT5-3b reranking on English queries and MT docs (one reranking the ensemble, the other reranking only the SPLADE MT-docs run). Final: ensemble of all models

NLE_ru_mono

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLE_ru_mono
  • Participant: NLE
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 2394419ca8d8c9cc31f4b3951dedeb91
  • Run description: Hybrid model. 1st stage: ensemble of BM25 (sparse lexical) + SPLADE (sparse neural) + ColBERT (dense neural multi-representation). 2nd stage: reranking with XLMINFO and XLM-RoBERTa.

NLE_ru_mono_rr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLE_ru_mono_rr
  • Participant: NLE
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: d529af1357d69ea6fed38db75e5d6f41
  • Run description: Hybrid model with reranking. 1st stage: ensemble of BM25 (sparse lexical) + SPLADE (sparse neural) + ColBERT (dense neural multi-representation). 2nd stage: reranking with XLMINFO and XLM-RoBERTa. Final: ensemble of SPLADE + ColBERT + XLMINFO + XLM-RoBERTa

p1.fa.hoc

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: p1.fa.hoc
  • Participant: NM.unicamp
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: f11174d3010d52671e9aebf3173d1a80
  • Run description: SPLADE on Farsi queries translated by Bing + mT5 reranker on the Bing-translated description only

p1.ru.hoc

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: p1.ru.hoc
  • Participant: NM.unicamp
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: 73da26eaf39a83bbe272ace02a3b09e9
  • Run description: SPLADE on Russian queries translated by Bing + mT5 reranker on the Bing-translated description only

p1.zh.hoc

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: p1.zh.hoc
  • Participant: NM.unicamp
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: 044a2e130404f0f441902759f7ab6b77
  • Run description: Anserini BM25 with RRF over 4 different translations (not tainted with human translation) + mT5 reranker on the English description and title

p2.fa.rerank

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: p2.fa.rerank
  • Participant: NM.unicamp
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: e0f855df4af22ba9647a1dfc5e6b07bd
  • Run description: mT5 reranker on the Bing-translated description only

p2.ru.rerank

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: p2.ru.rerank
  • Participant: NM.unicamp
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: 21f5e72c4c8b1d7ff4c92ca2af719e50
  • Run description: mT5 reranker on the Bing-translated description only

p2.zh.rerank

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: p2.zh.rerank
  • Participant: NM.unicamp
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: 3ac173f74d789282c31aa7feb2a95d56
  • Run description: mT5 reranker on the English description and title

p3.fa.mono

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: p3.fa.mono
  • Participant: NM.unicamp
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: 17c800467172aff0f2b7393d254892a6
  • Run description: SPLADE on Farsi queries + mT5 reranker on the description only

p3.ru.mono

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: p3.ru.mono
  • Participant: NM.unicamp
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: 2a67dc17c73569ccb9cee81777a7822e
  • Run description: SPLADE on Russian queries + mT5 reranker on the description only

p3.zh.mono

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: p3.zh.mono
  • Participant: NM.unicamp
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: 2e9c46ec15ff6a15462f85eaab46f4d1
  • Run description: Anserini BM25 with RRF over 2 different translations (tainted with human translation) + mT5 reranker on the human-translated description and title

p4.fa.hoc

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: p4.fa.hoc
  • Participant: NM.unicamp
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: 710a1e1abf8d6dfe86bc4579ccd5471a
  • Run description: Anserini BM25 with RRF over 3 different translations (not tainted with human translation) + mT5 reranker on the Bing translation of the description

p4.ru.hoc

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: p4.ru.hoc
  • Participant: NM.unicamp
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: f525e1458dde2a732781b09c3edec12e
  • Run description: Anserini BM25 with RRF over 2 different translations (not tainted with human translation) + mT5 reranker on the Bing translation of the description

RietRandomRun

Results | Participants | Input | Summary | Appendix

  • Run ID: RietRandomRun
  • Participant: RIET
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: f187102c6f2ac6bc8cafdc52e25393a4
  • Run description: Random run

RietRandomRun2

Results | Participants | Input | Summary | Appendix

  • Run ID: RietRandomRun2
  • Participant: RIET
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: zho
  • MD5: b2e234073286c3cf8c45e186ba966cf8
  • Run description: Random run

RietRandomRun3

Results | Participants | Input | Summary | Appendix

  • Run ID: RietRandomRun3
  • Participant: RIET
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 4b9e5e991806d479e1111b1a6217a258
  • Run description: Random run

ru_2rrf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ru_2rrf
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 138db94e76dbd2883080253e97c36023
  • Run description: Sparse BM25

ru_2rrf2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ru_2rrf2
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 12c4ff635d5ff33d2ae72e55a308129f
  • Run description: Sparse BM25 + Rocchio (Pseudo Relevance Feedback)

ru_2rrfprf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ru_2rrfprf
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: f50cf57ef6bfd749160ffc65b3921b68
  • Run description: Sparse BM25 + Rocchio (Pseudo Relevance Feedback)

ru_2t

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ru_2t
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 39945e178f8c1ccd49445a6fd1cce53c
  • Run description: Sparse BM25

ru_2tr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ru_2tr
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 4a04af59814f83b057678ea48fae7ae9
  • Run description: Sparse BM25 + Rocchio (Pseudo Relevance Feedback)

ru_dense-rrf.BM25.SPLADE

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ru_dense-rrf.BM25.SPLADE
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: ecf2b91e1835c3fbc23c5faa5d1f5a8d
  • Run description: For Fa: we RRF the runfiles tagged dense-rrf.prf, the BM25 baseline run fa_3rrfprf, and the SPLADE run rocchio.fa.official_ht.dt, keeping the top 1k. For Ru: we RRF the runfiles tagged dense-rrf.prf, the BM25 baseline run ru_2rrfprf, and the SPLADE run rocchio.ru.official_ht.dt, keeping the top 1k. For Zh: we RRF the runfiles tagged dense-rrf.prf and the BM25 baseline run zh_4rrfprf, keeping the top 1k.
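The RRF step used throughout these fusion runs can be sketched as follows — a generic reciprocal rank fusion over rank-ordered runfiles; the `k=60` constant is the commonly used default, an assumption since the description does not state the value used.

```python
def reciprocal_rank_fusion(rankings, k=60, depth=1000):
    """Reciprocal Rank Fusion: each document accumulates
    1 / (k + rank) over every run it appears in; documents are
    then re-sorted and the top `depth` (here top 1k) are kept."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    fused = sorted(scores, key=scores.get, reverse=True)
    return fused[:depth]

# Toy runfiles: three systems, partially agreeing rankings.
run_a = ["d1", "d2", "d3"]
run_b = ["d2", "d3", "d1"]
run_c = ["d2", "d1", "d4"]
print(reciprocal_rank_fusion([run_a, run_b, run_c]))  # → ['d2', 'd1', 'd3', 'd4']
```

Because RRF uses only ranks, it fuses systems with incomparable score scales (dense, BM25, SPLADE) without any normalization step.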

ru_dense-rrf.prf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ru_dense-rrf.prf
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: 258f4a450aef59391fd891246a4db649
  • Run description: We RRF the runfiles tagged xdpr.msmarco.official_ht.d.prf, xdpr.xor-hn-mmarco.EN-q.d.prf, and xdpr.msmarco.2rrf-mt-q.all.prf, and keep the top 1k.

ru_dt

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ru_dt
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 930e4353c3baa5c5ed4b6757334fb3e7
  • Run description: Sparse BM25

ru_dtr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ru_dtr
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 6547590df85fbd87f784aeba0f77ff39
  • Run description: Sparse BM25 + Rocchio (Pseudo Relevance Feedback)

ru_qt

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ru_qt
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 01d5f44020932fe4c7863305f4247749
  • Run description: Sparse BM25

ru_qtr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ru_qtr
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: c84759a61f6c811264886e956a415286
  • Run description: Sparse BM25 + Rocchio (Pseudo Relevance Feedback)

ru_xdpr.mm.2rrf-mtQ.all.R

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ru_xdpr.mm.2rrf-mtQ.all.R
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: 1e8e95e11ce5fd5cb2da7a5138bf5f1c
  • Run description: We use the dense retrieval model DPR, initialized with XLM-R and fine-tuned on the MS MARCO dataset for 40 epochs. The model is then applied to the NeuCLIR dataset in a zero-shot manner. No sparse model is involved. At inference we obtain {2,4} different runfiles using different versions of the translated queries, then RRF all runfiles and keep the top 1k. We used the Rocchio implementation provided by Pyserini (config: --prf-depth 5 --rocchio-topk 5 --rocchio-alpha 0.4 --rocchio-beta 0.6)

ru_xdpr.ms.oht.d.R

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ru_xdpr.ms.oht.d.R
  • Participant: huaweimtl
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: d08fffdd60114f4c27c884cf4b174445
  • Run description: We use the dense retrieval model DPR, initialized with XLM-R and fine-tuned on the MS MARCO dataset for 40 epochs. The model is then applied to the NeuCLIR dataset in a zero-shot manner. No sparse model is involved. We used the Rocchio implementation provided by Pyserini (config: --prf-depth 5 --rocchio-topk 5 --rocchio-alpha 0.4 --rocchio-beta 0.6).

ru_xdpr.xorHn-mm.EN.d.R

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ru_xdpr.xorHn-mm.EN.d.R
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: 104f8738924a3d3699e084bc7f1defa0
  • Run description: We use the dense retrieval model DPR, initialized with XLM-R. The model is first trained on XOR-TyDi data, involving all languages, then fine-tuned on the mMARCO dataset. We use the official small training set of mMARCO, but we map the query and document ids into different languages, e.g., query in Chinese and document in English. Note that we use all the languages in mMARCO, so the query and document may involve languages that are neither the target language nor English (e.g., Arabic). The model is then applied to the NeuCLIR dataset in a zero-shot manner, where relevance is scored directly between English queries and target-language documents. No sparse model is involved.

splade_farsi_dt

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: splade_farsi_dt
  • Participant: NLE
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: de6a84702a09d1a43557dd339da02d95
  • Run description: SPLADE model (sparse neural) trained on English MS MARCO (available at: https://huggingface.co/naver/splade-cocondenser-selfdistil).

splade_farsi_ht

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: splade_farsi_ht
  • Participant: NLE
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: 52c67fc9ea1d5d6922ebbeb56b1eef94
  • Run description: SPLADE model with a "distilbert"-sized transformer architecture, first pretrained on farsi-neuMARCO documents and farsi-neuclir documents. The model is then fine-tuned on MS MARCO with queries and documents translated to Farsi, using in-batch negatives.

splade_farsi_mt

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: splade_farsi_mt
  • Participant: NLE
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: fas
  • MD5: 5f9591c584999c65f27dde279c0e33b0
  • Run description: SPLADE model with a "distilbert"-sized transformer architecture, first pretrained on farsi-neuMARCO documents and farsi-neuclir documents. The model is then fine-tuned on MS MARCO with queries and documents translated to Farsi, using in-batch negatives.

splade_russian_dt

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: splade_russian_dt
  • Participant: NLE
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 850b53bcd9e9af9403d0a5c707c2ee25
  • Run description: SPLADE model (sparse neural) trained on English MS MARCO (available at: https://huggingface.co/naver/splade-cocondenser-selfdistil).

splade_russian_ht

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: splade_russian_ht
  • Participant: NLE
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: bcd70224777af06c9181e0a13e011964
  • Run description: SPLADE model (sparse neural) with a "distilbert"-sized transformer architecture, first pretrained on russian-MMARCO documents, russian-MrTyDi, and russian-neuclir documents. The model is then fine-tuned on MS MARCO with queries and documents translated to Russian, using in-batch negatives.

splade_russian_mt

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: splade_russian_mt
  • Participant: NLE
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: rus
  • MD5: 3f8593212f61cad312ec541037f5bac3
  • Run description: SPLADE model (sparse neural) with a "distilbert"-sized transformer architecture, first pretrained on russian-MMARCO documents, russian-MrTyDi, and russian-neuclir documents. The model is then fine-tuned on MS MARCO with queries and documents translated to Russian, using in-batch negatives.

umcp_hmm_fa

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umcp_hmm_fa
  • Participant: umcp
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: fas
  • MD5: 1dadab2fa31a7f21e35ee611ce45e92d
  • Run description: This run uses a Probabilistic Structured Query (PSQ) implemented using an HMM.

umcp_hmm_ru

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umcp_hmm_ru
  • Participant: umcp
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: rus
  • MD5: ade46a47cea76d64682a0e0c00aa0e9f
  • Run description: This run uses a Probabilistic Structured Query (PSQ) implemented with an HMM.

umcp_hmm_zh

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umcp_hmm_zh
  • Participant: umcp
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: 4a9f1eb79d282ac33d14340a189bbe80
  • Run description: This run uses a Probabilistic Structured Query (PSQ) implemented using an HMM.

zh_2t

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: zh_2t
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: zho
  • MD5: 53e6517556b0332f17c7c6d2899f7017
  • Run description: Sparse BM25 retrieval. Fusion of English-query retrieval on translated documents and human-translated queries on the provided documents.

zh_2tr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: zh_2tr
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: zho
  • MD5: 3a8c1fc3a49d8b1d83031073e85ca505
  • Run description: Sparse BM25 retrieval + Rocchio pseudo relevance feedback. Fusion of English-query retrieval on translated documents and human-translated queries on the provided documents.

zh_4rrf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: zh_4rrf
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: zho
  • MD5: 71db7ff7920df4b489cb6ebfa9dabc1d
  • Run description: Sparse BM25 Reciprocal Rank Fusion of {Caiyun, Youdao, Bing, Huawei} translated queries x {title, desc+title} (BM25).
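The fusion step named in this run combines one BM25 ranking per translated query. As a generic illustration (not the participants' code), Reciprocal Rank Fusion scores each document by the sum of reciprocal ranks across the input rankings; the constant k=60 below is the conventional default, assumed rather than taken from the run description:

```python
from collections import defaultdict

def rrf(rankings, k=60, depth=1000):
    """Reciprocal Rank Fusion (illustrative sketch).

    rankings: list of ranked doc-id lists (best first).
    Returns fused (doc_id, score) pairs, best first, truncated to `depth`
    (the "keep the top 1k" step mentioned in several descriptions).
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] += 1.0 / (k + rank)  # reciprocal-rank contribution
    fused = sorted(scores.items(), key=lambda item: -item[1])
    return fused[:depth]
```

Documents ranked highly by several translations accumulate the largest fused scores, which is why RRF is robust to any single poor translation.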

zh_4rrf2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: zh_4rrf2
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: zho
  • MD5: 9a337720f475f0b1f348f707daf652d2
  • Run description: Sparse BM25 + Rocchio (Pseudo Relevance Feedback)

zh_4rrfprf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: zh_4rrfprf
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: zho
  • MD5: ca75f4799c0fe6959d61eae2c74dc696
  • Run description: Sparse BM25 + Rocchio (pseudo relevance feedback). Reciprocal Rank Fusion of {Caiyun, Youdao, Bing, Huawei, human translations of topics} x {title, desc+title} (BM25+Rocchio).

zh_dense-rrf.BM25

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: zh_dense-rrf.BM25
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: c8c7857bb0c1e03e371cfed9033e7520
  • Run description: For Fa: we apply RRF to the runfiles tagged dense-rrf.prf, the BM25 baseline run fa_3rrfprf, and the SPLADE run rocchio.fa.official_ht.dt, keeping the top 1k. For Ru: we apply RRF to the runfiles tagged dense-rrf.prf, the BM25 baseline run ru_2rrfprf, and the SPLADE run rocchio.ru.official_ht.dt, keeping the top 1k. For Zh: we apply RRF to the runfiles tagged dense-rrf.prf and the BM25 baseline run zh_4rrfprf, keeping the top 1k.

zh_dense-rrf.prf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: zh_dense-rrf.prf
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: 411b75723e66af72c5da63386630748f
  • Run description: We apply RRF to the runfiles tagged xdpr.msmarco.official_ht.d.prf, xdpr.xor-hn-mmarco.EN-q.d.prf, and xdpr.msmarco.2rrf-mt-q.all.prf, and keep the top 1k.

zh_dt

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: zh_dt
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: zho
  • MD5: 02c6c40b4fdf5ee932459a8c55923da1
  • Run description: Sparse BM25, document translation (query: English, docs: Sockeye-translated Chinese), description+title.

zh_dtr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: zh_dtr
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: zho
  • MD5: 5384c142b4bc76b04faf9181e82ec875
  • Run description: Sparse BM25 + Rocchio (pseudo relevance feedback), document translation (query: English, docs: Sockeye-translated Chinese), description+title.

zh_qt

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: zh_qt
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: zho
  • MD5: 5fa551693b321b66d077d140538f4ef1
  • Run description: Sparse BM25, query translation (query: human-translated Chinese, docs: Chinese), description+title topics.

zh_qtr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: zh_qtr
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/26/2022
  • Type: auto
  • Task: zho
  • MD5: 231f07ac3bcc82b67f36ca24576d75e9
  • Run description: Sparse BM25 + Rocchio pseudo relevance feedback, query translation (query: human-translated Chinese, docs: Chinese), description+title.

zh_xdpr.mm.4rrf-mtQ.all.R

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: zh_xdpr.mm.4rrf-mtQ.all.R
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: 6f0daad17462be1c66ff04054d2c7198
  • Run description: We use the dense retrieval model DPR, initialized with XLM-R and fine-tuned on the MS MARCO dataset for 40 epochs. The model is then applied to the NeuCLIR dataset in a zero-shot manner. No sparse model is involved. At inference we obtain {2,4} different runfiles using different versions of the translated queries, then apply RRF to all runfiles and keep the top 1k. We used the Rocchio implementation provided by Pyserini (config: --prf-depth 5 --rocchio-topk 5 --rocchio-alpha 0.4 --rocchio-beta 0.6).

zh_xdpr.ms.oht.d.R

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: zh_xdpr.ms.oht.d.R
  • Participant: huaweimtl
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: 68e48b4badefe6771439c219b183c3c4
  • Run description: We use the dense retrieval model DPR, initialized with XLM-R and fine-tuned on the MS MARCO dataset for 40 epochs. The model is then applied to the NeuCLIR dataset in a zero-shot manner. No sparse model is involved. We used the Rocchio implementation provided by Pyserini (config: --prf-depth 5 --rocchio-topk 5 --rocchio-alpha 0.4 --rocchio-beta 0.6).

zh_xdpr.xorHn-mm.EN.d.R

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: zh_xdpr.xorHn-mm.EN.d.R
  • Participant: h2oloo
  • Track: NeuCLIR
  • Year: 2022
  • Submission: 7/27/2022
  • Type: auto
  • Task: zho
  • MD5: e302fc3348305c84188b19d287f9e33e
  • Run description: We use the dense retrieval model DPR, initialized with XLM-R. The model is first trained on XOR-TyDi data, involving all languages, then fine-tuned on the mMARCO dataset. We use the official small training set of mMARCO, but we map the query and document ids into different languages, e.g., query in Chinese and document in English. Note that we use all the languages in mMARCO, so the query and document may involve languages that are neither the target language nor English (e.g., Arabic). The model is then applied to the NeuCLIR dataset in a zero-shot manner, where relevance is scored directly between English queries and target-language documents. No sparse model is involved.