Runs - AToMiC 2023

b_bm25

  • Run ID: b_bm25
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: suggest
  • MD5: 115d45e5c2cbd03ed9ab1c2edc9c0197
  • Run description: Baseline: BM25 with Anserini default parameters and simple text preprocessing (see the sketch below).

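A minimal sketch of what a BM25 baseline of this kind typically looks like with Pyserini, the Python front end to Anserini. The index path, query, and topic ID are placeholders rather than the track's actual data; Anserini's default BM25 parameters are k1=0.9, b=0.4.

    # Hedged sketch of an Anserini-style BM25 baseline via Pyserini.
    from pyserini.search.lucene import LuceneSearcher

    searcher = LuceneSearcher('indexes/atomic-texts')  # placeholder index path
    searcher.set_bm25(k1=0.9, b=0.4)                   # Anserini default parameters

    hits = searcher.search('history of the telescope', k=1000)  # placeholder query
    for rank, hit in enumerate(hits, start=1):
        # Standard TREC run format: qid Q0 docid rank score tag
        print(f'q1 Q0 {hit.docid} {rank} {hit.score:.4f} b_bm25')
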
b_bm25_i2t

  • Run ID: b_bm25_i2t
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: promote
  • MD5: ec6fc9abdb245247459f2eab18307d2f
  • Run description: bm25, simple preprocessing

b_clip_vitb32_laion

  • Run ID: b_clip_vitb32_laion
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: suggest
  • MD5: 2293151cd40eabb913af0eb6e8facfe7
  • Run description: OpenCLIP ViT-B/32, dense retrieval (see the sketch below).

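The CLIP runs are dense bi-encoder retrieval: queries and images are embedded into a shared space, L2-normalized, and ranked by inner product. A minimal sketch with OpenCLIP, assuming the public LAION-2B ViT-B/32 checkpoint; the model tag and inputs are illustrative and may differ from the checkpoints actually used.

    # Hedged sketch of CLIP dense scoring with OpenCLIP (ViT-B/32, LAION weights).
    import torch
    import open_clip
    from PIL import Image

    model, _, preprocess = open_clip.create_model_and_transforms(
        'ViT-B-32', pretrained='laion2b_s34b_b79k')      # illustrative checkpoint
    tokenizer = open_clip.get_tokenizer('ViT-B-32')
    model.eval()

    with torch.no_grad():
        text = tokenizer(['history of the telescope'])   # placeholder query
        q = model.encode_text(text)
        q = q / q.norm(dim=-1, keepdim=True)

        img = preprocess(Image.open('example.jpg')).unsqueeze(0)  # placeholder image
        d = model.encode_image(img)
        d = d / d.norm(dim=-1, keepdim=True)

    # Cosine similarity of the unit vectors serves as the retrieval score.
    print((q @ d.T).item())

In a full run, all image (or text) embeddings would be precomputed and searched with exact or approximate nearest-neighbor lookup rather than scored one pair at a time.
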
b_clip_vitb32_laion_i2t

  • Run ID: b_clip_vitb32_laion_i2t
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: promote
  • MD5: 95617bb5ade17e309c7ef7f9fec7880d
  • Run description: clip, dense

b_clip_vitg14_laion

  • Run ID: b_clip_vitg14_laion
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: suggest
  • MD5: 7a3b16578d76f16d0b2e060235a7c241
  • Run description: clip, dense

b_clip_vitg14_laion_i2t

  • Run ID: b_clip_vitg14_laion_i2t
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: promote
  • MD5: e32742bc9ee6a5269804b62c7af8c6c4
  • Run description: clip, dense

b_clip_vith14_laion

  • Run ID: b_clip_vith14_laion
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: suggest
  • MD5: 40c3912c3d11d3e03cf8180797e89359
  • Run description: clip, dense

b_clip_vith14_laion_i2t

  • Run ID: b_clip_vith14_laion_i2t
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: promote
  • MD5: 12f5122c8799fe164db77898547da3ae
  • Run description: clip, dense

b_clip_vitl14_laion

  • Run ID: b_clip_vitl14_laion
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: suggest
  • MD5: 87f68ebcad5a6eead02641ef2b13ee8b
  • Run description: clip, dense

b_clip_vitl14_laion_i2t

  • Run ID: b_clip_vitl14_laion_i2t
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: promote
  • MD5: aa643c2061f9bc865a8692a3d233e85b
  • Run description: clip, dense

b_flava

  • Run ID: b_flava
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: suggest
  • MD5: b4c69914b937c6d5ab1dedc8e308376b
  • Run description: flava dense model, without reranking

b_flava_i2t

  • Run ID: b_flava_i2t
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: promote
  • MD5: 3687edcbd09e078806173f1ef956ca5a
  • Run description: flava, dense

b_fsum_all

  • Run ID: b_fsum_all
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: suggest
  • MD5: 33087675b2e88894ae4d07c7f2d4851a
  • Run description: Weighted-sum (wsum) fusion of BM25, SPLADE++, FLAVA-full, and CLIP ViT-L/H/G (see the sketch below).

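The fsum runs combine their component runs by weighted-sum (wsum) fusion. The exact recipe is not stated; one common variant, sketched below, min-max normalizes each run's scores per query and adds them with per-run weights. The file names and weights here are placeholders.

    # Hedged sketch of weighted-sum fusion over TREC-format run files.
    from collections import defaultdict

    def read_run(path):
        """Read a 'qid Q0 docid rank score tag' run file into {qid: {docid: score}}."""
        run = defaultdict(dict)
        with open(path) as f:
            for line in f:
                qid, _, docid, _, score, _ = line.split()
                run[qid][docid] = float(score)
        return run

    def minmax(scores):
        """Min-max normalize one query's scores into [0, 1]."""
        lo, hi = min(scores.values()), max(scores.values())
        return {d: (s - lo) / (hi - lo + 1e-9) for d, s in scores.items()}

    def wsum_fuse(runs, weights):
        """Weighted sum of per-query normalized scores across runs."""
        fused = defaultdict(lambda: defaultdict(float))
        for run, w in zip(runs, weights):
            for qid, scores in run.items():
                for docid, s in minmax(scores).items():
                    fused[qid][docid] += w * s
        return fused

    runs = [read_run(p) for p in ('bm25.run', 'splade_pp.run', 'clip_vitl.run')]  # placeholders
    fused = wsum_fuse(runs, weights=(0.3, 0.4, 0.3))                              # placeholder weights
    for qid, scores in fused.items():
        ranked = sorted(scores.items(), key=lambda kv: -kv[1])[:1000]
        for rank, (docid, s) in enumerate(ranked, start=1):
            print(f'{qid} Q0 {docid} {rank} {s:.4f} b_fsum_all')
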
b_fsum_all_i2t

  • Run ID: b_fsum_all_i2t
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: promote
  • MD5: 762c3019eb58bab38b775c4517caa81e
  • Run description: Weighted-sum (wsum) fusion of BM25, SPLADE++, FLAVA-full, and CLIP ViT-L/H/G.

b_splade_pp

  • Run ID: b_splade_pp
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: suggest
  • MD5: 752ed95528a83808e524b2acac388607
  • Run description: SPLADE++, ensemble-distil variant (see the sketch below).

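SPLADE++ is a learned sparse model: text is passed through a masked-language-model head, and each vocabulary term receives the weight max over token positions of log(1 + ReLU(logit)). A minimal sketch with the public ensemble-distil checkpoint; the checkpoint name and input are illustrative and may not match the runs' exact model.

    # Hedged sketch of SPLADE++ term weighting with Hugging Face Transformers.
    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    ckpt = 'naver/splade-cocondenser-ensembledistil'     # illustrative checkpoint
    tok = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForMaskedLM.from_pretrained(ckpt)
    model.eval()

    inputs = tok('history of the telescope', return_tensors='pt')  # placeholder text
    with torch.no_grad():
        logits = model(**inputs).logits                  # shape (1, seq_len, vocab_size)

    # SPLADE activation: log(1 + ReLU(logit)), max-pooled over non-padding tokens.
    mask = inputs['attention_mask'].unsqueeze(-1)
    weights = torch.max(torch.log1p(torch.relu(logits)) * mask, dim=1).values.squeeze(0)

    # The non-zero entries form the sparse vocabulary vector used for retrieval.
    top = torch.topk(weights, k=10)
    print([(tok.convert_ids_to_tokens(int(i)), round(float(w), 3))
           for i, w in zip(top.indices, top.values)])

Query-document relevance is then the dot product of the two sparse vocabulary vectors, which can be served from a standard inverted index.
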
b_splade_pp_i2t

  • Run ID: b_splade_pp_i2t
  • Participant: h2oloo
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: promote
  • MD5: e83a98f8e345de7a8aafb0a5e7ece56b
  • Run description: splade++, ensemble-distill

finetune_base

  • Run ID: finetune_base
  • Participant: uogTr
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/4/2023
  • Type: auto
  • Task: suggest
  • MD5: d0fc41a9a36e5aad62b8228061a18812
  • Run description: Both supervised and unsupervised training methods.

finetune_base_i2t

  • Run ID: finetune_base_i2t
  • Participant: uogTr
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/4/2023
  • Type: auto
  • Task: promote
  • MD5: 7d182af748695e0a51074932d2efb2cb
  • Run description: Both supervised and unsupervised training methods.

finetune_large_i2t

  • Run ID: finetune_large_i2t
  • Participant: uogTr
  • Track: AToMiC
  • Year: 2023
  • Submission: 9/1/2023
  • Type: auto
  • Task: promote
  • MD5: b1b3aad68465f77dee08715ed0924766
  • Run description: Both supervised and unsupervised training methods.

finetune_large_t2i

  • Run ID: finetune_large_t2i
  • Participant: uogTr
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: suggest
  • MD5: 5349a09dca9ef5eb249a9124f24f97a6
  • Run description: Large-size model using supervised and unsupervised training methods.

pretrain_base

  • Run ID: pretrain_base
  • Participant: uogTr
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/4/2023
  • Type: auto
  • Task: suggest
  • MD5: d50b878287afc38ba77bfa1096bd8b81
  • Run description: Both supervised and unsupervised training methods.

pretrain_base_i2t

  • Run ID: pretrain_base_i2t
  • Participant: uogTr
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/5/2023
  • Type: auto
  • Task: promote
  • MD5: b3c8c37ccaa82312d239317bbed24da6
  • Run description: Both supervised and unsupervised learning.

UvA-IRLab

  • Run ID: UvA-IRLab
  • Participant: IRLab-Amsterdam
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/1/2023
  • Type: auto
  • Task: suggest
  • MD5: d92d9590a673e2dc987036f97dc2b153
  • Run description: For this run, we adapt a multimodal sparse retrieval model. The model uses CLIP as a backbone and features a masked language modeling (MLM) head and a multilayer perceptron (MLP) component (see the sketch below).

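In learned sparse retrieval (LSR) of this kind, both sides are typically mapped to weight vectors over a shared vocabulary and relevance is their sparse dot product: an MLP head re-weights the tokens already present in the text, while an MLM head expands the input (a caption or a CLIP-encoded image) over the full vocabulary. The toy sketch below only illustrates that scoring step; the term ids, weights, and vocabulary size are hypothetical, and the encoders themselves are not shown.

    # Toy sketch of LSR scoring: sparse dot product of vocabulary-sized weight vectors.
    import torch

    VOCAB_SIZE = 30522  # distilbert-base-uncased vocabulary size

    def lsr_score(query_weights: torch.Tensor, doc_weights: torch.Tensor) -> float:
        """Only vocabulary terms active on both sides contribute to the score."""
        return torch.dot(query_weights, doc_weights).item()

    # Hypothetical activations: the query keeps a few of its own terms (MLP-style),
    # while the image/caption side activates an expanded term set (MLM-style).
    q = torch.zeros(VOCAB_SIZE)
    q[torch.tensor([2132, 3075, 8954])] = torch.tensor([1.2, 0.8, 0.5])
    d = torch.zeros(VOCAB_SIZE)
    d[torch.tensor([2132, 8954, 15000])] = torch.tensor([0.9, 1.1, 0.4])

    print(lsr_score(q, d))  # only the shared terms (2132 and 8954) contribute
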
UvA-IRLab-mlp-mlm-cap1

  • Run ID: UvA-IRLab-mlp-mlm-cap1
  • Participant: UAmsterdam
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: suggest
  • MD5: 8235ad4226928d5c84c5652570bc9d5a
  • Run description: LSR model (MLP, MLM) on image captions (no image) - Query encoder: MLP distilbert-base-uncased - Caption encoder: MLP distilbert-base-uncased - In-batch negatives, batch_size = 64

UvA-IRLab-mlp-mlm-caption

  • Run ID: UvA-IRLab-mlp-mlm-caption
  • Participant: UAmsterdam
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: suggest
  • MD5: 73d7fdec83b37a5310b17a1fb0da7b7e
  • Run description: LSR model (MLP, MLM) on image captions (no image) - Query encoder: MLP distilbert-base-uncased - Caption encoder: MLP distilbert-base-uncased - In-batch negatives, batch_size = 64

UvA-IRLab-mlp-mlm-images

  • Run ID: UvA-IRLab-mlp-mlm-images
  • Participant: UAmsterdam
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: suggest
  • MD5: df46ffaca84098f4b4a511f0126aede4
  • Run description: LSR model (MLP, MLM) on images (no caption) - Query encoder: MLP distilbert-base-uncased - Image encoder: MLM CLIP - openai/clip-vit-base-patch32 - In-batch negatives, batch_size = 64

UvA-IRLab-mlp-mlm-img_cap

  • Run ID: UvA-IRLab-mlp-mlm-img_cap
  • Participant: UAmsterdam
  • Track: AToMiC
  • Year: 2023
  • Submission: 8/7/2023
  • Type: auto
  • Task: suggest
  • MD5: 380857ec2771a389d388f7fdf235a99a
  • Run description: LSR model (MLP, MLM) on images and captions - Query encoder: MLP distilbert-base-uncased - Caption encoder: MLP distilbert-base-uncased - Image encoder: CLIP openai/clip-vit-base-patch32 - In-batch negatives, batch_size = 64