Runs - Conversational Assistance 2022

CNC_AD

  • Run ID: CNC_AD
  • Participant: CFDA_CLIP
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic
  • Task: primary
  • MD5: 9de092b1acc72d6252b685eb91bea3c4
  • Run description: CNC_AD: automatically rewritten queries, convDPR retrieval, pointwise reranking

CNC_AD-C

  • Run ID: CNC_AD-C
  • Participant: CFDA_CLIP
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic
  • Task: primary
  • MD5: 78fdcb526a51858b2630935fcfbec9ad
  • Run description: CNC_AD-C: conversational DPR, conversational reranking model

CNC_AS

  • Run ID: CNC_AS
  • Participant: CFDA_CLIP
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic
  • Task: primary
  • MD5: b171999cb8ef9f8ca8623cf25d44a33c
  • Run description: CNC_AS: automatically rewritten queries, sparse document retrieval, pointwise reranking

CNC_AS-C

  • Run ID: CNC_AS-C
  • Participant: CFDA_CLIP
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic
  • Task: primary
  • MD5: 3b8d9bd5ac7693e2666ac6efac927bd6
  • Run description: CNC_AS-C: automatically rewritten queries from t5ntr, conversational reranking model

CNC_cqg

  • Run ID: CNC_cqg
  • Participant: CFDA_CLIP
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/21/2022
  • Type: automatic
  • Task: mixed
  • MD5: a901b3eabe5084304ad3b81adadc99af
  • Run description: T5 clarification question generation, query likelihood estimation

CNC_kwqlm2_cqg

  • Run ID: CNC_kwqlm2_cqg
  • Participant: CFDA_CLIP
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/21/2022
  • Type: automatic
  • Task: mixed
  • MD5: bd671b87c3cbfd7ba5791efc54fd79f0
  • Run description: Zero-shot dense retrieval, T5 clarification question generation

CNC_kwqlm_cqg

  • Run ID: CNC_kwqlm_cqg
  • Participant: CFDA_CLIP
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/21/2022
  • Type: automatic
  • Task: mixed
  • MD5: 79b73c5c721179ef16ec6309180787e5
  • Run description: Zero-shot dense retrieval, T5 clarification question generation

CNC_MD-C

  • Run ID: CNC_MD-C
  • Participant: CFDA_CLIP
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: manual
  • Task: primary
  • MD5: 3cbcd0c712d68c66cc7390615ca0cfdf
  • Run description: CNC_MD-C: manually rewritten queries, convDPR retrieval, conversational pointwise reranking

CNC_MS-C

  • Run ID: CNC_MS-C
  • Participant: CFDA_CLIP
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: manual
  • Task: primary
  • MD5: 288f53ee589b676986be6ede34d0f148
  • Run description: CNC_MS-C: manually rewritten queries, sparse document retrieval, conversational pointwise reranking

CNR_run1

  • Run ID: CNR_run1
  • Participant: CNR
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic
  • Task: primary
  • MD5: 4cc13a24c69634d05af5ac76a63fef8a
  • Run description: Indexing and querying with PyTerrier 0.7.1, based on Terrier 5.6, using traditional unsupervised sparse retrieval (e.g., DPH). The current user's utterance is enriched with topics extracted from the previous automatically rewritten utterance provided by CAsT. Only utterances are used for the expansion.
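
As a rough illustration of this setup, here is a minimal sketch of unsupervised DPH retrieval with PyTerrier; the index path and query strings are hypothetical placeholders, and the topic-extraction step from the previous rewritten utterance is not shown.

```python
# Minimal PyTerrier sketch of the DPH retrieval described above.
# Index path and query strings are hypothetical placeholders.
import pyterrier as pt

if not pt.started():
    pt.init()

index = pt.IndexFactory.of("./cast2022_index/data.properties")
dph = pt.BatchRetrieve(index, wmodel="DPH")

# Enrich the current utterance with topic terms taken from the
# previous automatically rewritten utterance (extraction not shown).
current_utterance = "what are its early symptoms"
topic_terms = "throat cancer"
results = dph.search(current_utterance + " " + topic_terms)
print(results[["docno", "score"]].head(10))
```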

CNR_run2

  • Run ID: CNR_run2
  • Participant: CNR
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: manual
  • Task: primary
  • MD5: 60e29809943cd7f23a5a787d1d924391
  • Run description: Indexing and querying with PyTerrier 0.7.1, based on Terrier 5.6, using traditional unsupervised sparse retrieval (e.g., DPH). The current user's utterance is enriched with topics extracted from the previous manually rewritten utterance provided by CAsT. Only utterances are used for the expansion.

CNR_run3

  • Run ID: CNR_run3
  • Participant: CNR
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic
  • Task: primary
  • MD5: 3ec6ededaba73d422858eb7d9dcd4fc8
  • Run description: Indexing and querying with PyTerrier 0.7.1, based on Terrier 5.6, using traditional unsupervised sparse retrieval (e.g., DPH). The current user's utterance is enriched with terms extracted from the first sentence of the response to the previous utterance. Utterances plus their responses are used for the expansion.

CNR_run4

  • Run ID: CNR_run4
  • Participant: CNR
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic
  • Task: primary
  • MD5: 3816da0dde3290e8551d8613c71e0c5d
  • Run description: Indexing and querying with PyTerrier 0.7.1, based on Terrier 5.6, using traditional unsupervised sparse retrieval (e.g., DPH). The current user's utterance is enriched with the top-5 frequent terms extracted from the response to the previous utterance. Utterances plus their responses are used for the expansion.

combine0.5

  • Run ID: combine0.5
  • Participant: HEATWAVE
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic
  • Task: primary
  • MD5: 53860e6de86934d5a541873e244c652f
  • Run description: Current and previous utterances are concatenated and used for BM25 retrieval (top 3k). An expanded query (from a rewriter trained on CANARD) is used in the monoT5 reranker. The top 30 passages are further reranked using duoT5, and their final scores interpolate the monoT5 and duoT5 scores (sketched below). Answer spans are extracted using SQuAD2.
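
The interpolation step might look like the following sketch; the 0.5 weight is inferred from the run name, and the exact formula is not stated in the description.

```python
# Hypothetical sketch of interpolating monoT5 and duoT5 scores for the
# top-30 passages; alpha=0.5 is inferred from the run name, not confirmed.
def interpolate_scores(mono_scores, duo_scores, alpha=0.5):
    """mono_scores/duo_scores: dicts mapping passage id -> score."""
    return {
        pid: alpha * mono_scores[pid] + (1 - alpha) * duo_scores[pid]
        for pid in duo_scores  # duoT5 rescored only the top 30
    }
```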

DEI-run1

  • Run ID: DEI-run1
  • Participant: iiia-unipd
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/25/2022
  • Type: automatic
  • Task: primary
  • MD5: affe726be05e43cfdf1ae6248d8a3c1e
  • Run description: NeuralCoref to resolve coreferences; POS tagging over previous utterances to carry out query expansion. BM25 for first-stage ranking. BERT for query understanding and final re-ranking of the first-stage results.

DEI-run2

  • Run ID: DEI-run2
  • Participant: iiia-unipd
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/25/2022
  • Type: automatic
  • Task: primary
  • MD5: bac1cefef53f92525fd4197b9f19d34b
  • Run description: NeuralCoref to resolve coreferences; POS tagging over previous utterances to carry out query expansion. BM25 for first-stage ranking. BERT for query understanding and final re-ranking of the first-stage results.

DEI-run4

  • Run ID: DEI-run4
  • Participant: iiia-unipd
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/31/2022
  • Type: automatic
  • Task: primary
  • MD5: 9de4ef4068185d2a925d65b6b4036fa3
  • Run description: NeuralCoref to resolve coreferences; POS tagging over previous utterances and BERT-based QA to carry out query expansion. BM25 for first-stage ranking. BERT for query understanding and final re-ranking of the first-stage results.

DEI-run5.json

  • Run ID: DEI-run5.json
  • Participant: iiia-unipd
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic
  • Task: primary
  • MD5: 60772ad21e46e7bdfc32d1f11acd48e5
  • Run description: NeuralCoref to resolve coreferences; POS tagging over previous utterances and BERT-based QA to carry out query expansion. LMD for first-stage ranking. BERT for query understanding and final re-ranking of the first-stage results.

duo_reranker

  • Run ID: duo_reranker
  • Participant: HEATWAVE
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic
  • Task: primary
  • MD5: 0e845aecfad5caafc6f3c021dce87e11
  • Run description: Current and previous utterances are concatenated and used for BM25 retrieval (top 3k). An expanded query (from a rewriter trained on CANARD) is used in the monoT5 reranker, and the top 30 passages are further reranked using duoT5 (a cascade of this kind is sketched below). Answer spans are extracted using SQuAD2.
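
A monoT5-then-duoT5 cascade can be sketched with the pygaggle toolkit; the description does not say which implementation the team used, so the checkpoints and the bm25_top_3000 input are placeholders.

```python
# Illustrative monoT5 -> duoT5 reranking cascade using pygaggle.
# Checkpoints are pygaggle defaults, not necessarily the run's models.
from pygaggle.rerank.base import Query, Text
from pygaggle.rerank.transformer import MonoT5, DuoT5

query = Query("what are the early symptoms of throat cancer")
candidates = [Text(p) for p in bm25_top_3000]  # placeholder: first-stage BM25 passages

mono = MonoT5()  # pointwise reranker (default: castorini/monot5-base-msmarco)
scored = mono.rerank(query, candidates)
scored.sort(key=lambda t: t.score, reverse=True)
top30 = scored[:30]

duo = DuoT5()    # pairwise reranker over the top 30
final = duo.rerank(query, top30)
final.sort(key=lambda t: t.score, reverse=True)
```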

gold

  • Run ID: gold
  • Participant: HEATWAVE
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: manual
  • Task: primary
  • MD5: 2b67a38cee6c8826c9ea9cf92e2d1f92
  • Run description: BM25 retrieval followed by a monoT5 reranker; answer spans are extracted using SQuAD2.

mi_task_0822_1

  • Run ID: mi_task_0822_1
  • Participant: udel_fang
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/22/2022
  • Type: automatic
  • Task: mixed
  • MD5: 4e03e39f51d64685cf660021b59718aa
  • Run description: A simple rule-based algorithm generates questions from templates (illustrated below). There are three types of questions: (1) reference-ambiguity clarification questions; (2) questions asking for additional descriptive information about an essential noun in the original query; (3) questions asking the user to complete a query that the system detected as incomplete.
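
The three question types could be realized with templates along these lines; the template wording and label names are illustrative assumptions, not the team's actual rules.

```python
# Hypothetical templates for the three rule-based question types.
TEMPLATES = {
    "ambiguous_reference": "Which {noun} are you referring to?",
    "describe_noun": "Could you tell me more about {noun}?",
    "incomplete_query": "Your question seems incomplete. Could you finish it?",
}

def generate_question(question_type: str, noun: str = "") -> str:
    return TEMPLATES[question_type].format(noun=noun)

print(generate_question("ambiguous_reference", "bank"))
# -> "Which bank are you referring to?"
```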

MLIA_DAC_splade

  • Run ID: MLIA_DAC_splade
  • Participant: MLIA-DAC
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/22/2022
  • Type: automatic
  • Task: primary
  • MD5: a113eafc660af563f2e46a1ec59ec419
  • Run description: This is an end-to-end approach. We extend SPLADE, a sparse retrieval model for ad hoc information retrieval, to the conversational use case. We encode the concatenation of the queries, and the concatenation of the current query and the previous answer, then aggregate these two embeddings to produce a contextualized query embedding (see the sketch below). Matching against document embeddings proceeds as in the original SPLADE.
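
A minimal sketch of the two-embedding aggregation, assuming SPLADE's log-saturated max-pooled term weights; the element-wise max used as the aggregation operator is an assumption, since the description does not specify it.

```python
# Sketch of SPLADE-style sparse embeddings and a contextualized query
# embedding; the max aggregation is an assumption, not the authors' choice.
import torch

def splade_weights(logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """logits: [seq_len, vocab]; mask: [seq_len]. Returns [vocab] term weights."""
    w = torch.log1p(torch.relu(logits)) * mask.unsqueeze(-1)
    return w.max(dim=0).values      # max-pooling over tokens, as in SPLADE

# q_hist: embedding of the concatenated queries
# q_ans:  embedding of the current query + previous answer
def contextualized_query(q_hist: torch.Tensor, q_ans: torch.Tensor) -> torch.Tensor:
    return torch.maximum(q_hist, q_ans)  # assumed aggregation operator

def score(q: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    return (q * d).sum()            # dot product, as in original SPLADE
```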

monot5

  • Run ID: monot5
  • Participant: HEATWAVE
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic
  • Task: primary
  • MD5: 6a37b3f5d04405867b3e31c2dfac22c3
  • Run description: Current and previous utterances are concatenated and used for BM25 retrieval. An expanded query (from a rewriter trained on QReCC) is used in the monoT5 reranker, and answer spans are extracted using SQuAD2.

splade_t5mm

  • Run ID: splade_t5mm
  • Participant: MLIA-DAC
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/23/2022
  • Type: automatic
  • Task: primary
  • MD5: 811578814d246e0214283a74a349e495
  • Run description: We add a reranking step to the run MLIA_DAC_splade. The input is the concatenation of the current query, past queries, 10 keywords identified in the first-stage sparse retrieval step, and the passage. We fine-tune MonoT5 to adapt to this input format, using the MSEMargin loss.

splade_t5mm_ens

  • Run ID: splade_t5mm_ens
  • Participant: MLIA-DAC
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/23/2022
  • Type: automatic
  • Task: primary
  • MD5: 7281e0d000912aff259060c3c5f725c6
  • Run description: We add a reranking step to the run MLIA_DAC_splade. The input is the concatenation of the current query, past queries, 10 keywords identified in the first-stage sparse retrieval step, and the passage. We fine-tune an ensemble of 4 MonoT5 instances to adapt to this input format, using the MSEMargin loss.

splade_t5mse

  • Run ID: splade_t5mse
  • Participant: MLIA-DAC
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/23/2022
  • Type: automatic
  • Task: primary
  • MD5: dd7afb38523ee7b644a52267216818cd
  • Run description: We add a reranking step to the run MLIA_DAC_splade. The input is the concatenation of the current query, past queries, 10 keywords identified in the first-stage sparse retrieval step, and the passage. We fine-tune MonoT5 to adapt to this input format, using the Mean Squared Error loss.

udinfo_best2021

  • Run ID: udinfo_best2021
  • Participant: udel_fang
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic
  • Task: primary
  • MD5: b72cf1b871cb4f3867006b91f8169f95
  • Run description: First-stage ranking with BM25 and a dense method, then monoT5 and duoT5 for second-stage reranking. Fusion: late fusion (one possible formulation is sketched below).
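
The description names the fusion method only as "late fusion"; a plain score-summing variant, shown purely as one possible reading, looks like this:

```python
# One possible late-fusion scheme (score sum over normalized run scores);
# the run's exact fusion formula is not given in the description.
def late_fusion(runs):
    """runs: list of dicts mapping doc id -> (already normalized) score."""
    fused = {}
    for run in runs:
        for doc, score in run.items():
            fused[doc] = fused.get(doc, 0.0) + score
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```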

udinfo_mi_b2021

  • Run ID: udinfo_mi_b2021
  • Participant: udel_fang
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic-mi
  • Task: primary
  • MD5: 9cba3fed4f4c18c541167f0d1ffbd6ea
  • Run description: Fusion is performed only on the top 4 methods: (1) reranking (monoT5, duoT5) over sparse NTR; (2) reranking (monoT5, duoT5) over dense NTR; (3) dense CQE; (4) reranking (monoT5) over sparse HQE.

udinfo_onlyd

  • Run ID: udinfo_onlyd
  • Participant: udel_fang
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic
  • Task: primary
  • MD5: 1763d92be4afdf76d63d789cc34907e1
  • Run description: This run is meant for comparison against the first automatic run, asking whether sparse retrieval is still needed now that dense retrieval has been introduced and shown to be promising in many papers.

udinfo_onlyd_mi

  • Run ID: udinfo_onlyd_mi
  • Participant: udel_fang
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic-mi
  • Task: primary
  • MD5: e47bdbf4107ab7d3cb25831430349522
  • Run description: 1. This run is meant for comparison against the first automatic run, asking whether sparse retrieval is still needed now that dense retrieval has been introduced and shown to be promising in many papers. 2. The run also measures the performance gap between NTR (MI) and NTR (original), to answer the question of whether MI is needed.

uis_cargoboat

  • Run ID: uis_cargoboat
  • Participant: UiS
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic
  • Task: primary
  • MD5: 547c949f4c3a5016465459415f69b2a9
  • Run description: First-pass retrieval uses BM25 with parameters tuned on the 2020 and 2021 CAsT datasets, followed by monoT5 and duoT5 reranking fine-tuned on MS MARCO. Sparse query rewriting is performed with a HuggingFace model fine-tuned on the CANARD dataset (see the sketch below), applied to queries pre-processed by an intent classifier. Previously rewritten utterances and the last canonical response are used as context. The rewritten query is expanded using pseudo-relevance feedback.
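
Query rewriting with a CANARD-fine-tuned T5 checkpoint from the HuggingFace hub might look like the sketch below; the checkpoint name and the "|||" turn separator follow a common public convention and are not confirmed details of this run.

```python
# Illustrative T5 query rewriting with HuggingFace Transformers.
# Checkpoint and input format are assumptions, not the run's exact setup.
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "castorini/t5-base-canard"  # a public CANARD-tuned rewriter
tok = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

# History turns and the current utterance joined with the "|||" separator.
context = "What is throat cancer? ||| Is it treatable? ||| What are its symptoms?"
ids = tok(context, return_tensors="pt").input_ids
out = model.generate(ids, max_length=64)
print(tok.decode(out[0], skip_special_tokens=True))
```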

uis_clearboat

  • Run ID: uis_clearboat
  • Participant: UiS
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/19/2022
  • Type: automatic
  • Task: mixed
  • MD5: 879a6b38f1e288acbdc539aca115ea16
  • Run description: We first fine-tune RoBERTa to filter out faulty clarifying questions. To this end, we set clarifying questions from ClariQ as a positive class and queries from previous CAsT editions as a negative class. Then, we rank the remaining clarifying questions with MPNet in a pairwise manner given a rewritten query. The rewritten query is generated with T5 fine-tuned on CANARD.

uis_duoboat

  • Run ID: uis_duoboat
  • Participant: UiS
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/26/2022
  • Type: automatic
  • Task: primary
  • MD5: 73789f6aa3f4c6bd4b13f24690623d69
  • Run description: First-pass retrieval uses BM25 with parameters tuned on the 2020 and 2021 CAsT datasets, followed by monoT5 and duoT5 reranking fine-tuned on MS MARCO. Query rewriting is performed with a HuggingFace model fine-tuned on the CANARD dataset. Previously rewritten utterances and the last canonical response are used as context.

uis_mixedboat

  • Run ID: uis_mixedboat
  • Participant: UiS
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic-mi
  • Task: primary
  • MD5: c7ab8a4f908082388d6eeb9dba74a602
  • Run description: We first fine-tune RoBERTa to filter out faulty clarifying questions (based on ClariQ and previous CAsT editions). Then, we rank the remaining clarifying questions with MPNet in a pairwise manner, given a query rewritten by T5 fine-tuned on CANARD. We classify answers into three classes: useless, useful answers, and useful questions; the classifier is trained on ClariQ (the handling logic is sketched below). If the first class is predicted, we leave the original query unchanged. If the second or third class is predicted, we append the answer or the question, respectively, to the query. The expanded query is then rewritten once more with the T5-based model. First-pass retrieval uses BM25 with PRF and parameters tuned on the 2020 and 2021 CAsT datasets, followed by monoT5 and duoT5 reranking fine-tuned on MS MARCO. Query rewriting is performed with a HuggingFace model fine-tuned on the CANARD dataset. Previously rewritten utterances and the last canonical response are used as context.
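
The answer-handling branch described above reduces to a small piece of control flow; the label strings below are paraphrases of the three classes, not the classifier's actual output labels.

```python
# Sketch of the three-way answer handling; label strings are assumed.
def expand_query(query: str, answer: str, label: str) -> str:
    if label == "useless":
        return query                 # leave the original query unchanged
    # "useful_answer" and "useful_question": append the answer text;
    # the result is then rewritten again by the T5-based rewriter.
    return f"{query} {answer}"
```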

uis_sparseboat

  • Run ID: uis_sparseboat
  • Participant: UiS
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/1/2022
  • Type: automatic
  • Task: primary
  • MD5: 8460ac64b11989f8830728951868704a
  • Run description: First-pass retrieval uses BM25 with parameters tuned on the 2020 and 2021 CAsT datasets, followed by monoT5 and duoT5 reranking fine-tuned on MS MARCO. Sparse query rewriting is performed with a HuggingFace model fine-tuned on the CANARD dataset. Previously rewritten utterances and the last canonical response are used as context. The rewritten query is expanded using pseudo-relevance feedback.

uis_vagueboat

  • Run ID: uis_vagueboat
  • Participant: UiS
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/19/2022
  • Type: automatic
  • Task: mixed
  • MD5: c026b0d8cae2af8243d9271bd20c6266
  • Run description: We first perform query rewriting with T5 fine-tuned on CANARD. Then, we perform retrieval and reranking with BM25 + duoT5, followed by a transformer-based topic model (top2vec) to extract subtopics from the top 100 documents. Finally, we construct a template-based clarifying question by appending up to three extracted subtopics to a template such as 'Are you interested in...' (illustrated below).
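
The final template step can be illustrated as follows; the exact template wording beyond "Are you interested in..." is an assumption.

```python
# Hypothetical construction of the template-based clarifying question
# from up to three top2vec subtopics.
def clarifying_question(subtopics, max_topics=3):
    chosen = subtopics[:max_topics]
    return f"Are you interested in {', '.join(chosen)}?"

print(clarifying_question(["symptoms", "treatment", "risk factors"]))
# -> "Are you interested in symptoms, treatment, risk factors?"
```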

uogTr-AT

  • Run ID: uogTr-AT
  • Participant: UoGTr
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/6/2022
  • Type: automatic
  • Task: primary
  • MD5: d47aa1ff09d6a1721a3e7b87a39301f0
  • Run description: This run uses T5QR trained on CANARD to rewrite user utterances given their context (previous questions + response text). In addition, we leverage a user feedback prediction model trained on last year's CAsT data to select the context for the T5QR model. For the retriever and reader, we use TCT-ColBERT, then rescore and extract an answer using monoQA.

uogTr-MI

  • Run ID: uogTr-MI
  • Participant: UoGTr
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/22/2022
  • Type: automatic
  • Task: mixed
  • MD5: ac65f634bfb449fd3f3474500c3befac
  • Run description: Our run uses retrieve-and-generate models, leveraging a Multi-Task Learning (MTL) T5 model trained to simultaneously generate clarification questions and select which turns require interaction. For question retrieval, we use PyTerrier GTR as the retriever and then a T5 reranker to score the retrieved and generated questions. All models are fine-tuned on the ClariQ dataset.

uogTr-MI-HB

  • Run ID: uogTr-MI-HB
  • Participant: UoGTr
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/6/2022
  • Type: automatic-mi
  • Task: primary
  • MD5: 915918edb6d7abb10dc507802cc2bc68
  • Run description: This run leverages the question-response pairs from the MI sub-task as context for our T5QR model when rewriting user utterances. In addition, we leverage a user feedback prediction model trained on last year's CAsT data to filter out context with a negative response (MI sub-task) before using it as context for T5QR. For the retriever and reader, we use a hybrid retriever (DPH + TCT-ColBERT), then rescore and extract an answer using monoQA.

uogTr-MT

  • Run ID: uogTr-MT
  • Participant: UoGTr
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/6/2022
  • Type: manual
  • Task: primary
  • MD5: 25aa3f3bdf5aaeaacc9fd236c53bcccf
  • Run description: This run uses the provided manually rewritten utterances. For the retriever and reader, we use TCT-ColBERT, then rescore and extract an answer using monoQA.

UWCauto22

  • Run ID: UWCauto22
  • Participant: WaterlooClarke
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/30/2022
  • Type: automatic
  • Task: primary
  • MD5: 1afe1aff8f276ee11d99fb1667e04311
  • Run description: Please see the notebook paper for more details.

UWCcano22

  • Run ID: UWCcano22
  • Participant: WaterlooClarke
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/30/2022
  • Type: automatic
  • Task: primary
  • MD5: f2e0f8edf02361761df3dcffbeb99fc4
  • Run description: Please see the notebook paper for more details.

UWCmanual22

  • Run ID: UWCmanual22
  • Participant: WaterlooClarke
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 8/30/2022
  • Type: manual
  • Task: primary
  • MD5: 4060448b9e09c0d1e40828d7ba72183c
  • Run description: Manual run using manually rewritten utterances.

V-Ryerson-run

  • Run ID: V-Ryerson-run
  • Participant: V-Ryerson
  • Track: Conversational Assistance
  • Year: 2022
  • Submission: 9/4/2022
  • Type: automatic
  • Task: primary
  • MD5: 9e58f5562597f96665b51e77f254368e
  • Run description: The pre-trained neural language model RoBERTa-large is used for ranking.