Runs - Conversational Assistance 2021

astypalaia256

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: astypalaia256
  • Participant: UAmsterdam
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: eed19f2283705f7fbda2934cd98bde1a
  • Run description: We use a token-level dense passage retrieval method, which is pretrained on a non-conversational retrieval task.

bm25_automatic

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: bm25_automatic
  • Participant: TKB48
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: f0dcd94c540300704e83c32a15c7c799
  • Run description: Sparse retrieval using automatically rewritten utterances, with BM25 retrieving the top 1000 results for each query.

CFDA_CLIP_ARUN1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CFDA_CLIP_ARUN1
  • Participant: CFDA_CLIP
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: 4a7cd0b2e3c192f765c412d797974662
  • Run description: Automatically rewritten queries with expansion, doc2query, BM25, T5 reranking.

CFDA_CLIP_ARUN2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CFDA_CLIP_ARUN2
  • Participant: CFDA_CLIP
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: ca151eb030fffc362f6319f5eeac4bc1
  • Run description: Automatically rewritten queries with expansion, doc2query, BM25, T5 reranking.

CFDA_CLIP_MRUN1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CFDA_CLIP_MRUN1
  • Participant: CFDA_CLIP
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Type: manual
  • Task: primary
  • MD5: 4186c43948d00e1e531b1a56b8865c35
  • Run description: Manually rewritten queries, automatically rewritten query expansion, doc2query, BM25, T5 reranking.

CFDA_CLIP_MRUN2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CFDA_CLIP_MRUN2
  • Participant: CFDA_CLIP
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Type: manual
  • Task: primary
  • MD5: c2817505445bad9279ecc169be0532d7
  • Run description: Manually rewritten queries, automatically rewritten query expansion, doc2query, BM25, T5 reranking.

clarke-auto

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: clarke-auto
  • Participant: WaterlooClarke
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: 55ed5366201b16b7876521506aa6c4eb
  • Run description: Our system consists of a T5-based query rewriter trained on the QReCC dataset and a passage retrieve&rerank pipeline. In the query reformulation stage, instead of using the CAsT Y3 provided automatic canonical results, we used the top passages retrieved by our own system as the conversational context to reformulate subsequent queries. In the retrieve&rerank stage, we used two different first-stage retrieval methods: 1) a tuned BM25 with queries expanded using the top-k retrieved passages from the C4 dataset, 2) the BERT-based dense retriever ANCE. We merged the retrieved passages into a single pool, then reranked this pool using a two-stage reranking pipeline with monoT5 and duoT5.
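
A minimal sketch of the pool-merging step described above, assuming simple (docid, score) hit lists; the function name and inputs are illustrative, not the participants' actual code:

```python
def merge_pools(bm25_hits, dense_hits):
    """Merge two first-stage result lists into a single deduplicated
    candidate pool. Scores from different retrievers are not directly
    comparable, so the pool keeps only the document ids; a cross-encoder
    reranker (e.g. monoT5, then duoT5) re-scores the pool afterwards."""
    pool, seen = [], set()
    for docid, _score in bm25_hits + dense_hits:
        if docid not in seen:
            seen.add(docid)
            pool.append(docid)
    return pool

# e.g. merge_pools([("d1", 12.3), ("d2", 11.0)], [("d2", 0.87), ("d3", 0.85)])
# yields ["d1", "d2", "d3"]
```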

clarke-cc

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: clarke-cc
  • Participant: WaterlooClarke
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: 1bf71c8d242c72eb7c4c4e52c8ae6d49
  • Run description: Our system consists of a T5-based query rewriter trained on the QReCC dataset and a passage retrieve&rerank pipeline. We used two different first-stage retrieval methods: 1) a tuned BM25 with queries expanded using the top-k retrieved passages from the C4 dataset, 2) the BERT-based dense retriever ANCE. We merged the retrieved passages into a single pool, then reranked this pool using a two-stage reranking pipeline with monoT5 and duoT5.

clarke-manual

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: clarke-manual
  • Participant: WaterlooClarke
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Type: manual
  • Task: primary
  • MD5: c260617e000e8ed57f10a1342c749d8f
  • Run description: For each manually rewritten utterance, we used two different first-stage passage retrieval methods: 1) a tuned BM25 with queries expanded using the top-k retrieved passages from the C4 dataset, 2) the BERT-based dense retriever ANCE. We merged the retrieved passages into a single pool, then reranked this pool using a two-stage reranking pipeline with monoT5 and duoT5.

CNR-run1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CNR-run1
  • Participant: CNR
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: 56ec471b8ad5c753c5b2fa5343a69d31
  • Run description: This run uses raw utterances only. The approach automatically rewrites each utterance by adding topics extracted from the first and previous utterances. Topics are extracted from utterances using spaCy noun chunks (objects or subjects). For indexing and querying, we used Anserini BM25 with RM3 query expansion. In particular, for the first-stage retrieval, we used BM25 with parameters b = 0.9 and k1 = 2.0, chosen after fine-tuning on the MS MARCO documents collection with 5,192 queries from the dev set. The query expansion uses 10 keywords taken from the top-10 results, with the original query weight set to 0.5. For passage re-ranking, we used a BERT-base model pre-trained on the MS MARCO passage dataset.
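
The RM3-style expansion described above (10 keywords from the top-10 results, original query weight 0.5) can be sketched in simplified form; this toy version scores expansion terms by raw frequency rather than a full relevance model, and all names are illustrative:

```python
from collections import Counter

def rm3_style_expansion(query_terms, feedback_docs, n_terms=10, orig_weight=0.5):
    """Toy RM3-style expansion: pick the n_terms most frequent terms
    from the feedback documents and combine them with the original
    query, which keeps total weight orig_weight."""
    counts = Counter(t for doc in feedback_docs for t in doc.split())
    expansion = [t for t, _ in counts.most_common(n_terms)]
    exp_weight = (1.0 - orig_weight) / max(len(expansion), 1)
    weighted = {t: orig_weight / len(query_terms) for t in query_terms}
    for t in expansion:
        weighted[t] = weighted.get(t, 0.0) + exp_weight
    return weighted
```

In Anserini/Pyserini the same configuration is selected by enabling RM3 with 10 feedback terms, 10 feedback documents, and an original-query weight of 0.5.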

CNR-run2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CNR-run2
  • Participant: CNR
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: a2d525b4ae76716d709cc1a3411e0c34
  • Run description: This run uses raw utterances and canonical responses. We used labels representing the dependencies between the current utterance and the previous utterances as well as their canonical responses. The approach enriches each utterance with topics extracted from the utterances and responses of the previous turns. Topics are extracted from utterances using spaCy noun chunks (objects or subjects). If the utterance also depends on a previous response, the approach adds the named entities extracted by TagMe (with threshold = 0.1) from the candidate response. For indexing and querying, we used Anserini BM25 with RM3 query expansion. In particular, for the first-stage retrieval, we used BM25 with parameters b = 0.9 and k1 = 2.0, chosen after fine-tuning on the MS MARCO documents collection with 5,192 queries from the dev set. The query expansion uses 10 keywords taken from the top-10 results, with the original query weight set to 0.5. For passage re-ranking, we used a BERT-base model pre-trained on the MS MARCO passage dataset.

CNR-run3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CNR-run3
  • Participant: CNR
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: 00f831900ce1709e6b993a2848e84576
  • Run description: This run uses raw utterances only. The approach automatically rewrites each utterance by adding topics extracted from the previous automatically rewritten utterance. Topics are extracted from utterances using spaCy noun chunks (objects or subjects). For indexing and querying, we used Anserini BM25 with RM3 query expansion. In particular, for the first-stage retrieval, we used BM25 with parameters b = 0.9 and k1 = 2.0, chosen after fine-tuning on the MS MARCO documents collection with 5,192 queries from the dev set. The query expansion uses 10 keywords taken from the top-10 results, with the original query weight set to 0.5. For passage re-ranking, we used a BERT-base model pre-trained on the MS MARCO passage dataset.

CNR-run4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CNR-run4
  • Participant: CNR
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: c162f2de4254b1639fa230652a5d7421
  • Run description: This run uses raw utterances and canonical responses. We used labels representing the dependencies between the current utterance and the previous utterances as well as their canonical responses. The approach enriches each utterance with topics extracted from the utterances and responses of the previous turns. Topics are extracted from previous automatically rewritten utterances using spaCy noun chunks (objects or subjects). If the utterance depends on a previous response, named entities extracted by TagMe (threshold = 0.1) are added to the utterance. For indexing and querying, we used Anserini BM25 with RM3 query expansion. In particular, for the first-stage retrieval, we used BM25 with parameters b = 0.9 and k1 = 2.0, chosen after fine-tuning on the MS MARCO documents collection with 5,192 queries from the dev set. The query expansion uses 10 keywords taken from the top-10 results, with the original query weight set to 0.5. For passage re-ranking, we used a BERT-base model pre-trained on the MS MARCO passage dataset.

cqe

Results | Participants | Input | Summary | Appendix

  • Run ID: cqe
  • Participant: h2oloo
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: b66d1d8ba18d5f836de50fcf5ebfad95
  • Run description: This is a conversational dense retrieval system, CQE, which combines TCT-ColBERT and UniCOIL. We use historical queries and the last two filtered (by heuristic) responses as context. The input query for the fourth utterance, for example, is Q1 | Q2 | R2 | Q3 | R3 | Q4.
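
The context construction above (all historical queries plus the last two responses, '|'-separated) can be sketched as follows; the function name is illustrative, and the assumption that responses[i] answers the i-th historical query follows the Q/R numbering in the description:

```python
def build_context_query(queries, responses, n_resp=2):
    """Concatenate all historical queries, interleaving the last n_resp
    responses after the query they answer, then append the current
    query. For the fourth utterance this yields Q1 | Q2 | R2 | Q3 | R3 | Q4."""
    current, history = queries[-1], queries[:-1]
    keep = set(range(max(0, len(history) - n_resp), len(history)))
    parts = []
    for i, q in enumerate(history):
        parts.append(q)
        if i in keep and i < len(responses):
            parts.append(responses[i])
    parts.append(current)
    return " | ".join(parts)

# build_context_query(["Q1", "Q2", "Q3", "Q4"], ["R1", "R2", "R3"])
# returns "Q1 | Q2 | R2 | Q3 | R3 | Q4"
```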

cqe-t5

Results | Participants | Input | Summary | Appendix

  • Run ID: cqe-t5
  • Participant: h2oloo
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: e3e1c667d78780c032c86d294de0756d
  • Run description: This is the fusion of two dense retrieval systems. The first, CQE, is a conversational dense retriever that combines TCT-ColBERT and UniCOIL. We use historical queries and the last two filtered (by heuristic) responses as context. The input query for the fourth utterance, for example, is Q1 | Q2 | R2 | Q3 | R3 | Q4. The second, T5, rewrites the query and then searches with a single-query dense retriever, which also combines TCT-ColBERT and UniCOIL. For T5 rewriting, we use the same input as our conversational dense retriever for comparison.

dense_manual

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: dense_manual
  • Participant: TKB48
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Type: manual
  • Task: primary
  • MD5: ef31538a0a9c471089ed3b999694ccba
  • Run description: Dense retrieval using manually rewritten utterances, with a bi-encoder: DPRDocumentEncoder encodes documents into dense vectors to construct a Faiss index, and DPRQueryEncoder encodes queries into dense vectors for retrieval against the constructed Faiss index.

DPH-auto-rye

Results | Participants | Input | Summary | Appendix

  • Run ID: DPH-auto-rye
  • Participant: V-Ryerson
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: e964406956cfec7ce84877423b96aa8e
  • Run description: We compute the soft cosine similarity between the current query and each previous query (after stopword removal and stemming). If the similarity exceeds 0.36, the queries are treated as similar, and the final query is the current query plus the similar previous queries and the previous passage. The 0.36 threshold was chosen from a few preliminary experiments.
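
A simplified sketch of the thresholding step described above. Note this uses plain cosine over term counts; the run uses *soft* cosine, which additionally weights term pairs by a word-similarity matrix. The threshold 0.36 is from the run description; function names are illustrative:

```python
import math
from collections import Counter

def cosine(a, b):
    """Plain cosine similarity over term-count vectors (a simplification
    of the soft cosine used in the run)."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def expand_query(current, previous, threshold=0.36):
    """Append previous queries whose similarity to the current query
    exceeds the threshold."""
    similar = [p for p in previous if cosine(current, p) > threshold]
    return " ".join(similar + [current])
```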

DPH-manual-rye

Results | Participants | Input | Summary | Appendix

  • Run ID: DPH-manual-rye
  • Participant: V-Ryerson
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Type: manual
  • Task: primary
  • MD5: 03395e25b16ee693f3f06d4f623d7171
  • Run description: We compute the soft cosine similarity between the current query and each previous query (after stopword removal and stemming). If the similarity exceeds 0.36, the queries are treated as similar, and the final query is the current query plus the similar previous queries and the previous passage. The 0.36 threshold was chosen from a few preliminary experiments. We also use fastText to find semantic relations and perform query expansion.

HBKU_CQR-HC

Results | Participants | Input | Summary | Appendix

  • Run ID: HBKU_CQR-HC
  • Participant: HBKU
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: 437d569a5ebab44e8b8f6b2acffdc77e
  • Run description: Conversation turns are reformulated using T5 trained on CANARD. Historical turns are used as conversation context in the reformulation step. The returned passages are re-ranked using MonoT5 for top 1000 passages followed by duoT5 for top 100 passages.

HBKU_CQR_POS

Results | Participants | Input | Summary | Appendix

  • Run ID: HBKU_CQR_POS
  • Participant: HBKU
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: ecca4e10aafcd6a44fa0f1857b7edfd1
  • Run description: Conversation turns are reformulated using T5. Both historical turns and passages are used as conversation context. Previous historical passages are excluded if, after sentiment analysis, at least one sentence in the current turn expresses negative sentiment. One sentence is selected from the passage using BERT next-sentence prediction and included in the conversation context. The returned passages are re-ranked using monoT5 for the top 1000 passages, followed by duoT5 for the top 100 passages.

HBKU_CQR_TC

Results | Participants | Input | Summary | Appendix

  • Run ID: HBKU_CQR_TC
  • Participant: HBKU
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: ced06d3816a90fbe8ccb3788e7c40a81
  • Run description: This run combines retrieval using three versions of the same turn. The first is the T5-reformulated turn with only historical turns as context, the second is the T5-reformulated turn using both historical turns and passages, and the third is reformulated using a BERT term-classification model trained on the OR-Conv-QA dataset. 1000 passages are retrieved for each version of the turn and combined, then the passages are re-ranked using monoT5 to get the top 1000 passages. After that, duoT5 is used to re-rank the top 100 passages.

HBKU_CQRHC_BM25

Results | Participants | Input | Summary | Appendix

  • Run ID: HBKU_CQRHC_BM25
  • Participant: HBKU
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: b3a15bdf3b1191fdb064d740725551b5
  • Run description: Conversation turns are reformulated using T5 in two ways. The first reformulation uses historical turns only as context, and the other uses both historical turns and passages. Since explicit feedback is given by the user, sentiment analysis is performed before including passages in the reformulation: if at least one sentence in the turn is negative, the historical passage is replaced by another passage retrieved with the turn reformulated from historical turns only. One sentence from the passage, selected using BERT next-sentence prediction, is included as context for the T5 reformulation. After this step, we have two versions of every turn: one reformulated with passages and the other without. To choose between them during retrieval, the BM25 score of the top retrieved passage is compared as a clarity score for the two reformulations, and the higher-scoring turn is selected. Then monoT5 is used to re-rank the top 1000 passages, followed by duoT5 for the top 100 passages.
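
The selection step described above (keep the reformulation whose top-ranked passage scores higher under BM25) can be sketched as follows; `top_score` is an assumed helper standing in for a real retrieval call:

```python
def select_reformulation(turn_a, turn_b, top_score):
    """Pick between two reformulations of the same turn by comparing
    the BM25 score of each one's top retrieved passage (used as a
    clarity signal). top_score(turn) -> best-passage BM25 score."""
    return turn_a if top_score(turn_a) >= top_score(turn_b) else turn_b

# With a toy score table standing in for retrieval:
scores = {"rewrite without passage": 9.1, "rewrite with passage": 12.4}
chosen = select_reformulation("rewrite without passage",
                              "rewrite with passage",
                              scores.__getitem__)
# chosen == "rewrite with passage"
```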

historyonly

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: historyonly
  • Participant: UAmsterdam
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: a187cb88f56a849bcffad665cc679b05
  • Run description: We use a token-level dense passage retrieval method, which is pretrained on a non-conversational retrieval task. This run does not use canonical responses, only the raw utterances. Due to time constraints, this run does not include the KILT collection.

historyonlyKILT

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: historyonlyKILT
  • Participant: UAmsterdam
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: b0dd386eaadbadf21febcda5bd75df57
  • Run description: We use a token-level dense passage retrieval method, which is pretrained on a non-conversational retrieval task. This run uses only the previous history and ignores the canonical responses. This run includes all collections (KILT, WAPO, MARCO)

hybrid_manual

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: hybrid_manual
  • Participant: TKB48
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Type: manual
  • Task: primary
  • MD5: 44a609063cbd1f7c09c5daf347f89f85
  • Run description: Sparse-dense hybrid retrieval using manually rewritten utterances: DPRQueryEncoder encodes each query into a dense vector for retrieval against a Faiss index, and the results are mixed with BM25 results to obtain the final ranking.
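
One common way to mix sparse and dense results, sketched below under stated assumptions: the run description does not say how the two lists are combined, so this uses min-max normalisation plus linear interpolation with an assumed weight `alpha`; all names are illustrative:

```python
def hybrid_scores(bm25, dense, alpha=0.5):
    """Fuse a BM25 score dict and a dense-retrieval score dict
    (docid -> score) by min-max normalising each, then interpolating.
    alpha is an assumed mixing weight, not a value from the run."""
    def norm(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {d: (s - lo) / span for d, s in scores.items()}
    nb, nd = norm(bm25), norm(dense)
    fused = {d: alpha * nb.get(d, 0.0) + (1 - alpha) * nd.get(d, 0.0)
             for d in set(nb) | set(nd)}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```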

IITD-RAW_U_T5_1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IITD-RAW_U_T5_1
  • Participant: IITD-DBAI
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: c8c01e3880b83f8c6c83086fb47f1217
  • Run description: It is a two-stage process. In the first stage, documents are retrieved with query expansion using HQE and PQE (words from previous turns selected by IDF value); in parallel, queries are rewritten using T5. After retrieval, the passages are reranked against the reformulated queries using a T5 reranker. Retrieval depends on the k1 and b parameters of BM25.

IITD-RAW_U_T5_2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IITD-RAW_U_T5_2
  • Participant: IITD-DBAI
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: 768d594f2aaa9281a9f4862807377aba
  • Run description: It is a two-stage process. In the first stage, documents are retrieved with query expansion using HQE and PQE (words from previous turns selected by IDF value); in parallel, queries are rewritten using T5. After retrieval, the passages are reranked against the reformulated queries using a T5 reranker. Retrieval depends on the k1 and b parameters of BM25.

LTI-entity-g

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: LTI-entity-g
  • Participant: CMU-LTI
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Type: manual
  • Task: primary
  • MD5: 0fd1bb764134e60958b20f651227bfa3
  • Run description: This run uses manual queries and top passages to generate an entity graph. A T5 base reranker, finetuned on MSMarco, is used to rerank the top 20 documents with entity centrality. We keep the entities of the previous 3 queries for context.

LTI-rewriter-5q

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: LTI-rewriter-5q
  • Participant: CMU-LTI
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: 792a3fb67d2d2e49af5ce9d28d1f92ed
  • Run description: This run uses a T5 rewriter model to rewrite at most 5 different queries given previous conversational queries and each of the top 5 passages retrieved by the automatic baseline for the previous query. We then concatenate all unique generated queries together. We run a t5-base reranker to rerank the top 1000 documents.

LTI-rewriter-g

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: LTI-rewriter-g
  • Participant: CMU-LTI
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: c0a40c1a15d7ec23e347f59da0a81d7c
  • Run description: This run uses a T5 rewriter model to rewrite the queries and a T5 term-classification model to expand the queries. We use all previous rewritten queries and the last 3 canonical passages as context for both the T5 rewriting and T5 term-classification models. We run a t5-base reranker, finetuned on MS MARCO, to rerank the top 1000 documents. An entity graph is generated over the queries and top 20 documents to rerank the top 20 documents using entity centrality.

LTI-rewriter-tc

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: LTI-rewriter-tc
  • Participant: CMU-LTI
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: c4d9c8812341c8a2c54302d2ecd04428
  • Run description: This run uses a T5 rewriter model to rewrite the queries and a T5 term-classification model to expand the rewritten queries. We use all previous rewritten queries and the last 3 canonical passages as context for both the T5 rewriter and T5 term-classification models. We run a t5-base reranker to rerank the top 1000 documents.

mono-duo-rerank

Results | Participants | Input | Summary | Appendix

  • Run ID: mono-duo-rerank
  • Participant: h2oloo
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: 2b52496f53de1f990ebea5bc5811fd33
  • Run description: This run is to rerank our best dense retrieval system (cqe-t5) with mono-duo T5 3B to see the effectiveness of reranking.

Rewritt5_monot5

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: Rewritt5_monot5
  • Participant: MLIA-LIP6
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: 857caacb3ac6222b559fff74667a5226
  • Run description: This run uses a T5 model with previous rewritten queries and previous passages, followed by a two-stage retrieval with BM25 and monoT5.

RUIR1_TURN-FT

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: RUIR1_TURN-FT
  • Participant: RUIR
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Type: manual
  • Task: primary
  • MD5: 96af21797d48b2e695c2c06f5acfab78
  • Run description: Our method is a hybrid between monoBERT and E-BERT, where entities are linked by REL (Radboud Entity Linker). For this run, the method does not use conversation history as an input. This model is finetuned on MS-MARCO and CAsT Y2 manual utterances.

RUIR2_TURN

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: RUIR2_TURN
  • Participant: RUIR
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Type: manual
  • Task: primary
  • MD5: f5e9f140f1e906715f2fe86f54dcdda2
  • Run description: Our method is a hybrid between monoBERT and E-BERT, where entities are linked by REL (Radboud Entity Linker). For this run, the method does not use conversation history as an input. This model is finetuned on MS-MARCO.

RUIR4_HIST

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: RUIR4_HIST
  • Participant: RUIR
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Type: manual
  • Task: primary
  • MD5: 496c561205446116a5f1e746aeecfa26
  • Run description: Our method is a hybrid between monoBERT and E-BERT, where entities are linked by REL (Radboud Entity Linker). For this run, the method uses conversation history in addition to the current user utterance. This model is finetuned on MS-MARCO.

sparse_manual

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: sparse_manual
  • Participant: TKB48
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Type: manual
  • Task: primary
  • MD5: 55e6d734cbf920ad3affec4ff9acc08a
  • Run description: Sparse retrieval using manually rewritten utterances, with BM25 retrieving the top 1000 results for each query.

t5

Results | Participants | Input | Summary | Appendix

  • Run ID: t5
  • Participant: h2oloo
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: 480387b21a1222e936e5b0b326605734
  • Run description: We use T5 as our query rewriter and then search with a single-query dense retriever, which combines TCT-ColBERT and UniCOIL. For T5 rewriting, we use historical queries and the last two filtered (by heuristic) responses as context. The input query for the fourth utterance, for example, is Q1 | Q2 | R2 | Q3 | R3 | Q4.

t5_doc2query

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: t5_doc2query
  • Participant: MLIA-LIP6
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: 3fa3a4221afa1e2be3a49de5f5fa2f0b
  • Run description: This run uses a T5 model with previous queries and doc2query expansions of previous passages, followed by a two-stage retrieval with BM25 and monoT5.

t5_monot5

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: t5_monot5
  • Participant: MLIA-LIP6
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: 8c76efd5bf84b42c70119ca886b86b33
  • Run description: This run uses a T5 model with previous queries and previous passages, followed by a two-stage retrieval with BM25 and monoT5.

t5colbert

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: t5colbert
  • Participant: MLIA-LIP6
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: cea5beb4f76096d45c63a8d347aea4af
  • Run description: It is an end-to-end model that merges T5 for rewriting and ColBERT for reranking.

UiS_raft

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UiS_raft
  • Participant: UiS
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Type: manual
  • Task: primary
  • MD5: 44a641eba3c3dd0d6326813e6eaf7fe0
  • Run description: First-pass retrieval using BM25 with default parameters, followed by T5 reranking, which has been fine-tuned on MS MARCO. Both steps use the manual query rewrites provided by organizers. No external data or conversational context is utilized.

umd2021_run1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umd2021_run1
  • Participant: UMD
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/17/2021
  • Task: primary
  • MD5: 40e421fb4e1d87986fb957bdc0db895d
  • Run description: T5 query rewriter with the concatenation of all queries and at most the last three canonical passages as context. First-stage BM25 search with a passage index, and passage ranking with the monoBERT model (https://huggingface.co/castorini/monobert-large-msmarco-finetune-only).

umd2021_run2doc

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umd2021_run2doc
  • Participant: UMD
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: a9c790ad275b3b225590120f81f6aab2
  • Run description: T5 query rewriter with the concatenation of all queries and at most the last three canonical passages as context. First-stage BM25 search on an augmented document index (i.e., with URL and title), and passage ranking with the monoBERT model.

umd2021_run3rrf

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umd2021_run3rrf
  • Participant: UMD
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/18/2021
  • Task: primary
  • MD5: ef1132c7c4e1da3829f8a37a8b5c6042
  • Run description: Reciprocal rank fusion with k = 60 over three intermediate runs: (1) our run 1; (2) run 2; (3) official baseline results y3_automatic_results_1000.v1.0.run.
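
Reciprocal rank fusion with k = 60, as used above, can be sketched directly (each input run is a docid list, best first; names are illustrative):

```python
def reciprocal_rank_fusion(runs, k=60):
    """RRF: each run contributes 1 / (k + rank) for every document it
    ranks; documents are then sorted by their summed contribution.
    k = 60 follows the run description."""
    scores = {}
    for run in runs:  # run: list of docids, best first
        for rank, docid in enumerate(run, start=1):
            scores[docid] = scores.get(docid, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# reciprocal_rank_fusion([["a", "b", "c"], ["b", "a", "c"], ["b", "c", "a"]])
# returns ["b", "a", "c"]
```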

umd2021_run4den

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umd2021_run4den
  • Participant: UMD
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: 12e9e2bc830487e0aef250b749d45ce3
  • Run description: Full corpus sparse retrieval combined with 200-shard dense retrieval (768-d, encoder castorini/tct_colbert-v2-hnp-msmarco, on the segmented passage index with maxP for doc score). This obtains a first-stage doc ranking of depth 50 per shard. Passage reranking used the castorini/monobert-large-msmarco-finetune-only model to get a final ranking of depth 50 per shard. CombMax combination of the 200 shard results with run2 (full-corpus doc bm25 + passage reranking run) gives a final ranking.
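
The CombMax combination step above can be sketched as follows; this assumes each run is a docid-to-score map and that scores across runs are comparable (in practice they may need normalisation first). Names are illustrative:

```python
def comb_max(runs):
    """CombMax: a document's fused score is its maximum score across
    the input runs (here, the 200 shard results plus the full-corpus
    BM25 + reranking run). Each run maps docid -> score."""
    fused = {}
    for run in runs:
        for docid, score in run.items():
            fused[docid] = max(fused.get(docid, float("-inf")), score)
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# comb_max([{"d1": 0.3, "d2": 0.9}, {"d1": 0.7}])
# returns [("d2", 0.9), ("d1", 0.7)]
```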

uogTrADT

Results | Participants | Input | Summary | Appendix

  • Run ID: uogTrADT
  • Participant: uogTr
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: be1de46ca141cf7d96344e18ae4318f4
  • Run description: This method uses the provided automatically rewritten utterances, DPH QE (BO1), reranking by monoT5

uogTrMDT

Results | Participants | Input | Summary | Appendix

  • Run ID: uogTrMDT
  • Participant: uogTr
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Type: manual
  • Task: primary
  • MD5: ae0035dfa97b8f870f676fc13cccb184
  • Run description: The method uses the provided manually rewritten utterances, DPH QE (BO1), reranking by monoT5

uogTrTCT

Results | Participants | Input | Summary | Appendix

  • Run ID: uogTrTCT
  • Participant: uogTr
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: 16ff51c3d7a75904421539fbbf2ddb63
  • Run description: Generative query rewriting by T5, ConvDR, reranking by monoT5.

uogTrTDT

Results | Participants | Input | Summary | Appendix

  • Run ID: uogTrTDT
  • Participant: uogTr
  • Track: Conversational Assistance
  • Year: 2021
  • Submission: 8/19/2021
  • Task: primary
  • MD5: 11fe7776d0f5e8c18a36956e6689308b
  • Run description: This method uses generative query rewriting by T5, DPH QE (BO1), reranking by monoT5. To rewrite the query, this method summarises the text from the previous automatic canonical result for the topic and uses it as conversational context.