Runs - Conversational Assistance 2019
bertrr_rel_1st
- Run ID: bertrr_rel_1st
- Participant: USI
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/15/2019
- Type: automatic
- Task: primary
- MD5: ae085804cc20b7100225212a91dabf6c
- Run description: In this run, each question is expanded with a selection of "relevant" previous questions, together with the first question in the conversation. The "relevant" questions were labelled by three human assessors over the training queries in order to train a model that predicts the relevant question(s) for the test questions. Passage retrieval is performed with an open-source ad hoc search engine (Galago), and the results are afterwards re-ranked with a BERT-based model.
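The selection-based expansion described here reduces to concatenating the first question, the predicted relevant turns, and the current question. A minimal sketch, where `predict_relevant` is a hypothetical stand-in for the assessor-trained classifier (not part of this page):

```python
# Sketch of selection-based query expansion over a conversation.
# `predict_relevant` is a hypothetical classifier stand-in.
def expand_query(turns: list, current_idx: int, predict_relevant) -> str:
    """Concatenate the first question, any previous questions the
    classifier marks relevant, and the current question."""
    current = turns[current_idx]
    context = [turns[0]] if current_idx > 0 else []
    context += [q for q in turns[1:current_idx] if predict_relevant(q, current)]
    return " ".join(context + [current])

# Example with a trivial stand-in classifier based on word overlap.
turns = ["What is throat cancer?", "Is it treatable?", "What are its symptoms?"]
overlap = lambda q, cur: len(set(q.lower().split()) & set(cur.lower().split())) >= 2
print(expand_query(turns, 2, overlap))
```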
bertrr_rel_q
- Run ID: bertrr_rel_q
- Participant: USI
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/15/2019
- Type: automatic
- Task: primary
- MD5: 9e4e99c55ee2c906d401423ecfcdbcd9
- Run description: In this run, each question is expanded with a selection of "relevant" previous questions. The "relevant" questions were labelled by three human assessors over the training queries in order to train a model that predicts the relevant question(s) for the test questions. Passage retrieval is performed with an open-source ad hoc search engine (Galago), and the results are afterwards re-ranked with BERT.
BM25_BERT_FC
- Run ID: BM25_BERT_FC
- Participant: RUIR
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/14/2019
- Type: automatic
- Task: primary
- MD5: c8e7e771142f3d7c57087972ebce4f76
- Run description: In this run, we retrieved only from the MS MARCO collection. We indexed it with Anserini and retrieved 1000 passages with BM25. Retrieval was based not only on the current query but also on the previous turns. We then re-ranked the query/passage combinations with BERT, using a pretrained sequence classification model fine-tuned on MS MARCO.
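For reference, the first stage here is plain BM25 over the Anserini index. A self-contained sketch of the Okapi BM25 formula; k1=0.9 and b=0.4 are Anserini's defaults, an assumption since the run does not state its settings:

```python
import math
from collections import Counter

# Minimal Okapi BM25 scorer illustrating the first-stage ranking;
# the run itself used Anserini's index, not this toy code.
def bm25_score(query_terms, doc_terms, df, n_docs, avgdl, k1=0.9, b=0.4):
    tf = Counter(doc_terms)
    score = 0.0
    for t in query_terms:
        if t not in tf or t not in df:
            continue
        idf = math.log(1 + (n_docs - df[t] + 0.5) / (df[t] + 0.5))
        norm = tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc_terms) / avgdl))
        score += idf * norm
    return score
```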
BM25_BERT_RANKF
- Run ID: BM25_BERT_RANKF
- Participant: RUIR
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/15/2019
- Type: automatic
- Task: primary
- MD5: c4185e258e54ba01c5c88543ebb12c8d
- Run description: In this run, we retrieved passages only from the MS MARCO collection. We indexed it with Anserini and retrieved 1000 passages with BM25. Retrieval was based not only on the current query but also on the previous turns. We then re-ranked the query/passage combinations with BERT, using a pretrained sequence classification model fine-tuned on MS MARCO. We re-ranked three times: once with only the query to answer as input, once with the query to answer plus the previous utterance, and once with the query to answer plus the second-to-last utterance. We combined these runs by taking the maximum score per passage and ranking by those maxima, as sketched below.
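The max-score combination in the final step is straightforward; a minimal sketch:

```python
# Sketch of the max-score fusion described above: several rerankings each
# produce a score per passage; the final ranking sorts by the maximum.
def max_fuse(score_dicts):
    """score_dicts: list of {passage_id: score} from each reranking."""
    fused = {}
    for scores in score_dicts:
        for pid, s in scores.items():
            fused[pid] = max(fused.get(pid, float("-inf")), s)
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

run_a = {"p1": 0.9, "p2": 0.4}
run_b = {"p1": 0.2, "p2": 0.8, "p3": 0.5}
print(max_fuse([run_a, run_b]))  # p1 (0.9), p2 (0.8), p3 (0.5)
```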
CFDA_CLIP_RUN1
- Run ID: CFDA_CLIP_RUN1
- Participant: CFDA_CLIP
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: f43d37936ee49af6ee4a729e4cbe0058
- Run description: This run uses MS MARCO as the indexing corpus only. All answers are retrieved by BM25 using the raw utterances and MS MARCO titles. The re-ranking model is BERT fine-tuned on the MS MARCO dataset, using the coreference-resolved queries provided by CAsT.
CFDA_CLIP_RUN6
- Run ID: CFDA_CLIP_RUN6
- Participant: CFDA_CLIP
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: e28ccf56df0e0e6209f91d40d5194be2
- Run description: (1) retrieve documents with the coreference-resolved query using BM25 + RM3; (2) re-rank with BERT fine-tuned on MS MARCO.
CFDA_CLIP_RUN7
- Run ID: CFDA_CLIP_RUN7
- Participant: CFDA_CLIP
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: 93ad9b409af6742bcaf56dba77b52c49
- Run description: (1) expand the MS MARCO corpus with doc2query; (2) retrieve documents with BM25; (3) re-rank with BERT fine-tuned on MS MARCO.
CFDA_CLIP_RUN8
- Run ID: CFDA_CLIP_RUN8
- Participant: CFDA_CLIP
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: 3a9b9c6415c44b5f5bd3dd85380f5ed8
- Run description: (1) expand the MS MARCO corpus with a doc2query model; (2) use previous turns to expand keywords for each turn's query; (3) retrieve with BM25; (4) re-rank with BERT fine-tuned on MS MARCO; (5) use the previous turn's answer to expand the document candidates; (6) final re-ranking with BERT fine-tuned on MS MARCO. A doc2query expansion sketch follows.
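Step (1), doc2query expansion, appends model-predicted queries to each passage before indexing. A sketch using the later public T5 checkpoint `castorini/doc2query-t5-base-msmarco` as a stand-in; the exact model used by the run is not identified on this page:

```python
# Hedged sketch of doc2query-style document expansion; the checkpoint
# below is a convenient public stand-in, not the run's exact model.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("castorini/doc2query-t5-base-msmarco")
model = T5ForConditionalGeneration.from_pretrained("castorini/doc2query-t5-base-msmarco")

def expand_passage(text: str, n_queries: int = 3) -> str:
    """Append predicted queries to the passage text before indexing."""
    ids = tok(text, return_tensors="pt", truncation=True).input_ids
    outs = model.generate(ids, max_length=64, do_sample=True,
                          top_k=10, num_return_sequences=n_queries)
    preds = [tok.decode(o, skip_special_tokens=True) for o in outs]
    return text + " " + " ".join(preds)
```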
clacBase
- Run ID: clacBase
- Participant: WaterlooClarke
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/15/2019
- Type: automatic
- Task: primary
- MD5: a285175aeb583d63e005d341ad2763f3
- Run description: BM25 with pseudo-relevance feedback (PRF) after query re-writing.
clacBaseRerank
- Run ID: clacBaseRerank
- Participant: WaterlooClarke
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/15/2019
- Type: automatic
- Task: primary
- MD5: bdc9258e7760e9c0b68022e2f2b80292
- Run description: BM25 with PRF after query re-writing, followed by re-ranking with BERT.
clacMagic
- Run ID: clacMagic
- Participant: WaterlooClarke
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/15/2019
- Type: automatic
- Task: primary
- MD5: f814088f3b9ea1b52126e409c0b101df
- Run description: BM25 with PRF after query re-writing.
clacMagicRerank
- Run ID: clacMagicRerank
- Participant: WaterlooClarke
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/15/2019
- Type: automatic
- Task: primary
- MD5: e70f7c2db551fe11af5727bab95de780
- Run description: BM25 with PRF after query re-writing, followed by re-ranking with BERT.
combination
- Run ID: combination
- Participant: ADAPT-DCU
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: manual
- Task: primary
- MD5: c9ad573fe4a942a2242dab5c7602f5b1
- Run description: This is the baseline run, in which we perform careful NLP-based query extraction with the spaCy library for passage retrieval. We used the officially provided expanded questions. We maintained separate indexes and combined results across the different indexes.
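As an illustration of spaCy-based query extraction: the run's exact rules are not documented here, so this minimal sketch simply keeps noun chunks and named entities as query terms:

```python
# Hedged sketch of NLP-based query extraction with spaCy; the extraction
# rules are an assumption, not the run's actual pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_query(question: str) -> str:
    doc = nlp(question)
    terms = [c.text for c in doc.noun_chunks] + [e.text for e in doc.ents]
    return " ".join(dict.fromkeys(terms))  # dedupe, keep order

print(extract_query("What are the main breeds of goat in Switzerland?"))
```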
coref_cshift
- Run ID: coref_cshift
- Participant: CMU
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/17/2019
- Type: automatic
- Task: primary
- MD5: 95168668e1de3288c3bede7fdcc5ca4f
- Run description: Uses BERT attention features for coreference resolution and identifies context shift using the KL divergence between the top retrieved documents for each turn in the conversation. No query expansion is used in this run.
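A sketch of the context-shift test: compare the term distributions of the top documents retrieved at consecutive turns with KL divergence; a large value suggests the topic has shifted. The smoothing and any decision threshold are assumptions here:

```python
import math
from collections import Counter

# Sketch: (smoothed) KL divergence between the term distributions of two
# texts, e.g. concatenated top documents from consecutive turns.
def kl_divergence(text_a: str, text_b: str, eps: float = 1e-9) -> float:
    p, q = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    n_p, n_q = sum(p.values()), sum(q.values())
    vocab = set(p) | set(q)
    return sum((p[w] / n_p) * math.log((p[w] / n_p) / (q[w] / n_q + eps))
               for w in vocab if p[w] > 0)

docs_turn1 = "goats are hardy farm animals raised for milk"
docs_turn2 = "popular programming languages include python and java"
print(kl_divergence(docs_turn1, docs_turn2))  # large value: topic shift
```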
coref_shift_qe
- Run ID: coref_shift_qe
- Participant: CMU
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/17/2019
- Type: automatic
- Task: primary
- MD5: d47cceda0acfa47bf6b59f7aa990b3fd
- Run description: Uses BERT attention features for coreference resolution and identifies context shift using the KL divergence between the top retrieved documents for each turn in the conversation. Retrieval is done using Indri with query expansion.
datasetreorder
- Run ID: datasetreorder
- Participant: ADAPT-DCU
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/19/2019
- Type: manual
- Task: primary
- MD5: 38a350ffd53a640083903f6442bb30e8
- Run description: Re-ranking of results obtained from three different datasets, combining the outputs in a sequential fusion-based approach.
ECNUICA_BERT
- Run ID: ECNUICA_BERT
- Participant: ECNU-ICA
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/17/2019
- Type: automatic
- Task: primary
- MD5: fdeedd0a3474b801cb45ebd6979c6a2a
- Run description: This run uses entity linking and a keyword algorithm for natural language understanding, then uses a pretrained BERT model (fine-tuned on the MRPC corpus) to compute the relevance between questions and answers.
ECNUICA_MIX
- Run ID: ECNUICA_MIX
- Participant: ECNU-ICA
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/17/2019
- Type: automatic
- Task: primary
- MD5: a480b3bd7aead9bbebc7d65e905bf7e2
- Run description: This run combines entity linking, keyword extraction, and BERT re-ranking. BERT was fine-tuned on the MSRP corpus. The final ranking is produced by mixing the BERT score with the TF-IDF score.
ECNUICA_ORI
- Run ID: ECNUICA_ORI
- Participant: ECNU-ICA
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/17/2019
- Type: automatic
- Task: primary
- MD5: 0d7be301c7287e4487685648f7605ba5
- Run description: A simple run using only entity linking and keyword extraction, with passages ranked by TF-IDF score. AllenNLP was used for coreference resolution.
ensemble
- Run ID: ensemble
- Participant: CMU
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/17/2019
- Type: automatic
- Task: primary
- MD5: ba14f473ca5855c97dfadee6d178aacb
- Run description: Uses three different retrieval results and combines them (an ensemble system): (1) BERT attention features for coreference resolution, with context shift identified via the KL divergence between the top retrieved documents for each turn in the conversation; (2) a second system that uses all of the identified context; (3) a heuristic-based context resolution system. Retrieval is done using Indri.
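The description does not say how the three result lists are merged; reciprocal rank fusion (RRF) is a common choice for such ensembles and is shown here purely as an illustrative stand-in:

```python
# Sketch of reciprocal rank fusion (RRF) over several ranked lists.
# This is an assumed fusion method, not the run's documented one.
def rrf(rankings, k: int = 60):
    """rankings: list of ranked lists of doc ids (best first)."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

print(rrf([["d1", "d2", "d3"], ["d2", "d1"], ["d3", "d2"]]))  # ['d2', 'd1', 'd3']
```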
galago_rel_1st
- Run ID: galago_rel_1st
- Participant: USI
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/15/2019
- Type: automatic
- Task: primary
- MD5: e0716dfcd7b89da9a7148c0a8fd1d68e
- Run description: In this run, each question is expanded with a selection of "relevant" previous questions, together with the first question in the conversation. The "relevant" questions were labelled by three human assessors over the training queries in order to train a model that predicts the relevant question(s) for the test questions. Passage retrieval is performed with an open-source ad hoc search engine (Galago).
galago_rel_q
- Run ID: galago_rel_q
- Participant: USI
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/15/2019
- Type: automatic
- Task: primary
- MD5: 5937726b677673c0ca1ff9a44c0c4e95
- Run description: In this run, each question is expanded with a selection of "relevant" previous questions. The "relevant" questions were labelled by three human assessors over the training queries in order to train a model that predicts the relevant question(s) for the test questions. Passage retrieval is performed with an open-source ad hoc search engine (Galago).
h2oloo_RUN2
- Run ID: h2oloo_RUN2
- Participant: h2oloo
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: 3429c01c77295d06157a2fed609fdb68
- Run description: First stage: retrieve 1000 candidate passages with BM25, matching the query plus topic-title keywords. Second stage: re-rank the candidates with a BERT-large model trained on the MS MARCO passage ranking dataset; at inference time, the current query is combined with keywords from previous turns.
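The two-stage retrieve-then-rerank pattern shared by these h2oloo runs can be sketched with modern stand-ins; Pyserini's prebuilt MS MARCO index and the sentence-transformers cross-encoder below are assumptions, not the exact 2019 artifacts:

```python
# Hedged sketch of a generic BM25 + BERT-reranker pipeline.
from pyserini.search.lucene import LuceneSearcher
from sentence_transformers import CrossEncoder

searcher = LuceneSearcher.from_prebuilt_index("msmarco-v1-passage")
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def retrieve_and_rerank(query: str, k: int = 1000, top: int = 10):
    hits = searcher.search(query, k=k)                   # stage 1: BM25
    texts = [searcher.doc(h.docid).raw() for h in hits]  # stored passage text
    scores = reranker.predict([(query, t) for t in texts])  # stage 2: BERT
    ranked = sorted(zip(hits, scores), key=lambda p: p[1], reverse=True)
    return [(h.docid, float(s)) for h, s in ranked[:top]]
```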
h2oloo_RUN3
- Run ID: h2oloo_RUN3
- Participant: h2oloo
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: 963863ba5da668aba98bf7d471a24c52
- Run description: First stage: retrieve 1000 candidate passages with BM25, matching the query plus topic-title keywords. Second stage: re-rank the candidates with a BERT-large model trained on the MS MARCO passage ranking dataset; at inference time, the annotated query is used.
h2oloo_RUN4
- Run ID: h2oloo_RUN4
- Participant: h2oloo
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: 77e6f4cec0505d911014c3c62dad2664
- Run description: First stage: retrieve 1000 candidate passages with BM25 and RM3, matching the query plus topic-title keywords. Second stage: re-rank the candidates with a BERT-large model trained on the MS MARCO passage ranking dataset; at inference time, the annotated query is used.
h2oloo_RUN5
- Run ID: h2oloo_RUN5
- Participant: h2oloo
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: 24465750652ba3c1d839530264b7b708
- Run description: First stage: retrieve 1000 candidate passages with BM25, using the query plus automatically selected keywords from previous turns. Second stage: re-rank the candidates with a BERT-large model trained on the MS MARCO passage ranking dataset; at inference time, the annotated query is used.
humanbert
- Run ID: humanbert
- Participant: ATeam
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/19/2019
- Type: automatic
- Task: primary
- MD5: 9f525b2fa36a114bbf811cd8ea6e6429
- Run description: Uses the provided annotations of the evaluation queries + Anserini + BERT.
ict_wrfml
- Run ID: ict_wrfml
- Participant: ICTNET
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: 97fa40420de9aad6164f701f922d2367
- Run description: We use Elasticsearch to re-rank the baseline results provided by TREC CAsT.
ilps-bert-feat1
- Run ID: ilps-bert-feat1
- Participant: UAmsterdam
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/16/2019
- Type: automatic
- Task: primary
- MD5: 0802d3fb297a4ef64bf281e4131cb798
- Run description: A linear combination of VanillaBERT and our unsupervised ranker (an LM with Dirichlet smoothing + RM3, with query expansion to represent the conversation topic up to the current turn). VanillaBERT was fine-tuned on 100K triples from the MS MARCO passage ranking dataset.
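A sketch of linearly combining a neural score with an unsupervised LM score; min-max normalization and the mixing weight 0.7 are illustrative assumptions, since the run's actual weighting is not given:

```python
# Sketch: linear interpolation of two score sets after min-max normalization.
def minmax(scores: dict) -> dict:
    lo, hi = min(scores.values()), max(scores.values())
    return {d: (s - lo) / (hi - lo or 1.0) for d, s in scores.items()}

def combine(bert: dict, lm: dict, alpha: float = 0.7) -> list:
    b, l = minmax(bert), minmax(lm)
    docs = set(b) | set(l)
    mixed = {d: alpha * b.get(d, 0.0) + (1 - alpha) * l.get(d, 0.0) for d in docs}
    return sorted(mixed.items(), key=lambda kv: kv[1], reverse=True)

print(combine({"d1": 2.0, "d2": 1.0}, {"d1": 7.5, "d2": 9.0}))
```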
ilps-bert-feat2
- Run ID: ilps-bert-feat2
- Participant: UAmsterdam
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/16/2019
- Type: automatic
- Task: primary
- MD5: 5a3cdf08680f30b4741b4f4e6cc7768b
- Run description: A linear combination of VanillaBERT and our unsupervised ranker (an LM with Dirichlet smoothing + RM3, with query expansion to represent the conversation topic up to the current turn). VanillaBERT was fine-tuned on 100K triples from the MS MARCO passage ranking dataset. This is the same model as ilps-bert-feat1 with different hyperparameters.
ilps-bert-featq
- Run ID: ilps-bert-featq
- Participant: UAmsterdam
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: 8a5c9483cf2fbab1349b06628c1e32b5
- Run description: A linear combination of VanillaBERT and our unsupervised ranker (an LM with Dirichlet smoothing + RM3, with query expansion to represent the conversation topic up to the current turn). VanillaBERT was fine-tuned on 100K triples from the MS MARCO passage ranking dataset. The whole model was pre-trained with automatically constructed sequential queries from the QuAC (Question Answering in Context) dataset.
ilps-lm-rm3-dt
- Run ID: ilps-lm-rm3-dt
- Participant: UvA.ILPS
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/16/2019
- Type: automatic
- Task: primary
- MD5: ac91c5bab25f8bc46ab0892e898fa5ca
- Run description: An unsupervised ranker using an LM with Dirichlet smoothing + RM3. Queries are expanded with an automatically extracted set of words that represent the conversation topic up to the current turn.
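The core of this unsupervised ranker is query-likelihood scoring with Dirichlet smoothing. A sketch; mu = 2500 is a conventional default (an assumption here), and the RM3 expansion step is omitted for brevity:

```python
import math
from collections import Counter

# Sketch of Dirichlet-smoothed query-likelihood scoring.
def dirichlet_score(query, doc, collection_tf, collection_len, mu=2500.0):
    """query, doc: token lists; collection_tf: corpus-wide term counts."""
    tf, dlen = Counter(doc), len(doc)
    score = 0.0
    for t in query:
        p_c = collection_tf.get(t, 0) / collection_len
        if p_c == 0:
            continue  # term unseen in the collection; skip (or back off)
        score += math.log((tf[t] + mu * p_c) / (dlen + mu))
    return score
```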
manual_indri
- Run ID: manual_indri
- Participant: CMU
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/17/2019
- Type: manual
- Task: primary
- MD5: 154e85cbfed43665f10dbd034d9de7ae
- Run description: Annotated (manually modified) test queries were used as input to Indri for retrieval.
MPgate
- Run ID: MPgate
- Participant: RALI
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/15/2019
- Type: automatic
- Task: primary
- MD5: 1eaf2c684c98fc5c1d16e8f43c56f1ae
- Run description: This model first trains a single-turn matching module (MatchPyramid) on the MS MARCO passage ranking dataset; the interaction patterns of each turn are then aggregated through an attentive aggregation module trained on the CAsT Y1 training set.
mpi-d5_cqw
- Run ID: mpi-d5_cqw
- Participant: mpi-inf-d5
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: 09e756a99ae0f3f432a33f0206eae0d6
- Run description: We formulate the objective of maximizing the passage score for a query as a combination of similarity and coherence. We first build a word co-occurrence network from the MS MARCO corpus: words are nodes, and there is an edge between two nodes if they co-occur in the same passage, within a context window, in a statistically significant way. We use NPMI (normalized pointwise mutual information) as the measure of this word-association significance, stored as the edge weight. Word embeddings model the similarity between words in the query and words in the passages; this is stored as a node weight for similarity matches above a threshold. Edge weights between words are considered if the words are similar to words in the query and co-occur within a context window. Our method uses Indri to retrieve a candidate set for re-ranking. The current, the previous, and the first query are considered. Our final score combines the Indri rank with the node and edge scores. An NPMI sketch follows.
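The NPMI edge weight used in these mpi-inf-d5 runs can be computed directly from co-occurrence counts over context windows; a sketch:

```python
import math

# Sketch of the NPMI edge weight between two words. NPMI lies in [-1, 1];
# edges would typically be kept only above a significance threshold.
def npmi(count_xy: int, count_x: int, count_y: int, n_windows: int) -> float:
    p_xy = count_xy / n_windows
    p_x, p_y = count_x / n_windows, count_y / n_windows
    pmi = math.log(p_xy / (p_x * p_y))
    return pmi / -math.log(p_xy)

print(npmi(50, 400, 300, 100_000))  # ~0.49: a fairly strong association
```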
mpi-d5_igraph
- Run ID: mpi-d5_igraph
- Participant: mpi-inf-d5
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: c4deed6c792b65031b571225adbb7c92
- Run description: We formulate the objective of maximizing the passage score for a query as a combination of similarity and coherence. We first build a word co-occurrence network from the MS MARCO corpus: words are nodes, and there is an edge between two nodes if they co-occur in the same passage, within a context window, in a statistically significant way. We use NPMI (normalized pointwise mutual information) as the measure of this word-association significance, stored as the edge weight. Word embeddings model the similarity between words in the query and words in the passages; this is stored as a node weight for similarity matches above a threshold. Edge weights between words are considered if the words are similar to words in the query and co-occur within a context window. Our method uses Indri to retrieve a candidate set for re-ranking. The current and the first query are considered. Our final score combines the Indri rank with the node and edge scores.
mpi-d5_intu
- Run ID: mpi-d5_intu
- Participant: mpi-inf-d5
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: e1056d5f73825215a7bdf473f84784e0
- Run description: We formulate the objective of maximizing the passage score for a query as a combination of similarity and coherence. We first build a word co-occurrence network from the MS MARCO corpus: words are nodes, and there is an edge between two nodes if they co-occur in the same passage, within a context window, in a statistically significant way. We use NPMI (normalized pointwise mutual information) as the measure of this word-association significance, stored as the edge weight. Word embeddings model the similarity between words in the query and words in the passages; this is stored as a node weight for similarity matches above a threshold. Edge weights between words are considered if the words are similar to words in the query and co-occur within a context window. Our method uses Indri to retrieve a candidate set for re-ranking. The current and the first query are considered. Our final score combines the Indri rank with the node and edge scores.
mpi-d5_union
- Run ID: mpi-d5_union
- Participant: mpi-inf-d5
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: ab0978a9f47aa2245cdd5e77dc3121a4
- Run description: We formulate the objective of maximizing the passage score for a query as a combination of similarity and coherence. We first build a word co-occurrence network from the MS MARCO corpus: words are nodes, and there is an edge between two nodes if they co-occur in the same passage, within a context window, in a statistically significant way. We use NPMI (normalized pointwise mutual information) as the measure of this word-association significance, stored as the edge weight. Word embeddings model the similarity between words in the query and words in the passages; this is stored as a node weight for similarity matches above a threshold. Edge weights between words are considered if the words are similar to words in the query and co-occur within a context window. Our method uses the union of the passages retrieved by three different Indri baselines for re-ranking. The current and the first query are considered. Our final score combines the node and edge scores.
mpi_base
- Run ID: mpi_base
- Participant: mpii
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/14/2019
- Type: automatic
- Task: primary
- MD5: e2dc5d1a0537bf9f70f169911fff0895
- Run description: Baseline with query expansion (QE).
mpi_bert
- Run ID: mpi_bert
- Participant: mpii
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/14/2019
- Type: automatic
- Task: primary
- MD5: c3d50deddb3b026c8d70908a8985257a
- Run description: BERT re-ranking baseline with QE.
MPmlp
- Run ID: MPmlp
- Participant: RALI
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/15/2019
- Type: automatic
- Task: primary
- MD5: 2a41ab5be894499a6083291d6099c866
- Run description: This model first trains a single-turn matching module (MatchPyramid) on the MS MARCO passage ranking dataset; the interaction patterns of each turn are then aggregated through an aggregation module trained on the CAsT Y1 training set.
pg2bert
- Run ID: pg2bert
- Participant: ATeam
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/19/2019
- Type: automatic
- Task: primary
- MD5: 7d6977862428e0d709ff609a2aac053a
- Run description: Pointer-generator model for question rewriting + Anserini + BERT.
pgbert
- Run ID: pgbert
- Participant: ATeam
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/19/2019
- Type: automatic
- Task: primary
- MD5: 917e99a1b9f100de035d95a1086e0981
- Run description: Generative model for question rewriting + Anserini + BERT.
rerankingorder
- Run ID: rerankingorder
- Participant: ADAPT-DCU
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/19/2019
- Type: manual
- Task: primary
- MD5: e7f4e2341bd2b54f11b0038e676b651d
- Run description: This is the effective re-ranking run, in which we perform careful NLP-based query extraction with the spaCy library for passage retrieval. We used the officially provided expanded questions. We maintained separate indexes and combined results across the different indexes. We took the best results from the different datasets and combined them sequentially, based on heuristics for modelling the query effectively with our NLP pipeline.
RUCIR-run1
- Run ID: RUCIR-run1
- Participant: RUCIR
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/17/2019
- Type: automatic
- Task: primary
- MD5: 214a9371d343a48e91ca4d5fd3ab9b66
- Run description: We use the MS MARCO passage ranking dataset as the training set and extract 10 features to train a learning-to-rank model (LambdaMART). For the test set, we extract features from the manually resolved annotations.
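A hedged sketch of training such a LambdaMART ranker; LightGBM's `LGBMRanker` and the random features stand in, since the run does not name its toolkit or its 10 features:

```python
# Sketch: LambdaMART-style learning to rank with LightGBM (stand-in toolkit).
import numpy as np
import lightgbm as lgb

X = np.random.rand(1000, 10)          # 10 features per query-passage pair
y = np.random.randint(0, 2, 1000)     # relevance labels (placeholder data)
groups = [10] * 100                   # 100 queries, 10 candidates each

ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=100)
ranker.fit(X, y, group=groups)
scores = ranker.predict(X[:10])       # score the candidates of one query
```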
RUCIR-run2
- Run ID: RUCIR-run2
- Participant: RUCIR
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/17/2019
- Type: automatic
- Task: primary
- MD5: 90d23c0ddc2e1c07195a3ede8d15cb78
- Run description: We use two types of query, the original query and the provided AllenNLP query, to generate a new query. The new query is sent to Indri to obtain the results.
RUCIR-run3
- Run ID: RUCIR-run3
- Participant: RUCIR
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/17/2019
- Type: automatic
- Task: primary
- MD5: 989165a8a2559ad8a84c90ad8c22747f
- Run description: The model has five parts: (1) a memory network built on previous queries and positive documents; (2) the similarity between the representations of the current query and the document; (3) the similarity between the representation of the current query and the first sentence of the document; (4) statistical features of the previous queries, the current query, and the document; (5) the output of an attentive KNRM model on the current query and document.
RUCIR-run4
- Run ID: RUCIR-run4
- Participant: RUCIR
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/17/2019
- Type: automatic
- Task: primary
- MD5: cb8466d826da897b339520cab48e4253
- Run description: We use the original query and the retrieved documents to generate a new query. The new query is sent to Indri to obtain the results.
SMNgate
- Run ID: SMNgate
- Participant: RALI
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/16/2019
- Type: automatic
- Task: primary
- MD5: 86173a0960bc1905962ba2e16a5bed5b
- Run description: This model first trains a single-turn matching module (MatchPyramid) on the MS MARCO passage ranking dataset; the interaction patterns of each turn are then aggregated through an aggregation module trained on the CAsT Y1 training set.
SMNmlp
- Run ID: SMNmlp
- Participant: RALI
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/15/2019
- Type: automatic
- Task: primary
- MD5: 870a8d959e7b6c28aca85f6cb7aaaeb7
- Run description: This model first trains a single-turn matching module (Sequential Matching Network, SMN) on the MS MARCO passage ranking dataset; the interaction patterns of each turn are then aggregated through an MLP aggregation module trained on the CAsT Y1 training set.
topicturnsort
- Run ID: topicturnsort
- Participant: ADAPT-DCU
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/19/2019
- Type: manual
- Task: primary
- MD5: 07256b10cd4ecaef4f7c9754b9ea16d7
- Run description: In this model, we had three separate indexes for the CAR, WaPo, and MARCO datasets and searched each separately. We merged the retrieval results obtained from the BM25 models on the three datasets and re-ranked documents using the percentage of potentially relevant documents returned. We used the expanded queries provided by the task organizers for passage retrieval.
UDInfoC_BL
- Run ID: UDInfoC_BL
- Participant: udel_fang
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/16/2019
- Type: automatic
- Task: primary
- MD5: 8920063c8fdadb7c58c05cc55fa454c1
- Run description: Baseline method using Indri.
UDInfoC_TS
- Run ID: UDInfoC_TS
- Participant: udel_fang
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/17/2019
- Type: automatic
- Task: primary
- MD5: 3b9adaab4c41314eb5b5116a6cc026d4
- Run description: Transfer learning based on a BERT model.
UDInfoC_TS_2
- Run ID: UDInfoC_TS_2
- Participant: udel_fang
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: f637c03137b4a50094558b8e7a303fb6
- Run description: Another transfer-learning approach based on a BERT model.
ug_1stprev3_sdm
- Run ID: ug_1stprev3_sdm
- Participant: uogTr
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: 94636ef379552e028c29f45f2f7df19a
- Run description: This is a probabilistic run using the Galago search engine. The first turn and the previous three turns are used as context. For each, a sequential dependence model (SDM) query is generated, and the queries are combined with manually selected weights (see the sketch below). Stopword removal uses a modified Indri stopword list, and stemming uses the Krovetz stemmer.
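A sketch of assembling the weighted Galago query described above: one #sdm sub-query per context turn, merged with Galago's weighted #combine operator. The weights shown are illustrative, not the run's manually selected ones:

```python
# Sketch: build a weighted Galago #combine of per-turn #sdm queries.
# The weight values here are illustrative assumptions.
def galago_query(turns: list, weights: list) -> str:
    parts = [f"#sdm({t})" for t in turns]
    w = ":".join(f"{i}={wt}" for i, wt in enumerate(weights))
    return f"#combine:{w}({' '.join(parts)})"

print(galago_query(["what is throat cancer", "is it treatable"], [0.7, 0.3]))
# #combine:0=0.7:1=0.3(#sdm(what is throat cancer) #sdm(is it treatable))
```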
ug_cedr_rerank
- Run ID: ug_cedr_rerank
- Participant: uogTr
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/19/2019
- Type: automatic
- Task: primary
- MD5: 73ba7c873354af8d4c3835c883716134
- Run description: This run re-ranks a pool of results built from the top 50 of ug_cont_lin, ug_1stprev3_sdm, and an RM3 feedback run over ug_1stprev3_sdm. It uses the CEDR deep-learning model (BERT-derived) trained on the MS MARCO data.
ug_cont_lin
- Run ID: ug_cont_lin
- Participant: uogTr
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: 1ef6b9b5a4e20f3d46053ed340df22f7
- Run description: This run pools three runs (ug_1stprev3_sdm, an all-context bag-of-words run, and ug_1stprev3_sdm with RM3) and re-ranks them with a linear RankLib model trained by coordinate ascent. It uses six features that are variants of SDM over the context. The model is optimized on the CAsT Y1 training data.
ug_cur_sdm
- Run ID: ug_cur_sdm
- Participant: uogTr
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: manual
- Task: primary
- MD5: 7bba9a44c3be6c25e7e73ba58bd568b0
- Run description: This run uses manually rewritten queries. It performs SDM over a Galago corpus, with stopword removal (the Indri 418-word list) and stemming (the Krovetz stemmer).
UMASS_DMN_V1
- Run ID: UMASS_DMN_V1
- Participant: UMass
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: 9f7c4bf434a15101164e5b352b727291
- Run description: A deep-learning model trained on the MS MARCO Conversational Session dataset. The model does not use coreference resolution.
UMASS_DMN_V2
- Run ID: UMASS_DMN_V2
- Participant: UMass
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/19/2019
- Type: automatic
- Task: primary
- MD5: 897753c060c80ea92582305785ddc390
- Run description: A deep-learning model trained on the MS MARCO Conversational Session dataset. This model uses the provided annotated evaluation dataset with coreference resolution.
UNH-trema-ecn
- Run ID: UNH-trema-ecn
- Participant: TREMA-UNH
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: 4e96c117f687af971bc80767620a8949
- Run description: This method takes an entity run and a passage run as input. For every entity retrieved for a query, we find all entities that frequently co-occur with it. For every passage mentioning the given entity, the score of the passage for the query-entity pair is the sum of the frequencies of the frequently co-occurring entities in the passage. We then marginalize over the entities to obtain a passage ranking for the query (see the sketch below).
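The marginalization step shared by these TREMA-UNH runs sums a passage's query-entity pair scores over entities to get one score per passage; a minimal sketch:

```python
from collections import defaultdict

# Sketch of marginalizing (passage, entity) pair scores over entities.
def marginalize(pair_scores: dict) -> list:
    """pair_scores: {(passage_id, entity_id): score}."""
    passage_scores = defaultdict(float)
    for (passage, _entity), score in pair_scores.items():
        passage_scores[passage] += score
    return sorted(passage_scores.items(), key=lambda kv: kv[1], reverse=True)

pairs = {("p1", "e1"): 2.0, ("p1", "e2"): 1.0, ("p2", "e1"): 2.5}
print(marginalize(pairs))  # [('p1', 3.0), ('p2', 2.5)]
```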
UNH-trema-ent
- Run ID: UNH-trema-ent
- Participant: TREMA-UNH
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: eba81fae00d5ebed4bf893284e8e485b
- Run description: This method takes an entity run and a passage run as input. For every passage retrieved for the query, its score for a query-entity pair is the number of retrieved entities (for the query) that appear in the passage, where the entities come from the list of entities mentioned in the passage; thus the score is the same for every entity in the passage. We then marginalize over the entities to obtain a passage ranking for the query.
unh-trema-relco
- Run ID: unh-trema-relco
- Participant: TREMA-UNH
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: automatic
- Task: primary
- MD5: bedbd354887c1d13aba03f274f0dfd73
- Run description: We first retrieve query-relevant TREC CAR feedback passages using the BM25 retrieval model and build a candidate entity list from all entities mentioned in those passages. We form a pair of every entity with every other entity in the candidate list and check each pair's presence in the feedback passages; when a pair is present, the rank of the passage is used as a scoring factor for that pair. The score of each entity is the average of its pair scores. The top-k entities are selected, and if an entity is present in a feedback passage, its score is added to the passage score to obtain the final passage score.
VESBERT
- Run ID: VESBERT
- Participant: VES
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: manual
- Task: primary
- MD5: 61b59029d64b84e79363ad1913ca782a
- Run description: This run is implemented using lucene4ir. For each topic, the questions are passed through a coreference resolution module, and the results are passed through a BERT re-ranker trained on the MS MARCO dataset.
VESBERT1000
- Run ID: VESBERT1000
- Participant: VES
- Track: Conversational Assistance
- Year: 2019
- Submission: 8/18/2019
- Type: manual
- Task: primary
- MD5: c91368d4e48190287b1faca404de2b69
- Run description: This run returns 1000 documents per query and uses Lucene, BERT, and coreference resolution.