Runs - Complex Answer Retrieval 2019¶
Bert-ConvKNRM¶
- Run ID: Bert-ConvKNRM
- Participant: ICTNET
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 7/29/2019
- Type: automatic
- Task: passages
- Run description: Document Expansion by Query Prediction -> BM25 -> BERT -> ConvKNRM
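Cascades like the one above (document expansion, then BM25, then neural reranking) share a common shape: each stage rescores and truncates the candidates kept by the previous one. A minimal sketch of that shape, with toy scoring functions standing in for BM25, BERT, and ConvKNRM (none of this is ICTNET's actual code):

```python
def rerank_cascade(query, corpus, stages, depth=1000):
    """Apply a list of (scorer, cutoff) stages in sequence.

    Each scorer maps (query, passage) -> float; each stage rescores the
    candidates kept by the previous one and truncates to `cutoff`.
    """
    candidates = list(corpus)[:depth]
    for scorer, cutoff in stages:
        scored = sorted(candidates, key=lambda p: scorer(query, p), reverse=True)
        candidates = scored[:cutoff]
    return candidates

# Toy stand-ins: term overlap for the lexical stage, length for the neural ones.
bm25_like = lambda q, p: len(set(q.split()) & set(p.split()))
neural_like = lambda q, p: len(p)

docs = ["green tea health", "tea ceremony", "coffee beans", "green tea brewing"]
top = rerank_cascade("green tea", docs, [(bm25_like, 3), (neural_like, 2)])
```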
Bert-ConvKNRM-50¶
- Run ID: Bert-ConvKNRM-50
- Participant: ICTNET
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 7/31/2019
- Type: automatic
- Task: passages
- Run description: Document Expansion by Query Prediction -> BM25 (top 50) -> BERT -> ConvKNRM
Bert-DRMMTKS¶
- Run ID: Bert-DRMMTKS
- Participant: ICTNET
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 7/29/2019
- Type: automatic
- Task: passages
- Run description: Document Expansion by Query Prediction -> BM25 -> BERT -> DRMMTKS
bm25-populated¶
- Run ID: bm25-populated
- Participant: Smith
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/1/2019
- Type: automatic
- Task: passages
- Run description: Produced with the y3_convert_ranking_to_ordering.py script provided by the organizers (removing duplicate passages). Uses the BM25 ranking.
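The duplicate removal that y3_convert_ranking_to_ordering.py performs can be approximated as below; this is a guess at the behavior described (keep only the first, highest-ranked occurrence of each passage), not the organizers' actual script:

```python
def ranking_to_ordering(ranking):
    """Turn a ranked list of passage IDs into an ordering, keeping only
    the first (highest-ranked) occurrence of each passage ID."""
    seen = set()
    ordering = []
    for pid in ranking:
        if pid not in seen:
            seen.add(pid)
            ordering.append(pid)
    return ordering

ordering = ranking_to_ordering(["p3", "p1", "p3", "p2", "p1"])
```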
dangnt-nlp¶
- Run ID: dangnt-nlp
- Participant: DANGNT-NLP
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 7/28/2019
- Type: automatic
- Task: passages
- Run description: The process has two phases: 1) retrieval using Anserini, with input benchmarkY3test.public.tar.gz and output benchmarkY3.run; 2) reranking using a pretrained BERT-large model, with input benchmarkY3.run and output benchmarkY3_dangnt_nlp.run, which is our run file.
ECNU_BM25¶
- Run ID: ECNU_BM25
- Participant: ECNU
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/1/2019
- Type: automatic
- Task: passages
- Run description: single BM25 retrieval using Anserini
ECNU_BM25_1¶
- Run ID: ECNU_BM25_1
- Participant: ECNU
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/1/2019
- Type: automatic
- Task: passages
- Run description: just BM25 using Anserini
ECNU_ReRank1¶
- Run ID: ECNU_ReRank1
- Participant: ECNU
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/1/2019
- Type: automatic
- Task: passages
- Run description: rerank the BM25 results produced by Anserini: topic and paragraph -> biLSTM -> self-attention layer -> biLSTM -> trained scoring function -> score
ICT-BM25¶
- Run ID: ICT-BM25
- Participant: ICTNET
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 7/28/2019
- Type: automatic
- Task: passages
- Run description: D2Q (Document Expansion by Query Prediction) + BM25
ICT-DRMMTKS¶
- Run ID: ICT-DRMMTKS
- Participant: ICTNET
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 7/29/2019
- Type: automatic
- Task: passages
- Run description: Document Expansion by Query Prediction -> BM25 -> DRMMTKS
IRIT_run1¶
- Run ID: IRIT_run1
- Participant: IRIT
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 7/30/2019
- Type: automatic
- Task: passages
- Run description: 1. Indexing using Terrier 2. Retrieving the relevant documents for each query (query = page title + heading) with Terrier weighting models, then CombMNZ combination 3. In a round-robin fashion, the top document of each part is selected until the paragraph budget is reached. A paragraph appears only once per outline
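The round-robin selection in step 3 might look like the following sketch; `per_heading` (a map from each heading of the outline to its ranked candidates) and `budget` are hypothetical names, not IRIT's code:

```python
from collections import deque

def round_robin(per_heading, budget):
    """Pick the top remaining passage of each heading in turn until
    `budget` passages are selected; a passage ID is used at most once
    per outline."""
    queues = {h: deque(ranked) for h, ranked in per_heading.items()}
    selected, used = [], set()
    while len(selected) < budget and any(queues.values()):
        for h, q in queues.items():
            while q:
                pid = q.popleft()
                if pid not in used:
                    selected.append((h, pid))
                    used.add(pid)
                    break
            if len(selected) >= budget:
                break
    return selected

# "a" is ranked for both headings but may only appear once.
picks = round_robin({"h1": ["a", "b"], "h2": ["a", "c"]}, budget=3)
```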
IRIT_run2¶
- Run ID: IRIT_run2
- Participant: IRIT
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 7/30/2019
- Type: automatic
- Task: passages
- Run description: 1. Indexing using Terrier 2. Retrieving the relevant documents for each query (query = page title + heading) with Terrier weighting models, then CombMNZ combination 3. Passages are selected as a function of their score, their rank for the heading, and the number of paragraphs already returned for the heading
IRIT_run3¶
- Run ID: IRIT_run3
- Participant: IRIT
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 7/30/2019
- Type: automatic
- Task: passages
- Run description: 1. Indexing using Terrier 2. Retrieving the relevant documents for each query (query = page title + heading) with Terrier weighting models, then CombMNZ combination 3. Passages are selected as a function of their score, their rank for the heading, and the number of paragraphs already returned for the heading
neural¶
- Run ID: neural
- Participant: TREMA-UNH
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/1/2019
- Type: automatic
- Task: passages
- Run description: This method is a two-layer neural network. The first layer takes as input passages embedded with ELMo, along with relevance scores for these passages generated by taking the inverse-rank score of 28 other passage rankings for the query (ranked with methods such as BM25 and query likelihood). These are page-level runs (title-only, and title plus all sections in the page), available at: http://trec-car.cs.unh.edu/inputruns/. The final input to the first layer is the query vector, also embedded with ELMo. These vectors are combined by a linear layer plus an activation function; the output of this layer is fed into a second linear layer, producing a logit score for each retrieved passage. Softmax with binary cross-entropy is used as the loss function, predicting the relevance of a passage to a query using the page-level qrels in Y1 train and Y1 test as labels. The passages are then ordered by their logit score, and the top 20 passages per query are kept
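The inverse-rank feature described above (one value per input run, per passage) reduces to reciprocal rank, with zero for passages absent from a run; a sketch under that assumption:

```python
def inverse_rank_features(passage_id, runs):
    """For each run (a ranked list of passage IDs), the feature is
    1 / rank if the passage appears in the run, else 0."""
    feats = []
    for ranking in runs:
        try:
            feats.append(1.0 / (ranking.index(passage_id) + 1))
        except ValueError:
            feats.append(0.0)  # passage not retrieved by this run
    return feats

runs = [["p1", "p2", "p3"], ["p2", "p1"], ["p3"]]
feats = inverse_rank_features("p1", runs)
```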
ReRnak2_BERT¶
- Run ID: ReRnak2_BERT
- Participant: ECNU
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/1/2019
- Type: automatic
- Task: passages
- Run description: use BERT [CLS] to score and rerank the results produced by Anserini BM25
ReRnak3_BERT¶
- Run ID: ReRnak3_BERT
- Participant: ECNU
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/1/2019
- Type: automatic
- Task: passages
- Run description: use BERT [CLS] to score and re-rank the results produced by Anserini BM25
UNH-bm25-ecmpsg¶
- Run ID: UNH-bm25-ecmpsg
- Participant: TREMA-UNH
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/1/2019
- Type: automatic
- Task: passages
- Run description: Produced with the y3_convert_ranking_to_ordering.py script provided by the organizers (removing duplicate passages). Uses BM25 with ecm-psg expansion on the paragraphs from the paragraphCorpus.
UNH-bm25-rm¶
- Run ID: UNH-bm25-rm
- Participant: TREMA-UNH
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/1/2019
- Type: automatic
- Task: passages
- Run description: Produced with the y3_convert_ranking_to_ordering.py script provided by the organizers (removing duplicate passages). Uses BM25 with rm expansion on the paragraphs from the paragraphCorpus.
UNH-bm25-stem¶
- Run ID: UNH-bm25-stem
- Participant: TREMA-UNH
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/2/2019
- Type: manual
- Task: passages
- Run description: A candidate pool of 20 passages is created for each article from the input run. These 20 passages are then reranked based on the average BM25 similarity score of each passage pair to produce the ordering.
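Ordering a pool by average pairwise similarity can be sketched as below; a plain term-overlap score stands in for the BM25 passage-pair similarity, which is an assumption for illustration only:

```python
def rerank_by_avg_similarity(passages, sim):
    """Order passages by their average similarity to every other passage
    in the candidate pool (highest average first)."""
    def avg_sim(p):
        others = [q for q in passages if q is not p]
        return sum(sim(p, q) for q in others) / max(len(others), 1)
    return sorted(passages, key=avg_sim, reverse=True)

# Toy stand-in for a BM25 passage-pair score.
overlap = lambda a, b: len(set(a.split()) & set(b.split()))

pool = ["tea leaves", "green tea", "coffee roast"]
ordered = rerank_by_avg_similarity(pool, overlap)
```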
UNH-dl100¶
- Run ID: UNH-dl100
- Participant: TREMA-UNH
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/2/2019
- Type: manual
- Task: passages
- Run description: The method models similarity between passages using a siamese network architecture with 3 dense layers, each of size 100. Pretrained ELMo embeddings are used to represent each paragraph for training. It is trained on the benchmarkY1 train dataset. The trained similarity metric is used to rerank the top 20 passages from the input run for each article to produce the ordering.
UNH-dl300¶
- Run ID: UNH-dl300
- Participant: TREMA-UNH
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/2/2019
- Type: manual
- Task: passages
- Run description: The method models similarity between passages using a siamese network architecture with 3 dense layers, each of size 300. Pretrained ELMo embeddings are used to represent each paragraph for training. It is trained on the benchmarkY1 train dataset. The trained similarity metric is used to rerank the top 20 passages from the input run for each article to produce the ordering.
UNH-ecn¶
- Run ID: UNH-ecn
- Participant: TREMA-UNH
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/1/2019
- Type: manual
- Task: passages
- Run description: We start with an entity ranking and a passage ranking. For every query-entity pair in the entity ranking, we first build an entity context document (ECD) consisting of passages (from the passage ranking) mentioning the entity. We find entities co-occurring with this entity in the ECD and rank them by frequency of appearance. Then we score a passage in the ECD by summing the co-occurrence scores of the entities in the passage.
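With passages represented as sets of the entities they mention (a simplifying assumption for illustration), the ECD construction and co-occurrence scoring might be sketched as:

```python
from collections import Counter

def score_ecd_passages(target_entity, passages):
    """passages: list of (passage_id, set_of_entities).

    Build the entity context document (ECD) for `target_entity`, count
    co-occurring entities across the ECD, and score each ECD passage by
    the summed co-occurrence counts of the entities it mentions."""
    ecd = [(pid, ents) for pid, ents in passages if target_entity in ents]
    cooc = Counter()
    for _, ents in ecd:
        cooc.update(e for e in ents if e != target_entity)
    return sorted(
        ((pid, sum(cooc[e] for e in ents if e != target_entity))
         for pid, ents in ecd),
        key=lambda x: x[1], reverse=True)

passages = [
    ("p1", {"Tea", "China", "Camellia"}),
    ("p2", {"Tea", "China"}),
    ("p3", {"Coffee", "Brazil"}),
]
scores = score_ecd_passages("Tea", passages)
```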
UNH-qee¶
- Run ID: UNH-qee
- Participant: TREMA-UNH
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/1/2019
- Type: manual
- Task: passages
- Run description: We begin with an entity run and a passage run. For every query-entity pair we concatenate all passages (from the passage ranking) mentioning the entity to create an entity context document (ECD). We find the frequency of all entities in the ECD and rank the entities by this score. We take the top 20 entities from this ranking to expand the query and retrieve passages with the expanded query.
UNH-tfidf-lem¶
- Run ID: UNH-tfidf-lem
- Participant: TREMA-UNH
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/2/2019
- Type: manual
- Task: passages
- Run description: For each article, the top 20 passages are retrieved from the input run and reranked using TF-IDF cosine similarity. Starting with the first passage in the original ranking, we find the most similar passage in the candidate pool. Then we find the passage most similar to that second passage among the remaining passages. We repeat this until all 20 slots are filled. We use the lemmatized version of the passages.
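The greedy chain described here (keep the first passage, then repeatedly append the candidate most similar to the last placed one) can be sketched with a small hand-rolled TF-IDF; whitespace tokenization is an assumption, and this is not the participants' code:

```python
import math
from collections import Counter

def tfidf_vectors(passages):
    """Build sparse TF-IDF vectors for whitespace-tokenized passages."""
    docs = [Counter(p.split()) for p in passages]
    n = len(docs)
    df = Counter(t for d in docs for t in d)
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return [{t: c * idf[t] for t, c in d.items()} for d in docs]

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def greedy_chain_order(passages):
    """Keep the first passage; repeatedly append the remaining candidate
    most similar to the previously placed passage."""
    vecs = tfidf_vectors(passages)
    order, remaining = [0], set(range(1, len(passages)))
    while remaining:
        prev = order[-1]
        nxt = max(remaining, key=lambda i: cosine(vecs[prev], vecs[i]))
        order.append(nxt)
        remaining.discard(nxt)
    return [passages[i] for i in order]

pool = ["green tea brewing", "coffee beans roast", "green tea health", "roast levels"]
ordered = greedy_chain_order(pool)
```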
UNH-tfidf-ptsim¶
- Run ID: UNH-tfidf-ptsim
- Participant: TREMA-UNH
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/1/2019
- Type: manual
- Task: passages
- Run description: For each article, the top 20 passages are retrieved from the input run and reranked using TF-IDF cosine similarity. Starting with the first passage in the original ranking, we find the most similar passage in the candidate pool. Then we find the passage most similar to that second passage among the remaining passages. We repeat this until all 20 slots are filled.
UNH-tfidf-stem¶
- Run ID: UNH-tfidf-stem
- Participant: TREMA-UNH
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/1/2019
- Type: manual
- Task: passages
- Run description: For each article we retrieve 20 passages to form the candidate pool, then rerank them using TF-IDF cosine similarity of the stemmed passages.
UvABM25RM3¶
- Run ID: UvABM25RM3
- Participant: UAmsterdam
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/2/2019
- Type: automatic
- Task: passages
- Run description: BM25+RM3
UvABottomUp1¶
- Run ID: UvABottomUp1
- Participant: UAmsterdam
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/2/2019
- Type: automatic
- Task: passages
- Run description: Use BM25+RM3 to choose the paragraphs to be populated.
UvABottomUp2¶
- Run ID: UvABottomUp2
- Participant: UAmsterdam
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/2/2019
- Type: automatic
- Task: passages
- Run description: BM25+RM3, order preserved
UvABottomUpChangeOrder¶
- Run ID: UvABottomUpChangeOrder
- Participant: UAmsterdam
- Track: Complex Answer Retrieval
- Year: 2019
- Submission: 8/2/2019
- Type: automatic
- Task: passages
- Run description: BM25+RM3, order changed