Run description: First, we use Lucene to select candidate paragraphs, and then we use the BM25 score and word matching as features to train a ranking model with RankLib.
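A minimal sketch of the feature-extraction step behind this run, in pure Python (the actual run uses Lucene and RankLib; the function names, the k1/b defaults, and the two-feature layout here are illustrative assumptions): it computes a BM25 score and a word-match feature for a candidate paragraph and emits one training line in RankLib's LETOR-style feature-file format.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, df, n_docs, avgdl, k1=1.2, b=0.75):
    """BM25 score of a candidate paragraph against the query.
    df: document frequency per term; avgdl: average paragraph length."""
    tf = Counter(doc_terms)
    score = 0.0
    for t in set(query_terms):
        if t not in tf:
            continue
        idf = math.log((n_docs - df.get(t, 0) + 0.5) / (df.get(t, 0) + 0.5) + 1.0)
        num = tf[t] * (k1 + 1)
        den = tf[t] + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * num / den
    return score

def word_match_feature(query_terms, doc_terms):
    """Fraction of distinct query terms that appear in the paragraph."""
    qset, dset = set(query_terms), set(doc_terms)
    return len(qset & dset) / max(len(qset), 1)

def ranklib_line(label, qid, features):
    """One training line in the LETOR-style format RankLib reads:
    <label> qid:<id> 1:<f1> 2:<f2> ..."""
    feats = " ".join(f"{i + 1}:{v:.4f}" for i, v in enumerate(features))
    return f"{label} qid:{qid} {feats}"
```

One such line per (query, candidate paragraph) pair, with relevance labels from the training qrels, would form the input to RankLib's trainer.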
Run description: Simple document classifier using the average of a document's word embeddings as the document vector and the last hidden state of an LSTM as the query vector. A two-layer feed-forward neural net selects which documents are relevant given a query.
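A toy, pure-Python sketch of this scoring architecture (the vocabulary, dimensions, and random weights are stand-ins; the real run would use pretrained embeddings and trained parameters): the document vector is the average of its word embeddings, the query vector is the last hidden state of a minimal LSTM, and a two-layer feed-forward net maps their concatenation to a relevance score.

```python
import math, random

random.seed(0)
DIM = 8  # toy embedding / hidden size

def rand_mat(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# toy word embeddings (pretrained vectors in the actual run)
vocab = ["neural", "networks", "are", "fun", "query"]
emb = {w: [random.uniform(-1, 1) for _ in range(DIM)] for w in vocab}

def doc_vector(tokens):
    """Document vector = average of the word embeddings in the document."""
    vecs = [emb[t] for t in tokens if t in emb]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

class LSTMEncoder:
    """Minimal LSTM; the query vector is the last hidden state."""
    def __init__(self, dim):
        self.W = {g: rand_mat(dim, dim) for g in "ifoc"}  # input weights
        self.U = {g: rand_mat(dim, dim) for g in "ifoc"}  # recurrent weights
    def encode(self, tokens):
        h, c = [0.0] * DIM, [0.0] * DIM
        for t in tokens:
            x = emb.get(t, [0.0] * DIM)
            i = [sigmoid(a + b) for a, b in zip(matvec(self.W["i"], x), matvec(self.U["i"], h))]
            f = [sigmoid(a + b) for a, b in zip(matvec(self.W["f"], x), matvec(self.U["f"], h))]
            o = [sigmoid(a + b) for a, b in zip(matvec(self.W["o"], x), matvec(self.U["o"], h))]
            g = [math.tanh(a + b) for a, b in zip(matvec(self.W["c"], x), matvec(self.U["c"], h))]
            c = [fi * ci + ii * gi for fi, ci, ii, gi in zip(f, c, i, g)]
            h = [oi * math.tanh(ci) for oi, ci in zip(o, c)]
        return h

def relevance(query_vec, doc_vec, W1, W2):
    """Two-layer feed-forward net over the concatenated query/document vectors."""
    hidden = [math.tanh(v) for v in matvec(W1, query_vec + doc_vec)]
    return sigmoid(matvec(W2, hidden)[0])
```

In training, `relevance` would be fit against binary relevance labels; here the weights are random, so the output is only shape-correct, not meaningful.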
Run description: Ranks entities by PageRank on a sub-graph that is extracted as follows: 1) edges in the KG are associated with paragraph-long text; 2) a BM25 model is used to retrieve edges in response to the query; 3) edges are weighted according to their frequency; 4) PageRank on this weighted graph is used to rank entities. Support paragraphs are taken from the paragraph associated with the entity's highest-ranking edge.
Run description: Ranks entities by degree centrality on a sub-graph that is extracted as follows: 1) edges in the KG are associated with paragraph-long text; 2) a BM25 model is used to retrieve edges in response to the query; 3) edges are weighted according to their reciprocal rank; 4) degree centrality on this weighted graph is used to rank entities. Support paragraphs are taken from the paragraph associated with the entity's highest-ranking edge.
Run description: Ranks entities by Personalized PageRank on a sub-graph that is extracted as follows: 1) edges in the KG are associated with paragraph-long text; 2) a BM25 model is used to retrieve edges in response to the query; 3) edges are weighted according to their frequency; 4) seed nodes are retrieved with BM25 from an entity index of the unprocessed training data; 5) Personalized PageRank on this weighted graph with the seed nodes is used to rank entities. Support paragraphs are taken from the paragraph associated with the entity's highest-ranking edge.
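The graph-construction and ranking steps shared by the three runs above can be sketched in pure Python (the toy edge list and function names are illustrative; the BM25 edge retrieval itself is assumed to have already happened). `edge_weights` implements both the frequency and the reciprocal-rank weighting schemes, `degree_centrality` covers the degree-based run, and `pagerank` covers plain PageRank as well as the Personalized variant when seed nodes are supplied.

```python
from collections import defaultdict

# Toy sub-graph: each retrieved KG edge is (entity_u, entity_v), listed
# in BM25 rank order; repeated edges were retrieved for several queries.
ranked_edges = [("A", "B"), ("B", "C"), ("A", "B"), ("A", "C")]

def edge_weights(edges, scheme="frequency"):
    """Weight each undirected edge by retrieval frequency, or by the
    sum of reciprocal ranks when scheme != 'frequency'."""
    w = defaultdict(float)
    for rank, e in enumerate(edges, start=1):
        key = tuple(sorted(e))
        w[key] += 1.0 if scheme == "frequency" else 1.0 / rank
    return w

def degree_centrality(weights):
    """Weighted degree centrality of each node."""
    deg = defaultdict(float)
    for (u, v), w in weights.items():
        deg[u] += w
        deg[v] += w
    return dict(deg)

def pagerank(weights, seeds=None, d=0.85, iters=50):
    """(Personalized) PageRank on the weighted, undirected sub-graph.
    With `seeds`, teleportation mass goes only to the seed nodes."""
    nbrs = defaultdict(dict)
    for (u, v), w in weights.items():
        nbrs[u][v] = nbrs[u].get(v, 0.0) + w
        nbrs[v][u] = nbrs[v].get(u, 0.0) + w
    nodes = list(nbrs)
    if seeds:
        tele = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    else:
        tele = {n: 1.0 / len(nodes) for n in nodes}
    pr = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - d) * tele[n] for n in nodes}
        for u in nodes:
            total = sum(nbrs[u].values())
            for v, w in nbrs[u].items():
                nxt[v] += d * pr[u] * w / total
        pr = nxt
    return pr
```

Entities would then be ranked by their centrality or PageRank value, and each entity's support paragraph taken from its highest-ranking retrieved edge.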
Run description: BM25 run using the concatenation of the heading, parent headings, and page title as the keyword query, with query expansion from two sources: 1) the query is entity-linked with TagMe, and terms from the first paragraph of each linked entity are used for expansion (RM3-style); 2) if the same heading is contained in another article, expansion terms from those sections are used (RM3-style). (This method is in the spirit of the WikiKreator system.) Balancing parameters are manually adjusted on test200.
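A sketch of how such a weighted expansion query might be assembled (pure Python; the function names, the whitespace tokenizer, and the default balancing weights are assumptions, as is using relative term frequency as the RM3-style weight): expansion terms from the entity first paragraphs and from same-heading sections are mixed into the original keyword query with manually tuned interpolation weights.

```python
from collections import Counter

def tokenize(text):
    return text.lower().split()

def rm3_terms(feedback_texts, k=10):
    """Top-k expansion terms weighted by relative frequency across the
    feedback texts (a crude RM3-style relevance-model estimate)."""
    counts = Counter(t for text in feedback_texts for t in tokenize(text))
    total = sum(counts.values())
    if total == 0:
        return {}
    return {t: c / total for t, c in counts.most_common(k)}

def expanded_query(heading, parent_headings, title,
                   entity_paragraphs, same_heading_sections,
                   w_orig=0.6, w_ent=0.2, w_sec=0.2):
    """Weighted query: original keywords plus the two expansion sources.
    w_orig / w_ent / w_sec are the manually tuned balancing parameters."""
    query = Counter()
    for t in tokenize(" ".join([heading] + parent_headings + [title])):
        query[t] += w_orig
    for t, w in rm3_terms(entity_paragraphs).items():
        query[t] += w_ent * w
    for t, w in rm3_terms(same_heading_sections).items():
        query[t] += w_sec * w
    return dict(query)
```

The resulting term-weight map would be issued as a weighted (boosted) BM25 query; on test200, the three balancing weights would be swept by hand.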
Run description: 1. Initial search: 1.1 parse the article title and section names for use as part of the query in a Lucene search with BM25; 1.2 augment the query with entities extracted from the DBpedia long-abstracts data set via DBpedia Spotlight tagging. 2. Re-ranking: 2.1 use RankLib's AdaRank implementation to re-rank the paragraphs.
Run description: 1. Initial search: 1.1 parse the article title and section names for use as part of the query in a Lucene search with BM25; 1.2 augment the query with entities extracted from the DBpedia long-abstracts data set via DBpedia Spotlight tagging. 2. Re-ranking: 2.1 use a neural learning-to-rank model to re-rank the paragraphs from the initial search.
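The two-stage search-then-re-rank pipeline of these runs can be sketched as follows (pure Python; the toy overlap scorers stand in for Lucene/BM25 and for the trained AdaRank or neural re-ranker, and all names here are hypothetical):

```python
def build_query(title, section_names, entity_labels):
    """Stage 1 query: article title + section names, augmented with
    entity labels (DBpedia Spotlight annotations in the actual runs)."""
    return " ".join([title] + section_names + entity_labels).lower().split()

def initial_search(query_terms, paragraphs, score_fn, k=100):
    """Rank all paragraphs with a first-stage scorer and keep the top k."""
    scored = [(score_fn(query_terms, p.lower().split()), p) for p in paragraphs]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [p for _, p in scored[:k]]

def rerank(query_terms, candidates, model_fn):
    """Stage 2: re-order candidates with a learned model
    (AdaRank or a neural learning-to-rank model in the actual runs)."""
    return sorted(candidates, key=lambda p: model_fn(query_terms, p), reverse=True)

# simple stand-in scorers for illustration only
def overlap(query_terms, doc_terms):
    return len(set(query_terms) & set(doc_terms))

def toy_model(query_terms, paragraph):
    return overlap(query_terms, paragraph.lower().split())
```

Keeping the re-ranker behind a single `model_fn` interface reflects how the two runs differ only in stage 2 while sharing the same initial search.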