Proceedings - LiveQA 2015

Overview of the TREC 2015 LiveQA Track

Eugene Agichtein, David Carmel, Dan Pelleg, Yuval Pinter, Donna Harman

Abstract

The automated question answering (QA) track, one of the most popular tracks in TREC for many years, has focused on the task of providing automatic answers for human questions. The track primarily dealt with factual questions, and the answers provided by participants were extracted from a corpus of news articles. While the task evolved to model increasingly realistic information needs, addressing question series, list questions, and even interactive feedback, a major limitation remained: the questions did not come directly from real users, in real time. The LiveQA track, conducted for the first time this year, focused on real-time question answering for real-user questions. Real user questions, i.e., fresh questions submitted on the Yahoo Answers (YA) site that have not yet been answered, were sent to the participant systems, which provided an answer in real time. Returned answers were judged by TREC editors on a 4-level Likert scale.
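
The protocol the abstract describes, with questions pushed to participant systems that must answer in real time, can be sketched as a participant-side answering service. The following Python sketch is purely illustrative: the HTTP transport, the field names (title, body), the port, and the answer-length cap are assumptions, not the track's actual interface specification.

    # Hypothetical sketch of a LiveQA-style participant service. A question
    # arrives as an HTTP POST; the system must reply with an answer string
    # within the track's real-time deadline. Field names are assumptions.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs

    MAX_ANSWER_CHARS = 1000  # assumed cap on answer length

    def answer_question(title: str, body: str) -> str:
        # placeholder for a real QA pipeline
        return ("This is a stub answer for: " + title)[:MAX_ANSWER_CHARS]

    class LiveQAHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            fields = parse_qs(self.rfile.read(length).decode("utf-8"))
            title = fields.get("title", [""])[0]
            body = fields.get("body", [""])[0]
            answer = answer_question(title, body)
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            self.wfile.write(answer.encode("utf-8"))

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), LiveQAHandler).serve_forever()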

Bibtex
@inproceedings{DBLP:conf/trec/AgichteinCPPH15,
    author = {Eugene Agichtein and David Carmel and Dan Pelleg and Yuval Pinter and Donna Harman},
    editor = {Ellen M. Voorhees and Angela Ellis},
    title = {Overview of the {TREC} 2015 LiveQA Track},
    booktitle = {Proceedings of The Twenty-Fourth Text REtrieval Conference, {TREC} 2015, Gaithersburg, Maryland, USA, November 17-20, 2015},
    series = {{NIST} Special Publication},
    volume = {500-319},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2015},
    url = {http://trec.nist.gov/pubs/trec24/papers/Overview-QA.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/AgichteinCPPH15.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

RMIT at the TREC 2015 LiveQA Track

Ruey-Cheng Chen, J. Shane Culpepper, Tadele Tedla Damessie, Timothy Jones, Ahmed Mourad, Kevin Ong, Falk Scholer, Evi Yulianti

Abstract

This paper describes the four systems RMIT fielded for the TREC 2015 LiveQA task and the associated experiments. The challenge results show that the base run RMIT-0 achieved above-average performance, but the other attempted improvements all decreased retrieval effectiveness.

Bibtex
@inproceedings{DBLP:conf/trec/ChenCD0MOSY15,
    author = {Ruey{-}Cheng Chen and J. Shane Culpepper and Tadele Tedla Damessie and Timothy Jones and Ahmed Mourad and Kevin Ong and Falk Scholer and Evi Yulianti},
    editor = {Ellen M. Voorhees and Angela Ellis},
    title = {{RMIT} at the {TREC} 2015 LiveQA Track},
    booktitle = {Proceedings of The Twenty-Fourth Text REtrieval Conference, {TREC} 2015, Gaithersburg, Maryland, USA, November 17-20, 2015},
    series = {{NIST} Special Publication},
    volume = {500-319},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2015},
    url = {http://trec.nist.gov/pubs/trec24/papers/RMIT-QA.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/ChenCD0MOSY15.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

NudtMDP at TREC 2015 LiveQA Track

Yuanping Nie, Jiuming Huang, Zongsheng Xie, Hai Li, Pengfei Zhang, Yan Jia

Abstract

In this paper, we describe a web-based online question answering system that was evaluated in the TREC 2015 LiveQA task. Automatic question answering is a classic and widely studied technology; TREC has hosted QA tracks eight times since 1999. However, the TREC results show that there is still a long way to go before questions can be answered perfectly. LiveQA questions are questions asked by 'real users'. Most LiveQA questions are non-factoid, and answering them is much more challenging than answering factoid questions. We built a question answering system that finds answers in web data. The system has two channels: one uses a search engine to obtain answers, and the other focuses on community question answering websites. We submitted 3 runs in the official test; two of our runs performed much better than the average scores.
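
The two-channel design described above can be reduced to a minimal sketch: candidates from a web search channel and a community QA channel are pooled, and the highest-scoring one is returned. Both channels below are hypothetical stubs (the paper does not publish code), as are the scores.

    # Sketch of the two-channel idea; all functions are illustrative stand-ins.
    def search_engine_candidates(question):
        # e.g., issue the question as a query and extract result snippets
        return [("snippet from a web search result", 0.42)]

    def cqa_candidates(question):
        # e.g., look up similar resolved questions on a CQA website
        return [("best answer of a similar CQA question", 0.58)]

    def answer(question):
        candidates = search_engine_candidates(question) + cqa_candidates(question)
        if not candidates:
            return None
        text, _score = max(candidates, key=lambda c: c[1])
        return text

    print(answer("How do I fix a flat bicycle tire?"))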

Bibtex
@inproceedings{DBLP:conf/trec/NieHXLZJ15,
    author = {Yuanping Nie and Jiuming Huang and Zongsheng Xie and Hai Li and Pengfei Zhang and Yan Jia},
    editor = {Ellen M. Voorhees and Angela Ellis},
    title = {NudtMDP at {TREC} 2015 LiveQA Track},
    booktitle = {Proceedings of The Twenty-Fourth Text REtrieval Conference, {TREC} 2015, Gaithersburg, Maryland, USA, November 17-20, 2015},
    series = {{NIST} Special Publication},
    volume = {500-319},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2015},
    url = {http://trec.nist.gov/pubs/trec24/papers/NUDTMDP-QA.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/NieHXLZJ15.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Ranking Answers and Web Passages for Non-factoid Question Answering: Emory University at TREC LiveQA

Denis Savenkov

Abstract

This paper describes a question answering system built by a team from Emory University to participate in the TREC LiveQA'15 shared task. The goal of this task was to automatically answer questions posted to the Yahoo! Answers community question answering website in real time. My system combines candidates extracted from answers to similar questions previously posted to Yahoo! Answers with web passages from documents retrieved using web search. The candidates are ranked by a trained linear model, and the top candidate is returned as the final answer. The ranking model is trained on question and answer (QnA) pairs from the Yahoo! Answers archive using a pairwise ranking criterion. Candidates are represented with a set of features, which includes statistics about the candidate text, question term matches and retrieval scores, associations between question and candidate text terms, and the score returned by a Long Short-Term Memory (LSTM) neural network model. The system ranked in the top 5 by answer precision and took 7th place by average answer score. In this paper I describe the approach in detail and present the results and an analysis of the system.
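
The pairwise training setup described above can be sketched as follows: each (question, better answer, worse answer) triple becomes a feature-difference example for a linear classifier, which then scores and ranks unseen candidates. The features here (term overlap, length) are toy stand-ins for the paper's feature set, and the data is invented for illustration.

    # Minimal pairwise-ranking sketch with a linear model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def features(question, candidate):
        q, c = set(question.lower().split()), set(candidate.lower().split())
        overlap = len(q & c) / max(len(q), 1)       # question-term match rate
        return np.array([overlap, len(c) / 100.0])  # toy feature vector

    # (question, good candidate, bad candidate) triples
    triples = [
        ("how to boil an egg", "boil the egg for ten minutes", "buy a new car"),
        ("best way to learn python",
         "practice python daily with small projects", "eggs boil in water"),
    ]

    X, y = [], []
    for q, good, bad in triples:
        diff = features(q, good) - features(q, bad)
        X.extend([diff, -diff])   # both orderings, symmetric labels
        y.extend([1, 0])

    model = LogisticRegression().fit(np.array(X), np.array(y))

    def rank(question, candidates):
        # score each candidate with the learned linear function and sort
        scores = model.decision_function(
            np.array([features(question, c) for c in candidates]))
        return [c for _, c in sorted(zip(scores, candidates), reverse=True)]

    print(rank("how to boil an egg",
               ["buy a new car", "boil the egg for ten minutes"]))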

Bibtex
@inproceedings{DBLP:conf/trec/Savenkov15,
    author = {Denis Savenkov},
    editor = {Ellen M. Voorhees and Angela Ellis},
    title = {Ranking Answers and Web Passages for Non-factoid Question Answering: Emory University at {TREC} LiveQA},
    booktitle = {Proceedings of The Twenty-Fourth Text REtrieval Conference, {TREC} 2015, Gaithersburg, Maryland, USA, November 17-20, 2015},
    series = {{NIST} Special Publication},
    volume = {500-319},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2015},
    url = {http://trec.nist.gov/pubs/trec24/papers/emory-QA.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/Savenkov15.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Question/Answer Matching for Yahoo! Answers Using a Corpus-Based Extracted Ngram-based Mapping

Stalin Varanasi, Günter Neumann

Abstract

This report describes the work done by the QA group of the Multilingual Technologies Lab at DFKI for the 2015 edition of the TREC LiveQA track. We describe the system, the issues faced, and the approaches followed given the timeline of the track.

Bibtex
@inproceedings{DBLP:conf/trec/VaranasiN15,
    author = {Stalin Varanasi and G{\"{u}}nter Neumann},
    editor = {Ellen M. Voorhees and Angela Ellis},
    title = {Question/Answer Matching for Yahoo! Answers Using a Corpus-Based Extracted Ngram-based Mapping},
    booktitle = {Proceedings of The Twenty-Fourth Text REtrieval Conference, {TREC} 2015, Gaithersburg, Maryland, USA, November 17-20, 2015},
    series = {{NIST} Special Publication},
    volume = {500-319},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2015},
    url = {http://trec.nist.gov/pubs/trec24/papers/dfkiqa-QA.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/VaranasiN15.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

WaterlooClarke: TREC 2015 LiveQA Track

Alexandra Vtyurina, Ankita Dey, Bahareh Sarrafzadeh, Charles L. A. Clarke

Abstract

The goal of the LiveQA track is to automatically provide answers to questions posted by real people. Previous question answering tracks included factoid questions, list questions, and complex questions [3]. Presented for the first time in 2015, the LiveQA track gave participants an opportunity to answer questions posed by real people, as opposed to the manually constructed ones of previous tasks. The questions for the task were harvested from Yahoo! Answers, a community question answering website. Each question was broadcast to all registered systems, and the participants were expected to provide an answer to every question within a timeframe of 60 seconds. The answers were judged by human NIST assessors after the evaluation was over.
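
The 60-second deadline mentioned above implies that a participant system needs a fallback when its pipeline runs long. A minimal sketch, assuming a single-worker pipeline, a small safety margin for network transfer, and a canned fallback answer (all assumptions, not details from the paper):

    # Enforce a per-question deadline around a (hypothetical) QA pipeline.
    from concurrent.futures import ThreadPoolExecutor, TimeoutError

    TIME_LIMIT_S = 60          # per-question deadline from the track setup
    FALLBACK = "I'm sorry, I could not find a good answer in time."

    def slow_pipeline(question: str) -> str:
        # stand-in for retrieval + ranking; may exceed the deadline
        return "a carefully ranked answer to: " + question

    def answer_within_deadline(question: str) -> str:
        pool = ThreadPoolExecutor(max_workers=1)
        future = pool.submit(slow_pipeline, question)
        try:
            # leave a margin for sending the answer back before the cutoff
            return future.result(timeout=TIME_LIMIT_S - 5)
        except TimeoutError:
            return FALLBACK
        finally:
            pool.shutdown(wait=False)  # don't block on a straggling worker

    print(answer_within_deadline("What is the capital of Australia?"))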

Bibtex
@inproceedings{DBLP:conf/trec/VtyurinaDSC15,
    author = {Alexandra Vtyurina and Ankita Dey and Bahareh Sarrafzadeh and Charles L. A. Clarke},
    editor = {Ellen M. Voorhees and Angela Ellis},
    title = {WaterlooClarke: {TREC} 2015 LiveQA Track},
    booktitle = {Proceedings of The Twenty-Fourth Text REtrieval Conference, {TREC} 2015, Gaithersburg, Maryland, USA, November 17-20, 2015},
    series = {{NIST} Special Publication},
    volume = {500-319},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2015},
    url = {http://trec.nist.gov/pubs/trec24/papers/WaterlooClarke-QA.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/VtyurinaDSC15.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

CMU OAQA at TREC 2015 LiveQA: Discovering the Right Answer with Clues

Di Wang, Eric Nyberg

Abstract

In this paper, we present CMU's automatic, web-based, real-time question answering (QA) system that was evaluated in the TREC 2015 LiveQA Challenge. This system answers real-user questions freshly submitted to the Yahoo! Answers website that have not been previously answered by humans. Given the title and body of the question, we generated multiple sets of keyword queries and retrieved a collection of web pages based on those queries. Then we extracted answer candidates from web pages in the form of answer passages and their associated clues. Finally, we combined both IR- and NLP-based relevance models to rank and select answer candidates. In the TREC 2015 LiveQA evaluations, human assessors gave our system an average score of 1.081 on a three-point scale, the highest average score achieved by a system in the competition (the second-best score was 0.677, and the average score was 0.465 for the 21 systems evaluated).
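
The final combination step described above can be illustrated with a simple linear interpolation of an IR-style score and an NLP-style score per candidate. Both scorers and the weight below are illustrative stand-ins, not the paper's actual relevance models.

    # Sketch of combining IR- and NLP-style relevance scores per candidate.
    def ir_score(question, passage):
        # toy retrieval score: fraction of question terms found in the passage
        q = set(question.lower().split())
        p = set(passage.lower().split())
        return len(q & p) / max(len(q), 1)

    def nlp_score(question, passage):
        # stand-in for a learned relevance model (the paper combines several)
        return min(len(passage) / 200.0, 1.0)

    def select_answer(question, candidates, alpha=0.6):
        def combined(p):
            return alpha * ir_score(question, p) + (1 - alpha) * nlp_score(question, p)
        return max(candidates, key=combined)

    print(select_answer("why is the sky blue",
                        ["the sky is blue because of rayleigh scattering",
                         "buy blue paint here"]))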

Bibtex
@inproceedings{DBLP:conf/trec/WangN15,
    author = {Di Wang and Eric Nyberg},
    editor = {Ellen M. Voorhees and Angela Ellis},
    title = {{CMU} {OAQA} at {TREC} 2015 LiveQA: Discovering the Right Answer with Clues},
    booktitle = {Proceedings of The Twenty-Fourth Text REtrieval Conference, {TREC} 2015, Gaithersburg, Maryland, USA, November 17-20, 2015},
    series = {{NIST} Special Publication},
    volume = {500-319},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2015},
    url = {http://trec.nist.gov/pubs/trec24/papers/oaqa-QA.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/WangN15.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Leverage Web-based Answer Retrieval and Hierarchical Answer Selection to Improve the Performance of Live Question Answering

GuoShun Wu, Man Lan

Abstract

This paper reports on East China Normal University's participation in the TREC 2015 LiveQA track. An overview is presented to introduce our community question answering system and discuss its technologies. This year, the TREC LiveQA track expands the traditional QA track, focusing on "live" question answering for real-user questions. For this challenge, we built a real-time community question answering system. Our results are presented at the end of the paper.

Bibtex
@inproceedings{DBLP:conf/trec/WuL15,
    author = {GuoShun Wu and Man Lan},
    editor = {Ellen M. Voorhees and Angela Ellis},
    title = {Leverage Web-based Answer Retrieval and Hierarchical Answer Selection to Improve the Performance of Live Question Answering},
    booktitle = {Proceedings of The Twenty-Fourth Text REtrieval Conference, {TREC} 2015, Gaithersburg, Maryland, USA, November 17-20, 2015},
    series = {{NIST} Special Publication},
    volume = {500-319},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2015},
    url = {http://trec.nist.gov/pubs/trec24/papers/ecnucs-QA.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/WuL15.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

ECNU at TREC 2015: LiveQA Track

Weiqian Zhang, Weijie An, Jinchao Ma, Yan Yang, Qinmin Hu, Liang He

Abstract

This paper reports on East China Normal University's participation in the TREC 2015 LiveQA track. An overview is presented to introduce our community question answering system and discuss its technologies. This year, the TREC LiveQA track expands the traditional QA track, focusing on "live" question answering for real-user questions. For this challenge, we built a real-time community question answering system. Our results are presented at the end of the paper.

Bibtex
@inproceedings{DBLP:conf/trec/ZhangAMYHH15,
    author = {Weiqian Zhang and Weijie An and Jinchao Ma and Yan Yang and Qinmin Hu and Liang He},
    editor = {Ellen M. Voorhees and Angela Ellis},
    title = {{ECNU} at {TREC} 2015: LiveQA Track},
    booktitle = {Proceedings of The Twenty-Fourth Text REtrieval Conference, {TREC} 2015, Gaithersburg, Maryland, USA, November 17-20, 2015},
    series = {{NIST} Special Publication},
    volume = {500-319},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2015},
    url = {http://trec.nist.gov/pubs/trec24/papers/ECNU-QA.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/ZhangAMYHH15.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

CLIP at TREC 2015: Microblog and LiveQA

Mossaab Bagdouri, Douglas W. Oard

Abstract

The Computational Linguistics and Information Processing lab at the University of Maryland participated in two TREC tracks this year. The Microblog Real-Time Filtering and the LiveQA tasks both involve information processing in real time. We submitted nine runs in total, achieving relatively good results. This paper describes the architecture and configuration of the systems behind those runs.

Bibtex
@inproceedings{DBLP:conf/trec/BagdouriO15,
    author = {Mossaab Bagdouri and Douglas W. Oard},
    editor = {Ellen M. Voorhees and Angela Ellis},
    title = {{CLIP} at {TREC} 2015: Microblog and LiveQA},
    booktitle = {Proceedings of The Twenty-Fourth Text REtrieval Conference, {TREC} 2015, Gaithersburg, Maryland, USA, November 17-20, 2015},
    series = {{NIST} Special Publication},
    volume = {500-319},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2015},
    url = {http://trec.nist.gov/pubs/trec24/papers/CLIP-MBQA.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/BagdouriO15.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

ADAPT.DCU at TREC LiveQA: A Sentence Retrieval based Approach to Live Question Answering

Dasha Bogdanova, Debasis Ganguly, Jennifer Foster, Ali Hosseinzadeh Vahid

Abstract

This paper describes the work done by the ADAPT Centre at Dublin City University towards automatically answering questions for the TREC LiveQA track. The system is based on a sentence retrieval approach. In particular, we first use the title of a new question as a query to retrieve a ranked list of conceptually similar questions from an index of questions previously asked on "Yahoo! Answers". We then extract the best matching sentences from the answers of the retrieved questions. To construct the final answer, we combine these sentences with the best answer of the top-ranked (most similar to the query) question. When no pre-existing questions with sufficient similarity to the new one can be retrieved from the index, we output an answer from a candidate set of pre-generated answers based on the domain of the question.
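
A minimal sketch of this retrieve-then-fall-back flow, using TF-IDF cosine similarity as a stand-in retrieval model; the archive contents, similarity threshold, and fallback text below are toy assumptions, not the paper's actual values.

    # Retrieve the most similar archived question; fall back if none is close.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    archive = [  # (previously asked question title, its best answer)
        ("how do i unclog a drain",
         "pour boiling water, then baking soda and vinegar."),
        ("what is a good beginner guitar",
         "a yamaha f310 is a solid cheap choice."),
    ]

    FALLBACK = "sorry, no sufficiently similar question was found."
    THRESHOLD = 0.2   # assumed cutoff; the paper's value is not given here

    def answer(new_title):
        titles = [t for t, _ in archive]
        vec = TfidfVectorizer().fit(titles + [new_title])
        sims = cosine_similarity(vec.transform([new_title]),
                                 vec.transform(titles))[0]
        best = sims.argmax()
        if sims[best] < THRESHOLD:
            return FALLBACK   # stands in for the pre-generated answers
        return archive[best][1]   # best answer of the most similar question

    print(answer("my drain is clogged, how to fix it"))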

Bibtex
@inproceedings{DBLP:conf/trec/BogdanovaGFV15,
    author = {Dasha Bogdanova and Debasis Ganguly and Jennifer Foster and Ali Hosseinzadeh Vahid},
    editor = {Ellen M. Voorhees and Angela Ellis},
    title = {{ADAPT.DCU} at {TREC} LiveQA: {A} Sentence Retrieval based Approach to Live Question Answering},
    booktitle = {Proceedings of The Twenty-Fourth Text REtrieval Conference, {TREC} 2015, Gaithersburg, Maryland, USA, November 17-20, 2015},
    series = {{NIST} Special Publication},
    volume = {500-319},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2015},
    url = {http://trec.nist.gov/pubs/trec24/papers/ADAPT.DCU-QA.pdf},
    timestamp = {Thu, 21 Jan 2021 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/BogdanovaGFV15.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

QU at TREC-2015: Building Real-Time Systems for Tweet Filtering and Question Answering

Reem Suwaileh, Maram Hasanain, Marwan Torki, Tamer Elsayed

Abstract

This paper presents our participation in the Microblog and LiveQA tracks in TREC-2015. Both tracks required building a "real-time" system that monitors a data stream and responds to users' information needs in real time. For the Microblog track, given a set of users' interest profiles, we developed two online filtering systems that recommend "relevant" and "novel" tweets from a tweet stream for each profile. Both systems simulate real scenarios: filtered tweets are sent as push notifications on a mobile phone or as a periodic email digest. We study the effect of using static versus dynamic relevance thresholds to control the relevancy of filtered output to interest profiles. We also experiment with different profile expansion strategies that account for potential topic drift. Our results show that the baseline run of the push notifications scenario, which uses a static threshold with light profile expansion, achieved the best results. Similarly, in the email digest scenario, the baseline run that used a shorter representation of the interest profiles without any expansion was the best run. For the LiveQA track, the system was required to answer a stream of around 1000 real-time questions from Yahoo! Answers. We adopted a very simple approach that searched an archived Yahoo! Answers QA dataset for questions similar to the asked ones and retrieved their answers.
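
The static-versus-dynamic threshold comparison in the filtering runs can be sketched as follows: a tweet is pushed if its relevance score clears the threshold, and the dynamic variant drifts the threshold toward the scores of recently pushed tweets. The scores, adaptation rate, and update rule below are illustrative assumptions, not the paper's exact method.

    # Static vs. dynamic relevance-threshold filtering over a scored stream.
    def filter_stream(scored_tweets, threshold=0.5, dynamic=False, rate=0.1):
        pushed = []
        for tweet, score in scored_tweets:
            if score >= threshold:
                pushed.append(tweet)
                if dynamic:
                    # drift the threshold toward the accepted score so the
                    # filter tightens (or relaxes) as the stream evolves
                    threshold += rate * (score - threshold)
        return pushed

    stream = [("t1", 0.6), ("t2", 0.55), ("t3", 0.8), ("t4", 0.52)]
    print(filter_stream(stream, dynamic=False))  # static: keeps all four
    print(filter_stream(stream, dynamic=True))   # adaptive: drops t4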

Bibtex
@inproceedings{DBLP:conf/trec/SuwailehHTE15,
    author = {Reem Suwaileh and Maram Hasanain and Marwan Torki and Tamer Elsayed},
    editor = {Ellen M. Voorhees and Angela Ellis},
    title = {{QU} at {TREC-2015:} Building Real-Time Systems for Tweet Filtering and Question Answering},
    booktitle = {Proceedings of The Twenty-Fourth Text REtrieval Conference, {TREC} 2015, Gaithersburg, Maryland, USA, November 17-20, 2015},
    series = {{NIST} Special Publication},
    volume = {500-319},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2015},
    url = {http://trec.nist.gov/pubs/trec24/papers/QU-MBQA.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/SuwailehHTE15.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}