Proceedings - Question Answering 2006

Overview of the TREC 2006 Question Answering Track

Hoa Trang Dang, Jimmy Lin, Diane Kelly

Abstract

The TREC 2006 question answering (QA) track contained two tasks: the main task and the complex, interactive question answering (ciQA) task. As in 2005, the main task consisted of series of factoid, list, and “Other” questions organized around a set of targets; in contrast to previous years, the evaluation of factoid and list responses distinguished between answers that were globally correct (with respect to the document collection), and those that were only locally correct (with respect to the supporting document). The ciQA task provided a framework for participants to investigate interaction in the context of complex information needs, and was a blend of the TREC 2005 QA relationship task and the TREC 2005 HARD track. Multiple assessors were used to judge the importance of information nuggets used to evaluate the responses to ciQA and “Other” questions, resulting in an evaluation that is more stable and discriminative than one that uses only a single assessor to judge nugget importance.
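
For context, the pyramid-style nugget evaluation mentioned above can be approximated by the sketch below. It follows the published track description only loosely: the vital-vote weighting, the 100-character length allowance, and all function and variable names are our own assumptions, not the official NIST scoring code.

# A minimal sketch of F(beta=3) nugget scoring with pyramid weights (illustrative only).

def nugget_f(matched, all_nuggets, vital_votes, answer_length, beta=3.0):
    """matched: ids of nuggets found in the response; all_nuggets: ids of all nuggets;
    vital_votes: nugget id -> number of assessors who marked it vital;
    answer_length: response length in non-whitespace characters (assumed unit)."""
    max_votes = max(vital_votes.values(), default=0) or 1
    weight = {n: vital_votes.get(n, 0) / max_votes for n in all_nuggets}
    recall = sum(weight[n] for n in matched) / (sum(weight.values()) or 1.0)
    allowance = 100.0 * len(matched)          # assumed 100-character allowance per matched nugget
    precision = 1.0 if answer_length <= allowance \
        else 1.0 - (answer_length - allowance) / answer_length
    if precision + recall == 0:
        return 0.0
    return (beta ** 2 + 1) * precision * recall / (beta ** 2 * precision + recall)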

Bibtex
@inproceedings{DBLP:conf/trec/DangLK06,
    author = {Hoa Trang Dang and Jimmy Lin and Diane Kelly},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {Overview of the {TREC} 2006 Question Answering Track},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/QA06.OVERVIEW.pdf},
    timestamp = {Fri, 27 Aug 2021 01:00:00 +0200},
    biburl = {https://dblp.org/rec/conf/trec/DangLK06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Contextual Information and Assessor Characteristics in Complex Question Answering

Cindy Azzopardi, Leif Azzopardi, Mark Baillie, Ralf Bierig, Emma Nicol, Ian Ruthven, Scott Sweeney

Abstract

The ciQA track investigates the role of interaction in answering complex questions: questions that relate two or more entities by some specified relationship. In our submission to the first ciQA track we were interested in the interplay between groups of variables: variables describing the question creators, the questions asked, and the presentation of answers to the questions. We used two interaction forms - HTML questionnaires completed before answer assessment - to gain contextual information from the answer assessors and better understand what factors influence assessors when judging retrieved answers to complex questions. Our results indicate the importance of understanding the assessor's personal relationship to the question - their existing topical knowledge, for example - and also the presentation of the answers - contextual information about the answer to aid in its assessment.

Bibtex
@inproceedings{DBLP:conf/trec/AzzopardiABBNRS06,
    author = {Cindy Azzopardi and Leif Azzopardi and Mark Baillie and Ralf Bierig and Emma Nicol and Ian Ruthven and Scott Sweeney},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {Contextual Information and Assessor Characteristics in Complex Question Answering},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/ustrathclyde.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/AzzopardiABBNRS06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

TREC 2006 Q&A Factoid: TI Experience

Satish Balantrapu, Monis Khan, Ayyappa Nagubandi

Abstract

This is the first attempt by the Artificial Intelligence Division of TrulyIntelligent Technologies Pvt. Ltd. at the TREC 2006 Question Answering track. A Question Answering (QA) system typically involves Question Analysis, Document Retrieval, Answer Extraction and Answer Ranking; as this was our first attempt, and given certain constraints of time and resources, we developed some modules of our QA system in line with already existing approaches, for example document retrieval, pattern-based answer extraction and web boosting. But there are areas where we tried our own ideas and got quite encouraging results, particularly Question Analysis, Constraint-based Answer Extraction and Answer Ranking, which are described in this paper.

Bibtex
@inproceedings{DBLP:conf/trec/BalantrapuKN06,
    author = {Satish Balantrapu and Monis Khan and Ayyappa Nagubandi},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {{TREC} 2006 Q{\&}A Factoid: {TI} Experience},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/truly-intelligent.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/BalantrapuKN06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

The "La Sapienza" Question Answering System at TREC 2006

Johan Bos

Abstract

This report describes the system developed at the University of Rome “La Sapienza” for the TREC-2006 question answering evaluation exercise. The backbone of this QA system is linguistically-principled: Combinatory Categorial Grammar is used to generate syntactic analyses of questions and potential answer snippets, and Discourse Representation Theory is employed as the formalism to match the meanings of questions and answers. The key idea of the La Sapienza system is to use semantics to prune answer candidates, thereby exploiting lexical resources such as WordNet and NomLex to facilitate the selection of answers. The system performed reasonably well at TREC-2006: in the per-series evaluation it performed slightly above the median accuracy score of all participating systems.

Bibtex
@inproceedings{DBLP:conf/trec/Bos06,
    author = {Johan Bos},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {The "La Sapienza" Question Answering System at {TREC} 2006},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/urome.bos.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/Bos06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

MITRE's Qanda at TREC 15

John D. Burger

Abstract

Qanda is MITRE's TREC-style question answering system. In recent years, we have been able to apply only a small effort to the TREC QA activity, approximately six person-weeks this year. (Accordingly, much of this discussion is plagiarized from prior system descriptions.) We made a number of small improvements to the system this year, including expanding our use of WordNet. The system's information retrieval wrapper now performs iterative query relaxation in order to improve document retrieval. We also experimented with an ad hoc means of “boosting” the maximum entropy model used to score candidate answers in order to improve its ranking ability.
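
The iterative query relaxation mentioned above can be pictured roughly as in the sketch below; the retrieval callable, the IDF table, and the thresholds are placeholders of our own, not MITRE's implementation.

# Rough sketch: drop the least informative query term until retrieval returns enough documents.

def relax_and_retrieve(terms, idf, retrieve, min_docs=20, max_rounds=5):
    """terms: query terms; idf: term -> inverse document frequency;
    retrieve: callable taking a list of terms and returning a list of documents."""
    query = sorted(terms, key=lambda t: idf.get(t, 0.0), reverse=True)
    for _ in range(max_rounds):
        docs = retrieve(query)
        if len(docs) >= min_docs or len(query) <= 1:
            return docs
        query = query[:-1]                    # discard the lowest-IDF term and try again
    return retrieve(query)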

Bibtex
@inproceedings{DBLP:conf/trec/Burger06,
    author = {John D. Burger},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {MITRE's Qanda at {TREC} 15},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/mitre.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/Burger06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

LexiClone Lexical Cloning Systems

Ilya S. Geller

Abstract

In this article I substantiate my position that a human being is a point of accumulation - that is, an object. And based on this assumption I provide a foundation for my ontological justification of Differential Linguistics: I then introduce the understanding that 'becoming better and the best' is what motivates an object to movement (and change). Then I link this position with Egoism, and to achieve an understanding of what Egoism is I find it necessary to bring in the foundations I had previously elaborated for the New Mechanics and Differential Philosophy of Cynicism. Then I affirm that an object seeks information in order to 'become better and the best', and I show that information is required egoistically and that the finding of information is made possible by asking two classes of questions: factoid and definition questions.

Bibtex
@inproceedings{DBLP:conf/trec/Geller06,
    author = {Ilya S. Geller},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {LexiClone Lexical Cloning Systems},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/lexiclone.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/Geller06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

The University of Sheffield's TREC 2006 Q&A Experiments

Mark A. Greenwood, Mark Stevenson, Robert J. Gaizauskas

Abstract

As a natural language processing (NLP) group, our original approach to question answering was linguistically motivated, culminating in the development of the QA-LaSIE system (Humphreys et al., 1999). In its original form QA-LaSIE would only propose answers which were linked via syntactic/semantic relations to the information missing from the question (for example “Who released the Internet worm?” is missing a person). While the answers proposed by the system were often correct, the system was frequently unable to suggest any answer. The next version of the system loosened the requirement for a link between question and answer, which improved performance (Scott and Gaizauskas, 2000). There are still a number of open questions from the development of the QA-LaSIE system: does the use of parsing and discourse interpretation to determine links between questions and proposed answers result in better performance than simpler systems which adopt a shallower approach? Is it simply that the performance of our parser is below the level at which it could contribute to question answering? Are there questions which can only be answered using deep linguistic techniques? With the continued development of a second QA system at Sheffield which uses shallower techniques (Gaizauskas et al., 2005) we believe that we are now in a position to investigate these and related questions. Our entries to the 2006 TREC QA evaluation are designed to help us answer some of these questions and to investigate further the possible benefits of linguistic processing over shallower techniques. The remainder of this paper is organised as follows. First, the framework in which our systems are developed is described in Section 2, along with the QA system components. Section 3 describes the configurations and aims of our evaluation runs. Section 4 discusses the official evaluation results of our submitted runs in relation to the research questions outlined above.

Bibtex
@inproceedings{DBLP:conf/trec/GreenwoodSG06,
    author = {Mark A. Greenwood and Mark Stevenson and Robert J. Gaizauskas},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {The University of Sheffield's {TREC} 2006 Q{\&}A Experiments},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/usheffield.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/GreenwoodSG06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Tianwang at TREC 2006 QA Track

Jing He, Yuan Liu

Abstract

This paper describes the architecture and implementation of the Tianwang QA system 2006, which was entered in the TREC QA main task this year. The main improvements are: 1. adding a well-founded knowledge source from the Web - Wikipedia - and employing natural language processing technologies to extract high-quality answers; 2. designing and implementing a new translation algorithm for query generation. The results show that a finely organized knowledge source is effective in answering all three types of questions, and that such a query generation algorithm can benefit from both Frequently Asked Questions on the Web and past TREC QA data.

Bibtex
@inproceedings{DBLP:conf/trec/HeL06,
    author = {Jing He and Yuan Liu},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {Tianwang at {TREC} 2006 {QA} Track},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/pekingu.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/HeL06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Question Answering with LCC's CHAUCER at TREC 2006

Andrew Hickl, John Williams, Jeremy Bensley, Kirk Roberts, Ying Shi, Bryan Rink

Abstract

CHAUCER is a Q/A system developed for (a) combining several strategies for modeling the target of a series of questions and (b) optimizing the extraction of answers. Targets were modeled by (1) topic signatures; (2) semantic types; (3) lexico-semantic patterns; (4) frame dependencies; and (5) predictive questions. Several strategies for answer extraction were also tried. The best-performing strategy was based on the use of textual entailment.

Bibtex
@inproceedings{DBLP:conf/trec/HicklWBRSR06,
    author = {Andrew Hickl and John Williams and Jeremy Bensley and Kirk Roberts and Ying Shi and Bryan Rink},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {Question Answering with LCC's {CHAUCER} at {TREC} 2006},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/lcc-harabagiu.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/HicklWBRSR06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

The University of Washington's UWclmaQA System

Dan Jinguji, William D. Lewis, Efthimis N. Efthimiadis, Joshua Minor, Albert Bertram, Shauna Eggers, Joshua Johanson, Brian Nisonger, Ping Yu, Zhengbo Zhou

Abstract

The University of Washington's UWclmaQA is an open-architecture Question Answering system, built around open source tools unified into one system design using customized enhancements. The goal was to develop an end-to-end QA system that could be easily modified by switching out tools as needed. Central to the system is Lucene, which we use for document retrieval. Various other tools are used, such as the Google API for web boosting, the fnTBL chunker for text chunking, Lingua::Stem for stemming, LingPipe for Named Entity Recognition, etc. We also developed several in-house evaluation tools for gauging our progress at each major milestone (e.g., document classification, document retrieval, passage retrieval, etc.), and statistical classifiers were developed that we use for various classification tasks.

Bibtex
@inproceedings{DBLP:conf/trec/JingujiLEMBEJNYZ06,
    author = {Dan Jinguji and William D. Lewis and Efthimis N. Efthimiadis and Joshua Minor and Albert Bertram and Shauna Eggers and Joshua Johanson and Brian Nisonger and Ping Yu and Zhengbo Zhou},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {The University of Washington's UWclmaQA System},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/uwashington.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/JingujiLEMBEJNYZ06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Experiments at the University of Edinburgh for the TREC 2006 QA Track

Michael Kaißer, Silke Scheible, Bonnie L. Webber

Abstract

We describe experiments carried out at the University of Edinburgh for our TREC 2006 QA participation. Our main effort was to develop an approach to QA that is based on frame semantics. Two algorithms were implemented to this end, building on the lexical resources FrameNet, PropBank and VerbNet. The first algorithm uses the resources to generate potential answer templates for a given question, which are then used to pose exact, quoted queries to a web search engine and confirm which of the results contain an actual answer to the question. The second algorithm bases search queries on key words only, but it can recognize answers in a wider range of syntactic variants in its candidate sentence analysis stage. We discuss both approaches when applied to each of the resources and to a combination of these. We also describe how, in a later step, the answer candidates found are mapped to the AQUAINT corpus, and how we answered Other questions.
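
The first algorithm's template idea can be illustrated by the deliberately simplified sketch below: a single hard-coded pattern stands in for the FrameNet/PropBank/VerbNet-driven template generation, and the slot marker and function names are our own, not the Edinburgh system.

import re

# Turn a question into declarative answer templates with a slot, then into exact quoted queries.

def answer_templates(question):
    m = re.match(r"When did (.+) die\??$", question, re.I)
    if m:
        subject = m.group(1)
        return [f"{subject} died in <ANSWER>", f"{subject} died on <ANSWER>"]
    return []

def quoted_queries(templates):
    # drop the slot and quote the remaining phrase so only exact matches come back
    return ['"%s"' % t.replace("<ANSWER>", "").strip() for t in templates]

print(quoted_queries(answer_templates("When did Jacques Cousteau die?")))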

Bibtex
@inproceedings{DBLP:conf/trec/KaisserSW06,
    author = {Michael Kai{\ss}er and Silke Scheible and Bonnie L. Webber},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {Experiments at the University of Edinburgh for the {TREC} 2006 {QA} Track},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/uedinburgh.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/KaisserSW06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Question Answering Experiments and Resources

Boris Katz, Gregory Marton, Sue Felshin, Daniel Loreto, Ben Lu, Federico Mora, Özlem Uzuner, Michael McGraw-Herdeg, Natalie Cheung, Alexey Radul, Yuan Kui Shen, Yuan Luo, Gabriel Zaccak

Abstract

MIT CSAIL's entries for the TREC Question Answering track (Voorhees, 2006) explored the effects of new document retrieval and duplicate removal strategies for 'list' and 'other' questions, established a baseline for other systems in the interactive task, and focused on question analysis and paraphrasing, rather than incorporation of external knowledge, in the factoid task. Many of the individual subsystems are largely unchanged from last year. We found that document retrieval strategy has an influence on performance in the different kinds of tasks later in the pipeline. Our other changes from last year did not immediately yield clear lessons. We present a question analysis data set and interannotator agreement indicators for the ciQA task that we hope will spur further evaluation.

Bibtex
@inproceedings{DBLP:conf/trec/KatzMFLLMUMCRSLZ06,
    author = {Boris Katz and Gregory Marton and Sue Felshin and Daniel Loreto and Ben Lu and Federico Mora and {\"{O}}zlem Uzuner and Michael McGraw{-}Herdeg and Natalie Cheung and Alexey Radul and Yuan Kui Shen and Yuan Luo and Gabriel Zaccak},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {Question Answering Experiments and Resources},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/mit.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/KatzMFLLMUMCRSLZ06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

DalTREC 2006 QA System Jellyfish: Regular Expressions Mark-and-Match Approach to Question Answering

Vlado Keselj, Tony Abou-Assaleh, Nick Cercone

Abstract

We present Jellyfish, a question-answering system. Our approach is based on marking and matching steps that are implemented using the methodology of cascaded regular-expression rewriting. We present the system architecture and evaluate the system using the TREC 2004, 2005, and 2006 datasets. TREC 2004 was used as a training dataset, while TREC 2005 and TREC 2006 were used as testing datasets. The robustness of our approach is demonstrated in the results.
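
A toy rendering of the mark-and-match idea appears below: a marking pass rewrites the text with tags around candidate strings, and a matching pass keyed to the question type extracts an answer. The patterns are illustrative only and are not the Jellyfish rules.

import re

# Cascade of regular-expression rewrites: "mark" candidate entities, then "match" by question type.

MARK_RULES = [
    (re.compile(r"\b(1[0-9]{3}|20[0-9]{2})\b"), r"<YEAR>\1</YEAR>"),
    (re.compile(r"\b([A-Z][a-z]+(?: [A-Z][a-z]+)+)\b"), r"<NAME>\1</NAME>"),
]

MATCH_RULES = {
    "when": re.compile(r"<YEAR>([^<]+)</YEAR>"),
    "who": re.compile(r"<NAME>([^<]+)</NAME>"),
}

def answer(question_type, passage):
    marked = passage
    for pattern, replacement in MARK_RULES:        # marking cascade
        marked = pattern.sub(replacement, marked)
    m = MATCH_RULES[question_type].search(marked)  # matching step
    return m.group(1) if m else None

print(answer("when", "Mount Vesuvius erupted again in 1944."))   # -> 1944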

Bibtex
@inproceedings{DBLP:conf/trec/KeseljAC06,
    author = {Vlado Keselj and Tony Abou{-}Assaleh and Nick Cercone},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {DalTREC 2006 {QA} System Jellyfish: Regular Expressions Mark-and-Match Approach to Question Answering},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/dalhousieu.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/KeseljAC06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Concordia University at the TREC 15 QA Track

Leila Kosseim, Alex Beaudoin, Abolfazl Keighobadi Lamjiri, Majid Razmara

Abstract

In this paper, we describe the system we used for the TREC Question Answering Track. For factoid and list questions two different approaches were exploited: a redundancy-based approach using a modified version of Aranea, and a parse-tree-based unifier. The modified version of Aranea essentially uses Google snippets to extract answers and then projects them onto AQUAINT. The parse-tree-based unifier is a linguistically based approach that chunks candidate sentences syntactically and uses a heuristic measure to compute the similarity of each chunk in a candidate to its counterpart in the question. To answer other types of questions, our system extracts from Wikipedia articles a list of interest-marking terms related to the topic and uses them to extract and score sentences from the AQUAINT document collection using various interest-marking triggers. We submitted 3 runs using different variations of the system. For factoid questions, the average score of our 3 runs is 0.202; for list questions, we achieved an average of 0.084; and for the “Other” questions, we achieved an average F-score of 0.192.

Bibtex
@inproceedings{DBLP:conf/trec/KosseimBKR06,
    author = {Leila Kosseim and Alex Beaudoin and Abolfazl Keighobadi Lamjiri and Majid Razmara},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {Concordia University at the {TREC} 15 {QA} Track},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/concordiau.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/KosseimBKR06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

UMass at TREC ciQA 2006

Giridhar Kumaran, James Allan

Abstract

The characteristics of the ciQA track, namely the short templated queries and the scope for user interaction, were the motivating factors for our interest in participating in the track. Templated queries represent a new paradigm of information-seeking more suited to specialized tasks. While work has been done in document retrieval for templated queries as part of the Global Autonomous Language Exploitation (GALE) program, the retrieval of snippets of information in lieu of documents was an interesting challenge. We also used the opportunity to try a suite of minimally interactive techniques, some of which helped and some of which did not. We believe we have a reasonable understanding of why some approaches worked while others failed, and contend that more experimentation and analysis is necessary to tease out the various interaction effects between the suite of approaches we tried.

Bibtex
@inproceedings{DBLP:conf/trec/KumaranA06,
    author = {Giridhar Kumaran and James Allan},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {UMass at {TREC} ciQA 2006},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/umass.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/KumaranA06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Using Semantic Relations with World Knowledge for Question Answering

Ka Kan Lo, Wai Lam

Abstract

Two research directions are explored in our group's TREC QA system for 2006. The first is to investigate the possibilities of applying a linguistically sophisticated grammatical framework to real-world natural language processing tasks such as question answering. The other is to exploit the entities and relations described in an online encyclopedia to add redundancy and surface hidden relations in the TREC corpus, where the entities and relations are only implicitly mentioned and related. Our focus is on factoid and list questions, as these two types of questions benefit greatly from our proposed method. We also include an experimental component for handling the “Other” question type.

Bibtex
@inproceedings{DBLP:conf/trec/LoL06,
    author = {Ka Kan Lo and Wai Lam},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {Using Semantic Relations with World Knowledge for Question Answering},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/cuhk.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/LoL06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

A Temporally-Enhanced PowerAnswer in TREC 2006

Dan I. Moldovan, Mitchell Bowden, Marta Tatu

Abstract

This paper reports on Language Computer Corporation's participation in the Question Answering track at TREC 2006. An overview of the PowerAnswer 3 question answering system and a description of new features added to meet the challenges of this year's evaluation are provided. Emphasis is given to temporal constraints in questions and how this affected the outcome of the systems in the task. LCC's results in the evaluation are presented at the end of the paper.

Bibtex
@inproceedings{DBLP:conf/trec/MoldovanBT06,
    author = {Dan I. Moldovan and Mitchell Bowden and Marta Tatu},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {A Temporally-Enhanced PowerAnswer in {TREC} 2006},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/lcc-moldovan.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/MoldovanBT06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

AnswerFinder at TREC 2006

Diego Mollá, Menno van Zaanen, Luiz Augusto Sangoi Pizzato

Abstract

This article describes the AnswerFinder question answering system and its participation in the TREC 2006 question answering competition. This year there have been several improvements to the AnswerFinder system, although most of them are in the implementation sphere. The actual functionality used this year is almost exactly the same as last year, but many bugs have been fixed and the efficiency of the system has improved considerably. This allows for more extensive parameter tuning. We also present an error analysis of the current AnswerFinder system.

Bibtex
@inproceedings{DBLP:conf/trec/MollaZP06,
    author = {Diego Moll{\'{a}} and Menno van Zaanen and Luiz Augusto Sangoi Pizzato},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {AnswerFinder at {TREC} 2006},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/macquarieu.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/MollaZP06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Reconstructing DIOGENE: ITC-irst at TREC 2006

Matteo Negri, Milen Kouylekov, Bernardo Magnini, Bonaventura Coppola

Abstract

Our participation in the TREC 2006 QA task is the first step towards developing a new and improved DIOGENE system. The leading principles of this re-engineering activity are: i) to create a modular architecture, based on a pipeline of modules which share common I/O formats, open to the insertion/substitution of new components; ii) to allow the settings of the different modules to be configured via external configuration files; iii) to provide the capability of performing fine-grained evaluation cycles over the individual processing modules which compose a QA system. Another long-term objective of our work on QA is to make the core components of the system freely available to the QA community for research purposes. This paper overviews the work done to date to achieve these objectives, focusing on the description of a prototype module designed to handle the anaphoric questions often contained in TREC QA series. Preliminary evaluation results for the new module are presented, together with those achieved by DIOGENE at TREC 2006.
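
The anaphoric-question handling mentioned above can be sketched, very loosely, as substituting the series target for pronouns in a follow-up question; the sketch below illustrates the idea only and is not the DIOGENE prototype module.

import re

# Replace subject/object pronouns with the series target, and possessives with target + "'s".
PRONOUN = re.compile(r"\b(he|she|it|they)\b", re.I)
POSSESSIVE = re.compile(r"\b(his|her|its|their)\b", re.I)

def resolve(question, target):
    resolved = PRONOUN.sub(target, question)
    return POSSESSIVE.sub(target + "'s", resolved)

print(resolve("When was it established?", "the Hubble Space Telescope"))
# -> When was the Hubble Space Telescope established?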

Bibtex
@inproceedings{DBLP:conf/trec/NegriKMC06,
    author = {Matteo Negri and Milen Kouylekov and Bernardo Magnini and Bonaventura Coppola},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {Reconstructing {DIOGENE:} ITC-irst at {TREC} 2006},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/itc-irst.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/NegriKMC06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

TREC 2006 at Maryland: Blog, Enterprise, Legal and QA Tracks

Douglas W. Oard, Tamer Elsayed, Jianqiang Wang, Yejun Wu, Pengyi Zhang, Eileen G. Abels, Jimmy Lin, Dagobert Soergel

Abstract

In TREC 2006, teams from the University of Maryland participated in the Blog track, the Expert Search task of the Enterprise track, the Complex Interactive Question Answering task of the Question Answering track, and the Legal track. This paper reports our results.

Bibtex
@inproceedings{DBLP:conf/trec/OardEWWZALS06,
    author = {Douglas W. Oard and Tamer Elsayed and Jianqiang Wang and Yejun Wu and Pengyi Zhang and Eileen G. Abels and Jimmy Lin and Dagobert Soergel},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {{TREC} 2006 at Maryland: Blog, Enterprise, Legal and {QA} Tracks},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/umd.blog.ent.legal.qa.final.pdf},
    timestamp = {Fri, 27 Aug 2021 01:00:00 +0200},
    biburl = {https://dblp.org/rec/conf/trec/OardEWWZALS06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

The Ephyra QA System at TREC 2006

Nico Schlaefer, P. Gieselman, Guido Sautter

Abstract

The Ephyra QA system has been developed as a flexible open-domain QA framework. This framework allows us to combine several techniques for question analysis and answer extraction and to incorporate multiple knowledge bases to best fit the requirements of the TREC QA track, in which we participated this year for the first time. The techniques used include pattern learning and matching, answer type analysis and redundancy elimination through filters. In this paper, we give an overview of the Ephyra system as used within TREC 2006 and analyze the system's performance in the QA track.

Bibtex
@inproceedings{DBLP:conf/trec/SchlaeferGS06,
    author = {Nico Schlaefer and P. Gieselman and Guido Sautter},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {The Ephyra {QA} System at {TREC} 2006},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/ukarlsruhe-cmu.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/SchlaeferGS06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

QACTIS Enhancements in TREC QA 2006

Patrick Schone, Gary M. Ciany, R. Cutts, Paul McNamee, James Mayfield, Tom Smith

Abstract

The QACTIS system has been tested in previous years at the TREC Question Answering Evaluations. This paper describes new enhancements to the system specific to TREC-2006, including basic improvements and thresholding experiments, filtered and Internet-supported pseudo-relevance feedback for information retrieval, and emerging statistics-driven question-answering. For contrast, we also compare our TREC-2006 system performance to that of our top systems from TREC-2004 and TREC-2005 applied to this year's data. Lastly, we analyze evaluator-declared unsupportedness of factoids and nugget decisions of “other” questions to understand major negative changes in performance for these categories over last year.
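
Pseudo-relevance feedback of the kind mentioned above can be sketched as below: expand the query with frequent terms drawn from the top-ranked documents for the original query. The parameter values and function names are placeholders of ours, not the QACTIS settings.

from collections import Counter

# Add the most frequent non-query terms from the top-ranked documents to the query.

def expand_query(query_terms, ranked_docs, top_docs=10, new_terms=5, stopwords=frozenset()):
    counts = Counter()
    for doc in ranked_docs[:top_docs]:
        for term in doc.lower().split():
            if term not in query_terms and term not in stopwords:
                counts[term] += 1
    expansion = [term for term, _ in counts.most_common(new_terms)]
    return list(query_terms) + expansion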

Bibtex
@inproceedings{DBLP:conf/trec/SchoneCCMMS06,
    author = {Patrick Schone and Gary M. Ciany and R. Cutts and Paul McNamee and James Mayfield and Tom Smith},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {{QACTIS} Enhancements in {TREC} {QA} 2006},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/dod.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/SchoneCCMMS06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

The Alyssa System at TREC 2006: A Statistically-Inspired Question Answering System

Dan Shen, Jochen L. Leidner, Andreas Merkel, Dietrich Klakow

Abstract

We present our new statistically-inspired open-domain Q&A research system, which allows us to carry out a wide range of experiments easily and flexibly by modifying a central file containing an experimental “recipe” that controls the activation and parameter selection of a range of widely-used and custom-built components. Based on this, we report our experiments for the TREC 2006 question answering track, where we used a cascade of LM-based document retrieval, LM-based sentence extraction, and MaxEnt-based answer extraction over a dependency relation representation, followed by a fusion process that uses linear interpolation to integrate evidence from various data streams and detect answers to factoid questions more accurately than the median of all participants.
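
The fusion step described above, linear interpolation over evidence streams, might look roughly like the sketch below; the stream names and weights are invented for illustration and would in practice be tuned on held-out data rather than set by hand, so this is not the Alyssa implementation.

# Weighted sum of per-stream scores for one candidate answer (weights assumed to sum to 1).

def fuse(candidate, streams, weights):
    return sum(weights[name] * score(candidate) for name, score in streams.items())

streams = {
    "retrieval": lambda c: c["retrieval_score"],   # document/sentence retrieval evidence
    "extraction": lambda c: c["maxent_prob"],      # answer-extraction model probability
    "web": lambda c: c["web_redundancy"],          # redundancy from an external stream
}
weights = {"retrieval": 0.3, "extraction": 0.5, "web": 0.2}
print(fuse({"retrieval_score": 0.8, "maxent_prob": 0.6, "web_redundancy": 0.4}, streams, weights))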

Bibtex
@inproceedings{DBLP:conf/trec/ShenLMK06,
    author = {Dan Shen and Jochen L. Leidner and Andreas Merkel and Dietrich Klakow},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {The Alyssa System at {TREC} 2006: {A} Statistically-Inspired Question Answering System},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/saarlandu.qa.final.pdf},
    timestamp = {Mon, 17 Apr 2023 01:00:00 +0200},
    biburl = {https://dblp.org/rec/conf/trec/ShenLMK06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Question Answering Using the DLT System at TREC 2006

Richard F. E. Sutcliffe, Kieran White, Igal Gabbay, Michael Mulcahy

Abstract

This article summarises our participation in the Question Answering (QA) Track at TREC 2006. Section 2 outlines the architecture of our system. Section 3 describes the changes made for this year. Section 4 summarises the results of our submitted runs while Section 5 presents conclusions and proposes further steps.

Bibtex
@inproceedings{DBLP:conf/trec/SutcliffeWGM06,
    author = {Richard F. E. Sutcliffe and Kieran White and Igal Gabbay and Michael Mulcahy},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {Question Answering Using the {DLT} System at {TREC} 2006},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/ulimerick.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/SutcliffeWGM06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Question Answering by Diggery at TREC 2006

Stephen Tomlinson

Abstract

Diggery is a research and software project investigating the extraction of concepts from well-written documents, with the idea of automating factoid search. The project is in its early to middle phases, and all information presented herein should be read in light of the fact that this research is based on young software using new algorithms. In January 2006, after significant tuning, the software could answer a few simple questions from small texts. Six months later, in July 2006, the first real exercise of the software on a non-trivially sized corpus was made for the TREC QA submission, and the software answered a few questions correctly. For this submission, only factoid questions were attempted.

Bibtex
@inproceedings{DBLP:conf/trec/Tomlinson06a,
    author = {Stephen Tomlinson},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {Question Answering by Diggery at {TREC} 2006},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/tomlinson.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/Tomlinson06a.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

Identifying Relationships Between Entities in Text for Complex Interactive Question Answering Task

Olga Vechtomova, Murat Karamuftuoglu

Abstract

In this paper we describe our participation in the Complex Interactive Question Answering (ciQA) task of the QA track. We investigated the use of lexical cohesive ties (called lexical bonds) between sentences containing different question entities in finding information about relationships between these entities. We also investigated the role of clarification forms in assisting the system in finding answers to complex questions. The rest of the paper is organised as follows: in section 2 we present our approach to calculating lexical bonds between sentences containing different entities, section 3 contains the detailed description of our systems, in section 4 we present the results, and section 5 contains discussions.
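
As a rough illustration, a lexical bond between two sentences can be approximated as sharing a minimum number of content terms; the tokenisation, stopword list, and threshold below are assumptions of ours, and related terms (synonyms, morphological variants) could be folded into the same set construction. This is not the authors' implementation.

# Two sentences are "bonded" if they share at least min_shared content terms.

def content_terms(sentence, stopwords=frozenset({"the", "a", "an", "of", "in", "and", "to"})):
    return {t.strip(".,;:!?").lower() for t in sentence.split()} - stopwords

def bonded(sent_a, sent_b, min_shared=2):
    return len(content_terms(sent_a) & content_terms(sent_b)) >= min_shared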

Bibtex
@inproceedings{DBLP:conf/trec/VechtomovaK06,
    author = {Olga Vechtomova and Murat Karamuftuoglu},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {Identifying Relationships Between Entities in Text for Complex Interactive Question Answering Task},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/uwaterloo.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/VechtomovaK06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

TREC 2006 Question Answering Experiments at Tokyo Institute of Technology

Edward W. D. Whittaker, Josef R. Novak, Pierre Chatain, Sadaoki Furui

Abstract

In this paper we describe the Tokyo Institute of Technology speech group's second attempt at the TREC 2006 question answering (QA) track. Keeping the same theoretical QA model as for the TREC 2005 task, this year we investigated combinations of variations of models, focusing once again on the factoid QA task. An experimental run combining translated answers from separate English, French and Spanish systems proved inconclusive. However, our best combination of all component models gave us a factoid performance of 25.1% (placing us 9th and well above the median of the 30 participating systems of 18.6%) and an overall performance, including the results from the list and other question tasks, of 11.6% (which was somewhat below the median of 13.4%).

Bibtex
@inproceedings{DBLP:conf/trec/WhittakerNCF06,
    author = {Edward W. D. Whittaker and Josef R. Novak and Pierre Chatain and Sadaoki Furui},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {{TREC} 2006 Question Answering Experiments at Tokyo Institute of Technology},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/tokyo-it.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/WhittakerNCF06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

ILQUA at TREC 2006

Min Wu, Tomek Strzalkowski

Abstract

This year, we made changes to the passage/sentence retrieval component of ILQUA in handling factoid and list questions. All the other components remain the same. ILQUA is an IE-driven QA system. To answer “Factoid” and “List” questions, we apply our answer extraction methods to NE-tagged passages or sentences. The answer extraction methods adopted here are surface text pattern matching, n-gram proximity search, and syntactic dependency matching. Although surface text pattern matching has been applied in some previous TREC QA systems, the patterns used in ILQUA are better since they are automatically generated by a supervised learning system and represented as regular expressions which contain multiple question terms. In addition to surface pattern matching, we also adopt n-gram proximity search and syntactic dependency matching. N-grams of question terms are matched around every named entity in the candidate sentences or passages, and a list of named entities is generated as answer candidates. These named entities then go through a multi-level syntactic dependency matching component until a final answer is generated. To answer “Other” questions, we parsed the answer sentences of “Other” questions in previous main tasks and built syntactic patterns combined with semantic features. These patterns are later applied to the parsed candidate sentences to extract answers to “Other” questions. Figure 1 shows the diagram of the ILQUA architecture.
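
The n-gram proximity search can be sketched as scoring each named entity by the number of question-term bigrams found within a window of tokens around it; the window size and the use of plain bigrams are illustrative assumptions, not the ILQUA configuration.

# Score a named entity (given by its token position) by question-term bigrams near it.

def question_bigrams(question_terms):
    return set(zip(question_terms, question_terms[1:]))

def proximity_score(tokens, entity_index, qgrams, window=6):
    lo, hi = max(0, entity_index - window), min(len(tokens), entity_index + window + 1)
    context = tokens[lo:hi]
    return len(qgrams & set(zip(context, context[1:])))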

Bibtex
@inproceedings{DBLP:conf/trec/WuS06,
    author = {Min Wu and Tomek Strzalkowski},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {{ILQUA} at {TREC} 2006},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/ualbany.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/WuS06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

InsunQA06 on QA Track of TREC 2006

Yuming Zhao, Zhiming Xu, Peng Li, Yi Guan

Abstract

This is the second time that our group has taken part in the TREC QA track. We developed a question-answering system, named InsunQA06, based on our Insun05QA system, and with InsunQA06 we participated in the Main Task, submitting answers to three types of questions: factoid questions, list questions and “Other” questions. The structure of InsunQA06 is similar to that of Insun05QA. Compared with Insun05QA, the main difference in InsunQA06 is that new methods were developed and used in the answer extraction module for factoid and “Other” questions, and external knowledge, such as knowledge from the Internet, plays a more important role in answer extraction. Besides that, the document retrieval module in InsunQA06 is based on Indri instead of SMART. Section 2 describes the structure of our InsunQA06 system, Section 3 describes the new methods we adopted to process the factoid and “Other” questions, and Section 4 presents our results at TREC 2006.

Bibtex
@inproceedings{DBLP:conf/trec/ZhaoXLG06,
    author = {Yuming Zhao and Zhiming Xu and Peng Li and Yi Guan},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {InsunQA06 on {QA} Track of {TREC} 2006},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/hit.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/ZhaoXLG06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}

FDUQA on TREC 2006 QA Track

Yaqian Zhou, Xiaofeng Yuan, Junkuo Cao, Xuanjing Huang, Lide Wu

Abstract

In this year's QA track, we participated in the main task and did not take part in the ciQA task. The main task is essentially the same as the single task from 2004, in that the test set consists of a set of question series where each series asks for information regarding a particular target. In order to better answer the questions in a series, we try to improve our anaphora resolution within question series. For factoid questions, we use the system that produced RUN-A in TREC 2005 [Wu et al. 2005], so we do not describe the factoid system in this paper. For list questions, we made many improvements, the most important of which concern answer type classification, document searching, answer ranking and answer filtering. For definition questions, we still focus on utilizing the existing definitions in Web knowledge bases, and we also applied a related-term extraction method to extract reliable information associated with the target, since obtaining a Web definition directly from the question target is becoming a bottleneck. In the following, Section 2 describes question series anaphora resolution, Sections 3 and 4 describe our algorithms for list and definition questions respectively, and Section 5 presents our results at TREC 2006.

Bibtex
@inproceedings{DBLP:conf/trec/ZhouYCHW06,
    author = {Yaqian Zhou and Xiaofeng Yuan and Junkuo Cao and Xuanjing Huang and Lide Wu},
    editor = {Ellen M. Voorhees and Lori P. Buckland},
    title = {{FDUQA} on {TREC} 2006 {QA} Track},
    booktitle = {Proceedings of the Fifteenth Text REtrieval Conference, {TREC} 2006, Gaithersburg, Maryland, USA, November 14-17, 2006},
    series = {{NIST} Special Publication},
    volume = {500-272},
    publisher = {National Institute of Standards and Technology {(NIST)}},
    year = {2006},
    url = {http://trec.nist.gov/pubs/trec15/papers/fudan-zhou.qa.final.pdf},
    timestamp = {Thu, 12 Mar 2020 00:00:00 +0100},
    biburl = {https://dblp.org/rec/conf/trec/ZhouYCHW06.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}