Runs - Question Answering 2007¶
asked07a¶
- Run ID: asked07a
- Participant: tokyo-inst-tech.whittaker
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
asked07b¶
- Run ID: asked07b
- Participant: tokyo-inst-tech.whittaker
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
asked07c¶
- Run ID: asked07c
- Participant: tokyo-inst-tech.whittaker
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
csail1¶
- Run ID: csail1
- Participant: mit-csail.katz
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
csail2¶
- Run ID: csail2
- Participant: mit-csail.katz
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
csail3¶
- Run ID: csail3
- Participant: mit-csail.katz
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
Dal07n¶
- Run ID: Dal07n
- Participant: dalhousieu.keselj
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
Dal07p¶
- Run ID: Dal07p
- Participant: dalhousieu.keselj
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
Dal07t¶
- Run ID: Dal07t
- Participant: dalhousieu.keselj
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
DrexelRun1¶
- Run ID: DrexelRun1
- Participant: drexelu.han
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
DrexelRun2¶
- Run ID: DrexelRun2
- Participant: drexelu.han
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
DrexelRun3¶
- Run ID: DrexelRun3
- Participant: drexelu.han
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
eduFsc04¶
- Run ID: eduFsc04
- Participant: fitchburg-state.taylor
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
eduFsc05¶
- Run ID: eduFsc05
- Participant: fitchburg-state.taylor
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
Ephyra1¶
- Run ID: Ephyra1
- Participant: ukarlsruhe-cmu.schlaefer
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
Ephyra2¶
- Run ID: Ephyra2
- Participant: ukarlsruhe-cmu.schlaefer
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
Ephyra3¶
- Run ID: Ephyra3
- Participant: ukarlsruhe-cmu.schlaefer
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
FDUQAT16A¶
- Run ID: FDUQAT16A
- Participant: fudanu.wu
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
FDUQAT16B¶
- Run ID: FDUQAT16B
- Participant: fudanu.wu
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
FDUQAT16C¶
- Run ID: FDUQAT16C
- Participant: fudanu.wu
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
iiitqa07¶
- Run ID: iiitqa07
- Participant: iiit-hyderbad
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
IITDIBM2007F¶
- Run ID: IITDIBM2007F
- Participant: iit-delhi.saxena
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
IITDIBM2007S¶
- Run ID: IITDIBM2007S
- Participant: iit-delhi.saxena
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
IITDIBM2007T¶
- Run ID: IITDIBM2007T
- Participant: iit-delhi.saxena
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
ILQUA1¶
- Run ID: ILQUA1
- Participant: suny-albany.wu
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
ILQUA2¶
- Run ID: ILQUA2
- Participant: suny-albany.wu
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
Intellexer7A¶
- Run ID: Intellexer7A
- Participant: effectivesoft
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
Intellexer7B¶
- Run ID: Intellexer7B
- Participant: effectivesoft
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
Intellexer7C¶
- Run ID: Intellexer7C
- Participant: effectivesoft
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
LCCFerret¶
- Run ID: LCCFerret
- Participant: lcc.chaucer
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
lsv2007a¶
- Run ID: lsv2007a
- Participant: saarlandu.shen
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
lsv2007b¶
- Run ID: lsv2007b
- Participant: saarlandu.shen
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
lsv2007c¶
- Run ID: lsv2007c
- Participant: saarlandu.shen
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
LymbaPA07¶
- Run ID: LymbaPA07
- Participant: lymba.clark
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
MITRE2007A¶
- Run ID: MITRE2007A
- Participant: mitre.burger
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
MITRE2007B¶
- Run ID: MITRE2007B
- Participant: mitre.burger
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
MITRE2007C¶
- Run ID: MITRE2007C
- Participant: mitre.burger
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
MSUciQAfCol¶
- Run ID: MSUciQAfCol
- Participant: mich-stateu.chai
- Track: Question Answering
- Year: 2007
- Submission: 8/28/2007
- Type: automatic
- Task: ciqa_final
- Run description: These final run results were created from relevance feedback collected using the interaction form MSUCOL. The selected filters were incorporated into the modified Rocchio algorithm to expand queries for answer re-ranking (a Rocchio sketch follows this entry).
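Neither MSU description specifies the modified Rocchio variant. As a rough illustration only, here is a minimal sketch of classic Rocchio query expansion over term-weight vectors; the function name, default weights, and the positive-term cutoff are assumptions, not the MSU implementation.

```python
from collections import Counter

def rocchio_expand(query, relevant_docs, nonrelevant_docs,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio update: move the query term-weight vector toward
    relevant documents and away from non-relevant ones. Inputs are
    {term: weight} dicts; the weights are illustrative defaults."""
    q = Counter({t: alpha * w for t, w in query.items()})
    for doc in relevant_docs:
        for t, w in doc.items():
            q[t] += beta * w / len(relevant_docs)
    for doc in nonrelevant_docs:
        for t, w in doc.items():
            q[t] -= gamma * w / len(nonrelevant_docs)
    # Keep only positively weighted terms for the expanded query.
    return {t: w for t, w in q.items() if w > 0}
```

The expanded term weights would then feed whatever ranking function re-scores the candidate answers.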
MSUciQAfInt¶
- Run ID: MSUciQAfInt
- Participant: mich-stateu.chai
- Track: Question Answering
- Year: 2007
- Submission: 8/28/2007
- Type: automatic
- Task: ciqa_final
- Run description: These final run results were created from relevance feedback collected using the interaction form MSUINT. The modified Rocchio algorithm was used to expand queries for answer re-ranking.
MSUciQAiHeu¶
- Run ID: MSUciQAiHeu
- Participant: mich-stateu.chai
- Track: Question Answering
- Year: 2007
- Submission: 8/13/2007
- Type: automatic
- Task: ciqa_baseline
- Run description: The initial results were produced using heuristics derived from the topics released in 2005 and 2006.
MSUciQAiLrn¶
- Run ID: MSUciQAiLrn
- Participant: mich-stateu.chai
- Track: Question Answering
- Year: 2007
- Submission: 8/13/2007
- Type: automatic
- Task: ciqa_baseline
- Run description: The initial results were produced using classifiers trained on the topics released in 2006.
MSUCOL¶
- Run ID: MSUCOL
- Participant: mich-stateu.chai
- Track: Question Answering
- Year: 2007
- Submission: 8/13/2007
- Type: automatic
- Task: ciqa_urlfile
- Run description: The interaction is based on automatically identified filters for browsing and feedback.
MSUINT¶
- Run ID: MSUINT
- Participant: mich-stateu.chai
- Track: Question Answering
- Year: 2007
- Submission: 8/13/2007
- Type: automatic
- Task: ciqa_urlfile
- Run description: The interaction is based on traditional relevance feedback.
pircs07qa1¶
- Run ID: pircs07qa1
- Participant: queens-college-cuny.kwok
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
pircs07qa2¶
- Run ID: pircs07qa2
- Participant: queens-college-cuny.kwok
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
pircs07qa3¶
- Run ID: pircs07qa3
- Participant: queens-college-cuny.kwok
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
pronto07run1¶
- Run ID: pronto07run1
- Participant: uroma.bos
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
pronto07run2¶
- Run ID: pronto07run2
- Participant: uroma.bos
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
pronto07run3¶
- Run ID: pronto07run3
- Participant: uroma.bos
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
QASCU1¶
- Run ID: QASCU1
- Participant: concordiau.kosseim
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
QASCU2¶
- Run ID: QASCU2
- Participant: concordiau.kosseim
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
QASCU3¶
- Run ID: QASCU3
- Participant: concordiau.kosseim
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
QUANTA¶
- Run ID: QUANTA
- Participant: tsinghuau.zhang
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
rmitrun1¶
- Run ID: rmitrun1
- Participant: rmitu.scholer
- Track: Question Answering
- Year: 2007
- Submission: 8/13/2007
- Type: automatic
- Task: ciqa_baseline
- Run description: For each topic, we use the words inside brackets from the question template as the query, retrieve the top 20 documents using the Indri search engine, parse those documents into sentences, and rank the sentences by score. A sentence's score combines the length of the longest span of matched query words, the number of matched query words, and the number of distinct matched query words (see the sketch below).
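As a rough illustration of the scoring just described, a minimal sketch combining the three signals; the equal default weights and whitespace tokenization are assumptions, not RMIT's actual code.

```python
def sentence_score(sentence, query_terms,
                   w_span=1.0, w_count=1.0, w_distinct=1.0):
    """Combine (a) the longest contiguous run of matched query words,
    (b) the total number of matched query words, and (c) the number of
    distinct matched query words. Weights are illustrative."""
    qset = {t.lower() for t in query_terms}
    tokens = [tok.lower() for tok in sentence.split()]
    matches = [tok in qset for tok in tokens]
    longest = run = 0
    for m in matches:                    # longest span of matched words
        run = run + 1 if m else 0
        longest = max(longest, run)
    total = sum(matches)                 # matched query words, with repeats
    distinct = len(qset & set(tokens))   # distinct matched query words
    return w_span * longest + w_count * total + w_distinct * distinct
```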
rmitrun2¶
- Run ID: rmitrun2
- Participant: rmitu.scholer
- Track: Question Answering
- Year: 2007
- Submission: 8/13/2007
- Type: automatic
- Task: ciqa_baseline
- Run description: For each question, we use the words inside brackets from the question template, plus some words from the question narrative, as the query; essentially, we include all words from the narrative field except the introductory part (such as "The analyst would like to know of"). We then retrieve the top 20 documents using the Indri search engine, parse those documents into sentences, and rank the sentences by score. A sentence is scored, as in rmitrun1, on a combination of the longest span of matched query words, the number of matched query words, and the number of distinct matched query words.
rmitrun3¶
- Run ID: rmitrun3
- Participant: rmitu.scholer
- Track: Question Answering
- Year: 2007
- Submission: 8/13/2007
- Type: automatic
- Task: ciqa_urlfile
- Run description: In this interaction form, assessors are required to compare two alternative answer lists, select their preferred answer list, and fill in a questionnaire about why they chose one answer list over the other.
rmitrun4¶
- Run ID: rmitrun4
- Participant: rmitu.scholer
- Track: Question Answering
- Year: 2007
- Submission: 8/13/2007
- Type: automatic
- Task: ciqa_urlfile
- Run description: In this interaction form, assessors are required to 1) make a relevance judgment for each answer in the answer list, and 2) fill in a questionnaire about the quality of the answer list and the difficulty of finding an answer to that question.
rmitrun5¶
- Run ID: rmitrun5
- Participant: rmitu.scholer
- Track: Question Answering
- Year: 2007
- Submission: 8/26/2007
- Type: automatic
- Task: ciqa_final
- Run description: For each question, we use the following querying strategies to get an initial set of 20 candidate documents: 1) ALL phrases enclosed inside the brackets of the question template; 2) ALL words enclosed inside the brackets of the question template; 3) ANY phrases enclosed inside the brackets of the question template. We then do the sentence parsing and ranking as in our initial runs. The baselines for this run are rmitrun1 and rmitrun2.
rmitrun6¶
- Run ID: rmitrun6
- Participant: rmitu.scholer
- Track: Question Answering
- Year: 2007
- Submission: 8/26/2007
- Type: automatic
- Task: ciqa_final
- Run description: In one of our interactive runs, assessors judged the relevance of the top ten ranked sentences for each question. With this relevance information, we expand the original query according to the following preference order (sketched below): 1) up to five sentences assessed as "definitely an answer"; 2) up to five sentences assessed as "not sure, need to read the document"; 3) up to five sentences assessed as "definitely not an answer". We then do the usual sentence parsing and ranking as in our other runs. Our interaction form 2 (rmitrun4) combined two baselines (rmitrun1 and rmitrun2), so the baselines for this run are rmitrun1 and rmitrun2.
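As a rough illustration of the tiered feedback selection described above, a minimal sketch that gathers up to five sentences per assessment label in preference order and naively adds their words to the query. The labels come from the run description, but the expansion step is an assumption, not RMIT's code.

```python
# Assessment labels in preference order, as described in the run.
PREFERENCE = ["definitely an answer",
              "not sure, need to read the document",
              "definitely not an answer"]

def pick_feedback(judged, per_tier=5):
    """judged: list of (sentence, label) pairs from the assessors.
    Return up to per_tier sentences per label, in preference order."""
    picked = []
    for label in PREFERENCE:
        picked.extend([s for s, l in judged if l == label][:per_tier])
    return picked

def expand_query(query_terms, feedback_sentences):
    """Naive expansion: append each new word seen in the feedback
    sentences to the query (illustrative only)."""
    expanded, seen = list(query_terms), set(query_terms)
    for sent in feedback_sentences:
        for tok in sent.lower().split():
            if tok not in seen:
                seen.add(tok)
                expanded.append(tok)
    return expanded
```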
sicka¶
- Run ID: sicka
- Participant: ustrathclyde.ruthven
- Track: Question Answering
- Year: 2007
- Submission: 8/10/2007
- Type: manual
- Task: ciqa_baseline
- Run description: Using Lemur/Indri and the KL-divergence retrieval algorithm, named entities in the questions were submitted as queries. Each returned document was examined for relevance to the question, and the most relevant parts were extracted, including the context necessary to interpret the answer.
sicka2¶
- Run ID: sicka2
- Participant: ustrathclyde.ruthven
- Track: Question Answering
- Year: 2007
- Submission: 8/27/2007
- Type: manual
- Task: ciqa_final
- Run description: Same as baseline run sicka.
strath¶
- Run ID: strath
- Participant: ustrathclyde.ruthven
- Track: Question Answering
- Year: 2007
- Submission: 8/11/2007
- Type: manual
- Task: ciqa_urlfile
- Run description: Three styles of interaction form: one presenting answers in different styles, and two estimating assessors' preferences for information presentation (one per assessor).
strath2¶
- Run ID: strath2
- Participant: ustrathclyde.ruthven
- Track: Question Answering
- Year: 2007
- Submission: 8/15/2007
- Type: manual
- Task: ciqa_urlfile
- Run description: NO INFO GIVEN
uams07atch¶
- Run ID: uams07atch
- Participant: uamsterdam.deRijke
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
uams07main¶
- Run ID: uams07main
- Participant: uamsterdam.deRijke
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
uams07nwrr¶
- Run ID: uams07nwrr
- Participant: uamsterdam.deRijke
- Track: Question Answering
- Year: 2007
- Submission: 7/31/2007
- Task: main
UMass1¶
- Run ID: UMass1
- Participant: umass.allan
- Track: Question Answering
- Year: 2007
- Submission: 8/13/2007
- Type: automatic
- Task: ciqa_urlfile
- Run description: Basic retrieval system allowing the user to search for relevant documents and manually record answers.
UMass2¶
- Run ID: UMass2
- Participant: umass.allan
- Track: Question Answering
- Year: 2007
- Submission: 8/13/2007
- Type: automatic
- Task: ciqa_urlfile
- Run description: Basic retrieval system allowing the user to search for relevant documents and manually record answers.
UMassBaseAut¶
- Run ID: UMassBaseAut
- Participant: umass.allan
- Track: Question Answering
- Year: 2007
- Submission: 8/13/2007
- Type: automatic
- Task: ciqa_baseline
- Run description: For each topic, we retrieved the top 10 documents and, for each of those documents, returned at most the top 2 sentences as answers, for a maximum of 20 answers per topic. This roughly represents the information we initially display to the user in our interactive system.
UMassIntA¶
- Run ID: UMassIntA
- Participant: umass.allan
- Track: Question Answering
- Year: 2007
- Submission: 8/26/2007
- Type: automatic
- Task: ciqa_final
- Run description: This run consists of the answers provided by the assessors during their interactions with UMass1 and UMass2 (10 minutes of interaction). Both interaction "forms" presented the same live IR system, which retained its state between interactions. Assessors directly entered answers to the questions. During one day of interaction, the assessors experienced network slowdowns, which likely hurt their ability to find and record answers.
UMassIntM¶
- Run ID: UMassIntM
- Participant: umass.allan
- Track: Question Answering
- Year: 2007
- Submission: 8/26/2007
- Type: manual
- Task: ciqa_final
- Run description: This is a manual run constructed by having one of the UMass researchers use our interactive system (UMass1 and UMass2, which are the same system) without a time limit to find answers. For questions where the researcher found more than 7,000 characters of answers, the researcher edited the answers to fit within the limit. Most answers are snippets of text extracted from source documents.
UMD07iMASCa¶
- Run ID: UMD07iMASCa
- Participant: umd-collegepark.oard
- Track: Question Answering
- Year: 2007
- Submission: 8/13/2007
- Type: automatic
- Task: ciqa_baseline
- Run description: The automatic baseline run for the CiQA 2007 data.
UMD07iMASCaU¶
- Run ID: UMD07iMASCaU
- Participant: umd-collegepark.oard
- Track: Question Answering
- Year: 2007
- Submission: 8/13/2007
- Type: automatic
- Task: ciqa_urlfile
- Run description: Intelligent sentence selection provides the initial choices, with humans interactively selecting the best syntactically trimmed version for the topic.
UMD07iMASCb¶
- Run ID: UMD07iMASCb
- Participant: umd-collegepark.oard
- Track: Question Answering
- Year: 2007
- Submission: 8/26/2007
- Type: automatic
- Task: ciqa_final
- Run description: For every topic, the answers created by the assessors were automatically enhanced by our compression-based summarization system until the resulting answers reached 250 words.
UMD07MMRa¶
- Run ID: UMD07MMRa
- Participant: umd-collegepark.oard
- Track: Question Answering
- Year: 2007
- Submission: 8/12/2007
- Type: automatic
- Task: ciqa_baseline
- Run description: Implementation of maximal marginal relevance (see the sketch below).
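The description names maximal marginal relevance without further detail. Here is a minimal sketch of the standard greedy MMR selection; `sim_query` and `sim_pair` are placeholder similarity functions and `lam` an assumed trade-off constant, not UMD's implementation.

```python
def mmr_select(candidates, sim_query, sim_pair, k=20, lam=0.7):
    """Greedy MMR: at each step add the candidate maximizing
    lam * sim(query, c) - (1 - lam) * max over selected s of sim(c, s)."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(c):
            # Redundancy = similarity to the closest already-selected item.
            redundancy = max((sim_pair(c, s) for s in selected), default=0.0)
            return lam * sim_query(c) - (1.0 - lam) * redundancy
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected
```

The interactive variant below (UMD07MMRaURL) replaces the argmax at each step with a human choice.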
UMD07MMRaURL¶
- Run ID: UMD07MMRaURL
- Participant: umd-collegepark.oard
- Track: Question Answering
- Year: 2007
- Submission: 8/12/2007
- Type: automatic
- Task: ciqa_urlfile
- Run description: Interactive maximal marginal relevance, where at each step the user picks the best sentence to add to a growing answer.
UMD07MMRb¶
- Run ID: UMD07MMRb
- Participant: umd-collegepark.oard
- Track: Question Answering
- Year: 2007
- Submission: 8/21/2007
- Type: automatic
- Task: ciqa_final
- Run description: Started with user input, then padded with automatic results until the character quota was filled.
UNCYA1¶
- Run ID: UNCYA1
- Participant: unorth-carolina.kelly
- Track: Question Answering
- Year: 2007
- Submission: 8/13/2007
- Type: automatic
- Task: ciqa_urlfile
- Run description: This interaction form presents sets of sentences to assessors for feedback.
UNCYA2¶
- Run ID: UNCYA2
- Participant: unorth-carolina.kelly
- Track: Question Answering
- Year: 2007
- Submission: 8/13/2007
- Type: automatic
- Task: ciqa_urlfile
- Run description: This interaction form presents a set of open-ended questions to assessors.
UNCYABL30¶
- Run ID: UNCYABL30
- Participant: unorth-carolina.kelly
- Track: Question Answering
- Year: 2007
- Submission: 8/14/2007
- Type: automatic
- Task: ciqa_baseline
- Run description: Used Lemur 4.5 to retrieve documents using KL-divergence with smoothing. Sentences were parsed from these documents and sent to another retrieval module that used a statistical translation model to identify the top 40 sentences for each topic.
UNCYAEX1¶
- Run ID: UNCYAEX1
- Participant: unorth-carolina.kelly
- Track: Question Answering
- Year: 2007
- Submission: 8/26/2007
- Type: automatic
- Task: ciqa_baseline
- Run description: Replacement for the prior baseline.
UNCYAEX2¶
- Run ID: UNCYAEX2
- Participant: unorth-carolina.kelly
- Track: Question Answering
- Year: 2007
- Submission: 8/26/2007
- Type: automatic
- Task: ciqa_final
- Run description: Uses feedback from both interaction forms (UNCYA1 and UNCYA2): positive sentence feedback and positive feedback from the open-ended questions.
UofL¶
- Run ID: UofL
- Participant: ulethbridge.chali
- Track: Question Answering
- Year: 2007
- Submission: 7/30/2007
- Task: main
UWfinalMAN¶
- Run ID: UWfinalMAN
- Participant: uwaterloo-olga
- Track: Question Answering
- Year: 2007
- Submission: 8/26/2007
- Type: manual
- Task: ciqa_final
- Run description: Since there was a network outage, five of our own human assistants used the same interaction forms, and the most common article among the five was selected as relevant. This run parallels UWfinalWIKI, but with our own humans, and is thus manual.
UWfinalWIKI¶
- Run ID: UWfinalWIKI
- Participant: uwaterloo-olga
- Track: Question Answering
- Year: 2007
- Submission: 8/26/2007
- Type: automatic
- Task: ciqa_final
- Run description: Similar to UWinitWIKI, in that a facet expansion based on Wikipedia articles is performed; however, the articles were selected by assessors rather than by an automatic algorithm. Only 13 of the 30 topics received interaction, on account of a network outage at UW on the last day of interactions.
UWfinLINK¶
- Run ID: UWfinLINK
- Participant: uwaterloo-olga
- Track: Question Answering
- Year: 2007
- Submission: 8/6/2007
- Type: automatic
- Task: ciqa_urlfile
- Run description: Asks the assessor to mark concepts as "vital", "okay", or "not relevant"; the concepts are parsed out as Wikipedia articles from the top 100 nuggets returned by a system similar to the one described by Vechtomova for CiQA 2006.
UWfinWIKI¶
- Run ID: UWfinWIKI
- Participant: uwaterloo-olga
- Track: Question Answering
- Year: 2007
- Submission: 8/6/2007
- Type: automatic
- Task: ciqa_urlfile
- Run description: An interaction form that asks the assessor to pick the most relevant Wikipedia article for each facet in each topic.
UWinitBASE¶
- Run ID: UWinitBASE
- Participant: uwaterloo-olga
- Track: Question Answering
- Year: 2007
- Submission: 8/6/2007
- Type: automatic
- Task: ciqa_baseline
- Run description: This run is based on the system presented by Vechtomova at CiQA 2006, with only slight tuning changes.
UWinitWIKI¶
- Run ID: UWinitWIKI
- Participant: uwaterloo-olga
- Track: Question Answering
- Year: 2007
- Submission: 8/6/2007
- Type: automatic
- Task: ciqa_baseline
- Run description: Similar to the system presented by Vechtomova at CiQA 2006, this run attempts to resolve each facet of a topic to a corresponding Wikipedia article, then uses the surrounding link structure to find high-quality query expansion terms to assist in ranking nuggets.