Runs - Question Answering 2006¶
asked06a¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: asked06a
- Participant: tokyo-it.whittaker
- Track: Question Answering
- Year: 2006
- Submission: 8/1/2006
- Task: main
asked06b¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: asked06b
- Participant: tokyo-it.whittaker
- Track: Question Answering
- Year: 2006
- Submission: 8/1/2006
- Task: main
asked06c¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: asked06c
- Participant: tokyo-it.whittaker
- Track: Question Answering
- Year: 2006
- Submission: 8/1/2006
- Task: main
clr06ci1¶
Participants
| Input
| Summary
| Appendix
- Run ID: clr06ci1
- Participant: clresearch
- Track: Question Answering
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: ciqa_baseline
- Run description: The template values were used directly as the search terms in a Lucene search of the AQUAINT corpus, except for template 2, where the first template value was excluded. The template values were also slightly modified (in a Perl script) to remove the words "or" and "and" and any quotation marks; the query construction is sketched below. The top 25 resulting documents were then parsed and processed into an XML representation, and the questions were then posed against these documents. This run is based on an early version of the XML creation routines.
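The description amounts to a small query-construction step. A minimal Python sketch of that step, assuming template values arrive as plain strings; `search_aquaint` and the example values are hypothetical placeholders, not details from the run:

```python
import re

def clean_template_value(value: str) -> str:
    """Strip quotation marks and the whole words 'or' and 'and',
    mirroring the Perl cleanup described in the run."""
    value = value.replace('"', '')
    # \b keeps words such as 'order' or 'android' intact.
    return ' '.join(re.sub(r'\b(?:or|and)\b', ' ', value, flags=re.IGNORECASE).split())

def build_query(template_id: int, values: list[str]) -> str:
    """All cleaned template values become search terms, except that
    template 2 excludes its first value."""
    kept = values[1:] if template_id == 2 else values
    return ' '.join(clean_template_value(v) for v in kept)

# `search_aquaint` is a hypothetical stand-in for the Lucene search of
# the AQUAINT corpus; the real run retrieved the top 25 documents.
# docs = search_aquaint(build_query(1, ['"dotcom" bust', 'investors and pensions']), k=25)
```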
clr06ci1r¶
Participants
| Input
| Summary
| Appendix
- Run ID: clr06ci1r
- Participant: clresearch
- Track: Question Answering
- Year: 2006
- Submission: 9/5/2006
- Type: automatic
- Task: ciqa_final
- Run description: Based on the interaction form results, answers judged by the assessors were eliminated, with the result that answers previously generated during the baseline were merely moved up in the ranking. Minimal manual intervention was used in the baseline run.
clr06ci2¶
Participants
| Input
| Summary
| Appendix
- Run ID: clr06ci2
- Participant: clresearch
- Track: Question Answering
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: ciqa_baseline
- Run description: The template values were used directly as the search terms in a Lucene search of the AQUAINT corpus, except for template 2, where the first template value was excluded. The template values were also slightly modified (in a Perl script) to remove the words "or" and "and" and any quotation marks. The top 25 resulting documents were then parsed and processed into an XML representation, and the questions were then posed against these documents. This run is based on a later version of the XML creation routines.
clr06ci2r¶
Participants
| Input
| Summary
| Appendix
- Run ID: clr06ci2r
- Participant: clresearch
- Track: Question Answering
- Year: 2006
- Submission: 9/5/2006
- Type: automatic
- Task: ciqa_final
- Run description: Based on the interaction form results, answers judged by the assessors were eliminated, with the result that answers previously generated during the baseline were merely moved up in the ranking. Minimal manual intervention was used in the baseline run.
clr06m¶
Participants
| Input
| Summary
| Appendix
- Run ID: clr06m
- Participant: clresearch
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
CLR1¶
Participants
| Input
| Summary
| Appendix
- Run ID: CLR1
- Participant: clresearch
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Task: ciqa_form
- Run description: The interaction forms ask the assessor to rate the relevance of as many sentences as possible from those submitted as run 'clr06ci1', which was essentially generated automatically.
CLR2¶
Participants
| Input
| Summary
| Appendix
- Run ID: CLR2
- Participant: clresearch
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Task: ciqa_form
- Run description: The interaction forms ask the assessor to rate the relevance of as many sentences as possible from those submitted as run 'clr06ci2', which was essentially generated automatically.
csail01¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: csail01
- Participant: mit.katz
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
csail02¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: csail02
- Participant: mit.katz
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
csail03¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: csail03
- Participant: mit.katz
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
csail1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: csail1
- Participant: mit.katz
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Type: automatic
- Task: ciqa_baseline
- Run description: Relationship task from last year with the best known settings: `./relation-processor.pl --sois /dev/null --morph --use-syns --scr-square --use-default-lucene-query --df-beta 5 --pr-beta 7`
csaili1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: csaili1
- Participant: mit.katz
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Task: ciqa_form
- Run description: Our top results, with checkboxes to mark which ones are good, and a text box at the end in case the assessor wants to tell us anything else.
csaili2¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: csaili2
- Participant: mit.katz
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Task: ciqa_form
- Run description: Interaction at the word level: which words or phrases are most useful in an answer? The closest thing we could do to paper and highlighter.
csailif1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: csailif1
- Participant: mit.katz
- Track: Question Answering
- Year: 2006
- Submission: 9/5/2006
- Type: automatic
- Task: ciqa_final
- Run description: This run selects only the checked answers from form one (the entire-question checkboxes) and fills up to 7000 characters with answers not previously supplied, as sketched below. The assumption is that unchecked answers do not contain nuggets, and that we would rather have more answers assessed than maximize our precision.
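The selection logic is simple enough to sketch. A minimal illustration, assuming answers are plain strings and `candidates` is the system's ranked answer list (all names here are illustrative, not taken from the run):

```python
def build_final_run(checked: list[str], candidates: list[str],
                    budget: int = 7000) -> list[str]:
    """Keep the assessor-checked answers, then append previously
    unsupplied answers (best-first) until the character budget is spent."""
    final, seen = list(checked), set(checked)
    used = sum(len(a) for a in final)
    for answer in candidates:
        # Skip answers already supplied or too long for the remaining budget.
        if answer in seen or used + len(answer) > budget:
            continue
        final.append(answer)
        seen.add(answer)
        used += len(answer)
    return final
```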
csailif2¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: csailif2
- Participant: mit.katz
- Track: Question Answering
- Year: 2006
- Submission: 9/6/2006
- Type: automatic
- Task: ciqa_final
- Run description: Same process as csailif1, but against the responses selected word-by-word.
cuhkqaepisto¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: cuhkqaepisto
- Participant: chineseu-hongkong.kan
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
Dal06e¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: Dal06e
- Participant: dalhousieu.keselj
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
Dal06m¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: Dal06m
- Participant: dalhousieu.keselj
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
Dal06p¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: Dal06p
- Participant: dalhousieu.keselj
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
DLT06QA01¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: DLT06QA01
- Participant: ulimerick.sutcliffe
- Track: Question Answering
- Year: 2006
- Submission: 7/30/2006
- Task: main
DLT06QA02¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: DLT06QA02
- Participant: ulimerick.sutcliffe
- Track: Question Answering
- Year: 2006
- Submission: 7/30/2006
- Task: main
ed06qar1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ed06qar1
- Participant: uedinburgh.webber
- Track: Question Answering
- Year: 2006
- Submission: 7/29/2006
- Task: main
ed06qar2¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ed06qar2
- Participant: uedinburgh.webber
- Track: Question Answering
- Year: 2006
- Submission: 7/29/2006
- Task: main
ed06qar3¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ed06qar3
- Participant: uedinburgh.webber
- Track: Question Answering
- Year: 2006
- Submission: 7/29/2006
- Task: main
FDUQAT15A¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: FDUQAT15A
- Participant: fudan.wu
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
FDUQAT15B¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: FDUQAT15B
- Participant: fudan.wu
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
FDUQAT15C¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: FDUQAT15C
- Participant: fudan.wu
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
ILQUA1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ILQUA1
- Participant: ualbany.min
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
InsunQA06¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: InsunQA06
- Participant: harbin.zhao
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
irstqa06¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: irstqa06
- Participant: itc-irst.negri
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
ISL1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ISL1
- Participant: ukarlsruhe-cmu.schlaefer
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
ISL2¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ISL2
- Participant: ukarlsruhe-cmu.schlaefer
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
ISL3¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: ISL3
- Participant: ukarlsruhe-cmu.schlaefer
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
LCCFerret¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: LCCFerret
- Participant: lcc.harabagiu
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
lccPA06¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: lccPA06
- Participant: lcc.moldovan
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
lexiclone06¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: lexiclone06
- Participant: lexiclone.geller
- Track: Question Answering
- Year: 2006
- Submission: 7/28/2006
- Task: main
lf10w10g5¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: lf10w10g5
- Participant: macquarieu.molla
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
lf10w10g5l5¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: lf10w10g5l5
- Participant: macquarieu.molla
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
lf10w20g5l5¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: lf10w20g5l5
- Participant: macquarieu.molla
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
lsv2006a¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: lsv2006a
- Participant: saarlandu.leidner
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
lsv2006b¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: lsv2006b
- Participant: saarlandu.leidner
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
lsv2006c¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: lsv2006c
- Participant: saarlandu.leidner
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
MITRE2006A¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: MITRE2006A
- Participant: mitre-corp.burger
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
MITRE2006C¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: MITRE2006C
- Participant: mitre-corp.burger
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
MITRE2006D¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: MITRE2006D
- Participant: mitre-corp.burger
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
NUSCHUAQA1¶
Participants
| Input
| Summary
| Appendix
- Run ID: NUSCHUAQA1
- Participant: nus.kor
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
NUSCHUAQA2¶
Participants
| Input
| Summary
| Appendix
- Run ID: NUSCHUAQA2
- Participant: nus.kor
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
NUSCHUAQA3¶
Participants
| Input
| Summary
| Appendix
- Run ID: NUSCHUAQA3
- Participant: nus.kor
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
QACTIS06A¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: QACTIS06A
- Participant: nsa.schone
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
QACTIS06B¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: QACTIS06B
- Participant: nsa.schone
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
QACTIS06C¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: QACTIS06C
- Participant: nsa.schone
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
QASCU1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: QASCU1
- Participant: concordiau.kosseim
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Task: main
QASCU2¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: QASCU2
- Participant: concordiau.kosseim
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
QASCU3¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: QASCU3
- Participant: concordiau.kosseim
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
Roma2006run1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: Roma2006run1
- Participant: uroma.bos
- Track: Question Answering
- Year: 2006
- Submission: 7/30/2006
- Task: main
Roma2006run2¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: Roma2006run2
- Participant: uroma.bos
- Track: Question Answering
- Year: 2006
- Submission: 7/30/2006
- Task: main
Roma2006run3¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: Roma2006run3
- Participant: uroma.bos
- Track: Question Answering
- Year: 2006
- Submission: 7/30/2006
- Task: main
shef06qal¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: shef06qal
- Participant: usheffield.greenwood
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
shef06sem¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: shef06sem
- Participant: usheffield.greenwood
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
shef06ss¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: shef06ss
- Participant: usheffield.greenwood
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
strath1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: strath1
- Participant: ustrathclyde.baillie
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Type: manual
- Task: ciqa_baseline
- Run description: This was a manual run, with answers obtained by manually searching the AQUAINT collection.
strath2¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: strath2
- Participant: ustrathclyde.baillie
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Task: ciqa_form
- Run description: The interaction form aims to gather information on the assessors' perceptions of answers to the questions.
strath3¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: strath3
- Participant: ustrathclyde.baillie
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Task: ciqa_form
- Run description: The interaction form aims to gather information on the assessors' personal context and perceptions of the topic.
strath4¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: strath4
- Participant: ustrathclyde.baillie
- Track: Question Answering
- Year: 2006
- Submission: 9/5/2006
- Type: manual
- Task: ciqa_final
- Run description: Manually created answers extracted through interactive search.
TIQA200601¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: TIQA200601
- Participant: trulyintelligent.satish
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
TREC06ST01¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: TREC06ST01
- Participant: tomlinson.stan
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
TWQA0601¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: TWQA0601
- Participant: pekingu.yan
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
TWQA0602¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: TWQA0602
- Participant: pekingu.yan
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
TWQA0603¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: TWQA0603
- Participant: pekingu.yan
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
UMAS1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMAS1
- Participant: umass.allan
- Track: Question Answering
- Year: 2006
- Submission: 8/1/2006
- Task: ciqa_form
- Run description: Queries are primarily about external-corpus selection, term-variant finding using edit distance, and the role of people, locations, and organizations in the topic.
UMASSauto1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMASSauto1
- Participant: umass.allan
- Track: Question Answering
- Year: 2006
- Submission: 8/1/2006
- Type: automatic
- Task: ciqa_baseline
- Run description: This is the first baseline run: simple querying at the document level followed by the sentence level.
UMASSauto2¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMASSauto2
- Participant: umass.allan
- Track: Question Answering
- Year: 2006
- Submission: 8/1/2006
- Type: automatic
- Task: ciqa_baseline
- Run description: This is the second baseline run: simple querying at the document level followed by the sentence level. In addition, pseudo-relevance feedback using an automatically selected external corpus was performed; an illustrative expansion step is sketched below.
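The run description does not say how the expansion terms were chosen, so the following is only a generic pseudo-relevance-feedback sketch under that caveat: take the most frequent novel terms from the top-ranked external documents and append them to the query.

```python
from collections import Counter

def expand_query(query: list[str], feedback_docs: list[list[str]],
                 n_terms: int = 10) -> list[str]:
    """Append the most frequent novel terms from the top-ranked
    documents of the external corpus to the original query."""
    counts = Counter(
        term
        for doc in feedback_docs      # each doc is a list of tokens
        for term in doc
        if term not in query and len(term) > 2  # crude stopword guard
    )
    return query + [term for term, _ in counts.most_common(n_terms)]
```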
UMASSi1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMASSi1
- Participant: umass.allan
- Track: Question Answering
- Year: 2006
- Submission: 9/6/2006
- Type: automatic
- Task: ciqa_final
- Run description: This is a final run after utilizing the feedback from the assessors in an automatic fashion. The other features of this run (UMASSi1) are: (0) external expansion using only the USENET newsgroups selected by the assessors as being relevant to the topic; (1) expansion with spelling variants; (2) cleaning up the final result list by removing snippets that contain named entities the assessors have marked as not relevant to the topic; (3) tracking-style duplicate information detection.
UMASSi2¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMASSi2
- Participant: umass.allan
- Track: Question Answering
- Year: 2006
- Submission: 9/6/2006
- Type: automatic
- Task: ciqa_final
- Run description: This is a final run after utilizing the feedback from the assessors in an automatic fashion. The difference from UMASSi1 is that external expansion is done based on a combination of automatically and manually selected USENET newsgroups. The other features shared with UMASSi1 are: (1) expansion with spelling variants; (2) cleaning up the final result list by removing snippets that contain named entities the assessors have marked as not relevant to the topic (sketched below); (3) tracking-style duplicate information detection.
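Step (2) above is a straightforward filter. A toy sketch, with simple substring matching standing in for whatever named-entity matching the run actually used:

```python
def remove_flagged_snippets(snippets: list[str], bad_entities: list[str]) -> list[str]:
    """Drop any snippet mentioning a named entity the assessors marked
    as not relevant; substring matching stands in for real NE matching."""
    lowered = [e.lower() for e in bad_entities]
    return [s for s in snippets
            if not any(e in s.lower() for e in lowered)]
```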
UMDA1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMDA1
- Participant: umaryland.oard
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Task: ciqa_form
- Run description: Automatic interaction forms of the University of Maryland (relevance feedback).
UMDA1post¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMDA1post
- Participant: umaryland.oard
- Track: Question Answering
- Year: 2006
- Submission: 9/5/2006
- Type: automatic
- Task: ciqa_final
- Run description: This run takes the form responses, re-ranks the answers, and uses the relevant answers for query expansion.
UMDA1pre¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMDA1pre
- Participant: umaryland.oard
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Type: automatic
- Task: ciqa_baseline
- Run description: Automatic baseline run of the University of Maryland using automatic query expansion.
UMDM1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMDM1
- Participant: umaryland.oard
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Task: ciqa_form
- Run description: Interaction forms for the baseline manual run of the University of Maryland.
UMDM1post¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMDM1post
- Participant: umaryland.oard
- Track: Question Answering
- Year: 2006
- Submission: 9/5/2006
- Type: manual
- Task: ciqa_final
- Run description: Final manual run: the interaction form responses were used to re-rank answers and, for some, to construct new queries.
UMDM1pre¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UMDM1pre
- Participant: umaryland.oard
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Type: manual
- Task: ciqa_baseline
- Run description: Manual baseline run of the University of Maryland.
uw574¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: uw574
- Participant: uwash.lewis
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main
UWAT1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UWAT1
- Participant: uwaterloo-clarke
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Task: ciqa_form
- Run description: Top 15 nuggets retrieved in the UWATCIQA1 run.
UWATCIQA1¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UWATCIQA1
- Participant: uwaterloo-clarke
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Type: automatic
- Task: ciqa_baseline
- Run description: Baseline automatic run. The top 200 documents were retrieved using Okapi. Sentences (nuggets) from the retrieved documents were ranked by the number of query facets they contain. Ties were resolved by the sum of the idf of the query terms in the nugget, and then by the number of lexical bonds they have with sentences containing a different facet in the same document. The ranking order is sketched below.
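The tie-breaking order reads naturally as a composite sort key. A minimal sketch; the `Nugget` fields and the idf formula (log(N/df)) are assumptions for illustration, not details from the run:

```python
import math
from dataclasses import dataclass

@dataclass
class Nugget:
    text: str
    facet_count: int         # query facets the sentence covers
    query_terms: list[str]   # query terms occurring in the sentence
    lexical_bonds: int       # bonds with sentences covering a different facet

def rank_nuggets(nuggets: list[Nugget], df: dict[str, int], n_docs: int) -> list[Nugget]:
    """Sort by facet count, then sum of idf of the query terms in the
    nugget, then lexical-bond count, all descending."""
    def key(n: Nugget) -> tuple:
        idf_sum = sum(math.log(n_docs / df[t]) for t in n.query_terms if df.get(t))
        return (n.facet_count, idf_sum, n.lexical_bonds)
    return sorted(nuggets, key=key, reverse=True)
```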
UWATCIQA2¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UWATCIQA2
- Participant: uwaterloo-clarke
- Track: Question Answering
- Year: 2006
- Submission: 7/31/2006
- Type: automatic
- Task: ciqa_baseline
- Run description: Baseline automatic run. The top 200 documents were retrieved using Okapi. Sentences (nuggets) from the retrieved documents were ranked by the number of query facets they contain. Ties were resolved by the sum of the idf of the query terms in the nugget, and then by the average tf.idf of all terms in the nugget.
UWATCIQA3¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UWATCIQA3
- Participant: uwaterloo-clarke
- Track: Question Answering
- Year: 2006
- Submission: 9/5/2006
- Type: automatic
- Task: ciqa_final
- Run description: Nuggets that have lexical cohesive bonds with the nuggets selected by the users in the clarification form are included in this run.
UWATCIQA4¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: UWATCIQA4
- Participant: uwaterloo-clarke
- Track: Question Answering
- Year: 2006
- Submission: 9/5/2006
- Type: automatic
- Task: ciqa_final
- Run description: Query expansion terms were extracted from the user-selected nuggets.
uwclma¶
Participants
| Proceedings
| Input
| Summary
| Appendix
- Run ID: uwclma
- Participant: uwash.lewis
- Track: Question Answering
- Year: 2006
- Submission: 8/2/2006
- Task: main