Runs - Novelty 2003

ccsum2svdpqr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsum2svdpqr
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/8/2003
  • Task: task2
  • Run description: Did not use topic info. SVD followed by pivoted QR; a sketch follows below.
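
The SVD-plus-pivoted-QR step named in several of these runs can be sketched as follows, assuming a tf-idf term-by-sentence matrix; the rank k, the 10-sentence budget, and the weighting are illustrative assumptions, not the run's actual settings.

    # Hedged sketch: rank-k SVD to denoise the term-by-sentence matrix, then
    # pivoted QR to order sentences by how much new direction each one adds.
    import numpy as np
    from scipy.linalg import qr
    from sklearn.feature_extraction.text import TfidfVectorizer

    def svd_pivoted_qr_select(sentences, k=50, n_select=10):
        A = TfidfVectorizer().fit_transform(sentences).T.toarray()  # terms x sentences
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        k = min(k, len(s))
        A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k reconstruction
        _, _, piv = qr(A_k, pivoting=True)            # pivot order ranks the columns
        return piv[:n_select]                         # indices of selected sentences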

ccsum3pqr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsum3pqr
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/11/2003
  • Task: task3
  • Run description: HMM trained on the top 5 documents, followed by pivoted QR (Task 3).

ccsum3qr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsum3qr
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/11/2003
  • Task: task3
  • Run description: Trained an HMM with relevant sentences from the first 5 docs and then used QR to extract novel sentences; a generic sketch follows below.
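
The exact HMM is not described here, so the following is only a generic two-state forward-backward sketch with Gaussian emissions fitted from labeled sentences (e.g., from the first 5 docs); the single scalar feature per sentence and the uniform transition matrix are placeholder assumptions.

    # Two-state HMM sketch (state 1 = relevant). feats and labels are 1-D
    # numpy arrays; emissions are Gaussians fit from the labeled sentences.
    import numpy as np
    from scipy.stats import norm

    def fit_emissions(feats, labels):
        return [(feats[labels == s].mean(), feats[labels == s].std() + 1e-6)
                for s in (0, 1)]

    def relevance_posteriors(feats, emis, trans=np.full((2, 2), 0.5)):
        T = len(feats)
        b = np.array([[norm.pdf(x, m, sd) for m, sd in emis] for x in feats])
        alpha, beta = np.zeros((T, 2)), np.ones((T, 2))
        alpha[0] = 0.5 * b[0]; alpha[0] /= alpha[0].sum()
        for t in range(1, T):                      # scaled forward pass
            alpha[t] = b[t] * (alpha[t - 1] @ trans)
            alpha[t] /= alpha[t].sum()
        for t in range(T - 2, -1, -1):             # scaled backward pass
            beta[t] = trans @ (b[t + 1] * beta[t + 1])
            beta[t] /= beta[t].sum()
        post = alpha * beta
        return (post / post.sum(axis=1, keepdims=True))[:, 1]  # P(relevant)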

ccsum3svdpqr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsum3svdpqr
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: Trained an HMM on the first 5 docs to generate relevant sentences, then used SVD and pivoted QR to find novel ones.

ccsum4spq001

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsum4spq001
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/16/2003
  • Task: task4
  • Run description: No use of topic fields. SVD followed by QR with a threshold of 0.001.

ccsum4svdpqr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsum4svdpqr
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/16/2003
  • Task: task4
  • Run description: No use of topic fields. SVD followed by pivoted QR with a threshold of 0.001.

ccsumlaqr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsumlaqr
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: HMM using LA Times relevant sentences for training, followed by a QR with partial pivoting. We do not use the topic field at all; your web form wouldn't let me leave it blank!

ccsummeoqr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsummeoqr
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: HMM using a hand-marked subset of 2003 novel sentences, followed by a QR with partial pivoting. We do not use the topic field at all; your web form wouldn't let me leave it blank!

ccsummeosvd

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsummeosvd
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: HMM using a hand-marked subset of 2003 novel sentences, followed by an SVD and a QR with partial pivoting. We do not use the topic field at all; your web form wouldn't let me leave it blank!

ccsumrelqr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsumrelqr
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: HMM using 2002 relevant sentences for training, followed by a QR with partial pivoting. We do not use the topic field at all; your web form wouldn't let me leave it blank!

ccsumrelsvd

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsumrelsvd
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: HMM using 2002 relevant sentences for training, followed by an SVD and a QR with partial pivoting. We do not use the topic field at all; your web form wouldn't let me leave it blank!

ccsumt2pqr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsumt2pqr
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/8/2003
  • Task: task2
  • Run description: Did not use topic info. Pivoted QR to select sentences

ccsumt2qr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsumt2qr
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/8/2003
  • Task: task2
  • Run description: Did not use topic info. QR to select sentences

ccsumt2svdqr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsumt2svdqr
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/8/2003
  • Task: task2
  • Run description: Did not use topic info. SVD followed by QR

ccsumt4pqr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsumt4pqr
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/16/2003
  • Task: task4
  • Run description: No use of topic fields. QR with partial pivoting and a threshold of 0.01.

ccsumt4qr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsumt4qr
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/16/2003
  • Task: task4
  • Run description: No use of topic fields. QR (no pivoting) with a threshold of 0.7.

ccsumt4sqr01

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ccsumt4sqr01
  • Participant: ccs.conroy
  • Track: Novelty
  • Year: 2003
  • Submission: 9/16/2003
  • Task: task4
  • Run description: No use of topic fields. SVD followed by pivoted QR with a threshold of 0.001.

clr03n1d

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: clr03n1d
  • Participant: clresearch
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: CL Research parses and processes sentences into an XML representation of discourse structure, discourse entities, verbs, and prepositions, which is then used for matching up with a parsed representation of the topics.

clr03n1n2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: clr03n1n2
  • Participant: clresearch
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: CL Research parses and processes sentences into an XML representation of discourse structure, discourse entities, verbs, and prepositions, which is then used for matching up with a parsed representation of the topics.

clr03n1n3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: clr03n1n3
  • Participant: clresearch
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: CL Research parses and processes sentences into an XML representation of discourse structure, discourse entities, verbs, and prepositions, which is then used for matching up with a parsed representation of the topics.

clr03n1t

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: clr03n1t
  • Participant: clresearch
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: CL Research parses and processes sentences into an XML representation of discourse structure, discourse entities, verbs, and prepositions, which is then used for matching up with a parsed representation of the topics.

clr03n2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: clr03n2
  • Participant: clresearch
  • Track: Novelty
  • Year: 2003
  • Submission: 9/8/2003
  • Task: task2
  • Run description: The CL Research system processes text into an XML representation which is then used for assessing relevance and novelty.

clr03n3f01

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: clr03n3f01
  • Participant: clresearch
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task3
  • Run description: The CL Research system parses and processes text into an XML representation, tagging the text with discourse, noun, verb, and preposition characteristics, which are then used in determining relevance and novelty.

clr03n3f02

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: clr03n3f02
  • Participant: clresearch
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task3
  • Run description: The CL Research system parses and processes text into an XML representation, tagging the text with discourse, noun, verb, and preposition characteristics, which are then used in determining relevance and novelty.

clr03n3f03

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: clr03n3f03
  • Participant: clresearch
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task3
  • Run description: The CL Research system parses and processes text into an XML representation, tagging the text with discourse, noun, verb, and preposition characteristics, which are then used in determining relevance and novelty.

clr03n3f04

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: clr03n3f04
  • Participant: clresearch
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task3
  • Run description: The CL Research system parses and processes text into an XML representation, tagging the text with discourse, noun, verb, and preposition characteristics, which are then used in determining relevance and novelty.

clr03n3f05

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: clr03n3f05
  • Participant: clresearch
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task3
  • Run description: The CL Research system parses and processes text into an XML representation, tagging the text with discourse, noun, verb, and preposition characteristics, which are then used in determining relevance and novelty.

clr03n4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: clr03n4
  • Participant: clresearch
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task4
  • Run description: The CL Research system parses and processes text into an XML representation, tagging the text with discourse, noun, verb, and preposition characteristics, which are then used in determining relevance and novelty.

ICT03NOV1BSL

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV1BSL
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: The similarity between topic and sentence is computed according to the vector space model; a sketch follows below.
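
A minimal sketch of such a vector-space relevance judgment, assuming tf-idf weighting and an illustrative cosine cutoff of 0.15 (the run's actual weighting and threshold are not given):

    # Topic-to-sentence cosine similarity in a tf-idf vector space.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def relevant_sentences(topic_text, sentences, threshold=0.15):
        X = TfidfVectorizer(stop_words="english").fit_transform([topic_text] + sentences)
        sims = cosine_similarity(X[0], X[1:]).ravel()   # topic vs. each sentence
        return [i for i, s in enumerate(sims) if s >= threshold]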

ICT03NOV1DTH

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV1DTH
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: The similarity between topic and sentence is computed according to the vector space model. The relevance threshold is automatically adapted for each document according to the time distribution of the 25 relevant documents.

ICT03NOV1NAR

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV1NAR
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: The similarity between topic and sentence is computed according to the vector space model. Positive and negative feature vectors are constructed from the title and narrative of the topic.

ICT03NOV1SQR

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV1SQR
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: The similarity between topic and sentence is computed according to the vector space model, in which we use the chi-square statistic for feature selection and weighting; a sketch follows below.
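
The chi-square term selection could look like the following sketch, assuming labeled sentences (e.g., from the given relevant documents) and an illustrative cutoff of 500 terms:

    # Chi-square feature selection over term counts; higher scores mean a
    # term is more strongly associated with the relevant class.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import chi2

    def top_chi2_terms(sentences, labels, k=500):
        vec = CountVectorizer()
        X = vec.fit_transform(sentences)       # term counts per sentence
        scores, _ = chi2(X, labels)            # chi-square statistic per term
        terms = vec.get_feature_names_out()
        order = scores.argsort()[::-1][:k]
        return [(terms[i], scores[i]) for i in order]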

ICT03NOV1XTD

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV1XTD
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: The similarity between topic and sentence is computed according to the vector space model. The 75 retrieved documents and the 25 given relevant documents are used for feature selection.

ICT03NOV2CUR

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV2CUR
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: We used local co-occurrence for query expansion. Maximum Marginal Relevance was used to find new sentences; a sketch follows below.
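
A sketch of Maximum Marginal Relevance as applied to novelty: each step picks the sentence most similar to the topic and least similar to those already selected. The lambda of 0.7 and the tf-idf features are assumptions.

    # Greedy MMR selection: score = lam * topic-relevance - (1-lam) * redundancy.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def mmr_select(topic, sentences, n_select=5, lam=0.7):
        X = TfidfVectorizer().fit_transform([topic] + sentences)
        rel = cosine_similarity(X[0], X[1:]).ravel()   # topic similarity
        sim = cosine_similarity(X[1:])                 # sentence-sentence similarity
        selected, candidates = [], list(range(len(sentences)))
        while candidates and len(selected) < n_select:
            def score(i):
                redundancy = max(sim[i, j] for j in selected) if selected else 0.0
                return lam * rel[i] - (1 - lam) * redundancy
            best = max(candidates, key=score)
            selected.append(best)
            candidates.remove(best)
        return selected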

ICT03NOV2LPA

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV2LPA
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: We used the word overlap between sentences to select new sentences; a sketch follows below.
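
A word-overlap novelty rule might be sketched as follows; the Jaccard-style overlap measure, whitespace tokenization, and 0.6 cutoff are assumptions, since the run description gives no specifics.

    # A sentence counts as new when its maximum overlap with every earlier
    # sentence stays below the cutoff.
    def novel_by_overlap(sentences, cutoff=0.6):
        seen, novel = [], []
        for idx, s in enumerate(sentences):
            words = set(s.lower().split())
            overlap = max((len(words & p) / max(len(words | p), 1) for p in seen),
                          default=0.0)
            if overlap < cutoff:
                novel.append(idx)
            seen.append(words)
        return novel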

ICT03NOV2LPP

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV2LPP
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: We used chi-square for feature selection. The percentage of new sentences among relevant sentences varies according to the rank of the current document within the 25 documents.

ICT03NOV2PNK

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV2PNK
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: We used the relevant and irrelevant sentences in the 25 documents to extract features. Maximum Marginal Relevance was used to find new sentences.

ICT03NOV2SQR

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV2SQR
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: We used chi-square for feature selection. Maximum Marginal Relevance was used to find new sentences.

ICT03NOV3IKK

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV3IKK
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: The KNN algorithm is used to select relevant sentences; a sketch follows below.
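
A sketch of the KNN relevance step, assuming tf-idf features and labeled training sentences (e.g., from the first 5 documents); k=5 and cosine distance are illustrative choices.

    # K-nearest-neighbors relevance classification over tf-idf vectors.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neighbors import KNeighborsClassifier

    def knn_relevance(train_sents, train_labels, test_sents, k=5):
        vec = TfidfVectorizer()
        Xtr = vec.fit_transform(train_sents)
        Xte = vec.transform(test_sents)
        clf = KNeighborsClassifier(n_neighbors=k, metric="cosine")
        clf.fit(Xtr, train_labels)
        return clf.predict(Xte)        # 1 = relevant, 0 = not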

ICT03NOV3KNN

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV3KNN
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: The KNN algorithm is used to select relevant sentences.

ICT03NOV3KNS

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV3KNS
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: The KNN algorithm is used to select relevant sentences.

ICT03NOV3WN3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV3WN3
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: The Winnow algorithm is used to select relevant sentences; a sketch follows below.
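
Winnow is a standard online classifier over binary word features; a textbook sketch follows, with the usual promotion/demotion factor of 2 and a threshold equal to the vocabulary size (both assumptions here, not the run's settings).

    # Classic Winnow: multiplicative weight updates on mistakes only.
    # samples: iterable of word sets; labels: booleans (True = relevant).
    def train_winnow(samples, labels, vocab, epochs=5, alpha=2.0):
        w = {t: 1.0 for t in vocab}            # all weights start at 1
        theta = float(len(vocab))              # textbook threshold = n
        for _ in range(epochs):
            for words, y in zip(samples, labels):
                yhat = sum(w[t] for t in words if t in w) >= theta
                if yhat and not y:             # false positive: demote
                    for t in words:
                        if t in w:
                            w[t] /= alpha
                elif y and not yhat:           # false negative: promote
                    for t in words:
                        if t in w:
                            w[t] *= alpha
        return w, theta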

ICT03NOV3WND

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV3WND
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: The Winnow algorithm is used to select relevant sentences.

ICT03NOV4ALL

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV4ALL
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/17/2003
  • Task: task4
  • Run description: The Winnow algorithm is used to retrieve new sentences, given the relevant and new sentences, based on word overlap, sentence semantic distance, and head sentence tags.

ICT03NOV4LFF

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV4LFF
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/17/2003
  • Task: task4
  • Run description: The Winnow algorithm is used to retrieve new sentences, given the relevant and new sentences, based on word overlap and sentence semantic distance.

ICT03NOV4OTP

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV4OTP
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/17/2003
  • Task: task4
  • Run description: The Winnow algorithm is used to retrieve new sentences, given the relevant and new sentences, based on word overlap.

ICT03NOV4SQR

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV4SQR
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/17/2003
  • Task: task4
  • Run description: Maximum Marginal Relevance is used to retrieve new sentences.

ICT03NOV4WNW

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ICT03NOV4WNW
  • Participant: cas-ict.bin
  • Track: Novelty
  • Year: 2003
  • Submission: 9/17/2003
  • Task: task4
  • Run description: The Winnow algorithm is used to retrieve new sentences, given the relevant sentences.

IITBN1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IITBN1
  • Participant: iitb.ramakrishnan
  • Track: Novelty
  • Year: 2003
  • Submission: 9/18/2003
  • Task: task4
  • Run description: Principal component analysis with random walks on WordNet.

Irit1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: Irit1
  • Participant: irit-sig.boughanem
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Coverage of 4 sentences, single terms used during analysis

Irit5q

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: Irit5q
  • Participant: irit-sig.boughanem
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Coverage of 5 sentences, single terms used during analysis

IRITf2bis

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IRITf2bis
  • Participant: irit-sig.boughanem
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Term weighting (highly relevant, lowly relevant, non-relevant). Based on the full topic.

IRITfb1MtmIb

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IRITfb1MtmIb
  • Participant: irit-sig.boughanem
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Four types of terms are considered: highly, lowly, and non-relevant (same as IRITf2bis); in addition, we consider irrelevant terms. Terms can be single words or phrases.

IRITfNegR2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IRITfNegR2
  • Participant: irit-sig.boughanem
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Relevant sentences as in IRITf2bis; light filtering for new sentences.

IritMtm4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IritMtm4
  • Participant: irit-sig.boughanem
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Coverage of 4 sentences; phrases used during text processing.

IritMtm5

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IritMtm5
  • Participant: irit-sig.boughanem
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Coverage of 5 sentences; phrases used during text processing.

IRITnb1MtmI4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IRITnb1MtmI4
  • Participant: irit-sig.boughanem
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: A modified version of run IRITfb1MtmIb (parameter change).

IRITnip2bis

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IRITnip2bis
  • Participant: irit-sig.boughanem
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Four types of terms are considered: highly, lowly, and non-relevant (as in IRITf2bis); in addition, we consider irrelevant terms (from the narrative part).

Irito

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: Irito
  • Participant: irit-sig.boughanem
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Nothing (last year, many attempts from different groups to 'filter' new sentences were less effective than selecting all the relevant sentences).

ISIALL03

Results | Participants | Input | Summary | Appendix

  • Run ID: ISIALL03
  • Participant: usc-isi.hermjakob
  • Track: Novelty
  • Year: 2003
  • Submission: 8/30/2003
  • Task: task1
  • Run description: All sentences are assumed relevant.

ISIDSCm203

Results | Participants | Input | Summary | Appendix

  • Run ID: ISIDSCm203
  • Participant: usc-isi.hermjakob
  • Track: Novelty
  • Year: 2003
  • Submission: 8/30/2003
  • Task: task1
  • Run description: The presence of any subjective word in a sentence is the key indicator of an opinion.

ISIDSm203

Results | Participants | Input | Summary | Appendix

  • Run ID: ISIDSm203
  • Participant: usc-isi.hermjakob
  • Track: Novelty
  • Year: 2003
  • Submission: 8/30/2003
  • Task: task1
  • Run description: The presence of any subjective word in a sentence is the key indicator of an opinion.

ISINONE03

Results | Participants | Input | Summary | Appendix

  • Run ID: ISINONE03
  • Participant: usc-isi.hermjakob
  • Track: Novelty
  • Year: 2003
  • Submission: 8/30/2003
  • Task: task1
  • Run description: No sentences are assumed relevant.

ISIRAND03

Results | Participants | Input | Summary | Appendix

  • Run ID: ISIRAND03
  • Participant: usc-isi.hermjakob
  • Track: Novelty
  • Year: 2003
  • Submission: 8/30/2003
  • Task: task1
  • Run description: Random selection

lexiclone03

Results | Participants | Input | Summary | Appendix

  • Run ID: lexiclone03
  • Participant: lexiclone.geller
  • Track: Novelty
  • Year: 2003
  • Submission: 8/29/2003
  • Task: task1
  • Run description: We used lexical cloning technology.

MeijiHilF11

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF11
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Relevance filtering approach using conceptual fuzzy sets. Novelty: calculating sentence score and redundancy score using conceptual fuzzy sets.

MeijiHilF12

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF12
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Relevance filtering approach using conceptual fuzzy sets. Novelty: calculating sentence score and redundancy score.

MeijiHilF13

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF13
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Relevance filtering approach using conceptual fuzzy sets. Novelty: calculating sentence score and redundancy score using conceptual fuzzy sets.

MeijiHilF14

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF14
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Relevance filtering approach using conceptual fuzzy sets. Novelty: calculating sentence score and redundancy score.

MeijiHilF15

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF15
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Relevance filtering approach using tf-idf word vectors. Novelty: calculating sentence score and redundancy score.

MeijiHilF21

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF21
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Calculating sentence score using a time window and redundancy score using conceptual fuzzy sets.

MeijiHilF22

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF22
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Calculating sentence score using a time window and redundancy score using tf-idf word vectors.

MeijiHilF23

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF23
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Calculating sentence score using a time window.

MeijiHilF24

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF24
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Calculating basic sentence score only.

MeijiHilF31

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF31
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: Relevance filtering approach using conceptual fuzzy sets. Novelty: calculating sentence score and redundancy score using conceptual fuzzy sets.

MeijiHilF32

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF32
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: Relevance filtering approach using conceptual fuzzy sets. Novelty: calculating sentence score and redundancy score.

MeijiHilF33

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF33
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: Relevance filtering approach using tf-idf word vectors. Novelty: calculating sentence score and redundancy score using conceptual fuzzy sets.

MeijiHilF34

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF34
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: Relevance filtering approach using tf-idf word vectors. Novelty: calculating sentence score and redundancy score.

MeijiHilF41

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF41
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/12/2003
  • Task: task4
  • Run description: Calculating sentence score using a time window and redundancy score using conceptual fuzzy sets.

MeijiHilF42

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF42
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/12/2003
  • Task: task4
  • Run description: Calculating sentence score using a time window and redundancy score using tf-idf word vectors.

MeijiHilF43

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF43
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/12/2003
  • Task: task4
  • Run description: Calculating sentence score using a time window.

MeijiHilF44

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: MeijiHilF44
  • Participant: meijiu.takagi
  • Track: Novelty
  • Year: 2003
  • Submission: 9/12/2003
  • Task: task4
  • Run description: Calculating basic sentence score only.

NLPR03n1f1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n1f1
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/2/2003
  • Task: task1
  • Run description: A combination of algorithms including tf-idf, length normalization, relevance feedback, and dynamic thresholding.

NLPR03n1f2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n1f2
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/2/2003
  • Task: task1
  • Run description: A combination of algorithms including tf-idf, length normalization, relevance feedback, and dynamic thresholding.

NLPR03n1w1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n1w1
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/2/2003
  • Task: task1
  • Run description: A combination of algorithms including window-based weighting, relevance feedback, and dynamic thresholding.

NLPR03n1w2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n1w2
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/2/2003
  • Task: task1
  • Run description: A combination of algorithms including window-based weighting, relevance feedback, and dynamic thresholding.

NLPR03n1w3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n1w3
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/2/2003
  • Task: task1
  • Run description: A combination of algorithms including window-based weighting, relevance feedback, and dynamic thresholding.

NLPR03n2d1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n2d1
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task2
  • Run description: We define a value called 'New Information Degree' (NID), based on idf values, to represent whether a sentence contains new information relative to the preceding sentences; dynamic thresholds are used for different topics. A sketch follows below.
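
The exact NID formula is not given, so the sketch below assumes one plausible reading: the idf mass of a sentence's previously unseen words divided by its total idf mass, with the (dynamic) threshold applied elsewhere.

    # Hedged sketch of an idf-based New Information Degree.
    import math

    def idf_table(all_sentences):
        n = len(all_sentences)
        df = {}
        for s in all_sentences:
            for t in set(s.lower().split()):
                df[t] = df.get(t, 0) + 1
        return {t: math.log(n / d) for t, d in df.items()}

    def nid_scores(sentences, idf):
        seen, scores = set(), []
        for s in sentences:
            words = set(s.lower().split())
            total = sum(idf.get(t, 0.0) for t in words) or 1.0
            new = sum(idf.get(t, 0.0) for t in words - seen)
            scores.append(new / total)     # higher score = more new information
            seen |= words
        return scores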

NLPR03n2d2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n2d2
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task2
  • Run description: We define a value called 'New Information Degree' (NID), based on bigrams, to represent whether a sentence contains new information relative to the preceding sentences; dynamic thresholds are used for different topics.

NLPR03n2d3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n2d3
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task2
  • Run description: We define a value called 'New Information Degree' (NID), based on idf and bigrams, to represent whether a sentence contains new information relative to the preceding sentences; dynamic thresholds are used for different topics.

NLPR03n2s1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n2s1
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task2
  • Run description: We define a value called 'New Information Degree' (NID), based on idf values, to represent whether a sentence contains new information relative to the preceding sentences; a static threshold is used across topics.

NLPR03n2s2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n2s2
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task2
  • Run description: We define a value called 'New Information Degree' (NID), based on bigrams, to represent whether a sentence contains new information relative to the preceding sentences; a static threshold is used across topics.

NLPR03n3d1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n3d1
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task3
  • Run description: We use a combination of the following algorithms: core-window-based similarity degrees, relevance feedback, length normalization, bigram-based new-information retrieval, and dynamic thresholds.

NLPR03n3d2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n3d2
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task3
  • Run description: We use a combination of the following algorithms: tf-idf similarity degrees, query expansion, relevance feedback, length normalization, bigram-based new-information retrieval, and dynamic thresholds.

NLPR03n3d3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n3d3
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task3
  • Run description: We use a combination of the following algorithms: window-based similarity degrees, relevance feedback, length normalization, idf-based new-information retrieval, and dynamic thresholds.

NLPR03n3s1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n3s1
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task3
  • Run description: We use a combination of the following algorithms: core-window-based similarity degrees, relevance feedback, length normalization, and idf-based new-information retrieval.

NLPR03n3s2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n3s2
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task3
  • Run description: We use a combination of the following algorithms: tf-idf similarity degrees, query expansion, relevance feedback, length normalization, and idf-based new-information retrieval.

NLPR03n4d1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n4d1
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/14/2003
  • Task: task4
  • Run description: We develop a "New Information Degree" based on idf to represent whether a sentence contains novel information, and use dynamic thresholds for different topics.

NLPR03n4d2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n4d2
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/14/2003
  • Task: task4
  • Run description: We develop a "New Information Degree" based on bigrams to represent whether a sentence contains novel information, and use dynamic thresholds (learned from training data) for different topics.

NLPR03n4s1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n4s1
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/14/2003
  • Task: task4
  • Run description: We develop a "New Information Degree" based on idf to represent whether a sentence contains novel information, and use a static threshold (learned from training data) across topics.

NLPR03n4s2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n4s2
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/14/2003
  • Task: task4
  • Run description: We develop a "New Information Degree" based on bigrams to represent whether a sentence contains novel information, and use a static threshold (learned from training data) across topics.

NLPR03n4s3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NLPR03n4s3
  • Participant: cas.nlpr
  • Track: Novelty
  • Year: 2003
  • Submission: 9/14/2003
  • Task: task4
  • Run description: We develop a "New Information Degree" based on tf to represent whether a sentence contains novel information, and use a static threshold (learned from training data) across topics.

NTU11

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU11
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/2/2003
  • Task: task1
  • Run description: Use IR System with Reference Corpus

NTU12

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU12
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/2/2003
  • Task: task1
  • Run description: Use IR System with Reference Corpus

NTU13

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU13
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/2/2003
  • Task: task1
  • Run description: Use IR System with Reference Corpus

NTU14

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU14
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/2/2003
  • Task: task1
  • Run description: Use IR System with Reference Corpus

NTU15

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU15
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/2/2003
  • Task: task1
  • Run description: Use IR System with Reference Corpus

NTU21

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU21
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task2
  • Run description: Use IR with reference corpus

NTU22

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU22
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task2
  • Run description: Use IR with reference corpus

NTU23

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU23
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task2
  • Run description: Use IR with reference corpus

NTU24

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU24
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task2
  • Run description: Use IR with reference corpus

NTU25

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU25
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task2
  • Run description: Use IR with reference corpus

NTU31

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU31
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task3
  • Run description: Use IR with reference corpus

NTU32

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU32
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task3
  • Run description: Use IR with reference corpus

NTU33

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU33
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task3
  • Run description: Use IR with reference corpus

NTU34

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU34
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task3
  • Run description: Use IR with reference corpus

NTU35

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU35
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/9/2003
  • Task: task3
  • Run description: Use IR with reference corpus

NTU41

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU41
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/16/2003
  • Task: task4
  • Run description: Use IR with reference corpus

NTU42

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU42
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/16/2003
  • Task: task4
  • Run description: Use IR with reference corpus

NTU43

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU43
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/16/2003
  • Task: task4
  • Run description: Use IR with reference corpus

NTU44

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU44
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/16/2003
  • Task: task4
  • Run description: Use IR with reference corpus

NTU45

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: NTU45
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2003
  • Submission: 9/16/2003
  • Task: task4
  • Run description: Use IR with reference corpus

THUIRnv0311

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0311
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Local feedback to find relevant information, and sentence-to-sentence overlap judgment.

THUIRnv0312

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0312
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Using WordNet synsets and local co-occurrence MI for query expansion in the relevance step, and sentence-to-sentence overlap judgment.

THUIRnv0313

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0313
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Retrieval using a probabilistic model in the relevance step (as baseline), and sentence-to-sentence overlap judgment.

THUIRnv0314

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0314
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Result filtering using a different weighting scheme in the relevance step, and sentence-to-sentence overlap judgment.

THUIRnv0315

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0315
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Sub-topic retrieval using a long query in the relevance step, and sentence-to-sentence overlap judgment.

THUIRnv0321

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0321
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/11/2003
  • Task: task2
  • Run description: event . opinion

THUIRnv0322

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0322
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/11/2003
  • Task: task2
  • Run description: nv4

THUIRnv0323

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0323
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/11/2003
  • Task: task2
  • Run description: Fixed threshold.

THUIRnv0331

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0331
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/11/2003
  • Task: task3
  • Run description: svm+nv4

THUIRnv0332

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0332
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/11/2003
  • Task: task3
  • Run description: log+fb

THUIRnv0333

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0333
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/11/2003
  • Task: task3
  • Run description: long+mi+qe

THUIRnv0334

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0334
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/11/2003
  • Task: task3
  • Run description: long

THUIRnv0341

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0341
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/17/2003
  • Task: task4
  • Run description: Using triple overlap to find new information, with a fixed redundancy threshold.

THUIRnv0342

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0342
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/17/2003
  • Task: task4
  • Run description: Using triple overlap to find new information, with a fixed redundancy threshold.

THUIRnv0343

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0343
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/17/2003
  • Task: task4
  • Run description: Using document cosine similarity to find new information, with a fixed redundancy threshold.

THUIRnv0344

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0344
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/17/2003
  • Task: task4
  • Run description: Using QE-based document overlap to find new information, in which the QE dictionary is generated from co-occurrence in relevant sentences.

THUIRnv0345

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRnv0345
  • Participant: tsinghuau.ma
  • Track: Novelty
  • Year: 2003
  • Submission: 9/17/2003
  • Task: task4
  • Run description: Using document overlap and clustering of event and opinion topics to find new information.

UIowa03Nov01

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UIowa03Nov01
  • Participant: uiowa.eichmann
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Topic-sentence cosine similarity for relevance, with a minimum of one new entity as the criterion for 'new'.

UIowa03Nov02

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UIowa03Nov02
  • Participant: uiowa.eichmann
  • Track: Novelty
  • Year: 2003
  • Submission: 9/4/2003
  • Task: task1
  • Run description: Topic-sentence cosine similarity as a guard for relevance, with a minimum of at least one new entity.

UIowa03Nov03

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UIowa03Nov03
  • Participant: uiowa.eichmann
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Given the relevant sentence for a topic, declare it novel...

UIowa03Nov04

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UIowa03Nov04
  • Participant: uiowa.eichmann
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Given the relevant sentence for a topic, declare it novel if there is at least one new entity or noun phrase; a sketch of this rule follows below.
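
A sketch of the "at least k new entities or noun phrases" rule used across these runs; the extract_mentions helper is hypothetical, standing in for whatever NP chunker and entity tagger the runs actually used.

    # Declare a sentence novel when it contributes at least k unseen mentions.
    def novel_by_new_mentions(sentences, extract_mentions, k=1):
        seen, novel = set(), []
        for idx, s in enumerate(sentences):
            mentions = set(extract_mentions(s))   # entities + noun phrases
            if len(mentions - seen) >= k:
                novel.append(idx)
            seen |= mentions
        return novel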

UIowa03Nov05

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UIowa03Nov05
  • Participant: uiowa.eichmann
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Given the relevant sentence for a topic, declare it novel if there are at least two new entities or noun phrases.

UIowa03Nov06

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UIowa03Nov06
  • Participant: uiowa.eichmann
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Given the relevant sentence for a topic, declare it novel if there are at least three new entities or noun phrases.

UIowa03Nov07

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UIowa03Nov07
  • Participant: uiowa.eichmann
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Given the relevant sentence for a topic, declare it novel if there are at least four new entities or noun phrases.

UIowa03Nov08

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UIowa03Nov08
  • Participant: uiowa.eichmann
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: We expand the query term vector with relevant sentence terms, log the noun phrases and entities, and then use a guard similarity to judge relevance and occurrence of new NPs/entities for novelty.

UIowa03Nov09

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UIowa03Nov09
  • Participant: uiowa.eichmann
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: We expand the query term vector with relevant sentence terms, log the noun phrases and entities, and then use a guard similarity to judge relevance and occurrence of new NPs/entities for novelty.

UIowa03Nov10

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UIowa03Nov10
  • Participant: uiowa.eichmann
  • Track: Novelty
  • Year: 2003
  • Submission: 9/11/2003
  • Task: task4
  • Run description: Given the relevant sentence for a topic, declare it novel...

UIowa03Nov11

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UIowa03Nov11
  • Participant: uiowa.eichmann
  • Track: Novelty
  • Year: 2003
  • Submission: 9/11/2003
  • Task: task4
  • Run description: Given the relevant sentence for a topic, declare it novel if there is at least one new entity or noun phrase.

UIowa03Nov12

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UIowa03Nov12
  • Participant: uiowa.eichmann
  • Track: Novelty
  • Year: 2003
  • Submission: 9/11/2003
  • Task: task4
  • Run description: Given the relevant sentence for a topic, declare it novel if there are at least two new entities or noun phrases.

UIowa03Nov13

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UIowa03Nov13
  • Participant: uiowa.eichmann
  • Track: Novelty
  • Year: 2003
  • Submission: 9/11/2003
  • Task: task4
  • Run description: Given the relevant sentence for a topic, declare it novel if there are at least three new entities or noun phrases.

UIowa03Nov14

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UIowa03Nov14
  • Participant: uiowa.eichmann
  • Track: Novelty
  • Year: 2003
  • Submission: 9/11/2003
  • Task: task4
  • Run description: Given the relevant sentence for a topic, declare it novel if there are at least four new entities or noun phrases.

umbcnew1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umbcnew1
  • Participant: umarylandbc.kallurkar
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Cluster relevant sentences and return one sentence per cluster as novel sentences; a sketch follows below.
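
A sketch of the one-sentence-per-cluster idea, assuming tf-idf features and k-means; the cluster-count heuristic (one cluster per ~5 sentences) is an assumption.

    # Cluster relevant sentences, then keep the sentence closest to each
    # centroid as the novel representative of its cluster.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    def one_per_cluster(sentences):
        X = TfidfVectorizer().fit_transform(sentences)
        k = max(1, len(sentences) // 5)            # heuristic cluster count
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        dists = km.transform(X)                    # distance to each centroid
        reps = []
        for c in range(k):
            members = np.where(km.labels_ == c)[0]
            reps.append(int(members[dists[members, c].argmin()]))
        return sorted(reps)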

umbcnew2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umbcnew2
  • Participant: umarylandbc.kallurkar
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Compute sentence-sentence similarity and return dissimilar sentences as novel sentences.

umbcnew3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umbcnew3
  • Participant: umarylandbc.kallurkar
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Compute sentence-sentence similarity and return dissimilar sentences as novel sentences, using a reduced-dimension SVD computation; a sketch follows below.
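
A sketch of the reduced-dimension variant: project sentences into an LSA space via truncated SVD, then keep as novel any sentence whose maximum cosine to earlier sentences stays under a cutoff. The dimension of 100 and cutoff of 0.7 are assumptions.

    # Novelty by dissimilarity in a truncated-SVD (LSA) space.
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer

    def novel_in_lsa_space(sentences, dim=100, cutoff=0.7):
        X = TfidfVectorizer().fit_transform(sentences)
        k = max(1, min(dim, min(X.shape) - 1))
        Z = TruncatedSVD(n_components=k).fit_transform(X)
        Z /= np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12
        novel = [0]                                # first sentence is always new
        for i in range(1, len(sentences)):
            if (Z[i] @ Z[:i].T).max() < cutoff:    # max cosine to earlier sentences
                novel.append(i)
        return novel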

umbcrun1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umbcrun1
  • Participant: umarylandbc.kallurkar
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Query expansion using terms from the description to find correlated terms, then clustering sentences and querying to find the best clusters.

umbcrun2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umbcrun2
  • Participant: umarylandbc.kallurkar
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Query expansion by finding useful terms such as nouns, verbs, and adjectives, then clustering sentences and querying to find the best clusters.

umbcrun3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umbcrun3
  • Participant: umarylandbc.kallurkar
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: Query expansion by selecting the terms with the largest term-term similarity scores using SVD, then clustering sentences and querying to find the best clusters.

umich1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich1
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: We used a maximum entropy classifier with sentence features extracted by the MEAD summarizer to choose novel and relevant sentences; a sketch follows below.
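
Binary maximum entropy is equivalent to logistic regression, used below as a stand-in; the MEAD-style feature names in the comments are placeholders for whatever features the summarizer actually emitted.

    # Maxent (logistic regression) over per-sentence feature vectors.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_maxent(features, labels):
        # features: (n_sentences, 6) array of MEAD-style scores, e.g.
        # centroid, position, and length (placeholder assumptions).
        return LogisticRegression(max_iter=1000).fit(features, labels)

    def predict_novel(model, features, threshold=0.5):
        return np.where(model.predict_proba(features)[:, 1] >= threshold)[0]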

umich2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich2
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: We used a maximum entropy classifier with sentence features extracted by the MEAD summarizer to choose novel and relevant sentences.

umich21

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich21
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Used 6 sentence features calculated with the MEAD software to train a maximum entropy model to predict novel sentences.

umich22

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich22
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Used 6 sentence features calculated with the MEAD software to train a maximum entropy model to predict novel sentences.

umich23

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich23
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Used 6 sentence features calculated with the MEAD software to train a maximum entropy model to predict novel sentences.

umich24

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich24
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Used 6 sentence features calculated with the MEAD software to train a maximum entropy model to predict novel sentences.

umich25

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich25
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task2
  • Run description: Used 6 sentence features calculated with the MEAD software to train a maximum entropy model to predict novel sentences.

umich3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich3
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: We used a maximum entropy classifier with sentence features extracted by the MEAD summarizer to choose novel and relevant sentences.

umich31

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich31
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: Used 6 sentence features calculated with the MEAD software to train a maximum entropy model to predict novel sentences.

umich32

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich32
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: Used 6 sentence features calculated with the MEAD software to train a maximum entropy model to predict novel sentences.

umich33

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich33
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: Used 6 sentence features calculated with the MEAD software to train a maximum entropy model to predict novel sentences.

umich34

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich34
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: Used 6 sentence features calculated with the MEAD software to train a maximum entropy model to predict novel sentences.

umich35

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich35
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/10/2003
  • Task: task3
  • Run description: Used 6 sentence features calculated with the MEAD software to train a maximum entropy model to predict novel sentences.

umich4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich4
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: We used a maximum entropy classifier with sentence features extracted by the MEAD summarizer to choose novel and relevant sentences.

umich41

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich41
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/17/2003
  • Task: task4
  • Run description: Trained a maximum entropy model using 6 MEAD-based sentence features to distinguish novel sentences.

umich42

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich42
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/17/2003
  • Task: task4
  • Run description: Trained a maximum entropy model using 6 MEAD-based sentence features to distinguish novel sentences.

umich43

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich43
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/17/2003
  • Task: task4
  • Run description: Trained a maximum entropy model using 6 MEAD-based sentence features to distinguish novel sentences.

umich44

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich44
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/17/2003
  • Task: task4
  • Run description: Trained a maximum entropy model using 6 MEAD-based sentence features to distinguish novel sentences.

umich45

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich45
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/17/2003
  • Task: task4
  • Run description: Trained a maximum entropy model using 6 MEAD-based sentence features to distinguish novel sentences.

umich5

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: umich5
  • Participant: umich.radev
  • Track: Novelty
  • Year: 2003
  • Submission: 9/3/2003
  • Task: task1
  • Run description: We used a maximum entropy classifier with sentence features extracted by the MEAD summarizer to choose novel and relevant sentences.