Runs - Novelty 2004

ccs1f0t1

  • Run ID: ccs1f0t1
  • Participant: ida.ccs.nsa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: HMM with 9 states and 1 feature, log(#signature terms+1). Threshold 0 for novel. Did not use the topic description at all for this run.

ccs1ftop0t1

  • Run ID: ccs1ftop0t1
  • Participant: ida.ccs.nsa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: HMM with 9 states and 1 feature, log(#signature terms+1); chose the top 25 docs based on summed gamma probabilities. Threshold 0 for novel (no QR). Did not use the topic description at all for this run.

ccs3fbqrt3

  • Run ID: ccs3fbqrt3
  • Participant: ida.ccs.nsa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/16/2004
  • Task: task3
  • Run description: No topic description used. Used the first 5 docs to generate new signature terms, built an HMM, and then selected sentences; used QR to find novel sentences.

ccs3fmmr95t3

  • Run ID: ccs3fmmr95t3
  • Participant: ida.ccs.nsa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: No topic description used. HMM with 3 features, followed by MMR.

ccs3fmmrt1

  • Run ID: ccs3fmmrt1
  • Participant: ida.ccs.nsa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: HMM with 9 states and 3 features (log(w+1), sentence entropy, log(#signature terms+1)); chose the top 25 docs based on summed gamma probabilities. MMR for novel sentences. Did not use the topic description at all for this run.

ccs3fmmrt3

  • Run ID: ccs3fmmrt3
  • Participant: ida.ccs.nsa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/16/2004
  • Task: task3
  • Run description: No topic description used. Used the first 5 docs to build an HMM and then selected sentences; used MMR to find novel sentences.

ccs3fqrt1

  • Run ID: ccs3fqrt1
  • Participant: ida.ccs.nsa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: HMM with 9 states and 3 features. No topic information used.

ccs3ftop0t1

  • Run ID: ccs3ftop0t1
  • Participant: ida.ccs.nsa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: HMM with 9 states and 3 features (log(w+1), sentence entropy, log(#signature terms+1)); chose the top 25 docs based on summed gamma probabilities. Threshold 0 for novel (no QR). Did not use the topic description at all for this run.

ccsfbmmrt3

  • Run ID: ccsfbmmrt3
  • Participant: ida.ccs.nsa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/16/2004
  • Task: task3
  • Run description: No topic description used. Used the first 5 docs to extend the signature terms, rebuilt the HMM, and then selected sentences; used MMR to find novel sentences.

ccsmmr2t2

  • Run ID: ccsmmr2t2
  • Participant: ida.ccs.nsa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: No topic description used. MMR.

ccsmmr3t2

  • Run ID: ccsmmr3t2
  • Participant: ida.ccs.nsa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: No topic description used. MMR, lambda=0.3.

ccsmmr4t2

  • Run ID: ccsmmr4t2
  • Participant: ida.ccs.nsa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: No topic description used. MMR, lambda=0.4.

ccsmmr5t2

  • Run ID: ccsmmr5t2
  • Participant: ida.ccs.nsa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: No topic description used. MMR, lambda=0.5 (the MMR criterion is sketched below).

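Several of the runs above differ only in the MMR lambda. As a rough illustration of the Maximal Marginal Relevance criterion these descriptions refer to — not the participants' actual code — here is a minimal Python sketch over pre-computed sentence vectors; cosine, mmr_select, and all parameter values are illustrative assumptions.

    import numpy as np

    def cosine(u, v):
        # Cosine similarity; returns 0 for an all-zero vector.
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        return float(u @ v / denom) if denom else 0.0

    def mmr_select(query_vec, sent_vecs, lam=0.5, k=10):
        # Greedy MMR: lam weights relevance to the query,
        # (1 - lam) penalizes similarity to already selected sentences.
        selected, candidates = [], list(range(len(sent_vecs)))
        while candidates and len(selected) < k:
            def score(i):
                rel = cosine(query_vec, sent_vecs[i])
                red = max((cosine(sent_vecs[i], sent_vecs[j]) for j in selected),
                          default=0.0)
                return lam * rel - (1 - lam) * red
            best = max(candidates, key=score)
            selected.append(best)
            candidates.remove(best)
        return selected
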
ccsqrt2

  • Run ID: ccsqrt2
  • Participant: ida.ccs.nsa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: No topic description used. Pivoted QR (sketched below).

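The pivoted-QR runs can be read as ordering sentence vectors by how much linearly independent content each adds. A minimal sketch of that reading using SciPy's column-pivoted QR — an assumed interpretation, not the participants' code; the matrix and threshold are toy values.

    import numpy as np
    from scipy.linalg import qr

    rng = np.random.default_rng(0)
    A = rng.random((50, 8))  # toy term-by-sentence matrix (columns = sentences)

    # Column pivoting orders sentences so each adds the largest component
    # orthogonal to the sentences chosen before it.
    Q, R, piv = qr(A, pivoting=True)

    # Call a sentence novel while its residual norm (|R[i, i]|) stays large.
    threshold = 0.5
    novel = [int(piv[i]) for i in range(len(piv)) if abs(R[i, i]) > threshold]
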
cdvp4CnQry2

  • Run ID: cdvp4CnQry2
  • Participant: dubblincity.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: Baseline experiment: no QE, no DocE. Unique-terms identification.

cdvp4CnS101

  • Run ID: cdvp4CnS101
  • Participant: dubblincity.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/2/2004
  • Task: task1
  • Run description: Uses concatenated queries over similarity-based document expansion, getting unique words and clearing the history.

cdvp4NSen4

  • Run ID: cdvp4NSen4
  • Participant: dubblincity.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: Algorithm called NewSentenceValue, using the idfs of the unique terms together with the idfs of previously seen terms in the history.

cdvp4NSnoH4

  • Run ID: cdvp4NSnoH4
  • Participant: dubblincity.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: Algorithm NewSentenceValue, comparing pairwise across all sentences encountered.

cdvp4NTerFr1

  • Run ID: cdvp4NTerFr1
  • Participant: dubblincity.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: Algorithm called NewnessTermFreq, using the tfs of the unique terms together with the idfs of previously seen terms in the history.

cdvp4NTerFr3

  • Run ID: cdvp4NTerFr3
  • Participant: dubblincity.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: Algorithm called NewnTermFreq, using the tfs of the unique terms together with the idfs of previously seen terms in the history.

cdvp4QePDPC2

  • Run ID: cdvp4QePDPC2
  • Participant: dubblincity.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/2/2004
  • Task: task1
  • Run description: QE using proximity; DocExp using proximity. The query is concatenated. Unique-terms identification.

cdvp4QePnD2

  • Run ID: cdvp4QePnD2
  • Participant: dubblincity.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: Query expansion using proximity (10 words). Unique-terms identification.

cdvp4QeSnD1

  • Run ID: cdvp4QeSnD1
  • Participant: dubblincity.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/2/2004
  • Task: task1
  • Run description: Query expansion (word similarity) over a doc, using the newSentence variable to determine the threshold for novel sentences.

cdvp4UnHis3

  • Run ID: cdvp4UnHis3
  • Participant: dubblincity.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: Baseline approach: word overlap (sketched below).

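Word overlap is the baseline several groups describe: a sentence counts as novel when too few of its words have been seen before. A minimal illustrative sketch; the function name and the 0.6 cutoff are assumptions, not the participants' settings.

    def novel_by_overlap(sentences, max_overlap=0.6):
        # Mark a sentence novel when the fraction of its words
        # already seen in the history is at most max_overlap.
        seen, novel = set(), []
        for idx, sent in enumerate(sentences):
            words = set(sent.lower().split())
            overlap = len(words & seen) / len(words) if words else 1.0
            if overlap <= max_overlap:
                novel.append(idx)
            seen |= words
        return novel
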
CIIRT1R1

  • Run ID: CIIRT1R1
  • Participant: u.mass
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: TFIDF techniques with query expansion were used for finding relevant sentences; new words and named entities were considered for detecting novel sentences.

CIIRT1R2

  • Run ID: CIIRT1R2
  • Participant: u.mass
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: TFIDF techniques with selective query expansion were used for finding relevant sentences; new words and named entities were considered for detecting novel sentences.

CIIRT1R3

  • Run ID: CIIRT1R3
  • Participant: u.mass
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: TFIDF techniques with query expansion and short-sentence removal were used for finding relevant sentences, and specific named entities were considered for identifying redundant sentences.

CIIRT1R5

  • Run ID: CIIRT1R5
  • Participant: u.mass
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: Took the best sentence for each topic and picked the top 25 sentences. The 25 corresponding documents were chosen for further relevant-sentence retrieval and novel-sentence detection.

CIIRT1R6

  • Run ID: CIIRT1R6
  • Participant: u.mass
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: For each topic, 25 documents were chosen based on the ranking of the best sentence in each document and selective query expansion was performed for relevant sentences retrieval.

CIIRT2R1

  • Run ID: CIIRT2R1
  • Participant: u.mass
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task2
  • Run description: Pairwise sentence similarity was used for identifying novel sentences.

CIIRT2R2

  • Run ID: CIIRT2R2
  • Participant: u.mass
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task2
  • Run description: Pairwise sentence similarity was used for identifying novel sentences, with a threshold of 0.6 (sketched below).

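A common reading of these pairwise-similarity runs: keep a sentence as novel when its maximum similarity to every earlier sentence stays under the cutoff (0.6 here). A minimal sketch with TF-IDF vectors via scikit-learn — assumed tooling, not the participants' code.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def novel_by_pairwise_sim(sentences, threshold=0.6):
        # Sentence i is novel if no earlier sentence exceeds the
        # cosine-similarity threshold against it.
        sims = cosine_similarity(TfidfVectorizer().fit_transform(sentences))
        return [i for i in range(len(sentences))
                if i == 0 or sims[i, :i].max() < threshold]
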
CIIRT3R1

  • Run ID: CIIRT3R1
  • Participant: u.mass
  • Track: Novelty
  • Year: 2004
  • Submission: 9/13/2004
  • Task: task3
  • Run description: TFIDF techniques with selective relevance feedback were used for finding relevant sentences; new words and named entities were considered for identifying novel sentences.

CIIRT3R2

  • Run ID: CIIRT3R2
  • Participant: u.mass
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task3
  • Run description: TFIDF techniques with relevance feedback were used for finding relevant sentences; new words and named entities were considered for identifying novel sentences.

CIIRT3R3

  • Run ID: CIIRT3R3
  • Participant: u.mass
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task3
  • Run description: TFIDF techniques with selective relevance feedback were used for finding relevant sentences, and pairwise sentence similarity was considered for identifying novel sentences.

CIIRT3R4

  • Run ID: CIIRT3R4
  • Participant: u.mass
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task3
  • Run description: TFIDF techniques with selective relevance feedback were used for finding relevant sentences; pairwise sentence similarity with a threshold of 0.6 was used for identifying novel sentences.

CIIRT3R5

  • Run ID: CIIRT3R5
  • Participant: u.mass
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: This approach tunes the similarity threshold for each topic, based on its performance within the given top 5 documents, for identifying novel sentences in the remaining documents.

CIIRT4R1

  • Run ID: CIIRT4R1
  • Participant: u.mass
  • Track: Novelty
  • Year: 2004
  • Submission: 9/20/2004
  • Task: task4
  • Run description: Sentence pairwise similarity was used for novelty detection, and a different cutoff threshold was chosen for each topic according to its performance within the top 5 given documents.

CIIRT4R2

  • Run ID: CIIRT4R2
  • Participant: u.mass
  • Track: Novelty
  • Year: 2004
  • Submission: 9/21/2004
  • Task: task4
  • Run description: Both pairwise sentence similarity and overlap of named entities were considered for identifying novel sentences.

CIIRT4R3

  • Run ID: CIIRT4R3
  • Participant: u.mass
  • Track: Novelty
  • Year: 2004
  • Submission: 9/22/2004
  • Task: task4
  • Run description: Pairwise sentence similarity and the percentage of named-entity overlap were considered for identifying novel sentences; parameters were tuned with the given judgments of the top 5 documents for each topic.

clr04n1h2

  • Run ID: clr04n1h2
  • Participant: clresearch
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: Topics and documents were fully parsed and processed into an XML representation; each sentence was evaluated against the topic using string, syntax, and semantic characteristics.

clr04n1h3

  • Run ID: clr04n1h3
  • Participant: clresearch
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: Topics and documents were fully parsed and processed into an XML representation; each sentence was evaluated against the topic using string, syntax, and semantic characteristics.

clr04n2

  • Run ID: clr04n2
  • Participant: clresearch
  • Track: Novelty
  • Year: 2004
  • Submission: 9/13/2004
  • Task: task2
  • Run description: Topics and documents were fully parsed and processed into an XML representation; each sentence was evaluated against the topic using string, syntax, and semantic characteristics.

clr04n3h1f1

  • Run ID: clr04n3h1f1
  • Participant: clresearch
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task3
  • Run description: Topics and documents were fully parsed and processed into an XML representation; each sentence was evaluated against the topic using string, syntax, and semantic characteristics.

clr04n3h1f2

  • Run ID: clr04n3h1f2
  • Participant: clresearch
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: Topics and documents were fully parsed and processed into an XML representation; each sentence was evaluated against the topic using string, syntax, and semantic characteristics.

clr04n3h2f1

  • Run ID: clr04n3h2f1
  • Participant: clresearch
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: Topics and documents were fully parsed and processed into an XML representation; each sentence was evaluated against the topic using string, syntax, and semantic characteristics.

clr04n3h2f2

  • Run ID: clr04n3h2f2
  • Participant: clresearch
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: Topics and documents were fully parsed and processed into an XML representation; each sentence was evaluated against the topic using string, syntax, and semantic characteristics.

clr04n4

  • Run ID: clr04n4
  • Participant: clresearch
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task4
  • Run description: Topics and documents were fully parsed and processed into an XML representation; each sentence was evaluated against the topic using string, syntax, and semantic characteristics.

HIL10

  • Run ID: HIL10
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 8/30/2004
  • Task: task1
  • Run description: Vector space model.

ICT2OKALCEAP

  • Run ID: ICT2OKALCEAP
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task2
  • Run description: OKAPI with additional Local Context Expansion.

ICT2OKAPIAP

  • Run ID: ICT2OKAPIAP
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/13/2004
  • Task: task2
  • Run description: Model sentences with OKAPI and detect novel sentences by comparison against the average of previous sentences.

ICT2VSMIG95

  • Run ID: ICT2VSMIG95
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/12/2004
  • Task: task2
  • Run description: VSM and novelty detection with information gain.

ICT2VSMLCE

  • Run ID: ICT2VSMLCE
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/13/2004
  • Task: task2
  • Run description: We employ a dynamic threshold according to the topic date, and expand with local co-occurrence.

ICT2VSMOLP

  • Run ID: ICT2VSMOLP
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/12/2004
  • Task: task2
  • Run description: Using Word Overlapping

ICT3OKAPFDBK

  • Run ID: ICT3OKAPFDBK
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/16/2004
  • Task: task3
  • Run description: OKAPI model and information gain with relevance feedback

ICT3OKAPIIG

  • Run ID: ICT3OKAPIIG
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/16/2004
  • Task: task3
  • Run description: OKAPI model and Information gain-based novelty detection

ICT3OKAPIOLP

  • Run ID: ICT3OKAPIOLP
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/16/2004
  • Task: task3
  • Run description: OKAPI model and word overlapping-based novelty detection

ICT3VSMOLP

  • Run ID: ICT3VSMOLP
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/16/2004
  • Task: task3
  • Run description: Baseline with VSM and word overlapping.

ICT4IG

  • Run ID: ICT4IG
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/22/2004
  • Task: task4
  • Run description: Relevance degree differential.

ICT4OKAAP

  • Run ID: ICT4OKAAP
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/22/2004
  • Task: task4
  • Run description: Detect new information by comparison with the average previous relevance.

ICT4OKAPIIG

  • Run ID: ICT4OKAPIIG
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/22/2004
  • Task: task4
  • Run description: Relevance degree differential based on the OKAPI model.

ICT4OVERLAP

  • Run ID: ICT4OVERLAP
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/21/2004
  • Task: task4
  • Run description: Simple word overlapping

ICT4OVLPCHI

  • Run ID: ICT4OVLPCHI
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/22/2004
  • Task: task4
  • Run description: Word Overlapping considering word weight

ICTOKAPIOVLP

  • Run ID: ICTOKAPIOVLP
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: Adapt the OKAPI IR model to sentence retrieval, while using word overlap to predict new sentences (a BM25 sketch follows below).

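Adapting Okapi to sentence retrieval amounts to scoring each sentence against the query with BM25, treating sentences as the retrieval units. A minimal sketch of the standard BM25 formula; the k1 and b values are the usual defaults, assumed rather than taken from the run.

    import math
    from collections import Counter

    def bm25_scores(query_terms, sentences, k1=1.2, b=0.75):
        # `sentences` is a list of token lists; scores each against the query.
        N = len(sentences)
        avgdl = sum(len(s) for s in sentences) / N
        df = Counter(t for s in sentences for t in set(s))
        scores = []
        for s in sentences:
            tf, dl, score = Counter(s), len(s), 0.0
            for t in query_terms:
                if t not in tf:
                    continue
                idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
                score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * dl / avgdl))
            scores.append(score)
        return scores
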
ICTVSMCOSAP

  • Run ID: ICTVSMCOSAP
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: Relevance retrieval using VSM with Cosine normalization, and novelty detection based on average similarity with previous sentences.

ICTVSMFDBKH

  • Run ID: ICTVSMFDBKH
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: Based on the basic Vector Space Model, we introduce pseudo-feedback to improve relevance performance.

ICTVSMFDBKL

  • Run ID: ICTVSMFDBKL
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: Based on the basic Vector Space Model, we introduce the top 20% of results as pseudo-feedback to improve relevance performance.

ICTVSMLCE

  • Run ID: ICTVSMLCE
  • Participant: cas.ict.wang
  • Track: Novelty
  • Year: 2004
  • Submission: 9/2/2004
  • Task: task1
  • Run description: Basic VSM, adding expansion with locally highly co-occurring terms.

Irit1T3

  • Run ID: Irit1T3
  • Participant: irit.sig.boughanem
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: Compared to the other runs for this task, relevance information is used not only to expand the query, but also to add all the relevant sentences from these documents to the retrieved set. The filtering process is the same as in task 1.

Irit2T2

  • Run ID: Irit2T2
  • Participant: irit.sig.boughanem
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task2
  • Run description: All sentences returned as new.

Irit2Task3

  • Run ID: Irit2Task3
  • Participant: irit.sig.boughanem
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: term expansion based on relevance feedback

Irit3Task3

  • Run ID: Irit3Task3
  • Participant: irit.sig.boughanem
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: same as Irit2Task3 but with different parameter values

Irit4Task3

  • Run ID: Irit4Task3
  • Participant: irit.sig.boughanem
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: same as Irit2Task3 but with different parameter values

Irit5Task3

  • Run ID: Irit5Task3
  • Participant: irit.sig.boughanem
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: same as Irit2Task3 but with different parameter values

IRITT1

  • Run ID: IRITT1
  • Participant: irit.sig.boughanem
  • Track: Novelty
  • Year: 2004
  • Submission: 8/30/2004
  • Task: task1
  • Run description: Representative terms are filtered according to their tf.idf weights and re-weighted as very relevant, relevant or non relevant.

IRITT2

  • Run ID: IRITT2
  • Participant: irit.sig.boughanem
  • Track: Novelty
  • Year: 2004
  • Submission: 8/30/2004
  • Task: task1
  • Run description: Same type of process as run IRITT1, but with different parameter values.

IRITT3

  • Run ID: IRITT3
  • Participant: irit.sig.boughanem
  • Track: Novelty
  • Year: 2004
  • Submission: 8/30/2004
  • Task: task1
  • Run description: In addition to the process used in the IRITT1 run, this run uses blind relevance feedback.

IRITT4

  • Run ID: IRITT4
  • Participant: irit.sig.boughanem
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: Same approach as IRITT1, with different parameter values.

IRITT5

  • Run ID: IRITT5
  • Participant: irit.sig.boughanem
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: Same approach as IRITT1, with different parameter values.

IritTask2

  • Run ID: IritTask2
  • Participant: irit.sig.boughanem
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task2
  • Run description: New sentences are filtered based on their similarity to previously processed sentences and on their similarity to a virtual sentence built from the best new sentences.

ISIALL04

  • Run ID: ISIALL04
  • Participant: usc.isi.kim
  • Track: Novelty
  • Year: 2004
  • Submission: 8/12/2004
  • Task: task1
  • Run description: All sentences are relevant and new.

ISIRUN204

  • Run ID: ISIRUN204
  • Participant: usc.isi.kim
  • Track: Novelty
  • Year: 2004
  • Submission: 8/12/2004
  • Task: task1
  • Run description: For the opinion topics, we used opinion-bearing words as indicators of relevant sentences and for the event topics, we treated them as document IR.

ISIRUN304

  • Run ID: ISIRUN304
  • Participant: usc.isi.kim
  • Track: Novelty
  • Year: 2004
  • Submission: 8/12/2004
  • Task: task1
  • Run description: For the opinion topics, we used opinion-bearing words as indicators of relevant sentences and for the event topics, we treated them as document IR.

ISIRUN404

  • Run ID: ISIRUN404
  • Participant: usc.isi.kim
  • Track: Novelty
  • Year: 2004
  • Submission: 8/12/2004
  • Task: task1
  • Run description: For the opinion topics, we used opinion-bearing words as indicators of relevant sentences and for the event topics, we treated them as document IR.

ISIRUN504

  • Run ID: ISIRUN504
  • Participant: usc.isi.kim
  • Track: Novelty
  • Year: 2004
  • Submission: 8/12/2004
  • Task: task1
  • Run description: For the opinion topics, we used opinion-bearing words as indicators of relevant sentences and for the event topics, we treated them as document IR.

LRIaze1

  • Run ID: LRIaze1
  • Participant: u.paris.lri
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: Statistical approach based on information (word, term, person) found in the topic.

LRIaze12

  • Run ID: LRIaze12
  • Participant: u.paris.lri
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: Statistical approach based on heavy parsing, with static and dynamic thresholds for novelty detection.

LRIaze2

  • Run ID: LRIaze2
  • Participant: u.paris.lri
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: Statistical approach based on information (word, term, person) found in the topic, and persons found in the text by coreference resolution.

LRIaze22

  • Run ID: LRIaze22
  • Participant: u.paris.lri
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: Statistical approach based on heavy parsing and coreference resolution, with static and dynamic thresholds for novelty detection.

LRIaze3

  • Run ID: LRIaze3
  • Participant: u.paris.lri
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: Mixed weighted approach: statistical and knowledge-based.

LRIaze32

  • Run ID: LRIaze32
  • Participant: u.paris.lri
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: Mixed knowledge-based and statistical approach, based on heavy parsing, with static and dynamic thresholds for novelty detection.

LRIaze4

  • Run ID: LRIaze4
  • Participant: u.paris.lri
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: Mixed weighted approach: statistical and knowledge-based.

LRIaze42

  • Run ID: LRIaze42
  • Participant: u.paris.lri
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: Mixed knowledge-based and statistical approach, based on heavy parsing, with static and dynamic thresholds for novelty detection.

LRIaze5

  • Run ID: LRIaze5
  • Participant: u.paris.lri
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: Mixed weighted approach: statistical and knowledge-based.

LRIaze52

  • Run ID: LRIaze52
  • Participant: u.paris.lri
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: Mixed knowledge-based and statistical approach, based on heavy parsing, with static and dynamic thresholds for novelty detection.

MeijiHIL1cfs

  • Run ID: MeijiHIL1cfs
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: Relevance: conceptual fuzzy sets are used for term expansion. Novelty: we used sentence weight score, redundancy score, and scarcity score.

MeijiHIL1odp

  • Run ID: MeijiHIL1odp
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: Relevance: ODP is used for term expansion. Novelty: we used sentence weight score, redundancy score, and scarcity score.

MeijiHIL2CS

  • Run ID: MeijiHIL2CS
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task2
  • Run description: We used a redundancy score, with term expansion by conceptual fuzzy sets, in consideration of the duplication and the freshness of a sentence.

MeijiHIL2RS

  • Run ID: MeijiHIL2RS
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/13/2004
  • Task: task2
  • Run description: We used a redundancy score and a scarcity score in consideration of the duplication and the freshness of a sentence.

MeijiHIL2WCS

  • Run ID: MeijiHIL2WCS
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task2
  • Run description: We used a sentence weight score and a redundancy score, with term expansion by conceptual fuzzy sets, in consideration of the N-window tf-idf and the duplication and the freshness of a sentence.

MeijiHIL2WR

  • Run ID: MeijiHIL2WR
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/13/2004
  • Task: task2
  • Run description: We used a sentence weight score and a redundancy score, in consideration of the N-window tf-idf and the duplication of a sentence.

MeijiHIL2WRS

  • Run ID: MeijiHIL2WRS
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/13/2004
  • Task: task2
  • Run description: We used a sentence weight score, a redundancy score, and a scarcity score, in consideration of the N-window tf-idf and the duplication and the freshness of a sentence.

MeijiHIL3

  • Run ID: MeijiHIL3
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/13/2004
  • Task: task3
  • Run description: The vector space model is used. For novelty, we used sentence weight score, redundancy score, and scarcity score.

MeijiHIL3Tc

  • Run ID: MeijiHIL3Tc
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/13/2004
  • Task: task3
  • Run description: Vector space model. For relevance, conceptual fuzzy sets are used for topic and sentence expansion. For novelty, we used sentence weight score, redundancy score, and scarcity score.

MeijiHIL3TSc

  • Run ID: MeijiHIL3TSc
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/13/2004
  • Task: task3
  • Run description: Vector space model. For relevance, conceptual fuzzy sets are used for topic expansion. For novelty, we used sentence weight score, redundancy score, and scarcity score.

MeijiHIL4RS

  • Run ID: MeijiHIL4RS
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/22/2004
  • Task: task4
  • Run description: We used a redundancy score and a scarcity score in consideration of the duplication and the freshness of a sentence.

MeijiHIL4RSc

  • Run ID: MeijiHIL4RSc
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/22/2004
  • Task: task4
  • Run description: We used a redundancy score, with term expansion by conceptual fuzzy sets, and a scarcity score, in consideration of the duplication and the freshness of a sentence.

MeijiHIL4WR

  • Run ID: MeijiHIL4WR
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/21/2004
  • Task: task4
  • Run description: We used a sentence weight score and a redundancy score, in consideration of the N-window idf and the duplication of a sentence.

MeijiHIL4WRc

  • Run ID: MeijiHIL4WRc
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/21/2004
  • Task: task4
  • Run description: We used a sentence weight score and a redundancy score, with term expansion by conceptual fuzzy sets, in consideration of the N-window idf and the duplication.

MeijiHIL4WRS

  • Run ID: MeijiHIL4WRS
  • Participant: meiji.u
  • Track: Novelty
  • Year: 2004
  • Submission: 9/21/2004
  • Task: task4
  • Run description: We used a sentence weight score, a redundancy score, and a scarcity score, in consideration of the N-window idf and the duplication and the freshness of a sentence.

novcolp1

  • Run ID: novcolp1
  • Participant: columbia.u.schiffman
  • Track: Novelty
  • Year: 2004
  • Submission: 9/9/2004
  • Task: task2
  • Run description: Basically we tried to segment the documents into new and old sections and accepted runs of sentences.

novcolp2

  • Run ID: novcolp2
  • Participant: columbia.u.schiffman
  • Track: Novelty
  • Year: 2004
  • Submission: 9/9/2004
  • Task: task2
  • Run description: We tried to segment the articles into new and old sections and accepted runs of sentences (1 or more).

novcolrcl

  • Run ID: novcolrcl
  • Participant: columbia.u.schiffman
  • Track: Novelty
  • Year: 2004
  • Submission: 9/9/2004
  • Task: task2
  • Run description: We tried to segment the articles into new and old sections and accepted runs of sentences, 1 or more in length.

novcombo

  • Run ID: novcombo
  • Participant: columbia.u.schiffman
  • Track: Novelty
  • Year: 2004
  • Submission: 9/9/2004
  • Task: task2
  • Run description: This run is a combination of one of the segmentation runs and the vector-space run.

novcosine

  • Run ID: novcosine
  • Participant: columbia.u.schiffman
  • Track: Novelty
  • Year: 2004
  • Submission: 9/9/2004
  • Task: task2
  • Run description: Straightforward vector space model with cosine distance to measure similarity.

NTU11

  • Run ID: NTU11
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: use reference corpus to identify sentences

NTU12

  • Run ID: NTU12
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: use reference corpus to identify sentences

NTU13

  • Run ID: NTU13
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: use reference corpus to identify sentences

NTU14

  • Run ID: NTU14
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: use reference corpus to identify sentences

NTU15

  • Run ID: NTU15
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: use reference corpus to identify sentences

NTU21

  • Run ID: NTU21
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2004
  • Submission: 9/9/2004
  • Task: task2
  • Run description: use reference corpus to identify novel sentences

NTU22

  • Run ID: NTU22
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2004
  • Submission: 9/9/2004
  • Task: task2
  • Run description: use reference corpus to identify novel sentences

NTU23

  • Run ID: NTU23
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2004
  • Submission: 9/9/2004
  • Task: task2
  • Run description: use reference corpus to identify novel sentences

NTU24

  • Run ID: NTU24
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2004
  • Submission: 9/9/2004
  • Task: task2
  • Run description: use reference corpus to identify novel sentences

NTU25

  • Run ID: NTU25
  • Participant: ntu.chen
  • Track: Novelty
  • Year: 2004
  • Submission: 9/9/2004
  • Task: task2
  • Run description: use reference corpus to identify novel sentences

THUIRnv0411

  • Run ID: THUIRnv0411
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: Baseline: long-query sentence retrieval; selected pool for sentence novelty judgement.

THUIRnv0412

  • Run ID: THUIRnv0412
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: Long-query sentence retrieval with local QE and named-entity parsing; relevance threshold based on top-sentence RSV; selected pool for sentence novelty judgement.

THUIRnv0413

  • Run ID: THUIRnv0413
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: Long-query sentence retrieval with local QE and named-entity parsing; combined sentence and document RSV; relevance threshold based on top-result RSV; selected pool for sentence novelty judgement.

THUIRnv0414

  • Run ID: THUIRnv0414
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: Long-query sentence retrieval with local QE and named-entity parsing; combined sentence and document RSV; results cut by proportion; selected pool for sentence novelty judgement.

THUIRnv0415

  • Run ID: THUIRnv0415
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: Long-query sentence retrieval with local QE; irrelevant documents filtered according to the sentence retrieval result; selected pool for sentence novelty judgement.

THUIRnv0421

  • Run ID: THUIRnv0421
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task2
  • Run description: Overlap method with a tightness restriction on the overlapping words.

THUIRnv0422

  • Run ID: THUIRnv0422
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task2
  • Run description: Previously appeared sentences are selected into a pool according to their similarity to the current sentence to be judged; a tightness restriction is used in the selection procedure; an overlap comparison between the current sentence and the pool is employed.

THUIRnv0423

  • Run ID: THUIRnv0423
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: PCA dimension reduction and sentence-vector cosine similarity comparison (sketched below).

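Dimension reduction before the similarity test: project TF-IDF sentence vectors onto their leading principal components, then compare with cosine as in the other runs. A minimal sketch with scikit-learn — assumed tooling; the component count and threshold are illustrative.

    from sklearn.decomposition import PCA
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def pca_novelty(sentences, n_components=50, threshold=0.8):
        # Reduce TF-IDF vectors with PCA, then flag a sentence as novel
        # if its max cosine similarity to earlier sentences stays low.
        X = TfidfVectorizer().fit_transform(sentences).toarray()
        n = min(n_components, min(X.shape) - 1)  # stay below the data rank
        sims = cosine_similarity(PCA(n_components=n).fit_transform(X))
        return [i for i in range(len(sentences))
                if i == 0 or sims[i, :i].max() < threshold]
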
THUIRnv0424

  • Run ID: THUIRnv0424
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task2
  • Run description: Previously appeared sentences are selected into a pool according to their similarity to the current sentence to be judged; an overlap comparison between the current sentence and the pool is employed.

THUIRnv0425

  • Run ID: THUIRnv0425
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: Only nouns, verbs, and adjectives are used as input to the retrieval system; the same pool-overlap tightness method is used to get rid of redundant sentences.

THUIRnv0431

  • Run ID: THUIRnv0431
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: Long-query sentence retrieval; named-entity parsing; combined sentence and document RSV; relevance threshold based on top-result RSV; selected pool with tightness restriction for sentence novelty judgement.

THUIRnv0432

  • Run ID: THUIRnv0432
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: Long-query sentence retrieval with local feedback according to the judged top-5 relevant documents; PCA dimension reduction for sentence vectors and a relevance threshold based on the top result's cosine similarity to the query are used for retrieval; selected pool with tightness restriction for sentence novelty judgement.

THUIRnv0433

  • Run ID: THUIRnv0433
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: Long-query sentence retrieval with local feedback according to the judged top-5 relevant documents and named-entity parsing; PCA dimension reduction for sentence vectors and a relevance threshold based on the top result's cosine similarity to the query are used for retrieval; selected pool with tightness restriction for sentence novelty judgement.

THUIRnv0434

  • Run ID: THUIRnv0434
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: Relevance step: an SVM trained using the top-5 relevant documents and the first retrieval results of the long query. Novelty step: the same selected pool with tightness restriction.

THUIRnv0435

  • Run ID: THUIRnv0435
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: Long-query retrieval result filtering based on sentence similarity; the novelty step used the selected pool with tightness restriction.

THUIRnv0441

  • Run ID: THUIRnv0441
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 9/19/2004
  • Task: task4
  • Run description: selected pool method with tightness restriction.

THUIRnv0442

  • Run ID: THUIRnv0442
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 9/19/2004
  • Task: task4
  • Run description: sentence-sentence overlap method with tightness restriction.

THUIRnv0443

  • Run ID: THUIRnv0443
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 9/19/2004
  • Task: task4
  • Run description: sentence-sentence overlap method (baseline).

THUIRnv0444

  • Run ID: THUIRnv0444
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 9/20/2004
  • Task: task4
  • Run description: PCA dimension reduction and sentence-to-sentence cosine similarity comparison.

THUIRnv0445

  • Run ID: THUIRnv0445
  • Participant: tsinghua.ma
  • Track: Novelty
  • Year: 2004
  • Submission: 9/20/2004
  • Task: task4
  • Run description: PCA dimension reduction (in the top 5 documents, only novel sentences are included) and sentence-to-sentence cosine similarity comparison.

UIowa04Nov11

  • Run ID: UIowa04Nov11
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: Baseline run from 2003 with minor improvements. Novelty is judged by the appearance of new named entities or noun phrases. Noun phrases are handled as simple text.

UIowa04Nov12

  • Run ID: UIowa04Nov12
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: Novelty is judged by the appearance of new named entities or noun phrases. Noun phrases are expanded into all matching synset IDs using WordNet. Multi-word noun phrases are also expanded into separate nouns, which are likewise expanded to synset IDs.

UIowa04Nov13

  • Run ID: UIowa04Nov13
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: Novelty is judged by the appearance of new named entities or noun phrases. Noun phrases are sense-disambiguated using a local ensemble-based word sense disambiguation component. Multi-word noun phrases are also expanded into separate nouns and sense-disambiguated.

UIowa04Nov14

  • Run ID: UIowa04Nov14
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 8/30/2004
  • Task: task1
  • Run description: Relevant sentences are judged by their similarity to the topic and novel sentences are judged by their similarity to the dynamic knowledge pool.

UIowa04Nov15

  • Run ID: UIowa04Nov15
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 8/30/2004
  • Task: task1
  • Run description: Relevant sentences are judged by their similarity to the topic and novel sentences are judged by their similarity to the dynamic knowledge pool.

UIowa04Nov21

  • Run ID: UIowa04Nov21
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task2
  • Run description: Named entities and noun phrases are used as triggers for novelty. We expand NPs to all matching synset IDs and use the IDs for comparison.

UIowa04Nov22

  • Run ID: UIowa04Nov22
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task2
  • Run description: Named entities and noun phrases are used as triggers for novelty. We sense-disambiguate NPs (and their constituents) and use the synset ID for the sense for comparison.

UIowa04Nov23

  • Run ID: UIowa04Nov23
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: If the similarity between the sentence and the dynamic knowledge pool is below a certain threshold, the sentence is retrieved as novel.

UIowa04Nov24

  • Run ID: UIowa04Nov24
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: If the similarity between the sentence and the dynamic knowledge pool is below a certain threshold, the sentence is retrieved as novel.

UIowa04Nov25

  • Run ID: UIowa04Nov25
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: If the similarity between the sentence and the dynamic knowledge pool is below a certain threshold, the sentence is retrieved as novel.

UIowa04Nov31

  • Run ID: UIowa04Nov31
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task3
  • Run description: Named entities and noun phrases are used as triggers for novelty. We expand NPs to all matching synset IDs and use the IDs for comparison.

UIowa04Nov32

  • Run ID: UIowa04Nov32
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task3
  • Run description: Named entities and noun phrases are used as triggers for novelty. We sense-disambiguate NPs (and their constituents) and use the synset ID for the sense for comparison.

UIowa04Nov33

  • Run ID: UIowa04Nov33
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: Use the provided relevant and novel sentences to train the best relevant and novel thresholds.

UIowa04Nov34

  • Run ID: UIowa04Nov34
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: Use the provided relevant and novel sentences to train the best relevant and novel thresholds.

UIowa04Nov35

  • Run ID: UIowa04Nov35
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: Use the provided relevant and novel sentences to train the best relevant and novel thresholds.

UIowa04Nov41

  • Run ID: UIowa04Nov41
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task4
  • Run description: Named entities and noun phrases are used as triggers for novelty. We expand NPs to all matching synset IDs and use the IDs for comparison.

UIowa04Nov42

  • Run ID: UIowa04Nov42
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/14/2004
  • Task: task4
  • Run description: Named entities and noun phrases are used as triggers for novelty. We sense-disambiguate NPs (and their constituents) and use the synset ID for the sense for comparison.

UIowa04Nov43

  • Run ID: UIowa04Nov43
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/22/2004
  • Task: task4
  • Run description: Get the optimal novelty retrieval threshold from the provided novel sentences.

UIowa04Nov44

  • Run ID: UIowa04Nov44
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/22/2004
  • Task: task4
  • Run description: Get the optimal novelty retrieval threshold from the provided novel sentences.

UIowa04Nov45

  • Run ID: UIowa04Nov45
  • Participant: u.iowa
  • Track: Novelty
  • Year: 2004
  • Submission: 9/22/2004
  • Task: task4
  • Run description: Get the optimal novelty retrieval threshold from the provided novel sentences.

umich0411

  • Run ID: umich0411
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: MaxEnt training on the 2003 data with features Centroid, Length and QueryTitleCosine to find the relevant sentences; use a similarity threshold to find the new sentences.

umich0412

  • Run ID: umich0412
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 8/31/2004
  • Task: task1
  • Run description: MaxEnt training on the 2003 data with features Centroid, Length and QueryTitleCosine to find the relevant sentences; use a similarity threshold to find the new sentences.

umich0413

  • Run ID: umich0413
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: MaxEnt training on the 2003 data with features Centroid and QueryDescWordOverlap to find the relevant sentences; use a similarity threshold to find the new sentences.

umich0414

  • Run ID: umich0414
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: MaxEnt training on the 2003 data with features LexPageRank and QueryDescWordOverlap to find the relevant sentences; use a similarity threshold to find the new sentences.

umich0415

  • Run ID: umich0415
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/1/2004
  • Task: task1
  • Run description: MaxEnt training on the 2003 data with features Length and QueryTitleWordOverlap to find the relevant sentences; use a similarity threshold to find the new sentences (a MaxEnt sketch follows below).

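MaxEnt over a handful of per-sentence features, as in the five runs above, is equivalent to (multinomial) logistic regression. A minimal sketch with scikit-learn; the feature rows and labels are toy stand-ins for the named features (Centroid, Length, QueryTitleCosine, etc.) and the 2003 training data, not the participants' data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: [centroid_score, length, query_title_cosine]; label 1 = relevant.
    X_train = np.array([[0.8, 25, 0.6], [0.1, 7, 0.05], [0.7, 30, 0.5], [0.2, 9, 0.1]])
    y_train = np.array([1, 0, 1, 0])

    # Binary logistic regression is the two-class case of a MaxEnt classifier.
    clf = LogisticRegression().fit(X_train, y_train)

    X_new = np.array([[0.6, 20, 0.4]])
    print(clf.predict_proba(X_new)[:, 1])  # probability of being relevant
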
umich0421

  • Run ID: umich0421
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: We formed a graph of sentences connected according to a cosine similarity threshold (=0.7), then applied the PageRank algorithm

umich0422

  • Run ID: umich0422
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: We formed a graph of sentences connected according to a cosine similarity threshold (=0.7), then applied the PageRank algorithm

umich0423

  • Run ID: umich0423
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: We formed a graph of sentences connected according to a cosine similarity threshold (=0.9), then applied the PageRank algorithm

umich0424

  • Run ID: umich0424
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: We formed a graph of sentences connected according to a cosine similarity threshold (=0.5), then applied the PageRank algorithm

umich0425

  • Run ID: umich0425
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task2
  • Run description: We formed a graph of sentences connected according to a cosine similarity threshold (=0.8), then applied the PageRank algorithm (sketched below).

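These graph runs connect sentences whose cosine similarity clears the threshold and rank them with PageRank. A minimal sketch with networkx and scikit-learn — assumed tooling; the default threshold here mirrors the run settings.

    import networkx as nx
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def sentence_pagerank(sentences, threshold=0.7):
        # Edge between sentences i and j when cosine similarity >= threshold;
        # PageRank then scores each sentence by its centrality in the graph.
        sims = cosine_similarity(TfidfVectorizer().fit_transform(sentences))
        g = nx.Graph()
        g.add_nodes_from(range(len(sentences)))
        for i in range(len(sentences)):
            for j in range(i + 1, len(sentences)):
                if sims[i, j] >= threshold:
                    g.add_edge(i, j)
        return nx.pagerank(g)
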
umich0431

  • Run ID: umich0431
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/16/2004
  • Task: task3
  • Run description: Given the relevant sentences, we expanded the "description" and tried to eliminate the irrelevant documents by looking at word overlap. Finding new sentences is the same as in Task 2.

umich0432

  • Run ID: umich0432
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/16/2004
  • Task: task3
  • Run description: Given the relevant sentences, we expanded the "description" and tried to eliminate the irrelevant documents by looking at word overlap. Finding new sentences is the same as in Task 2.

umich0433

  • Run ID: umich0433
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/15/2004
  • Task: task3
  • Run description: Given the relevant sentences, we expanded the "description" and tried to eliminate the irrelevant documents by looking at word overlap. Finding new sentences is the same as in Task 2.

umich0434

  • Run ID: umich0434
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/16/2004
  • Task: task3
  • Run description: Given the relevant sentences, we expanded the "description" and tried to eliminate the irrelevant documents by looking at word overlap. Finding new sentences is the same as in Task 2.

umich0435

  • Run ID: umich0435
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/16/2004
  • Task: task3
  • Run description: Given the relevant sentences, we expanded the "description" and tried to eliminate the irrelevant documents by looking at word overlap. Finding new sentences is the same as in Task 2.

umich0441

  • Run ID: umich0441
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/21/2004
  • Task: task4
  • Run description: Same as in task 2. However, we used the 2004 data provided by NIST for training instead of the 2003 data.

umich0442

  • Run ID: umich0442
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/21/2004
  • Task: task4
  • Run description: Same as in task 2. However, we used the 2004 data provided by NIST for training instead of the 2003 data (cosine threshold=0.6).

umich0443

  • Run ID: umich0443
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/21/2004
  • Task: task4
  • Run description: Same as in task 2. However, we used the 2004 data provided by NIST for training instead of the 2003 data (cosine threshold=0.8).

umich0444

  • Run ID: umich0444
  • Participant: u.michigan
  • Track: Novelty
  • Year: 2004
  • Submission: 9/21/2004
  • Task: task4
  • Run description: Same as in task 2. However, we used the 2004 data provided by NIST for training instead of the 2003 data (cosine threshold=0.4).