Runs - Temporal Summarization 2015

1LtoSfltr20

Participants | Input | Summary

  • Run ID: 1LtoSfltr20
  • Participant: cunlp
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/2/2015
  • Task: ps
  • MD5: 32149e90401102a423e164b0734d16a0
  • Run description: Wikipedia was used as an external resource.

2LtoSnofltr20

Participants | Input | Summary

  • Run ID: 2LtoSnofltr20
  • Participant: cunlp
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/2/2015
  • Task: fs
  • MD5: 1b0973db99260b442f77ced3586e2e06
  • Run description: Wikipedia was used as an external resource.

3LtoSfltr5

Participants | Input | Summary

  • Run ID: 3LtoSfltr5
  • Participant: cunlp
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/2/2015
  • Task: ps
  • MD5: 8ddbfe8c935cad7ce960ed33e9bdd91e
  • Run description: Wikipedia was used as an external resource.

4APSAL

Participants | Input | Summary

  • Run ID: 4APSAL
  • Participant: cunlp
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/2/2015
  • Task: ps
  • MD5: 315832ee44784c97b5ddbea94777a562
  • Run description: Wikipedia was used as an external resource.

COS

Participants | Proceedings | Input | Summary

  • Run ID: COS
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: 479b005a679c0ef0a06a93889d4cf4e5
  • Run description: Cosine similarity (no query expansion)
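The description above amounts to ranking candidate sentences by cosine similarity against the event query, with no query expansion. A minimal bag-of-words sketch (whitespace tokenization and lowercasing are assumptions, not the run's actual preprocessing):

```python
import math
from collections import Counter

def cosine(query, sentence):
    """Cosine similarity between two bag-of-words term-frequency vectors."""
    q, s = Counter(query.lower().split()), Counter(sentence.lower().split())
    dot = sum(q[t] * s[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in s.values())))
    return dot / norm if norm else 0.0
```

Sentences in the stream would then be kept or dropped according to this score.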

COSSIM

Participants | Proceedings | Input | Summary

  • Run ID: COSSIM
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: 323d61ad96bbb1c890fc341f9a576af9
  • Run description: Cosine similarity

DMSL1AP1

Participants | Proceedings | Input | Summary

  • Run ID: DMSL1AP1
  • Participant: BJUT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 8/28/2015
  • Task: ps
  • MD5: b64841be30672f1119343af44bddeae1
  • Run description: Uses the Affinity Propagation (AP) clustering method.

DMSL1NMF2

Participants | Proceedings | Input | Summary

  • Run ID: DMSL1NMF2
  • Participant: BJUT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 8/28/2015
  • Task: ps
  • MD5: 4f7bec45f79352b0478e476eb7a3ed7d
  • Run description: In this run we adopt an improved non-negative matrix factorization (NMF) algorithm with two regularizations as the clustering method. One regularization considers both the manifold structure and the semantic structure; the other is an L2-norm penalty that controls model complexity. Compared with the Affinity Propagation clustering algorithm (APCLUSTER), on some topics our method is better than or comparable to it in residual error.
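The clustering step can be illustrated with plain L2-regularized NMF via multiplicative updates; the run's manifold/semantic regularizer is not reproduced here, and the rank, penalty weight, and iteration count are all illustrative assumptions:

```python
import numpy as np

def nmf_l2(V, k, lam=0.01, iters=200, seed=0):
    """NMF with multiplicative updates and an L2 penalty on the factors.
    Sketch only: the run's manifold/semantic regularizer is omitted."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3   # non-negative initial factors
    H = rng.random((k, n)) + 1e-3
    for _ in range(iters):
        # Standard L2-regularized multiplicative update rules.
        H *= (W.T @ V) / (W.T @ W @ H + lam * H + 1e-12)
        W *= (V @ H.T) / (W @ (H @ H.T) + lam * W + 1e-12)
    return W, H
```

Rows of H (or columns of W) would then group sentences into clusters, with the residual error ||V − WH|| used for comparison against APCLUSTER.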

DMSL1VSH3

Participants | Proceedings | Input | Summary

  • Run ID: DMSL1VSH3
  • Participant: BJUT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 8/28/2015
  • Task: ps
  • MD5: df076bf1c94589886029593b62d2aeab
  • Run description: Uses the Vertex Substitution Heuristic (VSHCLUSTER) clustering method.

DMSL2A1

Participants | Proceedings | Input | Summary

  • Run ID: DMSL2A1
  • Participant: BJUT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 8/28/2015
  • Task: so
  • MD5: 1924ecee976980673ef86874a3639825
  • Run description: Uses the Affinity Propagation (AP) clustering method.

DMSL2N2

Participants | Proceedings | Input | Summary

  • Run ID: DMSL2N2
  • Participant: BJUT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 8/28/2015
  • Task: so
  • MD5: a631345e8b4d649c1f67e9a067247b94
  • Run description: In this run we adopt an improved non-negative matrix factorization (NMF) algorithm with two regularizations as the clustering method. One regularization considers both the manifold structure and the semantic structure; the other is an L2-norm penalty that controls model complexity. Compared with the Affinity Propagation clustering algorithm (APCLUSTER), on some topics our method is better than or comparable to it in residual error.

DMSL2V3

Participants | Proceedings | Input | Summary

  • Run ID: DMSL2V3
  • Participant: BJUT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 8/28/2015
  • Task: so
  • MD5: e25d2625d14a7e73df6fae51da00b0af
  • Run description: Uses the Vertex Substitution Heuristic (VSHCLUSTER) clustering method.

docs

Participants | Proceedings | Input | Summary

  • Run ID: docs
  • Participant: CWI
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: fs
  • MD5: 4cf85fc4af5b4f909b758d9a7561face
  • Run description: Clustering a stream of news articles with 3NN and cosine similarity, matching clusters with members containing all query terms.

docsRecall

Participants | Proceedings | Input | Summary

  • Run ID: docsRecall
  • Participant: CWI
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: fs
  • MD5: f972746368a542cbbb7a5889feb471e0
  • Run description: Clustering a stream of news articles with 3NN and cosine similarity; sentences with length <= 30 and gain >= 0.3 are selected to increase recall.

FS1A

Participants | Proceedings | Input | Summary

  • Run ID: FS1A
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: 0029e8a8fc9b792cfeabda6718f66ef3
  • Run description: A 3-step approach for filtering and summarization. Iteratively, in each hour, we first select the top relevant documents using the BM25 model with the original query, then select relevant sentences based on the presence and proximity of query terms in the sentence, and finally detect novelty by combining two features, text divergence and the detection of new entities, with an AND operator.
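The final novelty step, combining text divergence with new-entity detection under an AND operator, can be sketched as follows; the overlap-based divergence measure and the 0.4 threshold are illustrative assumptions, not the run's actual parameters:

```python
def is_novel(sentence_terms, summary_terms, sentence_entities, summary_entities,
             div_threshold=0.4):
    """Novelty test with an AND combination of two features:
    the sentence must diverge from the current summary AND mention
    at least one entity the summary has not seen yet."""
    overlap = len(sentence_terms & summary_terms) / max(len(sentence_terms), 1)
    divergent = (1.0 - overlap) >= div_threshold       # text-divergence feature
    new_entity = bool(sentence_entities - summary_entities)  # new-entity feature
    return divergent and new_entity
```

The OR-operator variants of these runs would replace `and` with `or` in the final line.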

FS1B

Participants | Proceedings | Input | Summary

  • Run ID: FS1B
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: 2805723f9d386f1f8a995aef2c77bfd1
  • Run description: We used a fusion method applied at the query-term level: every query term is treated as a query in its own right, and the scores of each resulting ranking are fused with the other rankings from the same query.
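The per-term fusion described above resembles CombSUM-style score fusion: each query term is run as a separate query, and each document's scores are summed across the per-term rankings. A minimal sketch (representing each ranking as a document-to-score dict is an assumption):

```python
def combsum(rankings):
    """CombSUM fusion: sum each document's scores across the
    per-query-term rankings, then sort by fused score."""
    fused = {}
    for ranking in rankings:
        for doc, score in ranking.items():
            fused[doc] = fused.get(doc, 0.0) + score
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```

A document scored by several term rankings accumulates evidence and rises to the top of the fused list.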

FS2A

Participants | Proceedings | Input | Summary

  • Run ID: FS2A
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: e616df299b27dcdd618dfeee1225e1ef
  • Run description: A 3-step approach for filtering and summarization. Iteratively, in each hour, we first select the top relevant documents using the BM25 model with the original query, then select relevant sentences based on the presence and proximity of query terms in the sentence, and finally detect novelty by combining two features, text divergence and the detection of new entities, with an OR operator.

FS2B

Participants | Proceedings | Input | Summary

  • Run ID: FS2B
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: f473008e45a4bc867e96ecffb27de7ac
  • Run description: This run is based on a temporal language model.

FS3A

Participants | Proceedings | Input | Summary

  • Run ID: FS3A
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: d3b9886349bce26048eb4416aa1e6af1
  • Run description: A 3-step approach for filtering and summarization. Iteratively, in each hour, we first select the top relevant documents using the BM25 model with the original query, then select relevant sentences based on the presence and proximity of query terms in the sentence, and finally detect novelty based only on text divergence.

FS3B

Participants | Proceedings | Input | Summary

  • Run ID: FS3B
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: 889fd26b7c69042426d319801b243d8b
  • Run description: We used a fusion method applied at the query-term level: every query term is treated as a query in its own right, and the scores of each resulting ranking are fused with the other rankings from the same query.

FS4A

Participants | Proceedings | Input | Summary

  • Run ID: FS4A
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: 6120c0fd13d9a81a2966ae06599ef398
  • Run description: A 3-step approach for filtering and summarization. Iteratively, in each hour, we first select the top relevant documents using the BM25 model with an extended query, then select relevant sentences based on the presence and proximity of query terms in the sentence, and finally detect novelty by combining two features, text divergence and the detection of new entities, with an AND operator.

FS4B

Participants | Proceedings | Input | Summary

  • Run ID: FS4B
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: 88f0f0dc5b6e9bb75546aefdbb1f30c8
  • Run description: This run is also based on a temporal language model.

FS5A

Participants | Proceedings | Input | Summary

  • Run ID: FS5A
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: 88b95270a99067e18933b722b130488b
  • Run description: A 3-step approach for filtering and summarization. Iteratively, in each hour, we first select the top relevant documents using the BM25 model with an extended query, then select relevant sentences based on the presence and proximity of query terms in the sentence, and finally detect novelty by combining two features, text divergence and the detection of new entities, with an OR operator.

FS6A

Participants | Proceedings | Input | Summary

  • Run ID: FS6A
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: 70f7422933b052e4f1a64148004b7915
  • Run description: A 3-step approach for filtering and summarization. Iteratively, in each hour, we first select the top relevant documents using the BM25 model with an extended query, then select relevant sentences based on the presence and proximity of query terms in the sentence, and finally detect novelty based only on text divergence.

IGn

Participants | Proceedings | Input | Summary

  • Run ID: IGn
  • Participant: CWI
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: fs
  • MD5: 07893107f5f2aebbd988a1b5f669c28d
  • Run description: Clustering a stream of news articles with 3NN and normalized IG.

IGnPrecision

Participants | Proceedings | Input | Summary

  • Run ID: IGnPrecision
  • Participant: CWI
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: fs
  • MD5: 6afa7c0ba7f71f620c9602488c498c9a
  • Run description: Clustering a stream of news articles with 3NN and normalized IG; half-hour window, gain 0.5, top-1 selection to increase precision.

IGnRecall

Participants | Proceedings | Input | Summary

  • Run ID: IGnRecall
  • Participant: CWI
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: fs
  • MD5: 80f97fae18c80d6271f41c213416d4f0
  • Run description: Clustering a stream of news articles with 3NN and normalized IG; sentences with length <= 30 and gain >= 0.3 are selected to increase recall.

InL2DecrQE1ID1

Participants | Proceedings | Input | Summary

  • Run ID: InL2DecrQE1ID1
  • Participant: USI
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: 88ad9165e181a910247b8207ab3feb9e
  • Run description: We combined relevance and novelty to calculate the score of each sentence. We used the Divergence From Randomness (DFR) framework, in particular InL2, to calculate the relevance of a sentence given an event. The novelty score is based on the number of novel terms each sentence contains compared to the terms of the summary produced so far. The number of sentences selected each hour decreases as time passes. The first sentences of a summary should contain all the query terms. On the rest of the blocks we used query expansion, expanding the query with the top 5 most frequent terms of the summary produced so far.
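The two scoring ingredients in these USI runs can be sketched as follows. The InL2 weight below follows the common Terrier-style DFR formulation, which may differ in detail from the run's implementation; the novelty score is the plain novel-term count the description mentions:

```python
import math

def inl2(tf, doc_len, avg_len, N, n_t, c=1.0):
    """InL2 term weight from the DFR framework (Terrier-style formulation:
    tfn = tf * log2(1 + c*avg_len/doc_len); treat details as assumptions).
    N = collection size, n_t = number of documents containing the term."""
    tfn = tf * math.log2(1.0 + c * avg_len / doc_len)
    return (tfn / (tfn + 1.0)) * math.log2((N + 1.0) / (n_t + 0.5))

def novelty(sentence_terms, summary_terms):
    """Novelty = number of terms not yet present in the running summary."""
    return len(set(sentence_terms) - set(summary_terms))
```

Each sentence's final score would combine the summed InL2 relevance over query terms with this novelty count; the combination function is not specified in the description.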

InL2DecrQE2ID2

Participants | Proceedings | Input | Summary

  • Run ID: InL2DecrQE2ID2
  • Participant: USI
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: 5493bba0dc3b39a3d5238d3bb75e3cc8
  • Run description: We combined relevance and novelty to calculate the score of each sentence. We used the Divergence From Randomness (DFR) framework, in particular InL2, to calculate the relevance of a sentence given an event. The novelty score is based on the number of novel terms each sentence contains compared to the terms of the summary produced so far. The number of sentences selected each hour decreases as time passes. The first sentences of a summary may contain any of the query terms. On the rest of the blocks we used query expansion, expanding the query with the top 5 most frequent terms of the summary produced so far.

InL2DocsQE2ID5

Participants | Proceedings | Input | Summary

  • Run ID: InL2DocsQE2ID5
  • Participant: USI
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: d783d4661edd36404ed686f7bc717e54
  • Run description: We combined relevance and novelty to calculate the score of each sentence. We used the Divergence From Randomness (DFR) framework, in particular InL2, to calculate the relevance of a sentence given an event. The novelty score is based on the number of novel terms each sentence contains compared to the terms of the summary produced so far. The number of sentences selected each hour is the same for all the blocks (5 sentences up to 1000). The first sentences of a summary may contain any of the query terms. On the rest of the blocks we used query expansion, expanding the query with the top 5 most frequent terms of the summary produced so far.

InL2IncrQE1ID7

Participants | Proceedings | Input | Summary

  • Run ID: InL2IncrQE1ID7
  • Participant: USI
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: 1f85f8e1fec33be3eac76baee91b4141
  • Run description: We combined relevance and novelty to calculate the score of each sentence. We used the Divergence From Randomness (DFR) framework, in particular InL2, to calculate the relevance of a sentence given an event. The novelty score is based on the number of novel terms each sentence contains compared to the terms of the summary produced so far. The number of sentences selected each hour increases as time passes. The first sentences of a summary should contain all the query terms. On the rest of the blocks we used query expansion, expanding the query with the top 5 most frequent terms of the summary produced so far.

InL2IncrQE2ID4

Participants | Proceedings | Input | Summary

  • Run ID: InL2IncrQE2ID4
  • Participant: USI
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: 7addecfb95e70dbec61f406fceffd1f6
  • Run description: We combined relevance and novelty to calculate the score of each sentence. We used the Divergence From Randomness (DFR) framework, in particular InL2, to calculate the relevance of a sentence given an event. The novelty score is based on the number of novel terms each sentence contains compared to the terms of the summary produced so far. The number of sentences selected each hour increases as time passes. The first sentences of a summary may contain any of the query terms. On the rest of the blocks we used query expansion, expanding the query with the top 5 most frequent terms of the summary produced so far.

InL2StabQE1ID6

Participants | Proceedings | Input | Summary

  • Run ID: InL2StabQE1ID6
  • Participant: USI
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: 723f0b9c8ce76f174c9f713a4d28c558
  • Run description: We combined relevance and novelty to calculate the score of each sentence. We used the Divergence From Randomness (DFR) framework, in particular InL2, to calculate the relevance of a sentence given an event. The novelty score is based on the number of novel terms each sentence contains compared to the terms of the summary produced so far. The number of sentences selected each hour is the same and depends on the number of time blocks per event. The first sentences of a summary should contain all the query terms. On the rest of the blocks we used query expansion, expanding the query with the top 5 most frequent terms of the summary produced so far.

InL2StabQE2ID3

Participants | Proceedings | Input | Summary

  • Run ID: InL2StabQE2ID3
  • Participant: USI
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: b8b0d3be772b32fa0ecee36e4ab64a40
  • Run description: We combined relevance and novelty to calculate the score of each sentence. We used the Divergence From Randomness (DFR) framework, in particular InL2, to calculate the relevance of a sentence given an event. The novelty score is based on the number of novel terms each sentence contains compared to the terms of the summary produced so far. The number of sentences selected each hour is the same for every block and depends on the number of time blocks per event. The first sentences of a summary may contain any of the query terms. On the rest of the blocks we used query expansion, expanding the query with the top 5 most frequent terms of the summary produced so far.

l3sattrec15run1

Participants | Input | Summary

  • Run ID: l3sattrec15run1
  • Participant: l3sattrec15
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/2/2015
  • Task: ps
  • MD5: 422f94206b74e94f8136d6c5727726f2
  • Run description: To select update sentences for each hour of the event we adopt the following approach: first we retrieve the top-m documents for the query to account for prevalence, then we select the top-q summary-worthy sentences using a learning-to-rank model, next we remove redundancy using MinHash-based clustering, and finally we filter by information gain to select novel updates. We used the DUC 2007 corpus to train the document summarization algorithm.
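The redundancy-removal step can be illustrated with a basic MinHash signature and the matching-slots Jaccard estimate; the salted-MD5 hash construction and the signature length are illustrative choices, not the run's:

```python
import hashlib

def minhash(terms, num_hashes=32):
    """MinHash signature of a term set: for each of num_hashes salted hash
    functions, keep the minimum hash value over the set's members."""
    return tuple(
        min(int(hashlib.md5(f"{i}:{t}".encode()).hexdigest(), 16) for t in terms)
        for i in range(num_hashes)
    )

def est_jaccard(sig_a, sig_b):
    """Estimated Jaccard similarity = fraction of matching signature slots."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Sentences whose signatures estimate a high Jaccard overlap with an already-selected sentence would be clustered together and deduplicated.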

l3sattrec15run2

Participants | Input | Summary

  • Run ID: l3sattrec15run2
  • Participant: l3sattrec15
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: ps
  • MD5: edd4d6c1eded7375e57bb558eab59570
  • Run description: To select update sentences for each hour of the event we adopt the following approach: first we retrieve the top-m documents for the query to account for prevalence, then we select the top-q summary-worthy sentences using a learning-to-rank model, next we remove redundancy using MinHash-based clustering, and finally we filter by information gain to select novel updates. We use DUC 2007 to train the multi-document summarizer.

l3sattrec15run3

Participants | Input | Summary

  • Run ID: l3sattrec15run3
  • Participant: l3sattrec15
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: ps
  • MD5: 08ffd7d3283e1d7d28afc25ba9be49f0
  • Run description: To select update sentences for each hour of the event we adopt the following approach: first we retrieve the top-m documents for the query to account for prevalence, then we select the top-q summary-worthy sentences using a learning-to-rank model, next we remove redundancy using MinHash-based clustering, and finally we filter by information gain to select novel updates. We use DUC 2007 to train the multi-document summarizer.

LDA

Participants | Proceedings | Input | Summary

  • Run ID: LDA
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: 7ee3fa04c199cc4508be0a5206222975
  • Run description: LDA

LDAv2

Participants | Proceedings | Input | Summary

  • Run ID: LDAv2
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: 52a10fe5fa7fa2780c4806f4943e4ca7
  • Run description: A probabilistic-ranking variation of Latent Dirichlet Allocation.

LexRank

Participants | Proceedings | Input | Summary

  • Run ID: LexRank
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: 35fde70fefc5072fab13a369342e3554
  • Run description: LexRank summarization
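Continuous LexRank scores each sentence by its stationary probability in a damped random walk over a sentence-similarity graph. A compact power-iteration sketch (the damping factor and iteration count are assumptions; the run's graph construction is not specified):

```python
import numpy as np

def lexrank(sim, damping=0.85, iters=100):
    """LexRank centrality by power iteration over a row-normalized
    sentence-similarity matrix (continuous LexRank variant)."""
    n = sim.shape[0]
    M = sim / sim.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)                   # start from the uniform distribution
    for _ in range(iters):
        r = (1 - damping) / n + damping * (M.T @ r)
    return r
```

The highest-scoring sentences are the most central ones and become the summary candidates.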

LLR

Participants | Proceedings | Input | Summary

  • Run ID: LLR
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: 8956ac65e9fbdf29ec240fe2cfb5ce70
  • Run description: log likelihood ratio

LM

Participants | Proceedings | Input | Summary

  • Run ID: LM
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: 092134788735ee9006fd77e69d93f9ce
  • Run description: Language Modeling

OS1A

Participants | Proceedings | Input | Summary

  • Run ID: OS1A
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: 394e3a674d368a590720a72735a4a13c
  • Run description: An approach for summarization. Iteratively, in each hour, we first select the top documents annotated as relevant, then select relevant sentences based on the presence and proximity of query terms in the sentence, and finally detect novelty by combining two features, text divergence and the detection of new entities, with an AND operator.

OS1C

Participants | Proceedings | Input | Summary

  • Run ID: OS1C
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: 9426568084fe9dc9283ac54345c21caf
  • Run description: This run is based on a real-time selective summarization approach. The aim is to decide, in real time and without buffering, whether a given sentence should be added to the summary. For each sentence, a novelty degree and a redundancy score are assessed; only sentences whose novelty degree and redundancy score are above their thresholds are added to the summary. For this run the following parameters were adopted: 1) the thresholds were fixed to 0.27 and 3 for the novelty degree and redundancy score respectively; 2) linear smoothing was used in the estimation of the redundancy score; 3) a decay function was taken into account when computing the novelty degree.
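The real-time accept/reject decision above reduces to a two-threshold test. A sketch using the threshold values quoted for this run (whether the comparisons are strict or non-strict is an assumption, as is how the two input scores are computed):

```python
def accept(novelty_degree, redundancy_score, nov_thresh=0.27, red_thresh=3.0):
    """Real-time selective summarization decision: a sentence joins the
    summary only if both its novelty degree and its redundancy score clear
    their thresholds (0.27 and 3 are the values quoted for run OS1C)."""
    return novelty_degree >= nov_thresh and redundancy_score >= red_thresh
```

The OS2C/OS4C/OS6C/OS8C variants replace the fixed thresholds with the average of the values observed during the last time window.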

OS2A

Participants | Proceedings | Input | Summary

  • Run ID: OS2A
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: 572a0a6b30a8c909cfe84921a58c0490
  • Run description: An approach for summarization. Iteratively, in each hour, we first select the top documents annotated as relevant, then select relevant sentences based on the presence and proximity of query terms in the sentence, and finally detect novelty by combining two features, text divergence and the detection of new entities, with an OR operator.

OS2C

Participants | Proceedings | Input | Summary

  • Run ID: OS2C
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: dac9ae33242537d44f31c713d0f96696
  • Run description: We use the same approach as our run OS1C, except that for this run the following parameters were adopted: 1) the thresholds of the novelty degree and redundancy score were set to the average of the values observed during the last time window; 2) linear smoothing was used in the estimation of the redundancy score; 3) a decay function was taken into account when computing the novelty degree.

OS3A

Participants | Proceedings | Input | Summary

  • Run ID: OS3A
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: 71bc481b2a457b958851d8ea2b4b9cfd
  • Run description: An approach for summarization. Iteratively, in each hour, we first select the top documents annotated as relevant, then select relevant sentences based on the presence and proximity of query terms in the sentence, and finally detect novelty based only on text divergence.

OS3C

Participants | Proceedings | Input | Summary

  • Run ID: OS3C
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: 4e4967dfb052deff18dd3cd5864676fe
  • Run description: We use the same approach as our run OS1C, except that for this run the following parameters were adopted: 1) the thresholds were fixed to 0.27 and 3 for the novelty degree and redundancy score respectively; 2) Dirichlet smoothing was used in the estimation of the redundancy score; 3) a decay function was taken into account when computing the novelty degree.

OS4C

Participants | Proceedings | Input | Summary

  • Run ID: OS4C
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: 95f21952b3cc9026774999136a857aee
  • Run description: We use the same approach as our run OS1C, except that for this run the following parameters were adopted: 1) the thresholds of the novelty degree and redundancy score were set to the average of the values observed during the last time window; 2) Dirichlet smoothing was used in the estimation of the redundancy score; 3) a decay function was taken into account when computing the novelty degree.

OS5C

Participants | Proceedings | Input | Summary

  • Run ID: OS5C
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: 00b8d560fc1ab60436352c67cd591948
  • Run description: We use the same approach as our run OS1C, except that for this run the following parameters were adopted: 1) the thresholds were fixed to 0.27 and 3 for the novelty degree and redundancy score respectively; 2) Dirichlet smoothing was used in the estimation of the redundancy score; 3) a decay function was not taken into account when computing the novelty degree.

OS6C

Participants | Proceedings | Input | Summary

  • Run ID: OS6C
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: dbdd7a441d2179eae3e2378b16d3592e
  • Run description: We use the same approach as our run OS1C, except that for this run the following parameters were adopted: 1) the thresholds of the novelty degree and redundancy score were set to the average of the values observed during the last time window; 2) Dirichlet smoothing was used in the estimation of the redundancy score; 3) a decay function was not taken into account when computing the novelty degree.

OS7C

Participants | Proceedings | Input | Summary

  • Run ID: OS7C
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: ad098bd56c6a5e2ec082457d6925d488
  • Run description: We use the same approach as our run OS1C, except that for this run the following parameters were adopted: 1) the thresholds were fixed to 0.27 and 3 for the novelty degree and redundancy score respectively; 2) linear smoothing was used in the estimation of the redundancy score; 3) a decay function was not taken into account when computing the novelty degree.

OS8C

Participants | Proceedings | Input | Summary

  • Run ID: OS8C
  • Participant: IRIT
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: 915e72cfdce61028e85fb8eb7fc14981
  • Run description: We use the same approach as our run OS1C, except that for this run the following parameters were adopted: 1) the thresholds of the novelty degree and redundancy score were set to the average of the values observed during the last time window; 2) linear smoothing was used in the estimation of the redundancy score; 3) a decay function was taken into account when computing the novelty degree.

ProfOnly3

Participants | Proceedings | Input | Summary

  • Run ID: ProfOnly3
  • Participant: udel_fang
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: 4ef4be7c9860d16b9cdbce184e9ad7c5
  • Run description: This run focuses on building a rich query representation using external resources, such as auxiliary corpora generated before the query start time. Documents are processed in batches, decisions are made at the end of each hour, and sentences within the documents are then ranked and selected.

ProfOnlyFS3

Participants | Proceedings | Input | Summary

  • Run ID: ProfOnlyFS3
  • Participant: udel_fang
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: ps
  • MD5: c303f6fd2c8e531d88e86ed77c4c4218
  • Run description: This run focuses on building a rich query representation using external resources, such as auxiliary corpora generated before the query start time. Documents are processed in batches, decisions are made at the end of each hour, and sentences within the documents are then ranked and selected.

QL

Participants | Proceedings | Input | Summary

  • Run ID: QL
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: d90488807657784dc1d0d232432b6356
  • Run description: query likelihood

QLF

Participants | Proceedings | Input | Summary

  • Run ID: QLF
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: edd46204f56775480f31b238f878ab45
  • Run description: Query likelihood with a higher threshold for sentence selection

QLLP

Participants | Proceedings | Input | Summary

  • Run ID: QLLP
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: d63241cc40195f831ee0853d37b4637e
  • Run description: Query likelihood with smoothing

Run1

Participants | Proceedings | Input | Summary

  • Run ID: Run1
  • Participant: AIPHES
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: 1831f1cfd3cbe1743403f4ec0886589a
  • Run description: This run uses a sequential clustering approach with at most 1000 clusters, min cluster score of 3, 'full token discount', and strict boilerplate removal.
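Sequential (single-pass) clustering, as used in the four AIPHES runs, assigns each incoming item to the best-matching existing cluster or opens a new one, capped at a maximum cluster count. A sketch; representing each cluster by its first member and the 0.5 similarity threshold are illustrative assumptions (the runs' "min cluster score" and token-discount settings are not modeled):

```python
def sequential_cluster(items, sim, threshold=0.5, max_clusters=1000):
    """Single-pass sequential clustering: each item joins the most similar
    existing cluster (compared against its first member) if the similarity
    clears the threshold; otherwise a new cluster opens, up to max_clusters.
    Items arriving after the cap that match no cluster are dropped."""
    clusters = []
    for item in items:
        best, best_sim = None, threshold
        for cluster in clusters:
            s = sim(item, cluster[0])
            if s >= best_sim:
                best, best_sim = cluster, s
        if best is not None:
            best.append(item)
        elif len(clusters) < max_clusters:
            clusters.append([item])
    return clusters
```

Run4 lowers the cap from 1000 to 100 clusters, trading coverage for a coarser grouping of the stream.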

Run2

Participants | Proceedings | Input | Summary

  • Run ID: Run2
  • Participant: AIPHES
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: a5849698eddfdc23d6271cbb25aed853
  • Run description: This run uses a sequential clustering approach with at most 1000 clusters, min cluster score of 1, 'full token discount', and strict boilerplate removal.

Run3

Participants | Proceedings | Input | Summary

  • Run ID: Run3
  • Participant: AIPHES
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: 25d733b2a9287daa3b6eb9e59a944f65
  • Run description: This run uses a sequential clustering approach with at most 1000 clusters, min cluster score of 1, reduced token discount, and less strict boilerplate removal.

Run4

Participants | Proceedings | Input | Summary

  • Run ID: Run4
  • Participant: AIPHES
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: 65bf003f1e4b9be90a2eb38352afd359
  • Run description: This run uses a sequential clustering approach with at most 100 clusters, min cluster score of 1, reduced token discount, and less strict boilerplate removal.

runvec1

Participants | Proceedings | Input | Summary

  • Run ID: runvec1
  • Participant: ISCASIR
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: bd1d4d28ef15fabf3c8b060893bba0e6
  • Run description: This run is supported by the word2vec technique.

runvec2

Participants | Proceedings | Input | Summary

  • Run ID: runvec2
  • Participant: ISCASIR
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: 07734b934002007cf117c70b53663e37
  • Run description: No query expansion; supported by word2vec.

TF

Participants | Proceedings | Input | Summary

  • Run ID: TF
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: a2483c055ddae080f16586e50ea13456
  • Run description: query term frequency

TFFilter

Participants | Proceedings | Input | Summary

  • Run ID: TFFilter
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: ps
  • MD5: f3aa16e85ec9f4788b8b5ed3c8ef662b
  • Run description: Filtering documents by query term frequency

TFISF

Participants | Proceedings | Input | Summary

  • Run ID: TFISF
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: so
  • MD5: 748b4ad97e4878bc2df63d1d8fd6bc41
  • Run description: tfisf sentence ranking
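TF.ISF (term frequency x inverse sentence frequency) ranking, as named in this run description, scores each sentence by the summed weight of its terms, where rarity is measured across sentences rather than documents. The following is a minimal sketch under assumed tokenization and weighting choices; it is not the UvA.ILPS implementation.

```python
# Illustrative TF.ISF sentence ranking: tf(t) * log(N / sf(t)),
# where sf(t) counts the sentences containing term t.
import math
from collections import Counter

def tfisf_rank(sentences):
    tokenized = [s.lower().split() for s in sentences]
    n = len(tokenized)
    sf = Counter(t for toks in tokenized for t in set(toks))

    def score(toks):
        tf = Counter(toks)
        return sum(tf[t] * math.log(n / sf[t]) for t in tf)

    return sorted(sentences, key=lambda s: score(s.lower().split()), reverse=True)
```

Sentences carrying terms that appear in few other sentences rank highest, which favors novel content in a stream.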

TFISFW

Participants | Proceedings | Input | Summary

  • Run ID: TFISFW
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: c642e36826ee7b87254594350b2cacad
  • Run description: TFISF with Wordnet query expansion

TFISFW2V

Participants | Proceedings | Input | Summary

  • Run ID: TFISFW2V
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: 13d8253051066ea6be3392bcb8ad868c
  • Run description: TF.ISF sentence ranking with Word2Vec query expansion

TFW

Participants | Proceedings | Input | Summary

  • Run ID: TFW
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: 2e8473ab4a01aa7de642f3011c12f676
  • Run description: Term Frequency with Wordnet query expansion

TFW2V

Participants | Proceedings | Input | Summary

  • Run ID: TFW2V
  • Participant: UvA.ILPS
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: 28ebe70195ec92f7d24466c109e2cb32
  • Run description: Term Frequency Word2Vec

titles

Participants | Proceedings | Input | Summary

  • Run ID: titles
  • Participant: CWI
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: fs
  • MD5: edd5198dfc485da241505ba5cbcf47fb
  • Run description: Clustering a stream of news articles with 3-NN and cosine similarity; matching clusters with a member containing all query terms in the title.

uogTrdEEQR3

Participants | Proceedings | Input | Summary

  • Run ID: uogTrdEEQR3
  • Participant: uogTr
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: 6d2ce79d679fd04e92d3ef01c706cfa3
  • Run description: R3 -- Entity-focused run, iterating over the corpus document-by-document, scoring sentences (similar to query) using entity-entity interaction feature, selecting top-k updates.

uogTrdEQR1

Participants | Proceedings | Input | Summary

  • Run ID: uogTrdEQR1
  • Participant: uogTr
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: 221a9065212b63de46231e8e51dfef75
  • Run description: R1 -- Entity-focused run, iterating over the corpus document-by-document, scoring sentences (similar to query) using entity importance feature, selecting top-k updates.

uogTrdSqCR5

Participants | Proceedings | Input | Summary

  • Run ID: uogTrdSqCR5
  • Participant: uogTr
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: d4d885024f79c6c0b7e6704e72f6b053
  • Run description: R5 -- Baseline run, iterating over the corpus document-by-document, ranking sentences by cosine similarity to query, selecting updates by cosine similarity threshold.
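The baseline selection rule described here (rank by cosine similarity to the query, emit updates above a threshold) can be sketched as follows. The vector representation and threshold value are illustrative assumptions, not the uogTr configuration.

```python
# Illustrative cosine-threshold update selection against a query.
import math
from collections import Counter

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a if t in b)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def select_updates(query, sentences, threshold=0.2):
    qvec = Counter(query.lower().split())
    scored = [(cosine(Counter(s.lower().split()), qvec), s) for s in sentences]
    scored.sort(reverse=True)  # rank by similarity, highest first
    return [s for sim, s in scored if sim >= threshold]
```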

uogTrhEEQR4

Participants | Proceedings | Input | Summary

  • Run ID: uogTrhEEQR4
  • Participant: uogTr
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: dbbb104b9b7ea47935b3e9f1e2b78fbe
  • Run description: R4 -- Entity-focused run, iterating over the corpus in hour-by-hour batches, scoring sentences (similar to query) using entity-entity interaction feature, selecting top-k updates.

uogTrhEQR2

Participants | Proceedings | Input | Summary

  • Run ID: uogTrhEQR2
  • Participant: uogTr
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: 66be7981b85474a597351069971c80e4
  • Run description: R2 -- Entity-focused run, iterating over the corpus in hour-by-hour batches, scoring sentences (similar to query) using entity importance feature, selecting top-k updates.

uogTrhSqCR6

Participants | Proceedings | Input | Summary

  • Run ID: uogTrhSqCR6
  • Participant: uogTr
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: d6e328502492342e028de7c3796ad1ce
  • Run description: R6 -- Baseline run, iterating over the corpus in hour-by-hour batches, ranking sentences by cosine similarity to query, selecting updates by cosine similarity threshold.

uogTrT1MANU

Participants | Proceedings | Input | Summary

  • Run ID: uogTrT1MANU
  • Participant: uogTr
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: 059f0be1b1a3364660bd651dcddfc312
  • Run description: This is a MANUAL run. Sentences were selected by a human assessor. Note that this is a valid run for all tasks (1/2/3).

uogTrT1X2cSCP

Participants | Proceedings | Input | Summary

  • Run ID: uogTrT1X2cSCP
  • Participant: uogTr
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: ps
  • MD5: e492915cba4b55b67c5ff897bab9985f
  • Run description: This run performs explicit diversification of the event updates based on a pre-defined taxonomy of important intents to cover. The intents were proposed by crowd workers given the event types. Score boosting based on proximity to likely relevant sentences is also applied.

uogTrT1X2iNCP

Participants | Proceedings | Input | Summary

  • Run ID: uogTrT1X2iNCP
  • Participant: uogTr
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: ps
  • MD5: 440ab380c30f78c83a3db76ce4b79c77
  • Run description: This run performs explicit diversification of the event updates based on a pre-defined taxonomy of important intents to cover. The intents were automatically extracted from Wikipedia infoboxes from past events of the same type. Intents were supplemented with topics identified over time from an aligned News stream. Score boosting based on proximity to likely relevant sentences is also applied. External Resources: Wikipedia pages for events predating the KBA corpus. (News Stream is the KBA corpus)

uogTrT1X2iSC

Participants | Proceedings | Input | Summary

  • Run ID: uogTrT1X2iSC
  • Participant: uogTr
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: ps
  • MD5: fd95ca78a5fa149a4c0b59300bd86033
  • Run description: This run performs explicit diversification of the event updates based on a pre-defined taxonomy of important intents to cover. The intents were automatically extracted from Wikipedia infoboxes from past events of the same type. External Resources: Wikipedia pages for events predating the KBA corpus.

uogTrT1X2iSCP

Participants | Proceedings | Input | Summary

  • Run ID: uogTrT1X2iSCP
  • Participant: uogTr
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/3/2015
  • Task: ps
  • MD5: 35ef3624abc14b0c94644a5d4f09b80f
  • Run description: This run performs explicit diversification of the event updates based on a pre-defined taxonomy of important intents to cover. The intents were automatically extracted from Wikipedia infoboxes from past events of the same type. Score boosting based on proximity to likely relevant sentences is also applied. External Resources: Wikipedia pages for events predating the KBA corpus.

uogTrT1X2iTCP

Participants | Proceedings | Input | Summary

  • Run ID: uogTrT1X2iTCP
  • Participant: uogTr
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: ps
  • MD5: 9e7f900efcc50b290491098712944ee2
  • Run description: This run performs explicit diversification of the event updates based on a pre-defined taxonomy of important intents to cover. The intents were automatically extracted from Wikipedia infoboxes from past events of the same type. Intents were supplemented with topics identified over time from an aligned Twitter stream. Score boosting based on proximity to likely relevant sentences is also applied. External Resources: Wikipedia pages for events predating the KBA corpus. Aligned Twitter Stream

uogTrT2EimpP

Participants | Proceedings | Input | Summary

  • Run ID: uogTrT2EimpP
  • Participant: uogTr
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: 660c8ddc9c286d81f93bde78378475c6
  • Run description: Relevance + entity importance scoring of sentences. Entities are extracted from old news corpora.

uogTrT2EintP

Participants | Proceedings | Input | Summary

  • Run ID: uogTrT2EintP
  • Participant: uogTr
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: fbc2a27297a54884aa88259f49fac4d9
  • Run description: Relevance + entity interaction scoring of sentences. Entities are extracted from old news corpora.

UWCTSRun1

Participants | Proceedings | Input | Summary

  • Run ID: UWCTSRun1
  • Participant: WaterlooClarke
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 8/24/2015
  • Task: ps
  • MD5: a3550b746fbe198e9ffafd5158c374d5
  • Run description: UWCTSRun1 follows the strategy of pushing the first sentence found in each article as determined natively by python-goose.

UWCTSRun2

Participants | Proceedings | Input | Summary

  • Run ID: UWCTSRun2
  • Participant: WaterlooClarke
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 8/24/2015
  • Task: ps
  • MD5: 2c0ed563ba8a2d91ecea9d1b4a10d71e
  • Run description: UWCTSRun2 follows the strategy of pushing only the document headline for each of the selected documents.

UWCTSRun3

Participants | Proceedings | Input | Summary

  • Run ID: UWCTSRun3
  • Participant: WaterlooClarke
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 8/24/2015
  • Task: ps
  • MD5: 255c98ce193081a385f8935424751a64
  • Run description: UWCTSRun3 follows the strategy of pushing the first sentence found in each article determined using a combination of python-readability and python-goose. Python-readability parses the clean HTML and only keeps the HTML containing the main article. Then python-goose is used to extract sentences from the HTML.
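The two-stage pipeline described above (boilerplate removal, then first-sentence extraction) can be sketched with the standard library alone. This is a stand-in for illustration only: the actual runs used python-readability to isolate the main article HTML and python-goose to extract sentences, whereas the sketch below uses a minimal HTML-to-text pass and a naive sentence split.

```python
# Illustrative "push the first sentence" strategy: strip markup
# (standing in for readability/goose), then take the first sentence.
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.skip = 0        # depth inside <script>/<style> blocks
        self.parts = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1
    def handle_data(self, data):
        if not self.skip:
            self.parts.append(data)

def first_sentence(html):
    p = TextExtractor()
    p.feed(html)
    text = " ".join(" ".join(p.parts).split())
    m = re.search(r"[^.!?]+[.!?]", text)  # naive first-sentence split
    return m.group(0).strip() if m else text
```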

UWCTSRun4

Participants | Proceedings | Input | Summary

  • Run ID: UWCTSRun4
  • Participant: WaterlooClarke
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 8/25/2015
  • Task: so
  • MD5: 09968fdd77c4be1a8778297b10297150
  • Run description: UWCTSRun4 follows the strategy of pushing the first sentence found in each article as determined natively by python-goose.

UWCTSRun5

Participants | Proceedings | Input | Summary

  • Run ID: UWCTSRun5
  • Participant: WaterlooClarke
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 8/25/2015
  • Task: so
  • MD5: ef67270e9c83c97d558dc79b168437a9
  • Run description: UWCTSRun5 follows the strategy of pushing only the document headline for each of the selected documents.

UWCTSRun6

Participants | Proceedings | Input | Summary

  • Run ID: UWCTSRun6
  • Participant: WaterlooClarke
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 8/25/2015
  • Task: so
  • MD5: bf5721eaaefe862eaf7e5d9ff11820e8
  • Run description: UWCTSRun6 follows the strategy of pushing the first sentence found in each article determined using a combination of python-readability and python-goose. Python-readability parses the clean HTML and only keeps the HTML containing the main article. Then python-goose is used to extract sentences from the HTML.

WikiOnly2

Participants | Proceedings | Input | Summary

  • Run ID: WikiOnly2
  • Participant: udel_fang
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: c3d0db158735e9349f97f9805c6e8114
  • Run description: This run focuses on building a rich query representation by using external resources, such as Wikipedia (revisions before the query start time). Documents were processed in batches, with decisions made at the end of each hour. Sentences within the documents were then ranked and selected.

WikiOnlyFS2

Participants | Proceedings | Input | Summary

  • Run ID: WikiOnlyFS2
  • Participant: udel_fang
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: ps
  • MD5: ba16dd029d0e45efc8fce0a4f12a6849
  • Run description: This run focuses on building a rich query representation by using external resources, such as Wikipedia (revisions before the query start time). Documents were processed in batches, with decisions made at the end of each hour. Sentences within the documents were then ranked and selected.

WikiProfMix1

Participants | Proceedings | Input | Summary

  • Run ID: WikiProfMix1
  • Participant: udel_fang
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: so
  • MD5: b3902f2b69e3bf0d7e84cf8e159d48b9
  • Run description: This run focuses on building a rich query representation by using external resources, such as Wikipedia and other auxiliary corpora generated before the query start time. Documents were processed in batches, with decisions made at the end of each hour. Sentences within the documents were then ranked and selected.

WikiProfMixFS1

Participants | Proceedings | Input | Summary

  • Run ID: WikiProfMixFS1
  • Participant: udel_fang
  • Track: Temporal Summarization
  • Year: 2015
  • Submission: 9/4/2015
  • Task: ps
  • MD5: 992d47f0ee0df0a9d469e61559273ecd
  • Run description: This run focuses on building a rich query representation by using external resources, such as Wikipedia and other auxiliary corpora generated before the query start time. Documents were processed in batches, with decisions made at the end of each hour. Sentences within the documents were then ranked and selected.