Runs - Real-time Summarization 2016

AmILPWSEBM

Participants | Input | Summary | Appendix

  • Run ID: AmILPWSEBM
  • Participant: IRIT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 1496ec57dc8f7366723bfc69d40eca69
  • Run description: The proposed approach first filters out all tweets that do not contain a minimum number of query-title words, the threshold being either a predefined constant (K=2) or the number of words in the query title; tweets that pass this filter are then clustered incrementally into aspects. At the end of each day, and for each topic, the approach selects a subset of tweets that covers as many aspects as possible within the specified length limit (100 tweets per day). The tweet selection problem is formulated as an Integer Linear Programming (ILP) problem solved with a standard branch-and-bound algorithm. The ILP optimizes a global objective function for tweet selection based on the relevance of tweets with respect to the query (user interest): from each aspect cluster we select the tweets with the highest relevance scores, subject to a series of constraints on redundancy, coverage, and the length limit. An extended Boolean model is used to estimate the relevance score of tweets, in which the similarity between a query term and all the terms of an incoming tweet T is taken as the weight of that query term. The similarity between two words is measured by the cosine similarity between their vectors, which are generated by a word2vec model using training data.
  • Code: http://irit.fr
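
The day-end selection objective described above can be sketched as follows. This is an illustrative toy, not the run's implementation: the tweet ids, relevance scores, and aspect labels are made up, and for tiny inputs an exhaustive search over subsets returns the same optimum that the run obtains by solving the ILP with branch-and-bound.

```python
from itertools import combinations

def select_tweets(tweets, limit):
    """Pick at most `limit` tweets maximizing the number of covered aspects,
    breaking ties by total relevance (exhaustive search; fine for toy input).
    `tweets` maps tweet id -> (relevance score, set of aspect labels)."""
    ids = list(tweets)
    best, best_key = set(), (0, 0.0)
    for k in range(1, min(limit, len(ids)) + 1):
        for subset in combinations(ids, k):
            covered = set().union(*(tweets[t][1] for t in subset))
            relevance = sum(tweets[t][0] for t in subset)
            if (len(covered), relevance) > best_key:
                best, best_key = set(subset), (len(covered), relevance)
    return best

# Hypothetical clusters: t2+t3 covers all three aspects within a limit of 2.
tweets = {
    "t1": (0.9, {"a1"}),
    "t2": (0.5, {"a1", "a2"}),
    "t3": (0.7, {"a3"}),
}
selected = select_tweets(tweets, 2)
```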

bjutdt

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: bjutdt
  • Participant: BJUT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: da5f3fc8f98140e7eaff722b8c873589
  • Run description: We used a web crawler to collect daily news and website information, with Google as the external resource, so our query expansion is updated every day with no human intervention.

bjutgbdt

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: bjutgbdt
  • Participant: BJUT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 0d9c342147aafa06099ce2be6c389c76
  • Run description: We used a web crawler to collect daily news and website information, with Google as the external resource, so our query expansion is updated every day with no human intervention.

BJUTmydt-04

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: BJUTmydt-04
  • Participant: BJUT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

BJUTmydt-05

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: BJUTmydt-05
  • Participant: BJUT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

BJUTmyrf-03

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: BJUTmyrf-03
  • Participant: BJUT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

bjutrf

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: bjutrf
  • Participant: BJUT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 893b43cc65443de1e64d2be02fce8b1d
  • Run description: We used a web crawler to collect daily news and website information, with Google as the external resource, so our query expansion is updated every day with no human intervention.

CCNUNLPrun1

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CCNUNLPrun1
  • Participant: CCNU2016NLP
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/11/2016
  • Task: b
  • MD5: ca7b8afdc0ea20208e5a4a396f260b7a
  • Run description: 1. Use the Bing search API to expand the query. 2. Use JS-divergence to compute the relevance between the summaries and each incoming tweet. 3. Use TF-IDF to compute novelty. 4. Re-sort the Scenario B summaries at the end of each day using BM25.
  • Code: https://github.com/beichao1314/TREC2016
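
The JS-divergence relevance step above can be sketched with a minimal implementation over unigram term distributions. The distributions in the test are invented; the run's actual tokenization and estimation details live in the linked repository.

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two term distributions
    (dicts mapping term -> probability); 0 for identical distributions,
    ln(2) for disjoint ones. Lower divergence = more similar text."""
    vocab = set(p) | set(q)
    # Mixture distribution M = (P + Q) / 2.
    m = {t: 0.5 * (p.get(t, 0.0) + q.get(t, 0.0)) for t in vocab}
    def kl(a):
        # KL(A || M); terms with zero mass contribute nothing.
        return sum(pa * math.log(pa / m[t]) for t, pa in a.items() if pa > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)
```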

CCNUNLPrun1-06

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: CCNUNLPrun1-06
  • Participant: CCNU2016NLP
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Task: a

CCNUNLPrun2

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CCNUNLPrun2
  • Participant: CCNU2016NLP
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/11/2016
  • Task: b
  • MD5: c0fb0e503a57efc8126be1d3e49ed77e
  • Run description: 1. Use the Bing search API to expand the query. 2. Use JS-divergence to compute the relevance between the queries and tweets, and between the summaries and each incoming tweet. 3. Use TF-IDF to compute novelty. 4. Re-sort the Scenario B summaries at the end of each day using BM25.
  • Code: https://github.com/beichao1314/TREC2016

CCNUNLPrun2-07

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: CCNUNLPrun2-07
  • Participant: CCNU2016NLP
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Task: a

CLIP-A-1-08

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: CLIP-A-1-08
  • Participant: CLIP
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

CLIP-A-2-09

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: CLIP-A-2-09
  • Participant: CLIP
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

CLIP-A-3-10

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: CLIP-A-3-10
  • Participant: CLIP
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

CLIP-B-2015

Participants | Input | Summary | Appendix

  • Run ID: CLIP-B-2015
  • Participant: CLIP
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 00bfeb096c08bd990e252ea1c3df3b2c
  • Run description: We trained a word2vec model on 1B old tweets.

CLIP-B-MAX

Participants | Input | Summary | Appendix

  • Run ID: CLIP-B-MAX
  • Participant: CLIP
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: c4923d8bc00fc8bbfec9c4cf980481c2
  • Run description: We ran a search on Twitter to determine a score threshold for returning candidate tweets. We also trained a word2vec model on 1B old tweets.

CLIP-B-MIN

Participants | Input | Summary | Appendix

  • Run ID: CLIP-B-MIN
  • Participant: CLIP
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: b537a642ce016b64812a70a2b90f0210
  • Run description: We ran a search on Twitter to determine a score threshold for returning candidate tweets. We also trained a word2vec model on 1B old tweets.

Hamid-20

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: Hamid-20
  • Participant: IRIT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

HLJIT_LM

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: HLJIT_LM
  • Participant: HLJIT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: a42eb1d3e6f061b9596c6667982cbf4b
  • Run description: The language model is used in this run.

HLJIT_LM-19

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: HLJIT_LM-19
  • Participant: HLJIT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

HLJIT_LM_TIME

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: HLJIT_LM_TIME
  • Participant: HLJIT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: d76bbadb7d562ff4da2ad5bbdfc44a29
  • Run description: The time-based document model is used to smooth the document.

HLJIT_LM_URL

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: HLJIT_LM_URL
  • Participant: HLJIT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 01431ad725f3b1fdd88c47dd1a287127
  • Run description: The web pages linked from tweets are used. The retrieval model is the one described in "A hyperlink-extended language model for microblog retrieval," International Journal of Database Theory and Application 8.6 (2015), pp. 89-100.

iitbhu-15

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: iitbhu-15
  • Participant: DPLAB_IITBHU
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

IritIrisSDA-22

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: IritIrisSDA-22
  • Participant: IRIT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

IritIrisSDB

Participants | Input | Summary | Appendix

  • Run ID: IritIrisSDB
  • Participant: IRIT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: d892a68e0c1e63f7ed1029a7e98b0cda
  • Run description: The main aim of this run was speed of decision making: all decisions are made within a few minutes. In addition to the tweet's content, many features are taken into account: a first score is computed on the text content of the tweet, and a second one on all the additional features. Python was preferred to Java to speed up decision making even further, except for the final ranking of the results (back to Java, as for the sampling).

iritRunBiAm-21

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: iritRunBiAm-21
  • Participant: IRIT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

IRLAB

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IRLAB
  • Participant: IRLAB_DA-IICT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Task: b
  • MD5: 7c3bc5cc567a062bc7dc3f99c0ffd946
  • Run description: Query expansion has been done using a word2vec model trained on tweets from the past few months. Relevance scores are obtained with Okapi BM25. Novelty detection is done using cosine similarity.
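
The Okapi BM25 relevance scoring used by this run can be sketched as below. The `k1` and `b` defaults are the textbook values, not necessarily the run's settings, and the collection statistics in the test are invented.

```python
import math

def bm25_score(query_terms, doc_terms, df, n_docs, avgdl, k1=1.2, b=0.75):
    """Okapi BM25 score of a document (token list) for a query (token list).
    `df` maps term -> document frequency in a collection of `n_docs` docs;
    `avgdl` is the average document length."""
    dl = len(doc_terms)
    tf = {}
    for t in doc_terms:
        tf[t] = tf.get(t, 0) + 1
    score = 0.0
    for t in query_terms:
        if t not in tf:
            continue
        # Smoothed idf, kept non-negative by the +1 inside the log.
        idf = math.log(1 + (n_docs - df.get(t, 0) + 0.5) / (df.get(t, 0) + 0.5))
        score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * dl / avgdl))
    return score
```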

IRLAB2

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: IRLAB2
  • Participant: IRLAB_DA-IICT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Task: b
  • MD5: be8356a8175294c8a1cd8a8072c5713c
  • Run description: Query expansion has been done using a word2vec model trained on tweets from the past few months. Relevance scores are obtained with Okapi BM25. Novelty detection is done using Jaccard similarity.
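
The Jaccard-based novelty check in this run can be sketched as follows; the 0.6 cutoff is a hypothetical setting for illustration only.

```python
def jaccard(a, b):
    """Jaccard similarity between two token sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

def is_novel(tweet_tokens, pushed, threshold=0.6):
    """A candidate tweet is novel if its Jaccard similarity to every
    previously pushed tweet stays below the (hypothetical) threshold."""
    return all(jaccard(tweet_tokens, p) < threshold for p in pushed)
```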

isikol_tag

Participants | Input | Summary | Appendix

  • Run ID: isikol_tag
  • Participant: ISIKol
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 49461a11ecf525ce33b1184d632f672c
  • Run description: This run uses a part-of-speech and named-entity tagger to find keywords in each query.

isikol_ti

Participants | Input | Summary | Appendix

  • Run ID: isikol_ti
  • Participant: ISIKol
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 49f41b3523e20c80f35f28a6cc59ed8f
  • Run description: This run uses the tf-idf metric to find keywords in each query.
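
The tf-idf keyword selection can be sketched as below; the background documents and query tokens are invented, and the actual tokenization and collection used by the run are not specified.

```python
import math
from collections import Counter

def tfidf_keywords(query_tokens, background_docs, top_k=3):
    """Rank the query's tokens by tf-idf against a background collection
    (a list of token lists) and return the top_k keywords."""
    n = len(background_docs)
    df = Counter()
    for doc in background_docs:
        df.update(set(doc))  # document frequency counts each doc once
    tf = Counter(query_tokens)
    # Add-one style damping keeps unseen terms finite.
    scores = {t: tf[t] * math.log((n + 1) / (df[t] + 1)) for t in tf}
    return [t for t, _ in sorted(scores.items(), key=lambda x: -x[1])[:top_k]]
```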

MyBaseline-17

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: MyBaseline-17
  • Participant: HLJIT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

MyBaseline-18

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: MyBaseline-18
  • Participant: HLJIT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

MyBaseline-24

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: MyBaseline-24
  • Participant: ISIKol
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

nudt_biront

Participants | Input | Summary | Appendix

  • Run ID: nudt_biront
  • Participant: NUDTSNA
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: b3251ac68edbaab60cc654afde9ef427
  • Run description: We apply a recommendation framework based on Lucene. External terms, extracted from the Google search engine and Twitter using the profile title as the query, are incorporated to enhance the understanding of a profile's interest. We set each term's weight to its tf-idf value based on the search results returned by Google and Twitter; combining the profile title and the external terms, we build our Lucene query and retrieve hits satisfying our limit from the indexed Twitter stream. In the email digest task, based on the candidate tweets retrieved for the first task, we select the top 100 ranked tweets for each profile per day as our result.

nudt_sna

Participants | Input | Summary | Appendix

  • Run ID: nudt_sna
  • Participant: NUDTSNA
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 6433a1ce0aec7b4ce7cde3590dc96346
  • Run description: Based on the search engine library Lucene, we implement a system for 2016 RTS track. With the Google search engine and the external corpora Wikipedia, we just use the title field of a profile for query expansion. Besides, we also explore the Twitter search engine to obtain useful hashtags for some profiles. Last but not least, we overwrite the similarity scoring function to improve the retrieval performance.

nudt_sna-28

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: nudt_sna-28
  • Participant: NUDTSNA
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

nudt_sna-29

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: nudt_sna-29
  • Participant: NUDTSNA
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

nudt_sna-30

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: nudt_sna-30
  • Participant: NUDTSNA
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

PKUICSTRunB1

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PKUICSTRunB1
  • Participant: PKUICST
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 0cc7eedafed9a665038c55bb5aa08934
  • Run description: We use KL divergence with Jaccard smoothing to obtain relevant tweets, and a uniform novelty threshold N = 0.73.
  • Code: https://github.com/yaolili/trec16/tree/master/src
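
The KL-divergence scoring can be sketched as below. As a stand-in for the run's smoothing scheme (the linked repository has the real one), the tweet language model here is linearly smoothed with a collection model; the distributions, `lam`, and the floor probability are illustrative assumptions.

```python
import math

def kl_score(query_dist, tweet_tokens, collection_dist, lam=0.5):
    """Negative KL divergence KL(query || tweet LM); higher = more
    relevant. The tweet model is a linear mix of the tweet's maximum-
    likelihood estimate and a background collection model (an assumed
    smoothing scheme, not necessarily the run's)."""
    n = len(tweet_tokens) or 1
    tf = {}
    for t in tweet_tokens:
        tf[t] = tf.get(t, 0) + 1
    score = 0.0
    for t, pq in query_dist.items():
        pt = lam * tf.get(t, 0) / n + (1 - lam) * collection_dist.get(t, 1e-6)
        score -= pq * math.log(pq / pt)
    return score
```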

PKUICSTRunB2

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PKUICSTRunB2
  • Participant: PKUICST
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 4d3f491107fbc39c740612e3ed804ac8
  • Run description: We utilized Google web search for query expansion before the evaluation period, with a uniform novelty threshold N = 0.72.
  • Code: https://github.com/yaolili/trec16/tree/master/src

PKUICSTRunB3

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PKUICSTRunB3
  • Participant: PKUICST
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 0f875850ed578baa4fc42aaa145e494c
  • Run description: We utilized Google web search for query expansion before the evaluation period, and adopted SimHash to compute novelty with a uniform distance threshold D = 42.
  • Code: https://github.com/yaolili/trec16/tree/master/src
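
The SimHash novelty check can be sketched as below: near-duplicate tweets yield fingerprints with small Hamming distance, which the run compares against its threshold D. The MD5-based token hash is an illustrative choice, not necessarily the run's.

```python
import hashlib

def simhash(tokens, bits=64):
    """64-bit SimHash of a token list: each token's hash votes +1/-1 on
    every bit position; the sign of the tally gives the fingerprint bit."""
    v = [0] * bits
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16) & ((1 << bits) - 1)
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")
```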

PolyURunB1

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PolyURunB1
  • Participant: COMP2016
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/11/2016
  • Task: b
  • MD5: 82d32944a35db5e2d18532cd28496f8c
  • Run description: Most salient features: a word2vec information-redundancy feature is added to train the SVM model. External resources: the GoogleNews corpus and the Reuters corpus are used to train the model; they are not timely with respect to the queries.

PolyURunB2

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PolyURunB2
  • Participant: COMP2016
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/11/2016
  • Task: b
  • MD5: ee5d3686bb9cd1c1d43193f75ea94736
  • Run description: Most salient features: the same features train the SVM model, but without the word2vec information-redundancy feature. External resource: the Reuters corpus.

PolyURunB3

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PolyURunB3
  • Participant: COMP2016
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/11/2016
  • Task: b
  • MD5: 2a2d60ac6df5f42680c71672ff5ffaca
  • Run description: Most salient features: no ML model, just a very naive method. External resource: the Reuters corpus.

PRNABaseline-34

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: PRNABaseline-34
  • Participant: prna
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

PRNATaskA2-35

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: PRNATaskA2-35
  • Participant: prna
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

PRNATaskA3-36

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: PRNATaskA3-36
  • Participant: prna
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

PRNATaskB1

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PRNATaskB1
  • Participant: prna
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 760f7460c6e9a9de9c029589561b139c
  • Run description: This run uses text from the title, description, and narrative as search features, with a dynamic push strategy.

PRNATaskB2

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PRNATaskB2
  • Participant: prna
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 109973e451f6f8657e29213626731fdd
  • Run description: This run uses text from the title, description, and narrative as search features, with different weights on different types of search terms.

PRNATaskB3

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: PRNATaskB3
  • Participant: prna
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 0107b1cc2b494876af3a6f8b26cad538
  • Run description: This run combines the two algorithms from run 1 and run 2 with our Scenario A algorithm. It uses text from the topic title, description, and narrative as search features, with different weights on features and push thresholds.

QUBaseline-37

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: QUBaseline-37
  • Participant: QU
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

QUDR8

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: QUDR8
  • Participant: QU
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 5db59fdfc1bddf33562103edda5b450f
  • Run description: This run uses the title as a query and runs it at the end of each day against the index of tweets. Before the evaluation period, we indexed five days of tweets to obtain initial statistics. It uses language modeling with Dirichlet smoothing to compute the similarity between tweets and the query, with a relevance threshold of 8 and a novelty threshold of 0.6.
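
The Dirichlet-smoothed query likelihood used here can be sketched as follows; `mu` and the collection probabilities are illustrative assumptions, not the run's tuned values.

```python
import math

def dirichlet_loglik(query, tweet_tokens, coll_prob, mu=100.0):
    """Query log-likelihood under a Dirichlet-smoothed tweet language
    model: p(t|tweet) = (tf + mu * p(t|C)) / (|tweet| + mu).
    `coll_prob` maps term -> collection probability; higher = better."""
    n = len(tweet_tokens)
    tf = {}
    for t in tweet_tokens:
        tf[t] = tf.get(t, 0) + 1
    return sum(math.log((tf.get(t, 0) + mu * coll_prob.get(t, 1e-6)) / (n + mu))
               for t in query)
```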

QUExpP-38

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: QUExpP-38
  • Participant: QU
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

QUExpT-39

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: QUExpT-39
  • Participant: QU
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

QUJM16

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: QUJM16
  • Participant: QU
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 1f105546efad4383040285d947a90baa
  • Run description: This run uses the title as a query and runs it at the end of each day against the index of tweets. Before the evaluation period, we indexed five days of tweets to obtain initial statistics. It uses language modeling with Jelinek-Mercer smoothing to compute the similarity between tweets and the query, with a relevance threshold of 16 and a novelty threshold of 0.6.
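
The Jelinek-Mercer variant of the scoring can be sketched as below: a linear interpolation of the tweet's maximum-likelihood model with the collection model. The mixing weight `lam` and the collection probabilities are illustrative assumptions.

```python
import math

def jm_loglik(query, tweet_tokens, coll_prob, lam=0.4):
    """Query log-likelihood under a Jelinek-Mercer-smoothed tweet model:
    p(t|tweet) = lam * tf/|tweet| + (1 - lam) * p(t|C)."""
    n = len(tweet_tokens) or 1
    tf = {}
    for t in tweet_tokens:
        tf[t] = tf.get(t, 0) + 1
    return sum(math.log(lam * tf.get(t, 0) / n
                        + (1 - lam) * coll_prob.get(t, 1e-6))
               for t in query)
```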

QUJMDR24

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: QUJMDR24
  • Participant: QU
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: f7f76242b246047cd6e5d4c5d378ff61
  • Run description: This run uses the title as a query and runs it at the end of each day against the index of tweets. Before the evaluation period, we indexed five days of tweets to obtain initial statistics. It uses language modeling with Jelinek-Mercer and Dirichlet smoothing to compute the similarity between tweets and the query, with a relevance threshold of 24 and a novelty threshold of 0.6.

QUT_RTS-40

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: QUT_RTS-40
  • Participant: QUT_RTS
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

ru32-33

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: ru32-33
  • Participant: PKUICST
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

run1-11

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: run1-11
  • Participant: COMP2016
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Task: a

run1-31

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: run1-31
  • Participant: PKUICST
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

run2-12

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: run2-12
  • Participant: COMP2016
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Task: a

run2-32

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: run2-32
  • Participant: PKUICST
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

run3-13

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: run3-13
  • Participant: COMP2016
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Task: a

runA_daiict_irlab-23

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: runA_daiict_irlab-23
  • Participant: IRLAB_DA-IICT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Task: a

RunBIch

Participants | Input | Summary | Appendix

  • Run ID: RunBIch
  • Participant: IRIT
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: c5443b812aae9b3aa0a7adac56c70a82
  • Run description: In this run, we used some basic features to compute the relevance of tweets with respect to the given profiles.

udelRunBM25-43

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: udelRunBM25-43
  • Participant: udel
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Task: a

udelRunBM25B

Participants | Input | Summary | Appendix

  • Run ID: udelRunBM25B
  • Participant: udel
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Task: b
  • MD5: a40b8b3960f6ff831822eb4960bfbc1a
  • Run description: Trains a classifier using top Google documents and tweets from Twitter, and uses BM25 similarity.

udelRunTFIDF-44

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: udelRunTFIDF-44
  • Participant: udel
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

udelRunTFIDFB

Participants | Input | Summary | Appendix

  • Run ID: udelRunTFIDFB
  • Participant: udel
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: c5eef88586c73ec5c59dfc0e6e3833f5
  • Run description: Trains a classifier using top Google documents and tweets from Twitter.

udelRunTFIDFQ-45

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: udelRunTFIDFQ-45
  • Participant: udel
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Task: a

udelRunTFIDFQB

Participants | Input | Summary | Appendix

  • Run ID: udelRunTFIDFQB
  • Participant: udel
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Task: b
  • MD5: 463ad70f7cdb00d0f3213bbdefeaa7ec
  • Run description: Trains a classifier using top Google documents, tweets from Twitter, and relevant tweets from the Microblog 2015 qrels file.

UDInfo_TlmN

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UDInfo_TlmN
  • Participant: udel_fang
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: e6ea105b85b38798212a66269d379832
  • Run description: We use search-result snippets from Bing, Google, and Yahoo to expand the original queries. Query prediction techniques such as clarity, as well as estimated relevant language model differences, were used to predict the score threshold for each day. A simple redundancy check uses the set difference between tweets.
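
The set-difference redundancy check can be sketched as below; the `max_new` cutoff is a hypothetical setting, since the run description does not state one.

```python
def is_redundant(candidate, pushed, max_new=2):
    """Redundancy via set difference: a candidate tweet is redundant if
    it contributes fewer than `max_new` unseen terms relative to some
    already-pushed tweet (`max_new` is an assumed, illustrative cutoff)."""
    cand = set(candidate)
    return any(len(cand - set(p)) < max_new for p in pushed)
```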

UDInfo_TlmNlm

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UDInfo_TlmNlm
  • Participant: udel_fang
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: d4203b144b495aae8ec9b4c16e7684bb
  • Run description: We use search-result snippets from Bing, Google, and Yahoo to expand the original queries. Query prediction techniques such as clarity, as well as estimated relevant language model differences, were used to predict the score threshold and to determine the redundancy threshold for each day.

UDInfo_TN

Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UDInfo_TN
  • Participant: udel_fang
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 1ac4f6b03fb26a2dcebbdb69d428a673
  • Run description: We use search-result snippets from Bing, Google, and Yahoo to expand the original queries. Query prediction techniques such as clarity were used to predict the score threshold for each day.

UDInfoDFP-47

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: UDInfoDFP-47
  • Participant: udel_fang
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

UDInfoSFP-46

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: UDInfoSFP-46
  • Participant: udel_fang
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

UDInfoSPP-48

Participants | Proceedings | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: UDInfoSPP-48
  • Participant: udel_fang
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

UmdHcilBaseline-49

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: UmdHcilBaseline-49
  • Participant: umd_hcil
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

WaterlooBaseline-50

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: WaterlooBaseline-50
  • Participant: WaterlooClarke
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

WaterlooBaseline-51

Participants | Input | Summary (Batch) | Summary (Mobile) | Appendix

  • Run ID: WaterlooBaseline-51
  • Participant: WaterlooLin
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/13/2016
  • Type: automatic
  • Task: a

YoGoslingBSL

Participants | Input | Summary | Appendix

  • Run ID: YoGoslingBSL
  • Participant: WaterlooLin
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: de0ac209a6e243498c3df40c7c8a50d8
  • Run description: This run served as the track-wide baseline; it searches the index in a timely fashion using only title-term match scoring.
  • Code: https://github.com/YoGosling/Anserini

YoGoslingLMGTFY

Participants | Input | Summary | Appendix

  • Run ID: YoGoslingLMGTFY
  • Participant: WaterlooLin
  • Track: Real-time Summarization
  • Year: 2016
  • Submission: 8/12/2016
  • Type: automatic
  • Task: b
  • MD5: 9e9e29ade5bbcecc0938ac938409c9c7
  • Run description: Our method, based on the track-wide baseline YoGosling, additionally queries the Google Custom Search API at the beginning of every day, using the title terms. We compute KL-divergence statistics to generate a list of "relevant" phrases and perform phrase search with a slop distance tolerance.
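
The slop-tolerant phrase match at the end of that pipeline can be sketched as a plain-Python stand-in for the index's phrase query (the real run issues the query against its search index rather than scanning token lists):

```python
def phrase_match(tokens, phrase, slop=2):
    """True if the words of `phrase` appear in order within `tokens`
    with at most `slop` extra tokens interleaved between them."""
    words = phrase.split()
    for start in range(len(tokens)):
        if tokens[start] != words[0]:
            continue
        pos, gaps, ok = start, 0, True
        for w in words[1:]:
            try:
                nxt = tokens.index(w, pos + 1)  # next in-order occurrence
            except ValueError:
                ok = False
                break
            gaps += nxt - pos - 1  # tokens skipped over
            pos = nxt
        if ok and gaps <= slop:
            return True
    return False
```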