Runs - Enterprise 2007

AUTORUN

Results | Participants | Input | Summary | Appendix

  • Run ID: AUTORUN
  • Participant: iiit-hyderbad
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Automatic query-based search for experts in a specific domain

CSIROdsQfb

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: CSIROdsQfb
  • Participant: csiro-ict.bailey
  • Track: Enterprise
  • Year: 2007
  • Submission: 7/31/2007
  • Type: feedback
  • Task: document
  • Run description: This script takes the query field from the Topic and the pages, and generates the combined result from the clustering and the genre analysis results. The pages are used to improve the ranking. The module used is pfcm_clustering_algo_with_genre_feedback.pm

CSIROdsQnarr

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: CSIROdsQnarr
  • Participant: csiro-ict.bailey
  • Track: Enterprise
  • Year: 2007
  • Submission: 7/31/2007
  • Type: automatic
  • Task: document
  • Run description: This script takes the query field from the Topic and the narrative, and generates the combined result from the clustering and the genre analysis results. The stop words are removed both from the query and the narrative. The module used is pfcm_clustering_algo_with_genre_nrt.pm

CSIROdsQonly

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: CSIROdsQonly
  • Participant: csiro-ict.bailey
  • Track: Enterprise
  • Year: 2007
  • Submission: 7/31/2007
  • Type: automatic
  • Task: document
  • Run description: This script takes the simple query field from the Topic and generates the combined result from the clustering and the genre analysis results. The module used is pfcm_clustering_algo_with_genre.pm, and the results from Algorithm B were chosen.

CSIROdsQsimp

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: CSIROdsQsimp
  • Participant: csiro-ict.bailey
  • Track: Enterprise
  • Year: 2007
  • Submission: 7/31/2007
  • Type: feedback
  • Task: document
  • Run description: This script takes the query field from the Topic and the pages, and generates the combined result from the clustering and the genre analysis results. The pages are used to improve the ranking. The module used is pfcm_clustering_algo_with_genre_feedback.pm. It tries to prioritise similar genres of pages to those already provided as important.

CSIROesQnarr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CSIROesQnarr
  • Participant: csiro-ict.bailey
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: automatic
  • Task: expert
  • Run description: This script takes the query +query + narrative from the Topic and generates the combined result from the 4 algorithms designed for expert search. The four modules used are 1. expert_anchor_term.pm 2. profiles_experts.pm 3. expert_documents_result.pm 4. expert_exactmatch.pm

CSIROesQonly

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CSIROesQonly
  • Participant: csiro-ict.bailey
  • Track: Enterprise
  • Year: 2007
  • Submission: 7/31/2007
  • Type: automatic
  • Task: expert
  • Run description: This script takes the simple query field from the Topic and generates the combined result from the 4 algorithms designed for expert search. The four modules used are 1. expert_anchor_term.pm 2. profiles_experts.pm 3. expert_documents_result.pm 4. expert_exactmatch.pm

CSIROesQpage

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CSIROesQpage
  • Participant: csiro-ict.bailey
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: feedback
  • Task: expert
  • Run description: This script takes the query from the topic and the relevant pages given in the topic, finds experts in those pages, maps them onto the employee surrogate pages, and ranks the experts found there. The module used is expert_documents_result_with_pages.pm

CSIROesQprof

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: CSIROesQprof
  • Participant: csiro-ict.bailey
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: automatic
  • Task: expert
  • Run description: This script takes the query +query expert + narrative and runs it over the profiles index. For example, if the topic query is "genetic modification" and the topic narrative is "what is the current research in CSIRO about genetics", the query to the profiles index is "genetic modification +genetic +modification expert what current research CSIRO about genetics". Stop words are removed from both the query and the narrative. The module used is profiles_experts_narr.pm

DocRun01

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: DocRun01
  • Participant: cas-liu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: This run uses only the query field; a simple BM25 model is implemented.
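
The "simple BM25 model" this run mentions can be sketched as below. This is a minimal illustration, not the participants' actual implementation: the toy corpus and the parameter values k1 = 1.2, b = 0.75 are assumptions.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each tokenized document against the query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                       # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        dl = len(d)
        s = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue                 # term absent from the collection
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * dl / avgdl))
        scores.append(s)
    return scores

# Toy tokenized documents (assumed data).
docs = [["solar", "energy", "research"],
        ["solar", "panel", "cost"],
        ["wheat", "genetics"]]
print(bm25_scores(["solar", "energy"], docs))
```

A document matching both query terms scores above one matching a single term, and a document matching none scores zero.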

DocRun02

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: DocRun02
  • Participant: cas-liu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: Both the query field and the narrative field are used. An extended BM25 model is implemented; URL, anchor text, and title are assigned different weights.

DocRun03

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: DocRun03
  • Participant: cas-liu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: PageRank is used to rank the documents.

DocRun04

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: DocRun04
  • Participant: cas-liu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: This is the fourth run.

DUTDST1

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: DUTDST1
  • Participant: dalianu.yang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: manual
  • Task: document
  • Run description: Indri index, Porter stemmer, all documents cleaned, BM25 weighting

DUTDST2

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: DUTDST2
  • Participant: dalianu.yang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: Indri index, Porter stemmer, all documents cleaned, BM25 weighting

DUTDST3

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: DUTDST3
  • Participant: dalianu.yang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: feedback
  • Task: document
  • Run description: Indri index, Porter stemmer, all documents cleaned, BM25 weighting

DUTDST4

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: DUTDST4
  • Participant: dalianu.yang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: Indri index, Porter stemmer, all documents cleaned, BM25 weighting

DUTEXP1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: DUTEXP1
  • Participant: dalianu.yang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/9/2007
  • Type: automatic
  • Task: expert
  • Run description: First, we build a profile for every candidate; an Indri index is then built over the profiles. This is a query-only automatic run: BM25 is used to rank the experts, and Indri retrieves support documents.

DUTEXP2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: DUTEXP2
  • Participant: dalianu.yang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/9/2007
  • Type: manual
  • Task: expert
  • Run description: First, we build a profile for every candidate; an Indri index is then built over the profiles. Queries are formed from the "query" and "narr" fields in the topics; BM25 is used to rank the experts, and Indri retrieves support documents.

DUTEXP3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: DUTEXP3
  • Participant: dalianu.yang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/9/2007
  • Type: manual
  • Task: expert
  • Run description: First, we build a profile for every candidate; an Indri index is then built over the profiles. Queries are formed from the query, narr, and page fields in the topics; BM25 is used to rank the experts, and Indri retrieves support documents.

DUTEXP4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: DUTEXP4
  • Participant: dalianu.yang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/9/2007
  • Type: manual
  • Task: expert
  • Run description: First, we build a profile for every candidate; an Indri index is then built over the profiles. Queries are formed from the query and narr fields in the topics; Indri is used to retrieve support documents and then rank the experts.

ExpertRun01

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ExpertRun01
  • Participant: cas-liu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Only the query field is used.

ExpertRun02

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ExpertRun02
  • Participant: cas-liu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: The query and narrative fields are used.

ExpertRun03

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ExpertRun03
  • Participant: cas-liu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Based on person profile.

ExpertRun04

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ExpertRun04
  • Participant: cas-liu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Based on PageRank.

FDUBase

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: FDUBase
  • Participant: fudanu.niu
  • Track: Enterprise
  • Year: 2007
  • Submission: 7/31/2007
  • Type: automatic
  • Task: document
  • Run description: We use Lemur as the search engine to index the collection and run the topics. We boost the score of a document by analyzing the pages that link to it.

FDUExpan

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: FDUExpan
  • Participant: fudanu.niu
  • Track: Enterprise
  • Year: 2007
  • Submission: 7/31/2007
  • Type: automatic
  • Task: document
  • Run description: By analyzing the narrative field, we automatically choose candidate expansion terms for each topic and calculate their frequencies in the first five documents of the Lemur search result. The high-frequency terms are used as query terms in a second search. We then boost the score of a document by analyzing the pages that link to it.

FDUFeedH

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: FDUFeedH
  • Participant: fudanu.niu
  • Track: Enterprise
  • Year: 2007
  • Submission: 7/31/2007
  • Type: feedback
  • Task: document
  • Run description: We use the HITS algorithm to judge the quality of each document and treat documents of the same quality as one category. We use FDUBase as the base result: if a document falls in the same category as the documents in the page field, we boost its score.

FDUFeedT

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: FDUFeedT
  • Participant: fudanu.niu
  • Track: Enterprise
  • Year: 2007
  • Submission: 7/31/2007
  • Type: feedback
  • Task: document
  • Run description: We use FDUBase as the base result. We then analyze the structure of each document; if a document's structure is similar to that of the documents in the page field, we boost its score.

FDUGroup

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: FDUGroup
  • Participant: fudanu.niu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/7/2007
  • Type: automatic
  • Task: expert
  • Run description: We detect email addresses and the associated full names automatically from the corpus. We also filter the candidate list, retaining those who are probably contacts on some projects. We divide the corpus into several groups according to document structure and give the different kinds of documents different weights when calculating each candidate's score for the topic.

FDUn3e7

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: FDUn3e7
  • Participant: fudanu.niu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/7/2007
  • Type: automatic
  • Task: expert
  • Run description: We detect email addresses and the associated full names automatically from the corpus. We also filter the candidate list, retaining those who are probably contacts on some projects. We calculate two scores for each candidate, one for the candidate's name and another for his email, and combine them with weights of 30% and 70%.

FDUn5e5

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: FDUn5e5
  • Participant: fudanu.niu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/7/2007
  • Type: automatic
  • Task: expert
  • Run description: We detect email addresses and the associated full names automatically from the corpus. We also filter the candidate list, retaining those who are probably contacts on some projects. We calculate two scores for each candidate, one for the candidate's name and another for his email, and combine them with weights of 50% and 50%.

FDUn7e3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: FDUn7e3
  • Participant: fudanu.niu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/7/2007
  • Type: automatic
  • Task: expert
  • Run description: We detect email addresses and the associated full names automatically from the corpus. We also filter the candidate list, retaining those who are probably contacts on some projects. We calculate two scores for each candidate, one for the candidate's name and another for his email, and combine them with weights of 70% and 30%.
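
The weighted combination described in the three FDUn* runs (name score and email score mixed 30/70, 50/50, or 70/30) amounts to a simple linear interpolation, sketched below. The candidate names and score values are illustrative assumptions, not data from the runs.

```python
def combine_scores(name_score, email_score, name_weight):
    """Linearly combine a candidate's name score and email score."""
    return name_weight * name_score + (1 - name_weight) * email_score

# Assumed (name_score, email_score) pairs for two toy candidates.
candidates = {"alice": (0.8, 0.2), "bob": (0.3, 0.9)}

for w in (0.3, 0.5, 0.7):  # FDUn3e7, FDUn5e5, FDUn7e3
    ranking = sorted(candidates,
                     key=lambda c: combine_scores(*candidates[c], w),
                     reverse=True)
    print(w, ranking)
```

Shifting the weight toward the name score can reorder candidates whose evidence is split between name mentions and email mentions.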

feedbackrun

Results | Participants | Input | Summary | Appendix

  • Run ID: feedbackrun
  • Participant: twenteu.serdyukov
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: feedback
  • Task: expert
  • Run description: Expert finding is modeled as a probabilistic random walk on a graph of candidates and documents. The probability of relevance of judged documents is increased and propagated through the graph.

insu1

Results | Participants | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: insu1
  • Participant: st.petersburg.nemirovsky
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: automatic
  • Task: document
  • Run description: Documents are represented as probability distributions over a set of words based on tf-profiles and are clustered. Both the query and narrative fields are used. Documents considered relevant are sorted by their distance to the center of the cluster they belong to. No external resources were used.
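
Sorting relevant documents by distance to their cluster center, as the insu runs describe, can be sketched like this. The word-distribution vectors, cluster assignments, and centers below are toy assumptions, not the st.petersburg.nemirovsky system.

```python
import numpy as np

def rank_by_cluster_center(vectors, labels, centers):
    """Order document indices by Euclidean distance to each document's
    own cluster center (closest first)."""
    dists = [np.linalg.norm(vectors[i] - centers[labels[i]])
             for i in range(len(vectors))]
    return sorted(range(len(vectors)), key=lambda i: dists[i])

# Toy word-distribution vectors for four documents (assumed data).
vectors = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.4, 0.6]])
labels = np.array([0, 0, 1, 1])                 # cluster assignments
centers = np.array([[0.85, 0.15], [0.25, 0.75]])
print(rank_by_cluster_center(vectors, labels, centers))
```

Documents lying close to the "core" of their cluster are ranked ahead of outliers within the same cluster.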

insu2

Results | Participants | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: insu2
  • Participant: st.petersburg.nemirovsky
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: automatic
  • Task: document
  • Run description: Documents are represented as probability distributions over a set of words based on Markov chain theory and are clustered. Both the query and narrative fields are used. Documents considered relevant are sorted by their distance to the center of the cluster they belong to. No external resources were used.

insu3

Results | Participants | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: insu3
  • Participant: st.petersburg.nemirovsky
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: automatic
  • Task: document
  • Run description: Documents are represented as probability distributions over a set of words based on tf-profiles and are clustered. Only the query field is used. Documents considered relevant are sorted by their distance to the center of the cluster they belong to. No external resources were used.

insu4

Results | Participants | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: insu4
  • Participant: st.petersburg.nemirovsky
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: automatic
  • Task: document
  • Run description: Documents are represented as probability distributions over a set of words based on tf-profiles and are clustered. Both the query and narrative fields are used. Documents considered relevant are sorted by their PageRank values. No external resources were used.

MANUALRUN

Results | Participants | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: MANUALRUN
  • Participant: iiit-hyderbad
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/6/2007
  • Type: manual
  • Task: document
  • Run description: Used the query and description fields to form queries for the manual run

NARRAUTORUN

Results | Participants | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: NARRAUTORUN
  • Participant: iiit-hyderbad
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/6/2007
  • Type: automatic
  • Task: document
  • Run description: Used the query and narrative fields for the automatic query-and-narrative run

ouExNarr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ouExNarr
  • Participant: openu.zhu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/9/2007
  • Type: manual
  • Task: expert
  • Run description: Multiple windows, query topic, manual tweaking of the narrative field, anchor texts plus document contents, in-links, and URL length.

ouExNarrAu

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ouExNarrAu
  • Participant: openu.zhu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Multiple windows, query topic plus narrative, stop-word removal from the narrative field, partial matching of the topic and narrative fields, anchor texts plus document contents, in-links, and URL length.

ouExNarrRF

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ouExNarrRF
  • Participant: openu.zhu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/9/2007
  • Type: manual
  • Task: expert
  • Run description: Multiple windows, query topic, manual tweaking of the narrative field, relevance feedback, anchor texts plus document contents, in-links, and URL length.

ouExTitle

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: ouExTitle
  • Participant: openu.zhu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/9/2007
  • Type: automatic
  • Task: expert
  • Run description: Multiple windows, query topic only, anchor texts plus document contents, in-links, and URL length.

ouNarr

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: ouNarr
  • Participant: openu.zhu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: manual
  • Task: document
  • Run description: Manual tweaking of the query from the narrative, anchor texts plus document contents, in-links, and URL length.

ouNarrAuto

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: ouNarrAuto
  • Participant: openu.zhu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: Stop-word removal from the narrative, narrative plus query topic for automatic search, anchor texts plus document contents, in-links, and URL length.

ouNarrRF

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: ouNarrRF
  • Participant: openu.zhu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: manual
  • Task: document
  • Run description: Manual tweaking of the query from the narrative, relevance feedback, anchor texts plus document contents, in-links, and URL length.

ouTopicOnly

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: ouTopicOnly
  • Participant: openu.zhu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: Query topic only, anchor texts plus document contents, in-links, and URL length.

PRISDF

Results | Participants | Input | Summary | Appendix

  • Run ID: PRISDF
  • Participant: beijingu-posts-tele.weiran
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Use expert document frequency (df) to improve team-leader finding.

PRISEM

Results | Participants | Input | Summary | Appendix

  • Run ID: PRISEM
  • Participant: beijingu-posts-tele.weiran
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Use email counts to identify team leaders.

PRISLM

Results | Participants | Input | Summary | Appendix

  • Run ID: PRISLM
  • Participant: beijingu-posts-tele.weiran
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Using a language model and window-based expert profiles.

PRISRR

Results | Participants | Input | Summary | Appendix

  • Run ID: PRISRR
  • Participant: beijingu-posts-tele.weiran
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Use a re-ranking method.

qorw

Results | Participants | Input | Summary | Appendix

  • Run ID: qorw
  • Participant: twenteu.serdyukov
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Expert finding is modeled as a probabilistic random walk on a graph of candidates and documents.

qorwkstep

Results | Participants | Input | Summary | Appendix

  • Run ID: qorwkstep
  • Participant: twenteu.serdyukov
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Expert finding is modeled as a probabilistic random walk on a graph of candidates and documents. The walk is restricted to K steps, with K = 5.
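
A K-step random walk over a bipartite candidate-document graph might look like the toy sketch below. The adjacency matrix, the starting distribution, and the uniform transition probabilities are assumptions for illustration, not the actual twenteu.serdyukov model.

```python
import numpy as np

def k_step_walk(P, start, k=5):
    """Propagate a probability distribution k steps through the
    column-stochastic transition matrix P."""
    p = start.copy()
    for _ in range(k):
        p = P @ p
    return p

# Nodes 0-1: documents, nodes 2-3: candidates; an edge means the
# candidate appears in the document (toy adjacency, an assumption).
A = np.array([[0, 0, 1, 1],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [1, 1, 0, 0]], dtype=float)
P = A / A.sum(axis=0)                    # column-stochastic transitions
start = np.array([0.7, 0.3, 0.0, 0.0])   # initial document relevance
p = k_step_walk(P, start, k=5)
print(p)
```

Because the graph is bipartite, after an odd number of steps all probability mass sits on the candidate nodes, which can then be ranked by p.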

qorwnewlinks

Results | Participants | Input | Summary | Appendix

  • Run ID: qorwnewlinks
  • Participant: twenteu.serdyukov
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Expert finding is modeled as a probabilistic random walk on a graph of candidates and documents. Here, we use extracted links among documents (hyperlinks) and among candidates (through the subdomains in their email addresses).

QRYBASICRUN

Results | Participants | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: QRYBASICRUN
  • Participant: iiit-hyderbad
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: Basic engine, very restrictive, precision focus, low recall. Uses only query.

QUERYRUN

Results | Participants | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: QUERYRUN
  • Participant: iiit-hyderbad
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: Basic engine, somewhat liberal, precision focus with higher recall. Uses only query.

RmitQ

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: RmitQ
  • Participant: rmitu.scholer
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: In this base run, we use words from the topic field as the query, index the contents of the original documents, and rank documents by KL divergence.
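
Ranking by KL divergence between a query language model and smoothed document language models can be sketched as below. This is a minimal illustration of the general technique; the toy documents and the Dirichlet smoothing parameter mu are assumptions, not the rmitu.scholer configuration.

```python
import math
from collections import Counter

def kl_rank(query, docs, mu=10.0):
    """Score documents by -KL(query model || smoothed doc model);
    lower divergence means a better match, so scores are negated."""
    coll = Counter()                     # collection language model counts
    for d in docs:
        coll.update(d)
    csize = sum(coll.values())
    q = Counter(query)
    qlen = len(query)
    scores = []
    for d in docs:
        tf = Counter(d)
        dl = len(d)
        kl = 0.0
        for t, qtf in q.items():
            p_q = qtf / qlen
            # Dirichlet-smoothed document model (never zero for
            # terms that occur somewhere in the collection).
            p_d = (tf[t] + mu * coll[t] / csize) / (dl + mu)
            kl += p_q * math.log(p_q / p_d)
        scores.append(-kl)
    return scores

docs = [["solar", "energy", "solar"], ["wheat", "farming", "cost"]]
print(kl_rank(["solar", "energy"], docs))
```

The document containing the query terms diverges less from the query model and therefore receives the higher (less negative) score.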

RmitQAnc

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: RmitQAnc
  • Participant: rmitu.scholer
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: In this run, the content of an indexed document is extended to include its anchor texts. As in our base run, we still use words from the topic field as the query and rank documents by KL divergence.

RmitQAncIndg

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: RmitQAncIndg
  • Participant: rmitu.scholer
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: In this run, the content of an indexed document is extended to include its anchor text. The weight of a document is a combination of content weight (KL divergence) and in-degree.

RmitQFir

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: RmitQFir
  • Participant: rmitu.scholer
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: In this run, we divide the collection into sub-collections. All documents in a sub-collection have the same domain name. Each sub-collection is indexed separately and documents are retrieved and ranked in a federated manner.

SJTUEntDS01

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: SJTUEntDS01
  • Participant: sjtu-apex-ent
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: automatic
  • Task: document
  • Run description: Query: phrase query, proximity query, ordinary query, query expansion from the narrative field, query of terms with low document frequency. Ranking: BM25 on title, anchor, H1, H2, keywords, and extracted body; positional model added; HostRank used for re-ranking.

SJTUEntDS02

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: SJTUEntDS02
  • Participant: sjtu-apex-ent
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: automatic
  • Task: document
  • Run description: Basically the same as SJTUEntDS01, but with different weights for the fields in the combination. Reshuffled phrase queries adopted.

SJTUEntDS03

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: SJTUEntDS03
  • Participant: sjtu-apex-ent
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: Basically the same as SJTUEntDS01. Document frequency is used to boost each query word.

SJTUEntDS04

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: SJTUEntDS04
  • Participant: sjtu-apex-ent
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: feedback
  • Task: document
  • Run description: Basically the same as SJTUEntDS01. Words from the given key pages are used as candidates for query expansion.

SJTUEntES01

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: SJTUEntES01
  • Participant: sjtu-apex-ent
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/8/2007
  • Type: automatic
  • Task: expert
  • Run description: Email appearances are parsed for expert names; names consisting of a single word, and host names, are filtered out. Queries: phrase query, bigram query, proximity query, single-word query, stemmed query, expanded query. Models: window-based model (varying sizes), section model, title-author model, body-author model, tree model, reversed tree model. Add-in features: position refinement, bigram query refinement, DOM-structure refinement. Name mask: full-name match, email match, potential-name match.

SJTUEntES02

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: SJTUEntES02
  • Participant: sjtu-apex-ent
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/9/2007
  • Type: automatic
  • Task: expert
  • Run description: Basically the same as SJTUEntES01, with topic-sensitive Expert Rank added.

SJTUEntES03

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: SJTUEntES03
  • Participant: sjtu-apex-ent
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/9/2007
  • Type: automatic
  • Task: expert
  • Run description: Basically the same as SJTUEntES01, with homepage refinement added.

SJTUEntES04

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: SJTUEntES04
  • Participant: sjtu-apex-ent
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Basically the same as SJTUEntES01, with topic-sensitive Expert Rank and homepage refinement added.

THUDSANCHOR

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: THUDSANCHOR
  • Participant: tsinghuau.zhang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: automatic
  • Task: document
  • Run description: Instead of the full text, we use only the inlink anchor text of web pages as the collection for retrieval.

THUDSFULLSR

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: THUDSFULLSR
  • Participant: tsinghuau.zhang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: automatic
  • Task: document
  • Run description: We retrieve documents from the full-text collection in the first stage, then apply link analysis among the retrieved documents.

THUDSSEL

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: THUDSSEL
  • Participant: tsinghuau.zhang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: automatic
  • Task: document
  • Run description: Instead of the full text, we used only documents with more than 100 outlinks as the collection. Within each document, we also increase the weight of the inlink anchor text.

THUDSSELSR

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: THUDSSELSR
  • Participant: tsinghuau.zhang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: automatic
  • Task: document
  • Run description: Instead of the full text, we used only documents with more than 100 outlinks as the collection. Within each document, we also increase the weight of the inlink anchor text. In addition, we apply link analysis among the retrieved documents.

THUIRMPDD2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRMPDD2
  • Participant: tsinghuau.zhang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/8/2007
  • Type: automatic
  • Task: expert
  • Run description: The result list is a linear combination of two kinds of PDD (Person Description Documents): the original PDD and a precisely extracted PDD.

THUIRMPDD4

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRMPDD4
  • Participant: tsinghuau.zhang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/8/2007
  • Type: automatic
  • Task: expert
  • Run description: The result list is a linear combination of four kinds of PDD (Person Description Documents): the original PDD, a precisely extracted PDD, a PDD based only on anchor text, and a PDD built from documents with more than 100 outlinks.
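
A linear combination of ranked lists like the one described can be sketched as follows. This is a minimal illustration, not the participants' actual code: the weights, the per-run max-score normalization, and the function names are all assumptions.

```python
def combine_runs(runs, weights):
    """Linearly combine per-expert scores from several ranked lists.

    runs: list of dicts mapping expert_id -> score (e.g., one dict per PDD variant)
    weights: list of floats, one weight per run
    Returns (expert_id, combined_score) pairs sorted best-first.
    """
    combined = {}
    for run, w in zip(runs, weights):
        # Normalize each run's scores to [0, 1] so runs on different
        # scales contribute comparably before weighting.
        max_score = max(run.values()) if run else 1.0
        for expert, score in run.items():
            combined[expert] = combined.get(expert, 0.0) + w * score / max_score
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)
```

An expert appearing in several runs accumulates weighted evidence from each, which is the usual motivation for this kind of fusion.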

THUIRPDD2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRPDD2
  • Participant: tsinghuau.zhang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/8/2007
  • Type: automatic
  • Task: expert
  • Run description: The result list is generated from PDDs (Person Description Documents).

THUIRPDD2C40

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: THUIRPDD2C40
  • Participant: tsinghuau.zhang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/8/2007
  • Type: automatic
  • Task: expert
  • Run description: The result list is generated from PDDs with a content window of 40 characters.

UALR07Ent1

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: UALR07Ent1
  • Participant: uarkansas-littlerock.bayrak
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: automatic
  • Task: document
  • Run description: Our title-only baseline run with pseudo feedback from the top 5 result documents

UALR07Ent2

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: UALR07Ent2
  • Participant: uarkansas-littlerock.bayrak
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: feedback
  • Task: document
  • Run description: We used MMRSummApp, which is bundled with the Lemur toolkit, to extract summaries from the authoritative pages for each query. Next, we retrieve results for these summary keywords (excluding common words) and rerank UALR07Ent1 to give higher weight to results matching the summary keywords.

UALR07Ent3

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: UALR07Ent3
  • Participant: uarkansas-littlerock.bayrak
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: feedback
  • Task: document
  • Run description: We use a 'word difference' approach: compare all words from the authoritative pages with the words from the top 5 search results for each query. The question is: what is unique in the authoritative pages that is not present in the top search results? The net difference yields unique words (excluding common words), which we use to retrieve results. We rerank UALR07Ent1 using these results.
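
The word-difference step above amounts to a set difference over tokenized text. A minimal sketch, assuming simple whitespace tokenization and a supplied stopword list (both assumptions; the run's actual tokenizer is not described):

```python
def unique_keywords(authoritative_texts, top_result_texts, common_words):
    """Words appearing in authoritative pages but absent from the
    top search results, excluding common (stop) words."""
    auth = {w.lower() for text in authoritative_texts for w in text.split()}
    top = {w.lower() for text in top_result_texts for w in text.split()}
    return auth - top - set(common_words)
```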

UALR07Ent4

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: UALR07Ent4
  • Participant: uarkansas-littlerock.bayrak
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/2/2007
  • Type: manual
  • Task: document
  • Run description: We looked at the authoritative pages that were extracted using the script. Next, we manually identified the keywords from each of these authoritative pages. We rerank UALR07Ent1 according to this manual keyword selection.

UALR07Exp1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UALR07Exp1
  • Participant: uarkansas-littlerock.bayrak
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: We used our document search run UALR07Ent3 to identify potential experts for each query

UALR07Exp2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UALR07Exp2
  • Participant: uarkansas-littlerock.bayrak
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: We used our document search run UALR07Ent3 to identify potential experts for each query, using only the available 1000 results per query

UALR07Exp3

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: UALR07Exp3
  • Participant: uarkansas-littlerock.bayrak
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/11/2007
  • Type: manual
  • Task: expert
  • Run description: Uses manually verified experts

uams07bfb

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: uams07bfb
  • Participant: uamsterdam.deRijke
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: The MixtureModel constructs the document model from five components (indices) and a background component. The components are title, headers, metadata, anchors, and body; the background component consists of all five components combined. The mixture for this run is based on tests on the TREC Web collection: title has a weight of 0.30, headers 0.10, metadata 0.05, anchors 0.40, and body 0.10. The background model has a weight of 0.05. As document prior we use the log of the number of inlinks per document. The query model is constructed using blind feedback on the top 10 documents of the original query; new weights are assigned to the new and original query terms and the query is issued again.

uams07bfbex

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: uams07bfbex
  • Participant: uamsterdam.deRijke
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: feedback
  • Task: document
  • Run description: The MixtureModel constructs the document model from five components (indices) and a background component. The components are title, headers, metadata, anchors, and body; the background component consists of all five components combined. The mixture for this run is based on tests on the TREC Web collection: title has a weight of 0.30, headers 0.10, metadata 0.05, anchors 0.40, and body 0.10. The background model has a weight of 0.05. As document prior we use the log of the number of inlinks per document. The query model is constructed using a combination of blind feedback on the top 10 documents and the example documents. New weights are assigned to the new and original query terms and the query is issued again.

uams07bl

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: uams07bl
  • Participant: uamsterdam.deRijke
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: The MixtureModel constructs the document model from five components (indices) and a background component. The components are title, headers, metadata, anchors, and body; the background component consists of all five components combined. This run assigns equal weights to all components (0.18 each) and 0.10 to the background model. No document priors are used. The query model weights each term by its number of occurrences in the query (e.g., a two-word query assigns a weight of 0.5 to each word).
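The component mixture described here (five field models plus a background model, with weights summing to 1.0) can be sketched as a query-likelihood scorer. This is an illustrative reconstruction under stated assumptions, not the group's implementation: field language models are given as precomputed term-probability dicts, and unseen terms get a tiny floor probability to keep the log defined.

```python
import math

def mixture_score(query_terms, doc_fields, background,
                  field_weight=0.18, bg_weight=0.10):
    """Log query likelihood under a mixture of field language models.

    doc_fields: dict field_name -> {term: P(term | that field of the doc)}
                (five fields at 0.18 each, plus 0.10 background, sum to 1.0)
    background: dict term -> P(term | whole collection)
    """
    score = 0.0
    for t in query_terms:
        # Background smoothing guarantees a nonzero probability per term.
        p = bg_weight * background.get(t, 1e-9)
        for field_lm in doc_fields.values():
            p += field_weight * field_lm.get(t, 0.0)
        score += math.log(p)
    return score
```

Documents whose title, anchor, or body models assign higher probability to the query terms receive a higher (less negative) log score.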

uams07exbl

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: uams07exbl
  • Participant: uamsterdam.deRijke
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Baseline run: document-centric language model with naive document-candidate associations

uams07exfr

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: uams07exfr
  • Participant: uamsterdam.deRijke
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Frequency-based document-candidate associations; mixture model for document retrieval

uams07exmm

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: uams07exmm
  • Participant: uamsterdam.deRijke
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Document-centric, using a mixture model for document retrieval

uams07expp

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: uams07expp
  • Participant: uamsterdam.deRijke
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Combination of the document-centric and candidate-centric LM approaches. Model 1 builds an LM from candidates' personal pages; model 2 is the same as in run uams07exmm. The two models are combined with equal weights.

uams07pr

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: uams07pr
  • Participant: uamsterdam.deRijke
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: The MixtureModel constructs the document model from five components (indices) and a background component. The components are title, headers, metadata, anchors, and body; the background component consists of all five components combined. The mixture for this run is based on tests on the TREC Web collection: title has a weight of 0.30, headers 0.10, metadata 0.05, anchors 0.40, and body 0.10. The background model has a weight of 0.05. As document prior we use the log of the number of inlinks per document. The query model weights each term by its number of occurrences in the query (e.g., a two-word query assigns a weight of 0.5 to each word).

uiowa07entD1

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: uiowa07entD1
  • Participant: uiowa.srinivasan
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/4/2007
  • Type: automatic
  • Task: document
  • Run description: High-recall Boolean search on the meta fields (title, subject, and keywords) and document content, plus retrieval feedback

uiowa07entD2

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: uiowa07entD2
  • Participant: uiowa.srinivasan
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/4/2007
  • Type: automatic
  • Task: document
  • Run description: High-recall Boolean search on the meta fields (title, subject, and keywords) and document content

uiowa07entD3

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: uiowa07entD3
  • Participant: uiowa.srinivasan
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/4/2007
  • Type: feedback
  • Task: document
  • Run description: High-precision search (phrase search) and relevance feedback

uiowa07entD4

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: uiowa07entD4
  • Participant: uiowa.srinivasan
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/4/2007
  • Type: feedback
  • Task: document
  • Run description: High-recall search (Boolean search) and relevance feedback

uiowa07entE1

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: uiowa07entE1
  • Participant: uiowa.srinivasan
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/13/2007
  • Type: feedback
  • Task: expert
  • Run description: First, we created profiles for topics using the top 25 documents (top 75 for topic CE-25) retrieved for expanded queries built from the original query terms and high-frequency metadata terms from relevant documents (within field). A profile consists of weighted stemmed terms; terms with higher weight better characterize the topic. Second, we identified all PERSON named entities in these documents that map to a csiro.au email address and created profiles for these experts. Third, we ranked experts for each topic by cosine similarity of the expert profile with the topic profile. Fourth, we ranked expert documents using the original topic query plus expansion terms from relevant documents as the query.
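
The third step, ranking experts by cosine similarity between weighted term profiles, can be sketched as follows. Profile construction and term weighting are omitted; the dict-based profile representation and function names are illustrative assumptions.

```python
import math

def cosine(profile_a, profile_b):
    """Cosine similarity between two sparse profiles ({term: weight})."""
    dot = sum(w * profile_b.get(t, 0.0) for t, w in profile_a.items())
    norm_a = math.sqrt(sum(w * w for w in profile_a.values()))
    norm_b = math.sqrt(sum(w * w for w in profile_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_experts(topic_profile, expert_profiles):
    """Rank experts by similarity of their profile to the topic profile."""
    return sorted(expert_profiles.items(),
                  key=lambda kv: cosine(topic_profile, kv[1]),
                  reverse=True)
```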

uiowa07entE2

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: uiowa07entE2
  • Participant: uiowa.srinivasan
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: feedback
  • Task: expert
  • Run description: Probabilistic Topic Modelling

uogEDSCLCDIS

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: uogEDSCLCDIS
  • Participant: glasgow.ounis
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: feedback
  • Task: document
  • Run description: Link structure evidence from feedback documents is used in this run

uogEDSComPri

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: uogEDSComPri
  • Participant: glasgow.ounis
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: feedback
  • Task: document
  • Run description: Mixed link structure evidence is used in this run

uogEDSF

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: uogEDSF
  • Participant: glasgow.ounis
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: A simple automatic run using document structure information

uogEDSINLPRI

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: uogEDSINLPRI
  • Participant: glasgow.ounis
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: feedback
  • Task: document
  • Run description: Link structure evidence is used in this run

uogEXFeMNZcP

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: uogEXFeMNZcP
  • Participant: glasgow.ounis
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Document structure, voting technique, proximity (2).

uogEXFeMNZdQ

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: uogEXFeMNZdQ
  • Participant: glasgow.ounis
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Document structure, voting technique, QE (2)

uogEXFeMNZP

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: uogEXFeMNZP
  • Participant: glasgow.ounis
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Document structure, voting technique, proximity.

uogEXFeMNZQE

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: uogEXFeMNZQE
  • Participant: glasgow.ounis
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/10/2007
  • Type: automatic
  • Task: expert
  • Run description: Document structure, voting technique, QE

uwKLD

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: uwKLD
  • Participant: uwaterloo-olga
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: Automatic feedback run using KLD

uwRF

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: uwRF
  • Participant: uwaterloo-olga
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/4/2007
  • Type: feedback
  • Task: document
  • Run description: Using pages to extract expansion terms.

uwtbase

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: uwtbase
  • Participant: uwaterloo-olga
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: Query-only run

WHU10

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: WHU10
  • Participant: wuhanu.lu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/9/2007
  • Type: automatic
  • Task: expert
  • Run description: Uses a document-combined method; the document cutoff is set to 10

WHU15

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: WHU15
  • Participant: wuhanu.lu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/9/2007
  • Type: automatic
  • Task: expert
  • Run description: Uses a document-combined method; the document cutoff is set to 15

WHUC5

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: WHUC5
  • Participant: wuhanu.lu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/9/2007
  • Type: automatic
  • Task: expert
  • Run description: Uses the common method; the document cutoff is set to 5

WHUE10

Results | Participants | Proceedings | Input | Summary | Appendix

  • Run ID: WHUE10
  • Participant: wuhanu.lu
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/9/2007
  • Type: automatic
  • Task: expert
  • Run description: Uses the common method; the expert-relevant document cutoff is set to 10

york07ed1

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: york07ed1
  • Participant: yorku.huang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: 1. Use db1 jtxt. 2. No query expansion. 3. Use only terms extracted from the raw topic for retrieval.

york07ed2

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: york07ed2
  • Participant: yorku.huang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: 1. Use db1 ltxt. 2. No query expansion. 3. Use only terms extracted from the raw topic for retrieval.

york07ed3

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: york07ed3
  • Participant: yorku.huang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: 1. Use db2 ltxt. 2. Expand query terms with "narrative" from raw topics.

york07ed4

Results | Participants | Proceedings | Input | Summary (document) | Summary (doc-promotion) | Summary (doc-residual) | Appendix

  • Run ID: york07ed4
  • Participant: yorku.huang
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/3/2007
  • Type: automatic
  • Task: document
  • Run description: 1. Use db1 jtxt. 2. Expand query terms with "narrative" from raw topics.

zslrun

Results | Participants | Input | Summary | Appendix

  • Run ID: zslrun
  • Participant: pekingu.zhou
  • Track: Enterprise
  • Year: 2007
  • Submission: 8/13/2007
  • Type: automatic
  • Task: expert
  • Run description: THIS RUN IS MISSING A DESCRIPTION, PLEASE PROVIDE ONE.