Runs - Enterprise 2006¶
allbasic¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: allbasic
- Participant: case-western.ru.troy
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Automatic, title, all emails
basic¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: basic
- Participant: case-western.ru.troy
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Automatic, title, reply emails only
body¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: body
- Participant: queen-mary-ulondon.forst
- Track: Enterprise
- Year: 2006
- Submission: 7/26/2006
- Type: automatic
- Task: expert
- Run description: "lists"-part of collection only; limited to email-bodies
DUTDS1¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: DUTDS1
- Participant: dalianu.yang
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: discussion
- Run description: Indri index, Porter stemmer; all documents cleaned; BM25 weighting
DUTDS2¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: DUTDS2
- Participant: dalianu.yang
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: manual
- Task: discussion
- Run description: Indri index, Porter stemmer; all documents cleaned; BM25 weighting
DUTDS3¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: DUTDS3
- Participant: dalianu.yang
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: manual
- Task: discussion
- Run description: Indri index, Porter stemmer; all documents cleaned
DUTDS4¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: DUTDS4
- Participant: dalianu.yang
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: discussion
- Run description: Indri index, Porter stemmer; all documents cleaned
DUTEX1¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: DUTEX1
- Participant: dalianu.yang
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: manual
- Task: expert
- Run description: Indri is used for retrieval; a document pool is built from the 200 words surrounding each expert identifier; no query expansion or relevance feedback; Krovetz stemming (see the sketch below)
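A minimal sketch of the window-pool idea described for the DUTEX* runs, assuming token-level alias matching; the function name, tokenization, and matching logic are illustrative rather than the group's actual code.

```python
def expert_context_pool(doc_text, expert_aliases, window=200):
    """Collect roughly `window` words surrounding each mention of an expert in
    one document; the concatenated snippets form that expert's document pool.
    Alias matching here is simplified token-level containment."""
    tokens = doc_text.split()
    half = window // 2
    snippets = []
    for i, tok in enumerate(tokens):
        if any(alias.lower() in tok.lower() for alias in expert_aliases):
            lo, hi = max(0, i - half), min(len(tokens), i + half + 1)
            snippets.append(" ".join(tokens[lo:hi]))
    return snippets

# Tiny illustration with a made-up email body and alias list.
text = "as jdoe@w3.org noted in the schema discussion the validator handles namespaces"
print(expert_context_pool(text, ["jdoe@w3.org"], window=6))
```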
DUTEX2¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: DUTEX2
- Participant: dalianu.yang
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: manual
- Task: expert
- Run description: Indri is used for retrieval; a document pool is built from the 200 words surrounding each expert identifier; manual queries; Krovetz stemming
DUTEX3¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: DUTEX3
- Participant: dalianu.yang
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: manual
- Task: expert
- Run description: Indri is used for retrieval; a document pool is built from the 50 words surrounding each expert identifier; no query expansion or relevance feedback; Krovetz stemming
DUTEX4¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: DUTEX4
- Participant: dalianu.yang
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: manual
- Task: expert
- Run description: Indri is used for retrieval; a document pool is built from the 200 words surrounding each expert identifier; queries built from the title, description, and narrative fields; Krovetz stemming
ex3512¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: ex3512
- Participant: cityu.macfarlane
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Uses the content around each expert (both name and email address) as a new collection for searching.
ex5512¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: ex5512
- Participant: cityu.macfarlane
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Uses the content around each expert (both name and email address) as a new collection for searching.
ex5518¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: ex5518
- Participant: cityu.macfarlane
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Uses the content around each expert (both name and email address) as a new collection for searching.
ex7512¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: ex7512
- Participant: cityu.macfarlane
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Uses the content around each expert (both name and email address) as a new collection for searching.
FDUSF¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: FDUSF
- Participant: fudanu.niu
- Track: Enterprise
- Year: 2006
- Submission: 7/31/2006
- Type: automatic
- Task: expert
- Run description: The full corpus is used for retrieval. Experts appearing in a document are assigned different weights based on the probability of their being editors/authors of that document. Documents are weighted according to their rank (see the sketch below).
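Read literally, the FDUSF/FDUSN/FDUSO descriptions amount to a rank-weighted aggregation of author/editor evidence. A hedged formalization, where R(q) is the set of documents retrieved for query q, and w(·) and P(author/editor | e, d) stand in for the unspecified rank-based document weight and author/editor probability:

```latex
\mathrm{score}(e \mid q) \;=\; \sum_{d \in R(q)} w\bigl(\mathrm{rank}_q(d)\bigr)\; P(\text{author/editor} \mid e, d)
```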
FDUSN¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: FDUSN
- Participant: fudanu.niu
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: A reconstructed corpus consisting of presentations, email threads, and technical reports is used for retrieval. Experts appearing in a document are assigned different weights based on the probability of their being editors/authors of that document. Documents are weighted according to their rank. Afterwards, a social network constructed from the email archives in the W3C corpus is used to re-rank the experts.
FDUSO¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: FDUSO
- Participant: fudanu.niu
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: A reconstructed corpus consisting of presentations, email threads, and technical reports is used for retrieval. Experts appearing in a document are assigned different weights based on the probability of their being editors/authors of that document. Documents are weighted according to their rank.
IBM06EXP¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: IBM06EXP
- Participant: ibm.prager
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: expert
- Run description: Adopted multiple agents for expert finding. Incorporated support document ranking strategies. Used Foldoc for query expansion and Google Scholar.
IBM06JAQ¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: IBM06JAQ
- Participant: ibm.prager
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: discussion
- Run description: Used one search engine and one pro/con assessment agent. Used Foldoc dictionary in query expansion.
IBM06JAQD¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: IBM06JAQD
- Participant: ibm.prager
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: discussion
- Run description: Used single search engine and one pro/con assessment agent. Used Foldoc dictionary in query expansion.
IBM06JILAPQD¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: IBM06JILAPQD
- Participant: ibm.prager
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: discussion
- Run description: Used multiple search engines and multiple pro/con assessment agents. Used Foldoc dictionary in query expansion.
IBM06JILAQD¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: IBM06JILAQD
- Participant: ibm.prager
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: discussion
- Run description: Used multiple search engines and one pro/con assessment agent. Used Foldoc dictionary in query expansion.
IBM06MA¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: IBM06MA
- Participant: ibm.prager
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: manual
- Task: expert
- Run description: This is a manual run to evaluate how our semantic search tool may help with expert finding and filtering of automatic expert search results. We combined manual semantic search query construction and expert/document filtering of semantic search results and automatic expert search results.
IBM06PR¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: IBM06PR
- Participant: ibm.prager
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: expert
- Run description: Adopted multiple agents for expert finding. Used Foldoc for query expansion and Google Scholar.
IBM06QO¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: IBM06QO
- Participant: ibm.prager
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: expert
- Run description: Used Foldoc for query expansion and Google Scholar.
ICTCSXRUN01¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: ICTCSXRUN01
- Participant: cas-iiis.tan
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Two-stage relevance model; clustering-based re-ranking.
ICTCSXRUN02¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: ICTCSXRUN02
- Participant: cas-iiis.tan
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: expert
- Run description: Based on ICTCSXRUN01, with an additional score from mail link-structure analysis using a PageRank-like algorithm.
ICTCSXRUN03¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: ICTCSXRUN03
- Participant: cas-iiis.tan
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: expert
- Run description: This run is based purely on mail link-structure analysis, using only the "lists" corpus. We use a PageRank-like algorithm (see the sketch below).
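A minimal sketch of a PageRank-like computation over a mail-link graph, as the ICTCSXRUN02/03 descriptions suggest; the edge construction (here, generic directed (source, target) pairs between people) and the damping/iteration settings are assumptions, not the group's implementation.

```python
def pagerank(edges, damping=0.85, iterations=50):
    """Power-iteration PageRank over a directed graph given as (source, target)
    pairs -- here, hypothetically, an edge from the author of a reply to the
    author of the message being replied to."""
    nodes = {n for edge in edges for n in edge}
    outgoing = {n: [] for n in nodes}
    for src, dst in edges:
        outgoing[src].append(dst)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            if outgoing[n]:
                share = damping * rank[n] / len(outgoing[n])
                for m in outgoing[n]:
                    new_rank[m] += share
            else:  # dangling node: spread its mass uniformly
                for m in nodes:
                    new_rank[m] += damping * rank[n] / len(nodes)
        rank = new_rank
    return rank

# Tiny illustration: replies a->b, c->b, b->a give b the highest rank.
print(pagerank([("a", "b"), ("c", "b"), ("b", "a")]))
```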
ICTCSXRUN04¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: ICTCSXRUN04
- Participant: cas-iiis.tan
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: expert
- Run description: Two-stage relevance model; clustering-based re-ranking. In the two-stage relevance model, we use the document relevance rank instead of the document relevance score to compute the expert's score.
ICTCSXRUN05¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: ICTCSXRUN05
- Participant: cas-iiis.tan
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: expert
- Run description: We use the result of ICTCSXRUN04 as the root set for the HITS algorithm, which we use to analyze the link structure of the mail network (see the sketch below).
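A minimal sketch of the HITS iteration mentioned for ICTCSXRUN05; seeding the graph from the ICTCSXRUN04 result list (the root set) and the mail-link edge construction are omitted here and would follow the group's own setup.

```python
def hits(edges, iterations=50):
    """Basic HITS over a directed graph of (source, target) pairs, returning
    (hub, authority) score dictionaries."""
    nodes = {n for edge in edges for n in edge}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # authority: sum of hub scores of nodes linking in, then L2-normalize
        auth = {n: sum(hub[s] for s, t in edges if t == n) for n in nodes}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {n: v / norm for n, v in auth.items()}
        # hub: sum of authority scores of nodes linked to, then L2-normalize
        hub = {n: sum(auth[t] for s, t in edges if s == n) for n in nodes}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {n: v / norm for n, v in hub.items()}
    return hub, auth
```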
IIISRUN¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: IIISRUN
- Participant: cas-iiis.tan
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: discussion
- Run description: Our only run.
InsunEnt06¶
Results
| Participants
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: InsunEnt06
- Participant: harbin.zhao
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: discussion
- Run description: We use Terrier as the retrieval tool to search the W3C corpus based on the "title" field. Next we process the format of the corpus, since most documents are emails and their content must be extracted. We then compute the similarity between the topic and the email content based on analysis of the "title", "description", and "narrative" fields.
kmiZhu1¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: kmiZhu1
- Participant: openu.zhu
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Integrates a document relevance model and a co-occurrence model. Uses the semi-structured nature of documents to weight terms occurring in different parts of a document. Incremental window sizes take association into account at various levels. A Boolean-query-based relevance measure is used for query relevance. Queries are expanded with title, description, and narrative information. Named-entity recognition is enhanced with variations of entities.
kmiZhu2¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: kmiZhu2
- Participant: openu.zhu
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Integrates a document relevance model and a co-occurrence model. Uses the semi-structured nature of documents to weight terms occurring in different parts of a document. Incremental window sizes take association into account at various levels. A window-based relevance measure is used for query relevance. Queries are expanded with title, description, and narrative information. Named-entity recognition is enhanced with variations of entities.
kmiZhu4¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: kmiZhu4
- Participant: openu.zhu
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Integrates a document relevance model and a co-occurrence model. Uses the semi-structured nature of documents to weight terms occurring in different parts of a document. Incremental window sizes take association into account at various levels. A window-based relevance measure is used for query relevance. The whole document is treated as the unit of co-occurrence. Queries are expanded with title, description, and narrative information. Named-entity recognition is enhanced with variations of entities.
kmiZhu5¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: kmiZhu5
- Participant: openu.zhu
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Integrates a document relevance model and a co-occurrence model. Uses the semi-structured nature of documents to weight terms occurring in different parts of a document. Incremental window sizes take association into account at various levels. A Boolean-query-based relevance measure is used for query relevance. Queries are expanded with title, description, and narrative information. Named-entity recognition is enhanced with variations of entities.
l3s1¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: l3s1
- Participant: uhannover.chernov
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: expert
- Run description: 1. Dummy run (automatic). Only the title part of the query is used. For each author in the returned set of relevant emails we count the number of emails (in the relevant set), and all authors are sorted in decreasing order of their email counts (see the sketch below). The only parameter is the number of experts returned from the top of the ranking, which is set arbitrarily. Parameters: number of experts to retrieve = 10.
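A minimal sketch of the l3s1 counting scheme, assuming the relevant email set is available as a list of author names; the helper name is illustrative.

```python
from collections import Counter

def l3s1_style_experts(relevant_email_authors, top_n=10):
    """relevant_email_authors: one author entry per relevant email retrieved
    with the title-only query.  Experts are simply the most frequent authors."""
    return [a for a, _ in Counter(relevant_email_authors).most_common(top_n)]
```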
l3s2¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: l3s2
- Participant: uhannover.chernov
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: expert
- Run description: 2. Document score threshold and expert number run (automatic). Every query is a Boolean OR query composed from the title (weight 3.0), description (weight 2.0), and narrative (weight 1.0). Since the queries are very long, most documents are somewhat relevant, so we apply a threshold on the sum of document scores: documents are considered relevant only while the sum over the first N highest-ranked documents stays below the threshold; the rest are considered too weak as evidence for identifying an expert. The expert score is no longer a simple email count but the sum of retrieval status values (RSVs) of the expert's emails over the set of relevant emails (see the sketch below). Parameters: number of experts to retrieve = 5; top-k documents considered relevant = 240 (k translates into a document RSV sum of 76.5).
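A minimal sketch of the l3s2 scoring, assuming the retrieval output is a list of (author, RSV) pairs in decreasing score order; the threshold values mirror the parameters quoted above, and the function name is illustrative.

```python
def l3s2_style_experts(ranked_emails, rsv_sum_threshold=76.5, top_n=5):
    """ranked_emails: (author, rsv) pairs in decreasing retrieval-score order.
    Emails are kept while the cumulative RSV stays within the threshold; each
    author is then scored by the RSV sum of their kept emails."""
    scores, total = {}, 0.0
    for author, rsv in ranked_emails:
        if total + rsv > rsv_sum_threshold:
            break
        total += rsv
        scores[author] = scores.get(author, 0.0) + rsv
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```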
l3s3¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: l3s3
- Participant: uhannover.chernov
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: expert
- Run description: 3. Two-threshold run (automatic). The same as run 2, but instead of a fixed number we retrieve all experts whose score passes a threshold. The expert score is computed as the RSV sum over all emails in the relevant set written by that expert. Parameters: expert score threshold = 1.2 (the average score of the expert at rank 5); top-k documents considered relevant = 240 (k translates into a document RSV sum of 76.5).
l3s4¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: l3s4
- Participant: uhannover.chernov
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: manual
- Task: expert
- Run description: 4. Document score threshold and expert score threshold run (manual). This method differs from run 3 in its expert threshold. The expert threshold is again a sum of scores of the retrieved relevant documents written by an expert, but this sum is multiplied by a topic-specificity value in the interval [0.5, 1.5], where 0.5 corresponds to a general query with many expected experts and 1.5 to a very specific query with few expected experts. Parameters: each query is assigned a specificity value between 0.5 and 1.5; expert score threshold = 1.2 × specificity; document score-sum threshold = 240.
listbq¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: listbq
- Participant: queen-mary-ulondon.forst
- Track: Enterprise
- Year: 2006
- Submission: 7/26/2006
- Type: automatic
- Task: expert
- Run description: "lists"-part of collection only
MAPCrelTret¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: MAPCrelTret
- Participant: lowlands-team.deVries
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: For each candidate we produce a ranked list of the 1000 most relevant documents based on a name+email address query. For each topic we produce a separate ranked list of the 1500 most relevant documents. Experts are ranked by computing the MAP of the topics' top-1500 lists, using the candidates' top-1000 lists as qrels (see the sketch below).
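A minimal sketch of the MAPCrelTret ranking, assuming the candidate top-1000 lists and the topic top-1500 list are available as plain document-ID lists; `average_precision` is the standard AP, applied per candidate as the description states, and the function names are illustrative.

```python
def average_precision(ranked_docs, relevant):
    """Standard average precision of a ranked list against a relevance set."""
    hits, total = 0, 0.0
    for i, doc in enumerate(ranked_docs, start=1):
        if doc in relevant:
            hits += 1
            total += hits / i
    return total / len(relevant) if relevant else 0.0

def rank_candidates_mapc(topic_top1500, candidate_top1000):
    """candidate_top1000 maps each candidate to their ranked document list
    (from a name+email-address query).  Each candidate is scored by treating
    that list as the qrels for the topic's top-1500 ranking."""
    scores = {cand: average_precision(topic_top1500, set(docs))
              for cand, docs in candidate_top1000.items()}
    return sorted(scores, key=scores.get, reverse=True)
```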
MAPTrelCret¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: MAPTrelCret
- Participant: lowlands-team.deVries
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: For each candidate we produce a ranked list of the 1000 most relevant documents based on a name+email address query. For each topic we produce a separate ranked list of the 1500 most relevant documents. Experts are ranked by computing the MAP of their top-1000 lists, using the topics' top-1500 lists as qrels.
PITTMANUAL¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: PITTMANUAL
- Participant: upittsburgh.he
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: manual
- Task: expert
- Run description: We generate initial results from an Indri index, containing at most 20 candidates per query and at most 5 supporting documents per candidate. Two people then manually select at most 10 candidates per query and remove unjudged candidates.
PITTNOPH¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: PITTNOPH
- Participant: upittsburgh.he
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Only the W3C email collection was used. Title and description were treated as keywords. Email threading information was then used to post-process the results.
PITTPHFREQ¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: PITTPHFREQ
- Participant: upittsburgh.he
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Only the W3C email collection was used. The title was treated as a phrase and the description as keywords. Initial results came from Indri search results. Email threading information was then used to post-process the results. Only email frequency is considered; content length is not.
PITTPHFULL¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: PITTPHFULL
- Participant: upittsburgh.he
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Only the W3C email collection was used. The title was treated as a phrase and the description as keywords. Email threading information was then used to post-process the results.
PRISEXB¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: PRISEXB
- Participant: beijingu-posts-tele.weiran
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: This is the baseline run.
PRISEXR¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: PRISEXR
- Participant: beijingu-posts-tele.weiran
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: A run built on top of the baseline run.
PRISEXRM¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: PRISEXRM
- Participant: beijingu-posts-tele.weiran
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: This run adds emails as a special profile.
PRISEXRMT¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: PRISEXRMT
- Participant: beijingu-posts-tele.weiran
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: The same as PRISEXRM, but using only title words as the query words.
quotes¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: quotes
- Participant: queen-mary-ulondon.forst
- Track: Enterprise
- Year: 2006
- Submission: 7/26/2006
- Type: automatic
- Task: expert
- Run description: only "list"-part of collection used; limited to quotations
qutbaseline¶
Results
| Participants
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: qutbaseline
- Participant: queenslandu.geva
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Baseline run, based on the top 100 documents retrieved from the W3C corpus using Terrier. The Best Match search model was used.
qutlmv2¶
Results
| Participants
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: qutlmv2
- Participant: queenslandu.geva
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Based on the top 100 documents retrieved from the W3C corpus using Terrier. The language model search model was used.
qutmoreterms¶
Results
| Participants
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: qutmoreterms
- Participant: queenslandu.geva
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Based on the top 100 documents retrieved from the W3C corpus using Terrier. The Best Match search model was used. Keywords from both the title and desc fields were used.
SJTU01¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: SJTU01
- Participant: sjtu-apex-lab.bao
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: SJTU01, automatic run. Query: uses queries identical to the text in the corresponding field of Test51-105. Index: alias-enhanced person-name disambiguation. Model: (a) bigram model with document-count normalization, proximity model, and fuzzy model applied; (b) window-based model with distance normalization; (c) 2-level tree model with reference-block removal; (d) 1-level inverted tree model with reference-block removal; (e) PageRank-enhanced document relevance model.
SJTU02¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: SJTU02
- Participant: sjtu-apex-lab.bao
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Query: uses queries identical to the text in the corresponding field of Test51-105. Index: alias-enhanced person-name disambiguation. Model: (a) bigram model with document-count normalization, proximity model, and fuzzy model applied; (b) window-based model with distance normalization; (c) 2-level tree model with reference-block removal; (d) PageRank-enhanced document relevance model; (e) low weight for the dev corpus.
SJTU03¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: SJTU03
- Participant: sjtu-apex-lab.bao
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Query: uses queries identical to the text in the corresponding field of Test51-105. Index: alias-enhanced person-name disambiguation. Model: (a) bigram model with document-count normalization, proximity model, and fuzzy model applied; (b) window-based model with distance normalization; (c) 2-level tree model with reference-block removal; (d) low weight for the dev corpus.
SJTU04¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: SJTU04
- Participant: sjtu-apex-lab.bao
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Query: uses queries identical to the text in the corresponding field of Test51-105. Index: alias-enhanced person-name disambiguation. Model: (a) bigram model with document-count normalization, proximity model, and fuzzy model applied; (b) window-based model with distance normalization; (c) 2-level tree model with reference-block removal; (d) PageRank-enhanced document relevance model; (e) low weight for the dev corpus; (f) cluster-based re-ranking.
sophiarun1¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: sophiarun1
- Participant: uulster.patterson
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Cluster-based search.
sophiarun2¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: sophiarun2
- Participant: uulster.patterson
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Cluster-based search.
sophiarun3¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: sophiarun3
- Participant: uulster.patterson
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Cluster-based search.
SP¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: SP
- Participant: lowlands-team.deVries
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: For each candidate we produce a ranked list of the 1000 most relevant documents based on a name+email address query. For each topic we produce a separate ranked list of the 1500 most relevant documents. Spearman's rank correlation between the candidate and topic lists is used to rank the candidates (see the sketch below).
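A minimal sketch of the SP run's Spearman scoring, computed over the documents that appear in both the topic's top-1500 and a candidate's top-1000; how the actual run handles non-overlapping documents is not stated, so restricting to the intersection is an assumption.

```python
def spearman_expert_score(topic_ranked, candidate_ranked):
    """Spearman's rank correlation between a topic's ranked document list and a
    candidate's ranked document list, computed over the shared documents."""
    t_pos = {d: i for i, d in enumerate(topic_ranked)}
    c_pos = {d: i for i, d in enumerate(candidate_ranked)}
    common = [d for d in topic_ranked if d in c_pos]
    n = len(common)
    if n < 2:
        return 0.0
    # Re-rank the shared documents 1..n within each list, then apply
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)).
    t_rank = {d: r for r, d in enumerate(sorted(common, key=t_pos.get), start=1)}
    c_rank = {d: r for r, d in enumerate(sorted(common, key=c_pos.get), start=1)}
    d2 = sum((t_rank[d] - c_rank[d]) ** 2 for d in common)
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

The SPlog variant would apply a log transform to the ranks before correlating, emphasizing the top of each list.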
SPlog¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: SPlog
- Participant: lowlands-team.deVries
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: For each candidate we produce a ranked list of the 1000 most relevant documents based on a name+email address query. For each topic we produce a separate ranked list of the 1500 most relevant documents. Spearman's rank correlation between the candidate and topic lists is used to rank the candidates. A log transformation is applied to the ranks to emphasize the top ranks.
srcbds1¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: srcbds1
- Participant: ricoh.you
- Track: Enterprise
- Year: 2006
- Submission: 7/26/2006
- Type: automatic
- Task: discussion
- Run description: Field-based, with timeline and query expansion
srcbds2¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: srcbds2
- Participant: ricoh.you
- Track: Enterprise
- Year: 2006
- Submission: 7/27/2006
- Type: automatic
- Task: discussion
- Run description: Field-based, timeline, query expansion, and special-word processing.
srcbds3¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: srcbds3
- Participant: ricoh.you
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: discussion
- Run description: Timeline with query expansion, emphasizing uppercase words.
srcbds4¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: srcbds4
- Participant: ricoh.you
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: discussion
- Run description: Field-based with query expansion, emphasizing uppercase words.
srcbds5¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: srcbds5
- Participant: ricoh.you
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: discussion
- Run description: Timeline and an advanced field-based method; narrative used; no query expansion.
SRCBEX1¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: SRCBEX1
- Participant: ricoh.you
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: expert
- Run description: BM25, Phrase, Variable document length
SRCBEX2¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: SRCBEX2
- Participant: ricoh.you
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: DFR_BM25, variable document length, phrase, no parameter tuning
SRCBEX3¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: SRCBEX3
- Participant: ricoh.you
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: DFR_BM25, phrase, variable document length, training with 2005 topics
SRCBEX4¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: SRCBEX4
- Participant: ricoh.you
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: DFR_BM25, phrase, profile length, training with 2005 topics
SRCBEX5¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: SRCBEX5
- Participant: ricoh.you
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: manual
- Task: expert
- Run description: BM25, Phrase, variable document length, Field based search, trained by 8 topics of 2006 Expert Track.
THUDSSUBPFSM¶
Results
| Participants
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: THUDSSUBPFSM
- Participant: tsinghuau.zhang
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: discussion
- Run description: The primary feature space of the subject field is employed in the mail retrieval process. Bi-gram techniques are also applied. For medium queries.
THUDSSUBPFSS¶
Results
| Participants
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: THUDSSUBPFSS
- Participant: tsinghuau.zhang
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: discussion
- Run description: The primary feature space of the subject field is employed in the mail retrieval process. Bi-gram techniques are also applied. For short queries.
THUDSTHDM¶
Results
| Participants
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: THUDSTHDM
- Participant: tsinghuau.zhang
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: discussion
- Run description: The similarity of a retrieved mail is the sum of the mail's similarity and its thread's similarity. The primary feature space of the subject field is employed in the mail retrieval process, which is based on BM25 ranking. Bi-gram techniques are also applied. For medium queries.
THUDSTHDPFSM¶
Results
| Participants
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: THUDSTHDPFSM
- Participant: tsinghuau.zhang
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: discussion
- Run description: The similarity of a retrieved mail is the sum of the mail's similarity and its thread's similarity. The primary feature space of the subject field is employed in the mail and thread retrieval processes respectively. Bi-gram techniques are also applied.
THUDSTHDPFSS¶
Results
| Participants
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: THUDSTHDPFSS
- Participant: tsinghuau.zhang
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: discussion
- Run description: The similarity of a retrieved mail is the sum of the mail's similarity and its thread's similarity. The primary feature space of the subject field is employed in the mail and thread retrieval processes respectively. Bi-gram techniques are also applied. For short queries.
THUPDDEML¶
Results
| Participants
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: THUPDDEML
- Participant: tsinghuau.zhang
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Search on PDD documents. Results are combined with email search, using long queries.
THUPDDFBS¶
Results
| Participants
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: THUPDDFBS
- Participant: tsinghuau.zhang
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Search on PDD documents. Word pairs applied. Relevance feedback used.
THUPDDL¶
Results
| Participants
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: THUPDDL
- Participant: tsinghuau.zhang
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Search on PDD documents. For long queries.
THUPDDS¶
Results
| Participants
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: THUPDDS
- Participant: tsinghuau.zhang
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Search on PDD documents. Word pairs applied. Short queries.
THUPDDSNEMS¶
Results
| Participants
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: THUPDDSNEMS
- Participant: tsinghuau.zhang
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Search on PDD documents. Re-ranking using a social network based on email communication. Results are combined with email search.
UAmsBase¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: UAmsBase
- Participant: uamsterdam.ilps
- Track: Enterprise
- Year: 2006
- Submission: 7/26/2006
- Type: automatic
- Task: discussion
- Run description: Baseline. Used a stopped and stemmed version of the title only.
UAmsPOSBase¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: UAmsPOSBase
- Participant: uamsterdam.ilps
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: discussion
- Run description: Weighted mixture (linear interpolation) of the POS-tagged run and the baseline (run 1)
UAmsPOStQE¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: UAmsPOStQE
- Participant: uamsterdam.ilps
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: discussion
- Run description: Weighted mixture (linear interpolation) of the POS-tagged run and the threadQE run
UAmsThreadQE¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: UAmsThreadQE
- Participant: uamsterdam.ilps
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: discussion
- Run description: Blind relevance feedback on the threads of the top-x ranked documents
UIUCe1¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UIUCe1
- Participant: uiuc.zhai
- Track: Enterprise
- Year: 2006
- Submission: 7/31/2006
- Type: automatic
- Task: expert
- Run description: This is an automatic title run using the basic language modeling approach (Dirichlet prior smoothing parameter=100, Email prior = 2)
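For reference, the basic Dirichlet-smoothed query-likelihood model the UIUCe* descriptions point to, with μ = 100 as stated; how the "email prior" enters is not spelled out, so the document-prior form below is an assumption.

```latex
% Dirichlet-smoothed query likelihood (mu = 100 per the run description)
p(q \mid d) \;=\; \prod_{w \in q} \frac{c(w,d) + \mu\, p(w \mid C)}{|d| + \mu}, \qquad \mu = 100
% assumed use of the "email prior": a document prior favouring emails
\mathrm{score}(d) \;\propto\; \pi(d)\, p(q \mid d), \qquad \pi(d) = 2 \ \text{if $d$ is an email, else } 1
```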
UIUCe2¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UIUCe2
- Participant: uiuc.zhai
- Track: Enterprise
- Year: 2006
- Submission: 7/31/2006
- Type: automatic
- Task: expert
- Run description: This is an automatic title run using the basic language modeling approach (Dirichlet prior smoothing parameter=100, Email prior = 5)
UIUCeFB1¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UIUCeFB1
- Participant: uiuc.zhai
- Track: Enterprise
- Year: 2006
- Submission: 7/31/2006
- Type: automatic
- Task: expert
- Run description: This is an automatic title run using the basic language modeling approach and topic pseudo feedback (Dirichlet prior smoothing parameter=100, Email prior = 2)
UIUCeFB2¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UIUCeFB2
- Participant: uiuc.zhai
- Track: Enterprise
- Year: 2006
- Submission: 7/31/2006
- Type: automatic
- Task: expert
- Run description: This is an automatic title run using the basic language modeling approach and topic pseudo feedback (Dirichlet prior smoothing parameter=100, Email prior = 2; more aggressive feedback as compared with UIUCeFB1)
UMaTDFb¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UMaTDFb
- Participant: umass.allan
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: expert
- Run description: Query-independent representations where an expert model is a mixture of associated documents; the mixing parameters are specified by the posterior distribution P(D|E) (see the sketch below). Dependency model query + description and narrative (stopped) + pseudo-relevance feedback.
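A hedged formalization of the expert model described for the UMaT* expert runs: the expert's language model is a mixture of the language models of associated documents, mixed by the posterior P(D|E). How the query is scored against this model is not stated here.

```latex
p(w \mid \theta_E) \;=\; \sum_{D} p(w \mid \theta_D)\; P(D \mid E)
```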
UMaTDMixThr¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: UMaTDMixThr
- Participant: umass.allan
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: discussion
- Run description: Query expansion with pseudo-relevance feedback and term dependency. Title + description (stopped) + narrative (stopped). Emails are represented as mixtures of header, main body, and thread text (Dirichlet-smoothed language models).
UMaTiDm¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UMaTiDm
- Participant: umass.allan
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: expert
- Run description: Query-independent representations where an expert model is a mixture of associated documents; the mixing parameters are specified by the posterior distribution P(D|E). Dependency model query.
UMaTiMixHdr¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: UMaTiMixHdr
- Participant: umass.allan
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: discussion
- Run description: Query expansion with pseudo-relevance feedback and term dependency. Emails are represented as mixtures of header and main body text (Dirichlet-smoothed language models).
UMaTiMixThr¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: UMaTiMixThr
- Participant: umass.allan
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: discussion
- Run description: Query expansion with pseudo-relevance feedback and term dependency. Emails are represented as mixtures of header, main body, and thread text (Dirichlet-smoothed language models).
UMaTiSmoThr¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: UMaTiSmoThr
- Participant: umass.allan
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: discussion
- Run description: Query expansion with pseudo-relevance feedback and term dependency. Emails are represented as mixtures of header and main body text. Hierarchical Dirichlet smoothing: emails in a thread are smoothed with thread language models, and threads are smoothed with the collection (see the sketch below).
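A sketch of the hierarchical Dirichlet smoothing described here: an email E is smoothed with its thread T's language model, and the thread with the collection C; the smoothing parameters μ_E and μ_T are not given in the description.

```latex
p(w \mid E) = \frac{c(w, E) + \mu_E\, p(w \mid T)}{|E| + \mu_E},
\qquad
p(w \mid T) = \frac{c(w, T) + \mu_T\, p(w \mid C)}{|T| + \mu_T}
```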
UMaTNDm¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UMaTNDm
- Participant: umass.allan
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: expert
- Run description: Query-independent representations where an expert model is a mixture of associated documents; the mixing parameters are specified by the posterior distribution P(D|E). Dependency model query + description (stopped).
UMaTNFb¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UMaTNFb
- Participant: umass.allan
- Track: Enterprise
- Year: 2006
- Submission: 7/29/2006
- Type: automatic
- Task: expert
- Run description: Query-independent representations where an expert model is a mixture of associated documents; the mixing parameters are specified by the posterior distribution P(D|E). Dependency model query + description (stopped) + pseudo-relevance feedback.
UMDemailTLNR¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UMDemailTLNR
- Participant: umaryland.oard
- Track: Enterprise
- Year: 2006
- Submission: 7/31/2006
- Type: automatic
- Task: expert
- Run description: This run depends only on the mailing lists. It retrieves the "relevant" emails and assigns a score to the participants and the people mentioned in each email according to its similarity score. This is a "title + narrative" run.
UMDemailTTL¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UMDemailTTL
- Participant: umaryland.oard
- Track: Enterprise
- Year: 2006
- Submission: 7/31/2006
- Type: automatic
- Task: expert
- Run description: This run depends only on the mailing lists. It retrieves the "relevant" emails and assigns a score to the participants and the people mentioned in each email according to its similarity score. This is a "title only" run.
UMDthrdTTL¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UMDthrdTTL
- Participant: umaryland.oard
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: This run depends only on the mailing lists. It retrieves the "relevant" threads and assigns a score to the participants and the people mentioned in each email according to its distance from the root and the thread similarity score. This is a "title only" run.
UMDthrdTTLDS¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UMDthrdTTLDS
- Participant: umaryland.oard
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: This run depends only on the mailing lists. It retrieves the "relevant" threads and assigns a score to the participants and the people mentioned in each email according to its distance from the root and the thread similarity score. This is a "title + description" run.
UMDthrdTTLNR¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UMDthrdTTLNR
- Participant: umaryland.oard
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: This run depends only on the mailing lists. It retrieves the "relevant" threads and assigns a score to the participants and the people mentioned in each email according to its distance from the root and the thread similarity score. This is a "title + narrative" run.
uogX06csnP¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: uogX06csnP
- Participant: uglasgow.ounis
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Advanced voting model. Proximity.
uogX06csnQE¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: uogX06csnQE
- Participant: uglasgow.ounis
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Query expansion.
uogX06csnQEF¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: uogX06csnQEF
- Participant: uglasgow.ounis
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Document structure. Query expansion.
uogX06ecm¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: uogX06ecm
- Participant: uglasgow.ounis
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Different voting model.
UvAbase¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UvAbase
- Participant: uamsterdam.ilps
- Track: Enterprise
- Year: 2006
- Submission: 7/23/2006
- Type: automatic
- Task: expert
- Run description: Locate documents on topic, then find the associated experts.
UvAPOS¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UvAPOS
- Participant: uamsterdam.ilps
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Extracted NPs using POS tagging on the topic fields and used these for query expansion.
UvAprofiling¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UvAprofiling
- Participant: uamsterdam.ilps
- Track: Enterprise
- Year: 2006
- Submission: 7/26/2006
- Type: automatic
- Task: expert
- Run description: Rerank baseline results using automatically extracted expert profiles.
UvAprofPOS¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: UvAprofPOS
- Participant: uamsterdam.ilps
- Track: Enterprise
- Year: 2006
- Submission: 7/28/2006
- Type: automatic
- Task: expert
- Run description: Combination of the profiling and POS runs
uwTbaseline¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: uwTbaseline
- Participant: uwaterloo-clarke
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: discussion
- Run description: Baseline run using the title field only and no external resources.
uwTDbaseline¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: uwTDbaseline
- Participant: uwaterloo-clarke
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: discussion
- Run description: Run using the title and description fields of the query topic
uwTDsubj¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: uwTDsubj
- Participant: uwaterloo-clarke
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: discussion
- Run description: Run based on re-ranking of the retrieved lists (retrieved using title and description). Re-ranking is based on the presence of subjective adjectives within a certain distance of the query terms, thereby introducing a degree of opinion into the text, and on experimenting with selective consideration of the adjectives.
uwTsubj¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: uwTsubj
- Participant: uwaterloo-clarke
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: discussion
- Run description: Run based on re-ranking of the retrieved lists. Re-ranking is based on the presence of subjective adjectives within a certain distance of the query terms, thereby introducing a degree of opinion into the text.
uwXSHUBS¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: uwXSHUBS
- Participant: uwaterloo-clarke
- Track: Enterprise
- Year: 2006
- Submission: 7/31/2006
- Type: automatic
- Task: expert
- Run description: Obtain the list of possible experts from the "lists" corpus and then apply graph-based ranking methods to identify the authoritative candidates
uwXSOUT¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: uwXSOUT
- Participant: uwaterloo-clarke
- Track: Enterprise
- Year: 2006
- Submission: 7/31/2006
- Type: automatic
- Task: expert
- Run description: Baseline run to gather the experts based on their contributions to the discussions
uwXSPMI¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: uwXSPMI
- Participant: uwaterloo-clarke
- Track: Enterprise
- Year: 2006
- Submission: 7/31/2006
- Type: automatic
- Task: expert
- Run description: Experiment in which the expertise of users selected from the "lists" sub-corpus is cross-checked/verified against another sub-corpus, for example "www".
w1r1s1¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: w1r1s1
- Participant: case-western.ru.troy
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: expert
- Run description: Automatic, title, reply emails only, WordNet
www¶
Results
| Participants
| Proceedings
| Input
| Summary (experts)
| Summary (supported)
| Appendix
- Run ID: www
- Participant: queen-mary-ulondon.forst
- Track: Enterprise
- Year: 2006
- Submission: 7/26/2006
- Type: automatic
- Task: expert
- Run description: "www"-part of collection only
york06ed01¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: york06ed01
- Participant: yorku.huang
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: discussion
- Run description: 1. Use Okapi BM25 for weighting and retrieval. 2. Set k1, k2, k3 and b to be the default values. 3. No thread feature is used.
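The york06ed* runs all cite Okapi BM25 with k1, k2, k3, and b at default values. For reference, the classic Okapi scoring function those parameters belong to is shown below (N is the collection size, n_t the document frequency of term t, tf and qtf the term frequencies in the document and query, dl and avdl the document and average document lengths); the specific default values used by the group are not stated.

```latex
\mathrm{score}(D, Q) \;=\; \sum_{t \in Q}
  \log\!\frac{N - n_t + 0.5}{n_t + 0.5}\cdot
  \frac{(k_1 + 1)\,\mathit{tf}_{t,D}}{K + \mathit{tf}_{t,D}}\cdot
  \frac{(k_3 + 1)\,\mathit{qtf}_{t}}{k_3 + \mathit{qtf}_{t}}
  \;+\; k_2 \cdot |Q| \cdot \frac{\mathrm{avdl} - \mathrm{dl}_D}{\mathrm{avdl} + \mathrm{dl}_D},
\qquad
K = k_1\Bigl((1 - b) + b\,\tfrac{\mathrm{dl}_D}{\mathrm{avdl}}\Bigr)
```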
york06ed02¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: york06ed02
- Participant: yorku.huang
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: discussion
- Run description: 1. Use Okapi BM25 for weighting and retrieval. 2. Set k1, k2, k3 and b to be the default values. 3. No thread feature is used.
york06ed03¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: york06ed03
- Participant: yorku.huang
- Track: Enterprise
- Year: 2006
- Submission: 7/30/2006
- Type: automatic
- Task: discussion
- Run description: 1. Use Okapi BM25 for weighting and retrieval. 2. Set k1, k2, k3 and b to be the default values. 3. No thread feature is used.
york06ed04¶
Results
| Participants
| Proceedings
| Input
| Summary (rel_nonrel)
| Summary (rel_procon)
| Appendix
- Run ID: york06ed04
- Participant: yorku.huang
- Track: Enterprise
- Year: 2006
- Submission: 7/31/2006
- Type: automatic
- Task: discussion
- Run description: 1. Use Okapi BM25 for weighting and retrieval. 2. Set k1, k2, k3 and b to be the default values. 3. Use the thread feature.