Runs - Legal 2011

HELclrAM

Participants | Proceedings | Appendix

  • Run ID: HELclrAM
  • Participant: dioileh
  • Track: Legal
  • Year: 2011
  • Submission: 9/6/2011
  • Type: automatic
  • Task: main
  • MD5: cde60bf552d82a03e35b2b0f292eac61
  • Run description: Constant LToR reordering of the expansion-with-20-grams run.

HELq20rAM

Participants | Proceedings | Appendix

  • Run ID: HELq20rAM
  • Participant: dioileh
  • Track: Legal
  • Year: 2011
  • Submission: 9/6/2011
  • Type: automatic
  • Task: main
  • MD5: 02af151ed96761d56acaa4c24f3a4397
  • Run description: Expansion with 20 terms based on TF

HELqlaA1

Participants | Proceedings | Appendix

  • Run ID: HELqlaA1
  • Participant: dioileh
  • Track: Legal
  • Year: 2011
  • Submission: 8/27/2011
  • Type: automatic
  • Task: main
  • MD5: a368122b2b5298d82a0caa513c9f8889
  • Run description: The previous run retrieved at most 5000 documents; this run retrieves all documents.

ISICLST1

Participants | Appendix

  • Run ID: ISICLST1
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 8/10/2011
  • Type: techassist
  • Task: main
  • MD5: e2baf78555abc22ee02d09af3af4acaf
  • Run description: The notion of relevance was derived from the kickoff call. Next, documents were retrieved using Indri and clustered. One arbitrarily chosen document from each cluster was reviewed for responsiveness, and any cluster whose representative was deemed responsive was chosen as a responsive cluster (a minimal sketch of this cluster-then-review workflow follows).
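
A minimal sketch of the cluster-then-review workflow, assuming the Indri results are already in memory as raw text; the TF-IDF vectorizer, k-means clustering, and the review_fn callback are illustrative stand-ins, since the description does not name the clustering method:

    import random
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    def responsive_doc_ids(docs, review_fn, n_clusters=50, seed=0):
        """docs: list of document strings; review_fn: human yes/no judgment."""
        vecs = TfidfVectorizer(stop_words="english").fit_transform(docs)
        labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(vecs)
        rng = random.Random(seed)
        responsive = set()
        for c in range(n_clusters):
            members = [i for i, l in enumerate(labels) if l == c]
            rep = rng.choice(members)      # one arbitrarily chosen representative
            if review_fn(docs[rep]):       # reviewed for responsiveness
                responsive.add(c)          # whole cluster deemed responsive
        return [i for i, l in enumerate(labels) if l in responsive]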

ISICLUT1

Participants | Appendix

  • Run ID: ISICLUT1
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 8/9/2011
  • Type: techassist
  • Task: main
  • MD5: eb18dfcbc12314de7e46b4ee0bea2171
  • Run description: The notion of relevance was derived from the kickoff call. Next, documents were retrieved using Indri and clustered. One arbitrarily chosen document from each cluster was reviewed for responsiveness, and any cluster whose representative was deemed responsive was chosen as a responsive cluster.

ISICLUT2

Participants | Appendix

  • Run ID: ISICLUT2
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 8/19/2011
  • Type: techassist
  • Task: main
  • MD5: 252da99c4ef3e11277a0ea8bb6552311
  • Run description: The notion of relevance was derived from the kickoff call. Next, documents were retrieved using Indri and clustered. One arbitrarily chosen document from each cluster was reviewed for responsiveness, and any cluster whose representative was judged responsive by the TA was chosen as a responsive cluster.

ISIFUSAM

Participants | Appendix

  • Run ID: ISIFUSAM
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 9/6/2011
  • Type: automatic
  • Task: main
  • MD5: ca4451fc370761827696d51ed178862c
  • Run description: We produced two runs: one by Terrier 3.0 Relevance Feedback, and the other by ranking the judged relevant documents above the other documents in the collection. We then fused these two runs using the Z-fusion technique to produce this run (a z-score fusion sketch follows).
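
A minimal sketch of Z-fusion under its common interpretation (z-score normalize each run's scores, then sum); the exact variant IRISICAL used is not specified in the description:

    from statistics import mean, stdev

    def z_fuse(run_a, run_b):
        """run_a, run_b: {doc_id: score}; returns doc_ids, best first."""
        def z(run):
            m, s = mean(run.values()), stdev(run.values()) or 1.0
            return {d: (v - m) / s for d, v in run.items()}
        za, zb = z(run_a), z(run_b)
        fused = {d: za.get(d, 0.0) + zb.get(d, 0.0) for d in set(za) | set(zb)}
        return sorted(fused, key=fused.get, reverse=True)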

ISIFuSAM

Participants | Appendix

  • Run ID: ISIFuSAM
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 9/6/2011
  • Type: automatic
  • Task: main
  • MD5: b9eba89df27db62329219dc1a89fbead
  • Run description: We produced two runs: one by Terrier 3.0 Relevance Feedback, and the other by ranking the judged relevant documents above the other documents in the collection. We then fused these two runs using the Z-fusion technique to produce this run.

ISILrFTF

Participants | Appendix

  • Run ID: ISILrFTF
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 8/27/2011
  • Type: techassist
  • Task: main
  • MD5: 1829de05713f1829e8172def61b25ee6
  • Run description: The notion of relevance was derived from the kickoff call. Next, documents were retrieved using Indri and clustered. One arbitrarily chosen document from each cluster was reviewed for responsiveness, and any cluster whose representative was deemed responsive was chosen as a responsive cluster. Based on the judged relevant documents, we performed Relevance Feedback using Terrier 3.0; the Indri query was then expanded using the feedback terms from Terrier (see the expansion sketch below).
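
A minimal sketch of the expansion step, using plain term-frequency scoring over the judged relevant documents as a stand-in for Terrier's actual feedback term weighting (e.g. Bo1), which the description does not detail:

    from collections import Counter

    def expand_query(base_terms, relevant_docs, k=20):
        """Append the k most frequent new terms from the relevant docs."""
        tf = Counter()
        for doc in relevant_docs:
            tf.update(doc.lower().split())
        new = [t for t, _ in tf.most_common() if t not in base_terms][:k]
        return list(base_terms) + new

The expanded term list would then be wrapped in an Indri query operator such as #combine or #weight.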

ISILRFTF

Participants | Appendix

  • Run ID: ISILRFTF
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 8/27/2011
  • Type: techassist
  • Task: main
  • MD5: 4cbf6a63d1f657c04fdba0230d49fc1f
  • Run description: The notion of relevance was derived from the kickoff call. Next, documents were retrieved using Indri and clustered. One arbitrarily chosen document from each cluster was reviewed for responsiveness, and any cluster whose representative was deemed responsive was chosen as a responsive cluster. Based on the judged relevant documents, we performed Relevance Feedback using Terrier 3.0; the Indri query was then expanded using the feedback terms from Terrier.

ISIRFCT2

Participants | Appendix

  • Run ID: ISIRFCT2
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 8/25/2011
  • Type: techassist
  • Task: main
  • MD5: 545b10179ea567d27c64f4d58f94f428
  • Run description: The notion of relevance was derived from the kickoff call. Next, documents were retrieved using Indri and clustered. One arbitrarily chosen document from each cluster was reviewed for responsiveness, and any cluster whose representative was deemed responsive was chosen as a responsive cluster. Based on the judged relevant documents, we performed Relevance Feedback.

ISIRoTAM

Participants | Appendix

  • Run ID: ISIRoTAM
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 9/6/2011
  • Type: automatic
  • Task: main
  • MD5: 611c2fbe0a11309253181ec5a73f52a1
  • Run description: The judged relevant documents in the mop-up collection were ranked arbitrarily. The remaining documents in the collection were then placed arbitrarily in the run after the judged relevant documents.

ISIROTAM

Participants | Appendix

  • Run ID: ISIROTAM
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 9/6/2011
  • Type: automatic
  • Task: main
  • MD5: 835fc225d2279334c6e4e8c401a84632
  • Run description: The judged relevant documents in the mop-up collection were ranked arbitrarily. The remaining documents in the collection were then placed arbitrarily in the run after the judged relevant documents.

ISIROTTF

Participants | Appendix

  • Run ID: ISIROTTF
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 8/27/2011
  • Type: techassist
  • Task: main
  • MD5: 6183b947f49dda67c3af83e35a0cdb12
  • Run description: The notion of relevance was derived from the kickoff call. Next, documents were retrieved using Indri and clustered. One arbitrarily chosen document from each cluster was reviewed for responsiveness, and any cluster whose representative was deemed responsive was chosen as a responsive cluster. Based on the judged relevant documents, we performed Relevance Feedback using Terrier 3.0 and Indri.

ISIRoTTF

Participants | Appendix

  • Run ID: ISIRoTTF
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 8/27/2011
  • Type: techassist
  • Task: main
  • MD5: f3b02ed8f41ae109d4dd48e99cd6a0ba
  • Run description: The notion of relevance was derived from the kickoff call. Next, documents were retrieved using Indri and clustered. One arbitrarily chosen document from each cluster was reviewed for responsiveness, and any cluster whose representative was deemed responsive was chosen as a responsive cluster. Based on the judged relevant documents, we performed Relevance Feedback using Terrier 3.0 and Indri.

ISITrFAM

Participants | Appendix

  • Run ID: ISITrFAM
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 9/6/2011
  • Type: automatic
  • Task: main
  • MD5: f4e94e1ff076fbdfcb9e196bbbb72513
  • Run description: Based on the judged relevant documents in the mop-up collection, we performed Relevance Feedback using Terrier 3.0.

ISITRFAM

Participants | Appendix

  • Run ID: ISITRFAM
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 9/6/2011
  • Type: automatic
  • Task: main
  • MD5: 8047934338255773fdd50f367464ef85
  • Run description: Based on the judged relevant documents in the mop-up collection, we performed Relevance Feedback using Terrier 3.0.

ISITRFTF

Participants | Appendix

  • Run ID: ISITRFTF
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 8/27/2011
  • Type: techassist
  • Task: main
  • MD5: 767bf745d6b13e16e176e0b19ec97698
  • Run description: The notion of relevance was derived from the kickoff call. Next, documents were retrieved using Indri and clustered. One arbitrarily chosen document from each cluster was reviewed for responsiveness, and any cluster whose representative was deemed responsive was chosen as a responsive cluster. Based on the judged relevant documents, we performed Relevance Feedback using Terrier 3.0.

ISITrFTF

Participants | Appendix

  • Run ID: ISITrFTF
  • Participant: IRISICAL
  • Track: Legal
  • Year: 2011
  • Submission: 8/27/2011
  • Type: techassist
  • Task: main
  • MD5: 59989daad0e33d6b95b969c7833814ab
  • Run description: The notion of relevance was derived from the kickoff call. Next, documents were retrieved using Indri and clustered. One arbitrarily chosen document from each cluster was reviewed for responsiveness, and any cluster whose representative was deemed responsive was chosen as a responsive cluster. Based on the judged relevant documents, we performed Relevance Feedback using Terrier 3.0.

mlbclsA1

Participants | Proceedings | Appendix

  • Run ID: mlbclsA1
  • Participant: unimelb_plus
  • Track: Legal
  • Year: 2011
  • Submission: 8/10/2011
  • Type: automatic
  • Task: main
  • MD5: d7c9cfe496ffc32ec2f9b07807878045
  • Run description: For this initial interim run, just TF*IDF ranking based on topic keywords (see the sketch below).
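
A minimal sketch of TF*IDF ranking against topic keywords; docs is assumed to be the collection as raw text, and scikit-learn stands in for whatever indexing unimelb_plus actually used:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def rank_by_tfidf(docs, topic_keywords):
        vec = TfidfVectorizer(stop_words="english")
        doc_matrix = vec.fit_transform(docs)
        query = vec.transform([" ".join(topic_keywords)])
        scores = cosine_similarity(query, doc_matrix).ravel()
        return scores.argsort()[::-1]      # document indices, best first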

mlbclsAF

Participants | Proceedings | Appendix

  • Run ID: mlbclsAF
  • Participant: unimelb_plus
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: automatic
  • Task: main
  • MD5: a424a7ec05d0b76fce54fa19c288e832
  • Run description: Please see above.

mlblrnTF

Participants | Proceedings | Appendix

  • Run ID: mlblrnTF
  • Participant: unimelb_plus
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: techassist
  • Task: main
  • MD5: ad5a356ac121b5db69ee02f23a981e08
  • Run description: Please see above.

mlblrnTM

Participants | Proceedings | Appendix

  • Run ID: mlblrnTM
  • Participant: unimelb_plus
  • Track: Legal
  • Year: 2011
  • Submission: 9/6/2011
  • Type: automatic
  • Task: main
  • MD5: c89cf3bd43f3f6f0d28cd6b022c4b220
  • Run description: Text-based SVM on mop-up labels (see the sketch below).
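
A minimal sketch of a text-based SVM ranker trained on mop-up labels (1 = responsive, 0 = not); the features and classifier settings are assumptions, not the team's configuration:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    def svm_rank(train_texts, train_labels, pool_texts):
        vec = TfidfVectorizer(sublinear_tf=True)
        clf = LinearSVC().fit(vec.fit_transform(train_texts), train_labels)
        margins = clf.decision_function(vec.transform(pool_texts))
        return margins.argsort()[::-1]     # pool indices, most responsive first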

otL11BT1

Participants | Proceedings | Appendix

  • Run ID: otL11BT1
  • Participant: ot
  • Track: Legal
  • Year: 2011
  • Submission: 8/1/2011
  • Type: techassist
  • Task: main
  • MD5: b16e000aa6529adb969eca71a6e5dc07
  • Run description: Boolean-based run.

otL11BT2

Participants | Proceedings | Appendix

  • Run ID: otL11BT2
  • Participant: ot
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: techassist
  • Task: main
  • MD5: 86bacaafc4e17d7576b21da1399be2df
  • Run description: Boolean-based run whose probability estimates were improved using the 100 example judgments per topic (a hypothetical calibration sketch follows).
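
One plausible, and entirely hypothetical, way to improve probability estimates from a small judgment sample is to estimate per-clause precision with Laplace smoothing; the run description does not say how ot actually did this:

    def clause_precision(judgments, matches):
        """judgments: {doc_id: bool}; matches: {clause: set of doc_ids}."""
        est = {}
        for clause, doc_ids in matches.items():
            judged = [d for d in doc_ids if d in judgments]
            rel = sum(judgments[d] for d in judged)
            est[clause] = (rel + 1) / (len(judged) + 2)   # Laplace smoothing
        return est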

otL11BTM

Participants | Proceedings | Appendix

  • Run ID: otL11BTM
  • Participant: ot
  • Track: Legal
  • Year: 2011
  • Submission: 9/5/2011
  • Type: techassist
  • Task: main
  • MD5: ab76b790766537233075fbeffcc6957c
  • Run description: Boolean-based run with the mop-up relevant documents moved to the front, and whose probability estimates were improved using an earlier sample of 100 example judgments per topic.

otL11FT1

Participants | Proceedings | Appendix

  • Run ID: otL11FT1
  • Participant: ot
  • Track: Legal
  • Year: 2011
  • Submission: 8/26/2011
  • Type: techassist
  • Task: main
  • MD5: 73e6b6bd9512c8465dd3faf7b7d831c7
  • Run description: This run just used the terms in the topic statement.

otL11FT2

Participants | Proceedings | Appendix

  • Run ID: otL11FT2
  • Participant: ot
  • Track: Legal
  • Year: 2011
  • Submission: 8/27/2011
  • Type: techassist
  • Task: main
  • MD5: 5374e087bceec31cbc5abeb039281f37
  • Run description: Pure relevance feedback run based on 100 example judgments per topic (no use of topic statements).

otL11FTM

Participants | Proceedings | Appendix

  • Run ID: otL11FTM
  • Participant: ot
  • Track: Legal
  • Year: 2011
  • Submission: 9/5/2011
  • Type: techassist
  • Task: main
  • MD5: 24d368f5baee2fcb06834ed23ee3e083
  • Run description: Pure relevance feedback run based on the 2000+ example mopup judgments per topic (no use of topic statements).

otL11HT1

Participants | Proceedings | Appendix

  • Run ID: otL11HT1
  • Participant: ot
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: techassist
  • Task: main
  • MD5: 60d71319c84cf1f0820ef403d3460782
  • Run description: Fusion run of otL11BT1 and otL11FT1.

otL11HT2

Participants | Proceedings | Appendix

  • Run ID: otL11HT2
  • Participant: ot
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: techassist
  • Task: main
  • MD5: ace8794f9dde78ca4a0cd3037358cdfb
  • Run description: Fusion run of otL11BT2 and otL11FT2.

otL11HTM

Participants | Proceedings | Appendix

  • Run ID: otL11HTM
  • Participant: ot
  • Track: Legal
  • Year: 2011
  • Submission: 9/5/2011
  • Type: techassist
  • Task: main
  • MD5: d9b6c49aef885f9571b2c101ed07afe3
  • Run description: Fusion run of otL11BTM and otL11FTM.

priindA1

Participants | Proceedings | Appendix

  • Run ID: priindA1
  • Participant: PRIS
  • Track: Legal
  • Year: 2011
  • Submission: 7/19/2011
  • Type: automatic
  • Task: main
  • MD5: f5e541c25ae65b0467e00f4bf67533c8
  • Run description: We use Indri as the search tool, TF and IDF as features, and edit distance as the similarity function (a Levenshtein sketch follows).
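
A minimal sketch of the edit-distance similarity mentioned above (classic Levenshtein dynamic programming); how PRIS applied it between queries and documents is not specified:

    def edit_distance(a, b):
        """Levenshtein distance between sequences a and b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]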

priindA2

Participants | Proceedings | Appendix

  • Run ID: priindA2
  • Participant: PRIS
  • Track: Legal
  • Year: 2011
  • Submission: 8/16/2011
  • Type: automatic
  • Task: main
  • MD5: 51aea2a80d82fe7780912a6a403f4cd6
  • Run description: We use the feedback judgments as our training data, use a Bayes-based algorithm to assign every candidate document a probability, and sort by that probability as our final result (see the sketch below).
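
A minimal sketch of the Bayes-based ranking, with multinomial naive Bayes standing in for whatever Bayes variant PRIS used:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    def bayes_rank(train_texts, train_labels, candidate_texts):
        """train_labels: 1 = responsive, 0 = not; returns indices, best first."""
        vec = CountVectorizer()
        clf = MultinomialNB().fit(vec.fit_transform(train_texts), train_labels)
        probs = clf.predict_proba(vec.transform(candidate_texts))[:, 1]
        return probs.argsort()[::-1]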

priindA3

Participants | Appendix

  • Run ID: priindA3
  • Participant: BUPT_WILDCAT
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: automatic
  • Task: main
  • MD5: a7dd59096cbd8e34b2f834afa504b8b3
  • Run description: Relevance feedback and Indri.

priindAM

Participants | Appendix

  • Run ID: priindAM
  • Participant: BUPT_WILDCAT
  • Track: Legal
  • Year: 2011
  • Submission: 9/7/2011
  • Type: automatic
  • Task: main
  • MD5: 8382c66b97854506dc048453b61ffc64
  • Run description: Relevance feedback and Indri.

recommind03T

Participants | Proceedings | Appendix

  • Run ID: recommind03T
  • Participant: Recommind
  • Track: Legal
  • Year: 2011
  • Submission: 8/31/2011
  • Type: techassist
  • Task: main
  • MD5: 7339aefc87bcf033c29fa92afb3b69c0
  • Run description: A combination of Boolean searches, phrase extraction, conceptual analysis, and random sampling was used to identify potentially responsive documents. These documents were then reviewed for responsiveness. Documents found to be responsive were added to a seed set and trained on using proprietary machine learning techniques. Documents returned from the training that had a high computer-generated score were then passed on for human review. The training and review process was then repeated.

recommind04T

Participants | Proceedings | Appendix

  • Run ID: recommind04T
  • Participant: Recommind
  • Track: Legal
  • Year: 2011
  • Submission: 9/7/2011
  • Type: techassist
  • Task: main
  • MD5: de9bdb90d57673439fd13b25384c4b95
  • Run description: A combination of Boolean searches, phrase extraction, conceptual analysis, and random sampling was used to identify potentially responsive documents. These documents were then reviewed for responsiveness. Documents found to be responsive were added to a seed set and trained on using proprietary machine learning techniques. Documents returned from the training that had a high computer-generated score were then passed on for human review. The training and review process was then repeated.

tcdicskwA1

Participants | Proceedings | Appendix

  • Run ID: tcdicskwA1
  • Participant: TCDI
  • Track: Legal
  • Year: 2011
  • Submission: 8/22/2011
  • Type: automatic
  • Task: main
  • MD5: 0f3ba28215229d2c0ed5a8b82ec1bbfb
  • Run description: This is an automated baseline to see how effective an automatically derived concept search + keyword + WordNet + bigram approach is, with no exemplar requirement, compared with categorization methods that require exemplars.

tcdihentA3

Participants | Proceedings | Appendix

  • Run ID: tcdihentA3
  • Participant: TCDI
  • Track: Legal
  • Year: 2011
  • Submission: 8/29/2011
  • Type: automatic
  • Task: main
  • MD5: ef04a45c5220de4d369cdfcd7e9ce832
  • Run description: WordNet, LSI, and bigram features, plus 40 yes/no responsiveness calls from the TA for each topic, with an emphasis on feature building.

tcdilentA2

Participants | Proceedings | Appendix

  • Run ID: tcdilentA2
  • Participant: TCDI
  • Track: Legal
  • Year: 2011
  • Submission: 8/29/2011
  • Type: automatic
  • Task: main
  • MD5: 7739aa797375236b5292a921d2a4e8ef
  • Run description: WordNet, LSI, and bigram features, plus 40 yes/no responsiveness calls from the TA for each topic, with an emphasis on feature building.

tcdinokaAF

Participants | Proceedings | Appendix

  • Run ID: tcdinokaAF
  • Participant: TCDI
  • Track: Legal
  • Year: 2011
  • Submission: 8/30/2011
  • Type: automatic
  • Task: main
  • MD5: 00a5b3cbfe22a552d213308108cf1cc0
  • Run description: This run is the control, taking no account of TA assessments (compare with runs 1-3). Also, the keyword filter used in runs 1-3 is not applied. So I expect this run to be high recall but low precision.

URS205A1

Participants | Proceedings | Appendix

  • Run ID: URS205A1
  • Participant: URSINUS
  • Track: Legal
  • Year: 2011
  • Submission: 8/18/2011
  • Type: automatic
  • Task: main
  • MD5: 71d308516d37239528c70af6618dfd6c
  • Run description: Term frequency-inverse document frequency weighting is applied to the term-document matrix. Then LSI with 205 singular values is applied with a 0.25 weight, and vector-space retrieval with a 0.75 weight, to get the final scores (see the EDLSI sketch below). This run differs from the later runs in that it does not use any query expansion, because we have no information yet about which documents are relevant.
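
A minimal sketch of the weighted EDLSI combination described above (0.25 * LSI with 205 singular values + 0.75 * plain vector-space similarity); the TF-IDF weighting and scikit-learn SVD are stand-ins for URSINUS's actual pipeline, and the vocabulary is assumed to exceed k terms:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    def edlsi_scores(docs, query, k=205, w_lsi=0.25):
        vec = TfidfVectorizer()
        D = vec.fit_transform(docs)            # tf-idf term-document matrix
        q = vec.transform([query])
        vsm = cosine_similarity(q, D).ravel()  # vector-space component
        svd = TruncatedSVD(n_components=k).fit(D)
        lsi = cosine_similarity(svd.transform(q), svd.transform(D)).ravel()
        return w_lsi * lsi + (1.0 - w_lsi) * vsm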

URS205A3

Participants | Proceedings | Appendix

  • Run ID: URS205A3
  • Participant: URSINUS
  • Track: Legal
  • Year: 2011
  • Submission: 8/29/2011
  • Type: automatic
  • Task: main
  • MD5: 2adc68d017147125b3c141736952ccb6
  • Run description: We used a combination of LSI and vector-space retrieval techniques, called Essential Dimensions of LSI (EDLSI), combined with selective query expansion based on the determinations from the TAs.

URS205AM

Participants | Proceedings | Appendix

  • Run ID: URS205AM
  • Participant: URSINUS
  • Track: Legal
  • Year: 2011
  • Submission: 9/7/2011
  • Type: automatic
  • Task: main
  • MD5: f2671db12f28b6884ba26d117add0e97
  • Run description: This is the mopup run and uses all determination requests from all teams.

URS222A2

Participants | Proceedings | Appendix

  • Run ID: URS222A2
  • Participant: URSINUS
  • Track: Legal
  • Year: 2011
  • Submission: 8/24/2011
  • Type: automatic
  • Task: main
  • MD5: bf6662af58b3de037e817226c1b4f4b8
  • Run description: Differs from the first run by way of query expansion using the first set of determinations. This is the second of two runs, but since I haven't received feedback on my first determination set for topic 403, that topic is not included. In order to get more determinations for 401 and 402, I need to send in interim submissions.

URS403A2

Participants | Proceedings | Appendix

  • Run ID: URS403A2
  • Participant: URSINUS
  • Track: Legal
  • Year: 2011
  • Submission: 8/25/2011
  • Type: automatic
  • Task: main
  • MD5: 046b7134380cd10675e37fd31f720a44
  • Run description: This run is just topic 403, since a run with 401 and 402 after the first 100 determinations has already been sent in. This is EDLSI using a query vector made up of the average of the document vectors of the docs we know to be relevant from the first determination set.

USFDSET

Participants | Proceedings | Appendix

  • Run ID: USFDSET
  • Participant: USF_ISDS
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: techassist
  • Task: main
  • MD5: dd6ada00cad245066b61f965fe61d026
  • Run description: In this case we tuned our classifier based on the responses to our 100 submitted documents. We then evaluated the 27 documents from our submission that were judged non-responsive to set our Elimination Component operator. This operator is our new development for this year. Last year we focused on recall; this year our focus is on precision.

USFEOLT

Participants | Proceedings | Appendix

  • Run ID: USFEOLT
  • Participant: USF_ISDS
  • Track: Legal
  • Year: 2011
  • Submission: 8/22/2011
  • Type: techassist
  • Task: main
  • MD5: c322eb42f29d88cd91c0eac1760d31b2
  • Run description: Context-based search terms with filtering characteristics, with a focus on the EOL search term.

USFMOPT

Participants | Proceedings | Appendix

  • Run ID: USFMOPT
  • Participant: USF_ISDS
  • Track: Legal
  • Year: 2011
  • Submission: 9/4/2011
  • Type: techassist
  • Task: main
  • MD5: be3c672f1635e7b71dc748b951416fd4
  • Run description: In this case we tuned our classifier based on the responses to our 100 submitted documents. We then evaluated the 27 documents from our submission that were judged non-responsive to set our Elimination Component operator. This operator is our new development for this year. Last year we focused on recall; this year our focus is on precision.

UWABASA1

Participants | Proceedings | Appendix

  • Run ID: UWABASA1
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 8/23/2011
  • Type: automatic
  • Task: main
  • MD5: edf75bf0546cf6b66eb1ffbe62ead3d8
  • Run description: Baseline run with Okapi, used to measure the increase in performance of the later runs (a BM25 sketch follows).
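
A minimal sketch of Okapi BM25 scoring for reference; k1 and b are common defaults, not necessarily the parameters the waterloo team used:

    import math
    from collections import Counter

    def bm25(query_terms, doc, docs, k1=1.2, b=0.75):
        """doc and each entry of docs are token lists."""
        N = len(docs)
        avgdl = sum(len(d) for d in docs) / N
        tf = Counter(doc)
        score = 0.0
        for t in query_terms:
            df = sum(1 for d in docs if t in d)
            idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)
            f = tf[t]
            score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
        return score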

UWABASA2

Participants | Proceedings | Appendix

  • Run ID: UWABASA2
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: automatic
  • Task: main
  • MD5: b1f988a9c31c3192c6c9810c24e9c9ba
  • Run description: Okapi relevance feedback using first-round TA determinations on the top 100 documents from pseudo-relevance feedback.

UWABASA3

Participants | Proceedings | Appendix

  • Run ID: UWABASA3
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: automatic
  • Task: main
  • MD5: d5c2ead23bc1dc8597760f428d51bc27
  • Run description: Okapi relevance feedback using first- and second-round TA determinations on the top 200 documents from pseudo-relevance feedback.

UWABASA4

Participants | Proceedings | Appendix

  • Run ID: UWABASA4
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: automatic
  • Task: main
  • MD5: 579a5a4e422661d1dd9aff538e79767c
  • Run description: Okapi relevance feedback using first-, second-, and third-round TA determinations on the top 300 documents from pseudo-relevance feedback.

UWABASAF

Participants | Proceedings | Appendix

  • Run ID: UWABASAF
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: automatic
  • Task: main
  • MD5: 8154d148780c2868edfec32da0ce66ee
  • Run description: Clone of UWABASA4 for administrative reasons. Okapi relevance feedback on first-, second-, and third-round TA documents.

UWABASAM

Participants | Proceedings | Appendix

  • Run ID: UWABASAM
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 9/6/2011
  • Type: automatic
  • Task: main
  • MD5: dca738a8e78160280ea29898da616c5c
  • Run description: Okapi relevance feedback on all TA relevance determinations.

UWALINA2

Participants | Proceedings | Appendix

  • Run ID: UWALINA2
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: automatic
  • Task: main
  • MD5: 463908626ef35b928a0894080be861cd
  • Run description: Linear regression classifier trained on first-round TA documents and the top-10 pseudo-relevance feedback documents from the Okapi runs (see the sketch below).
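
A minimal sketch of the linear-regression ranking: fit on TA-judged documents (1 = responsive, 0 = not) together with pseudo-relevant seeds, then rank the pool by predicted value. Ridge stands in as a regularized least-squares variant that accepts sparse input; the features are assumptions:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge

    def linreg_rank(train_texts, train_labels, pool_texts):
        vec = TfidfVectorizer()
        reg = Ridge().fit(vec.fit_transform(train_texts), train_labels)
        preds = reg.predict(vec.transform(pool_texts))
        return preds.argsort()[::-1]       # pool indices, best first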

UWALINA3

Participants | Proceedings | Appendix

  • Run ID: UWALINA3
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: automatic
  • Task: main
  • MD5: 24336c7d495b1f4e46f38929bf1de28f
  • Run description: Linear regression classifier trained on first- and second-round TA documents and the top-10 pseudo-relevance feedback documents from the Okapi runs.

UWALINA4

Participants | Proceedings | Appendix

  • Run ID: UWALINA4
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: automatic
  • Task: main
  • MD5: cf7b595f7b5e35e8a0652ac78ff51d8e
  • Run description: Linear regression classifier trained on first-, second-, and third-round TA documents and the top-10 pseudo-relevance feedback documents from the Okapi runs.

UWALINAF

Participants | Proceedings | Appendix

  • Run ID: UWALINAF
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: automatic
  • Task: main
  • MD5: f32886607d44b40ff6421d049b36bf4e
  • Run description: Clone of UWALINA4 for administrative reasons. Linear regression classifier trained on first-, second-, and third-round TA documents and the top-10 pseudo-relevance feedback documents from the Okapi runs.

UWALINAM

Participants | Proceedings | Appendix

  • Run ID: UWALINAM
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 9/6/2011
  • Type: automatic
  • Task: main
  • MD5: 4354a6e4ba57c2f96decde1b2835dd60
  • Run description: Linear regression classifier trained on all TA relevance determinations.

UWASNAA1

Participants | Proceedings | Appendix

  • Run ID: UWASNAA1
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: automatic
  • Task: main
  • MD5: fc55c34b9bc86dfa43c1b4b43b1530a3
  • Run description: Okapi pseudo-relevance feedback combined with social network analysis of document senders and receivers. Documents without sender and receiver tags are assigned only the Okapi probability (a blending sketch follows).
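
A minimal sketch of one way to blend an Okapi score with a sender/receiver signal: score each person by, say, the fraction of their judged mail that was relevant, and mix that with the retrieval score. The fallback rule for untagged documents follows the description; the blend weight and prior are assumptions:

    def blended_score(doc, okapi_prob, person_prior, alpha=0.7):
        """doc: {'senders': [...], 'receivers': [...]}; person_prior: {name: score}."""
        people = doc.get("senders", []) + doc.get("receivers", [])
        if not people:                     # no sender/receiver tags:
            return okapi_prob              # Okapi probability only
        sna = sum(person_prior.get(p, 0.0) for p in people) / len(people)
        return alpha * okapi_prob + (1.0 - alpha) * sna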

UWASNAA2

Participants | Proceedings | Appendix

  • Run ID: UWASNAA2
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: automatic
  • Task: main
  • MD5: 34fbfb57fba39c8fc974af0490ad46fe
  • Run description: Okapi relevance feedback on first-round TA documents, combined with social network analysis of document senders and receivers. Documents without sender and receiver tags are assigned only the Okapi probability.

UWASNAA3

Participants | Proceedings | Appendix

  • Run ID: UWASNAA3
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: automatic
  • Task: main
  • MD5: b16154ed11883f304180feef8f1cfe09
  • Run description: Okapi relevance feedback on first- and second-round TA documents, combined with social network analysis of document senders and receivers. Documents without sender and receiver tags are assigned only the Okapi probability.

UWASNAA4

Participants | Proceedings | Appendix

  • Run ID: UWASNAA4
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: automatic
  • Task: main
  • MD5: e08b9e6050ae8c70d329f5b3f8f4fcfc
  • Run description: Okapi relevance feedback on first-, second-, and third-round TA documents, combined with social network analysis of document senders and receivers. Documents without sender and receiver tags are assigned only the Okapi probability.

UWASNAAF

Participants | Proceedings | Appendix

  • Run ID: UWASNAAF
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 8/28/2011
  • Type: automatic
  • Task: main
  • MD5: 0c686f856419117a246f451f8a4e785e
  • Run description: Clone of UWASNAA4 for administrative reasons. Okapi relevance feedback on first-, second-, and third-round TA documents, combined with social network analysis of document senders and receivers. Documents without sender and receiver tags are assigned only the Okapi probability.

UWASNAAM

Participants | Proceedings | Appendix

  • Run ID: UWASNAAM
  • Participant: waterloo
  • Track: Legal
  • Year: 2011
  • Submission: 9/6/2011
  • Type: automatic
  • Task: main
  • MD5: 0dc0ed7d3dc43f5a9340310411f29b97
  • Run description: Okapi relevance feedback on all TA relevance determinations, combined with social network analysis of document senders and receivers.