Runs - Contextual Suggestion 2013

1

Participants | Input | Appendix

  • Run ID: 1
  • Participant: PRIS
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/23/2013
  • Task: main
  • MD5: 8d342203b458085c2eece535d3e7b744
  • Run description: Calculate similarities with TF-IDF, then reduce the dimensionality using an LSI model; a sketch follows below.
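
A minimal sketch of this kind of pipeline, assuming scikit-learn and placeholder texts (the actual PRIS implementation is not shown here): the TF-IDF matrix is projected into an LSI space with TruncatedSVD and candidates are ranked by cosine similarity to the profile documents.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    # Placeholder texts; a real run would use the example descriptions and candidate attraction pages.
    profile_docs = ["description of an attraction the user liked"]
    candidate_docs = ["description of candidate attraction one",
                      "description of candidate attraction two"]

    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(profile_docs + candidate_docs)

    # Reduce the TF-IDF space to a low-dimensional LSI space; run 2 reportedly used a coefficient of 80.
    n_components = min(80, min(tfidf.shape) - 1)
    lsi = TruncatedSVD(n_components=n_components)
    vectors = lsi.fit_transform(tfidf)

    profile_vecs, candidate_vecs = vectors[:len(profile_docs)], vectors[len(profile_docs):]

    # Rank candidates by their average similarity to the profile documents.
    scores = cosine_similarity(candidate_vecs, profile_vecs).mean(axis=1)
    ranking = scores.argsort()[::-1]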

2

Participants | Input | Appendix

  • Run ID: 2
  • Participant: PRIS
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Task: main
  • MD5: b276d5d3ce677c59fff5c2531e73aaa7
  • Run description: Uses the LSI model with a different coefficient (80).

baselineA

Participants | Input | Appendix

  • Run ID: baselineA
  • Participant: UWaterlooCLAC
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Task: main
  • MD5: b8969757523f47d9b53661b7f6828a17
  • Run description: Results from the Google Places API pointing to the Open Web, filtered by whether the attraction has a corresponding URL.

baselineB

Participants | Input | Appendix

  • Run ID: baselineB
  • Participant: UWaterlooCLAC
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Type: clueweb12cs
  • Task: main
  • MD5: 58212ca9d3ea3b52f9fc57b9b431747b
  • Run description: Results from the Google Places API pointing to documents in the ClueWeb12 contextual suggestion subcollection, filtered by whether the attraction has a corresponding DocID.

BOW_V17

Participants | Proceedings | Input | Appendix

  • Run ID: BOW_V17
  • Participant: GeorgetownYang
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Type: clueweb12b13
  • Task: main
  • MD5: af6a68cfa8e1404ce1e83cfb7058f602
  • Run description: category

BOW_V18

Participants | Proceedings | Input | Appendix

  • Run ID: BOW_V18
  • Participant: GeorgetownYang
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Type: clueweb12b13
  • Task: main
  • MD5: 343f164a409c3761787da3d03c26b653
  • Run description: Uses the sample's category to do retrieval.

CIRG_IRDISCOA

Participants | Proceedings | Input | Appendix

  • Run ID: CIRG_IRDISCOA
  • Participant: CIRG_IRDISCO
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Task: main
  • MD5: b59588dad68109760dbed93a8ed76a77
  • Run description: Our runs are based on an item-item similarity metric of the kind normally used in collaborative filtering. Unlike traditional collaborative filtering approaches, however, we make use of the Wikipedia category graph and article graph to compute the similarity between places fetched from Google Places and WikiTravel. The descriptions of the example suggestions given as part of the user profiles are decomposed into n-grams, and from these n-grams we keep those that have a corresponding Wikipedia entry (i.e., a Wikipedia article). We then compute the intersection between these n-grams and the Wikipedia article titles extracted from the n-grams of the returned places' descriptions (obtained via the Google Places API and the Bing API). The resulting intersection (precisely, a set of Wikipedia articles) is used to extract Wikipedia categories up to depth 2, and these categories feed a scoring framework that measures the similarity between example suggestions and returned places. A sketch follows below.
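
A hedged sketch of the article/category-overlap idea, using made-up in-memory stand-ins for the Wikipedia article titles and the category graph (the run itself fetched these from Wikipedia, Google Places and Bing), and a simple additive score that may differ from the run's actual scoring framework.

    from collections import deque

    def expand_categories(articles, category_graph, max_depth=2):
        """Collect Wikipedia categories reachable from the given articles, up to max_depth."""
        seen, frontier = set(), deque((a, 0) for a in articles)
        while frontier:
            node, depth = frontier.popleft()
            if depth == max_depth:
                continue
            for cat in category_graph.get(node, ()):
                if cat not in seen:
                    seen.add(cat)
                    frontier.append((cat, depth + 1))
        return seen

    def overlap_score(example_ngrams, place_ngrams, wikipedia_titles, category_graph):
        """Score a place against an example via shared Wikipedia articles and categories."""
        example_articles = example_ngrams & wikipedia_titles
        place_articles = place_ngrams & wikipedia_titles
        shared_articles = example_articles & place_articles
        shared_categories = (expand_categories(example_articles, category_graph)
                             & expand_categories(place_articles, category_graph))
        # The actual run's scoring framework may weight these components differently.
        return len(shared_articles) + 0.5 * len(shared_categories)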

CIRG_IRDISCOB

Participants | Proceedings | Input | Appendix

  • Run ID: CIRG_IRDISCOB
  • Participant: CIRG_IRDISCO
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Task: main
  • MD5: 69f41ee949d198c48341b265234eedc7
  • Run description: Our runs are based on an item-item similarity metric of the kind normally used in collaborative filtering. Unlike traditional collaborative filtering approaches, however, we make use of the Wikipedia category graph and article graph to compute the similarity between places fetched from Google Places and WikiTravel. The descriptions of the example suggestions given as part of the user profiles are decomposed into n-grams, and from these n-grams we keep those that have a corresponding Wikipedia entry (i.e., a Wikipedia article). We then compute the intersection between these n-grams and the Wikipedia article titles extracted from the n-grams of the returned places' descriptions (obtained via the Google Places API and the Bing API). The resulting intersection (precisely, a set of Wikipedia articles) is used to extract Wikipedia categories up to depth 2, and these categories feed a scoring framework that measures the similarity between example suggestions and returned places (see the sketch under CIRG_IRDISCOA). This particular run assigns a high priority score to locations fetched from WikiTravel.

complexScore

Participants | Proceedings | Input | Appendix

  • Run ID: complexScore
  • Participant: ULugano
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/22/2013
  • Task: main
  • MD5: 84e59f37095598b8daab7b49c25eead2
  • Run description: The main source is Google Places. Descriptions are fetched from the Open Web and Yandex RCA. We used a Naive Bayes classifier (Weka package) with a complex score to train the ranking function. The complex score separates one class from another by computing a score that takes place types into account; more precisely, the description and website weights depend on the place type. A sketch of such a type-dependent score follows below.
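
A minimal illustration of a place-type-dependent score of the kind described; the weight table and place types below are invented for illustration, and in the run such scores fed a Weka Naive Bayes classifier rather than being used directly.

    # Description/website weights per place type; the values are made up for illustration.
    TYPE_WEIGHTS = {
        "museum":     (0.7, 0.3),   # (description weight, website weight)
        "restaurant": (0.4, 0.6),
    }
    DEFAULT_WEIGHTS = (0.5, 0.5)

    def complex_score(place_type, description_score, website_score):
        """Combine description and website evidence with place-type-dependent weights."""
        w_desc, w_site = TYPE_WEIGHTS.get(place_type, DEFAULT_WEIGHTS)
        return w_desc * description_score + w_site * website_score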

csui01

Participants | Input | Appendix

  • Run ID: csui01
  • Participant: fasilkomui
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Task: main
  • MD5: b0abf9bf91595698cd30289cac8b2788
  • Run description: This run searches Yelp and Foursquare for the user's preferred categories, then re-ranks based on the descriptiveness (URL, description, etc.) and attractiveness (rating, review count, etc.) of each place.

csui02

Participants | Input | Appendix

  • Run ID: csui02
  • Participant: fasilkomui
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Task: main
  • MD5: 84519f626be08e78989ed22bd9d55d69
  • Run description: This run searches Yelp and Foursquare data for places based on the user's preferred categories and context, re-ranks based on the descriptiveness (URL, description, etc.) and attractiveness (rating, review count, etc.) of each place, and merges the lists in round-robin fashion to ensure diversity among the top results (see the sketch below).
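
A minimal round-robin merge of the kind described, assuming each input is an already-ranked list of place identifiers; the function name and the cut-off of 50 are illustrative.

    from itertools import zip_longest

    def round_robin_merge(*ranked_lists, limit=50):
        """Interleave ranked lists, keeping the first occurrence of each place."""
        merged, seen = [], set()
        for group in zip_longest(*ranked_lists):
            for place in group:
                if place is not None and place not in seen:
                    seen.add(place)
                    merged.append(place)
                    if len(merged) == limit:
                        return merged
        return merged

    # e.g. round_robin_merge(yelp_results, foursquare_results) -> diversified top-50 list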

DuTH_A

Participants | Proceedings | Input | Appendix

  • Run ID: DuTH_A
  • Participant: DuTH
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/22/2013
  • Task: main
  • MD5: 3e705fffa8909d3bff51305799e065c4
  • Run description: Suggestion model based on the k-nearest neighbors algorithm (k-NN) weighted with tf-idf; a sketch follows below.
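
One plausible reading of a tf-idf-weighted k-NN scorer, sketched with scikit-learn; the choice of k, the cosine weighting, and the rating aggregation are assumptions rather than details taken from the run.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def knn_scores(candidate_texts, example_texts, example_ratings, k=5):
        """Score each candidate by the similarity-weighted average rating of its
        k nearest rated examples in tf-idf space."""
        vec = TfidfVectorizer(stop_words="english")
        X = vec.fit_transform(example_texts + candidate_texts)
        examples, candidates = X[:len(example_texts)], X[len(example_texts):]
        sims = cosine_similarity(candidates, examples)      # one row per candidate
        ratings = np.asarray(example_ratings, dtype=float)
        scores = []
        for row in sims:
            top = row.argsort()[-k:]                        # indices of the k most similar examples
            weights = row[top]
            scores.append(float((weights * ratings[top]).sum() / (weights.sum() or 1.0)))
        return scores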

DuTH_B

Participants | Proceedings | Input | Appendix

  • Run ID: DuTH_B
  • Participant: DuTH
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/22/2013
  • Task: main
  • MD5: b478de336376436ee0a410b6f0e3470d
  • Run description: Suggestion model based on the Rocchio algorithm; a sketch follows below.
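
A minimal Rocchio-style sketch under assumed details (tf-idf vectors, illustrative alpha/beta values): the profile is the weighted difference of the liked and disliked example centroids, and candidates are ranked by similarity to it.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def rocchio_rank(liked_texts, disliked_texts, candidate_texts, alpha=1.0, beta=0.5):
        """Build a Rocchio profile from liked/disliked examples and rank candidates against it."""
        vec = TfidfVectorizer(stop_words="english")
        X = vec.fit_transform(liked_texts + disliked_texts + candidate_texts).toarray()
        n_pos, n_neg = len(liked_texts), len(disliked_texts)
        pos, neg, cand = X[:n_pos], X[n_pos:n_pos + n_neg], X[n_pos + n_neg:]
        profile = alpha * pos.mean(axis=0) - beta * neg.mean(axis=0)
        scores = cosine_similarity(cand, profile.reshape(1, -1)).ravel()
        return np.argsort(scores)[::-1]                     # candidate indices, best first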

IBCosTop1

Participants | Proceedings | Input | Appendix

  • Run ID: IBCosTop1
  • Participant: CWI
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Type: clueweb12full
  • Task: main
  • MD5: aa4cd69bc182f18398cc24e96ac31bb2
  • Run description: First, we extracted candidate documents from ClueWeb12 that mention one of our contexts, and we generated user profiles from the descriptions given in the examples. We then computed the cosine similarity between the user profiles and the candidate documents. Finally, we took the top 50 from the top 1000, keeping documents that have titles and descriptions.

IRIT.ClueWeb

Participants | Proceedings | Input | Appendix

  • Run ID: IRIT.ClueWeb
  • Participant: IRIT
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/23/2013
  • Type: clueweb12cs
  • Task: main
  • MD5: b9cdb2ea276841669bbd945820aa7295
  • Run description: Profiles were computed for the 562 users according to their votes. Users were assigned one or more "categories" from WordNet and Google Places. Documents were retrieved for each profile with Terrier according to a query composed of the user's categories and their weights (used as boosts for the associated query terms). At this point the results are context-independent. Documents are then ranked for each context and user according to their Terrier score and their similarity to the user profile. The description of a suggestion is the snippet of the suggested website.

IRIT.OpenWeb

Participants | Proceedings | Input | Appendix

  • Run ID: IRIT.OpenWeb
  • Participant: IRIT
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/19/2013
  • Task: main
  • MD5: d106a58f3392139a2350b5e8cf2d616b
  • Run description: Profiles were computed for the 562 users according to their votes. Users were assigned one or more "categories" from WordNet and Google Places. Places were retrieved for the 50 contexts with Google Places (plus snippets from Bing). Each place was tagged with "categories" from WordNet and Google Places. For a given context, suggestions are matched to the user's preferences by mapping the user's "categories" onto the places available around this context, with the distance computed between the two GPS coordinates (see the sketch below). The description of a suggestion states the type(s) of place, the address, the distance and travel time (on foot/by car), the nearby Points of Interest (POIs), and a snippet of the suggested website.
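
The GPS distance mentioned above is typically computed with the haversine formula; a small, self-contained version follows (the function name and the use of kilometres are conventional choices, not taken from the run).

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in kilometres between two (latitude, longitude) points."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        dlat, dlon = lat2 - lat1, lon2 - lon1
        a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(a))    # mean Earth radius of about 6371 km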

isirun

Participants | Proceedings | Input | Appendix

  • Run ID: isirun
  • Participant: ISIatTREC
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Task: main
  • MD5: 346f0a582de8faf32ec6796097d7151d
  • Run description: The Google Places API is used to fetch suggestions for each context. Google search is used for the descriptions.

ming_1

Participants | Proceedings | Input | Appendix

  • Run ID: ming_1
  • Participant: PITT
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Task: main
  • MD5: d0cf2d99db3454dc921791459951dd0e
  • Run description: Building on our approaches from last year, our method has three parts: 1) candidate dataset preparation; 2) feature extraction; and 3) ranking. We collect candidate data from Yelp and the Google search engine. A vector space model is used to compute the similarity between each candidate and each example based on their descriptions. Besides this similarity, four other features are extracted: lv2_category (the category from Yelp), lv1_category (a manual classification of the Yelp category), rating (general popularity), and distance (the distance between the suggestion and the user's location). Finally, we fit a linear regression model on last year's judged data to obtain the weight of each feature and compute the score of each candidate suggestion for ranking (see the sketch below).
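
A compact sketch of the feature-based linear regression ranker; the feature encoding and the toy training rows are invented, since the run description does not specify them.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Toy judged examples standing in for "last year's" data; each row is
    # [similarity, lv1_category_id, lv2_category_id, rating, distance_km].
    X_train = np.array([[0.82, 3, 17, 4.5, 1.2],
                        [0.10, 1,  4, 3.0, 8.7],
                        [0.55, 2,  9, 4.0, 3.1]])
    y_train = np.array([1.0, 0.0, 1.0])          # relevance judgments

    model = LinearRegression().fit(X_train, y_train)

    # Score new candidate suggestions; higher scores are ranked earlier.
    X_candidates = np.array([[0.64, 3, 17, 4.0, 2.5]])
    scores = model.predict(X_candidates)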

ming_2

Participants | Proceedings | Input | Appendix

  • Run ID: ming_2
  • Participant: PITT
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Task: main
  • MD5: 16a3f5a375023888604271d2a7c59352
  • Run description: Building on our approaches from last year, our method has three parts: 1) candidate dataset preparation; 2) feature extraction; and 3) ranking. We collect candidate data from Yelp and the Google search engine. A vector space model is used to compute the similarity between each candidate and each example based on their descriptions. Besides this similarity, four other features are extracted: lv2_category (the category from Yelp), lv1_category (a manual classification of the Yelp category), rating (general popularity), and distance (the distance between the suggestion and the user's location). Finally, we fit a linear regression model on last year's judged data to obtain the weight of each feature and compute the score of each candidate suggestion for ranking.

run01

Participants | Input | Appendix

  • Run ID: run01
  • Participant: ICMC_USP
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/23/2013
  • Type: clueweb12cs
  • Task: main
  • MD5: 467726cad6bc53abf4f0cfe88788c030
  • Run description: We extracted interesting and non-interesting keywords from the examples and, based on them, created personalized queries for each user. Then we used the Lucene search engine to query the set of attractions based on the descriptions available on their websites. The document set was built from ClueWeb12-CS, filtered using the Google Places API.

RUN1

Participants | Proceedings | Input | Appendix

  • Run ID: RUN1
  • Participant: ICTNET
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Type: clueweb12cs
  • Task: main
  • MD5: 935a3693a287e591e8aaeb5ef13e7f3b
  • Run description: Results based on geographical information.

RUN2

Participants | Proceedings | Input | Appendix

  • Run ID: RUN2
  • Participant: ICTNET
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Type: clueweb12cs
  • Task: main
  • MD5: 4c7e0e07e3468266fe2f8ec34b1f4985
  • Run description: Geographical information + users' preferences.

simpleScore

Participants | Proceedings | Input | Appendix

  • Run ID: simpleScore
  • Participant: ULugano
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/22/2013
  • Task: main
  • MD5: 616df0152c08fd5b68af8d61c7b315ea
  • Run description: The main source is Google Places. Descriptions are fetched from the Open Web and Yandex RCA. We used a Naive Bayes classifier with a simple score to train the ranking function. The simple score separates one class from another by computing a single score (in contrast to the place-type-dependent complex score of the complexScore run).

UAmsTF30WU

Participants | Proceedings | Input | Appendix

  • Run ID: UAmsTF30WU
  • Participant: UAmsterdam
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Task: main
  • MD5: 4541ad7ee715ab9d77ad6e01dfcf80a7
  • Run description: Suggestions from the Wikitravel pages of all US cities are ranked based on the descriptions of the provided examples. For a particular profile, the rankings for the positive (score 3 or 4) examples are merged into a single ranked list of suggestions per user. The suggestions are then filtered by location.

udel_run_D

Participants | Input | Appendix

  • Run ID: udel_run_D
  • Participant: udel
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/23/2013
  • Task: main
  • MD5: 5aa7376cc5410c3ef5905223c31c83be
  • Run description: The Yelp API was used for training purposes: each example suggestion was categorized using the Yelp API. Based upon each user's preferences, keywords were appended to each profile. For each keyword and context combination, results were retrieved using the Google Places API. A round-robin approach was used to maintain diversity while selecting the top 50 suggestions for each user.

udel_run_SD

Participants | Input | Appendix

  • Run ID: udel_run_SD
  • Participant: udel
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/23/2013
  • Task: main
  • MD5: 111b1c0ed51c573903d02684cdbf0f7b
  • Run description: The Yelp API was used for training purposes: each example suggestion was categorized using the Yelp API. Based upon each user's preferences, keywords were appended to each profile. For each keyword and context combination, results were retrieved using the Google Places API. Each list was sorted by Google Places rating. A round-robin approach was used to maintain diversity while selecting the top 50 suggestions for each user.

UDInfoCS1

Participants | Proceedings | Input | Appendix

  • Run ID: UDInfoCS1
  • Participant: udel_fang
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/22/2013
  • Task: main
  • MD5: eb32aecb0165133a239c992eb52e07dd
  • Run description: Candidates are crawled from Yelp. User profiles are constructed based on the summary reviews in order to generalize what a user likes or dislikes, and the candidate suggestions are then ranked based on their similarity to the user profiles. Descriptions are generated based on the category, meta-description and the content of the website, reviews, and examples that the user liked.

UDInfoCS2

Participants | Proceedings | Input | Appendix

  • Run ID: UDInfoCS2
  • Participant: udel_fang
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/22/2013
  • Task: main
  • MD5: ede3249449a7235706aa155ec85656f7
  • Run description: Candidates are crawled from Yelp. User profiles are constructed based on the unique terms from the reviews. Candidates are ranked based on their similarity to the user profiles. Descriptions are generated based on the category, meta-description of the website, reviews and examples.

uncsils_base

Participants | Proceedings | Input | Appendix

  • Run ID: uncsils_base
  • Participant: UNC_SILS
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/23/2013
  • Task: main
  • MD5: cc7d6f38046f41cb07cd06acece6ed41
  • Run description: Each candidate recommendation was scored using the weighted-average rating given to documents in the user's profile. The weights given to documents in the profile were computed using the cosine similarity between the candidate recommendation and the profile document. The cosine similarity was computed using tf-idf term weights. A sketch of this scoring rule follows below.
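
The scoring rule described above, written out compactly; the function and variable names are illustrative, and the vectors are assumed to be tf-idf term-weight vectors.

    import numpy as np

    def score_candidate(candidate_vec, profile_vecs, profile_ratings):
        """Cosine-similarity-weighted average of the ratings in the user's profile."""
        sims = (profile_vecs @ candidate_vec) / (
            np.linalg.norm(profile_vecs, axis=1) * np.linalg.norm(candidate_vec) + 1e-12)
        return float(np.dot(sims, profile_ratings) / (sims.sum() + 1e-12))

    # Example: three profile documents with ratings 4, 1 and 3 (the tf-idf vectors are made up).
    profile_vecs = np.array([[0.1, 0.9, 0.0], [0.8, 0.0, 0.2], [0.3, 0.3, 0.4]])
    print(score_candidate(np.array([0.2, 0.7, 0.1]), profile_vecs, [4, 1, 3]))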

uncsils_param

Participants | Proceedings | Input | Appendix

  • Run ID: uncsils_param
  • Participant: UNC_SILS
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/23/2013
  • Task: main
  • MD5: 34ec8630b8cd679989da74e661f55c9d
  • Run description: Each candidate recommendation was scored using the weighted-average rating given to documents in the user's profile. The weights given to documents in the profile were computed using the cosine similarity between the candidate recommendation and the profile document. The cosine similarity was computed using tf-idf term weights. The score was boosted using the rating given to the profile document most similar to the candidate recommendation.

uogTrCFP

Participants | Proceedings | Input | Appendix

  • Run ID: uogTrCFP
  • Participant: uogTr
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Task: main
  • MD5: 9a875b74c05c78b1d2f97aa0f7fee921
  • Run description: Ranks venues using similarity measures between the user profile and the venue description, combined with the popularity of the venue.

uogTrCFX

Participants | Proceedings | Input | Appendix

  • Run ID: uogTrCFX
  • Participant: uogTr
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/24/2013
  • Task: main
  • MD5: 4946a01a939eeb52f5ff133bb8390087
  • Run description: This run uses a diversification approach to re-rank the venues for a given user so that they cover the categories of the user's interests (see the sketch below).
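
A hedged sketch of one simple greedy category-coverage re-ranker; this illustrates the general idea and is not necessarily the diversification algorithm uogTr actually used.

    def diversify(ranked_venues, venue_categories, interest_categories, limit=50):
        """Greedy re-ranking: prefer the highest-ranked venue that still adds an
        uncovered category of interest; fall back to plain rank order otherwise.
        venue_categories maps venue -> set of categories; interest_categories is a set."""
        covered, reranked, remaining = set(), [], list(ranked_venues)
        while remaining and len(reranked) < limit:
            pick = next((v for v in remaining
                         if venue_categories.get(v, set()) & (interest_categories - covered)),
                        remaining[0])
            remaining.remove(pick)
            reranked.append(pick)
            covered |= venue_categories.get(pick, set()) & interest_categories
        return reranked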

york13cr1

Participants | Proceedings | Input | Appendix

  • Run ID: york13cr1
  • Participant: YORK
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/23/2013
  • Task: main
  • MD5: 39634453a67873d6622ce9184f68c07b
  • Run description: This run is based on the Google Places API, where only the user context is considered for returning the top 50 attractions.

york13cr2

Participants | Proceedings | Input | Appendix

  • Run ID: york13cr2
  • Participant: YORK
  • Track: Contextual Suggestion
  • Year: 2013
  • Submission: 7/23/2013
  • Task: main
  • MD5: e3edfc0eda68883fdf45480253a59428
  • Run description: This run exploits a semantic user profile for personalized recommendation. The user profile is composed of a set of categories drawn from the Open Directory Project (ODP) ontology. Each category in the user profile is represented with terms based on positive and negative attractions previously rated by the user. For each context-profile pair, we re-rank the top 50 suggestions returned by the Google Places API using the user profile. Each suggestion is scored according to how well it matches each of the categories in the user profile.