Proceedings - Product Search and Recommendation 2025

Georgios Arampatzis, Symeon Symeonidis, Avi Arampatzis

Abstract

In this work, we present the fully lexical and reproducible system developed by the DUTH team for the TREC 2025 Product Search & Recommendation track, aiming to improve performance on task-oriented e-commerce queries. Such queries (e.g., home office makeover, birthday party essentials) often perform poorly in purely lexical retrieval systems because they express high-level user intents rather than concrete product attributes. Our system indexes approximately 1.08M products using Lucene/Pyserini, retrieves with BM25 (tuned to k1=0.9, b=0.4), and bridges the intent–metadata gap through carefully calibrated RM3 pseudo-relevance feedback. For the interactive setting, we automatically generate four PRF-based query reformulations per topic and aggregate complementary signals using weighted Reciprocal Rank Fusion. The system requires neither neural re-ranking nor external resources, runs efficiently on a single CPU node, and produces standard six-column TREC runs with strict de-duplication. Official evaluation results confirm that RM3 and fusion yield consistent improvements over the BM25 baseline across task completion nDCG, MAP, and Essential Recall@1000. These findings highlight that thoughtful lexical reformulation, classical PRF, and simple fusion strategies remain strong and efficient baselines for task-oriented product search.
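The weighted Reciprocal Rank Fusion step described in the abstract can be sketched as follows. This is a minimal illustration, not the DUTH team's code: the function name `weighted_rrf`, the default weights, and the smoothing constant `k=60` (a common RRF default) are assumptions; each document's fused score is the weight-scaled sum of reciprocal ranks across the four PRF-based reformulation runs.

```python
def weighted_rrf(rankings, weights=None, k=60):
    """Fuse several ranked lists with weighted Reciprocal Rank Fusion.

    rankings: list of ranked doc-id lists, best first (e.g., one per
              query reformulation).
    weights:  per-list weights; defaults to 1.0 for every list.
    k:        RRF smoothing constant (60 is a conventional choice).
    """
    if weights is None:
        weights = [1.0] * len(rankings)
    scores = {}
    for ranking, w in zip(rankings, weights):
        for rank, doc_id in enumerate(ranking, start=1):
            # Each list contributes w / (k + rank) to the fused score.
            scores[doc_id] = scores.get(doc_id, 0.0) + w / (k + rank)
    # Sort by fused score descending; break ties by doc id for
    # deterministic, reproducible run files.
    return sorted(scores, key=lambda d: (-scores[d], d))
```

De-duplication comes for free here, since each document accumulates into a single dictionary entry; writing the fused list out in the standard six-column TREC format is then a straightforward formatting step.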

Bibtex
@inproceedings{DUTH-trec2025-papers-proc-5,
    title = {Precision by Design: {RM3} and Fusion in Product Search},
    author = {Georgios Arampatzis and Symeon Symeonidis and Avi Arampatzis},
    booktitle = {Proceedings of the 34th Text {REtrieval} Conference ({TREC} 2025)},
    year = {2025},
    address = {Gaithersburg, Maryland},
    series = {NIST SP xxxx}
}

JBNU at TREC 2025 Product Search and Recommendations Track

Seong-Hyuk Yim, Jae-Young Park, Woo-Seok Choi, Gi-Taek An, Kyung-Soon Lee

Abstract

This paper presents the JBNU team’s participation in the TREC 2025 Product Search and Recommendations Track. For the Search Task, we develop two complementary query reformulation strategies: an LLM-driven method that generates structured Lucene-style reformulations to reduce query ambiguity, and a multimodal approach that leverages a vision–language model (VLM) to extract additional semantic cues from web-sourced images. For the Recommendation Task, we adopt a two-stage architecture in which neural retrieval models (dense and learned sparse) generate candidate products, and relation classification—performed by either an LLM or a fine-tuned BERT model—reranks them as substitutes or complements, with final lists refined through weighted score aggregation. Experimental results show that both LLM-based query reformulation and classification-driven reranking consistently improve effectiveness across tasks. Overall, the study demonstrates that lightweight LLM components, when strategically integrated into retrieval and recommendation pipelines, provide a scalable and robust approach to product understanding in the TREC setting.
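The second stage of the recommendation pipeline, relation-aware reranking with weighted score aggregation, can be sketched as below. All names here are illustrative assumptions: `classify` is a stand-in for the LLM or fine-tuned BERT relation classifier, the relation weights and the `alpha` mixing parameter are invented for the sketch, and stage-1 retrieval scores are assumed normalized to [0, 1].

```python
# Hypothetical relation weights; the abstract names substitute and
# complement as the two target relations.
RELATION_WEIGHT = {"substitute": 1.0, "complement": 0.6, "unrelated": 0.0}

def rerank_by_relation(query, candidates, classify, alpha=0.7):
    """Stage 2: rerank stage-1 candidates using a relation classifier.

    candidates: list of (product_id, retrieval_score) pairs from the
                neural retrieval stage (dense or learned sparse).
    classify:   callable (query, product_id) -> relation label; stands
                in for the LLM or fine-tuned BERT classifier.
    alpha:      weight on the retrieval score vs. the relation signal.
    """
    reranked = []
    for pid, score in candidates:
        label = classify(query, pid)
        # Weighted aggregation of the two signals.
        fused = alpha * score + (1 - alpha) * RELATION_WEIGHT.get(label, 0.0)
        reranked.append((pid, fused, label))
    reranked.sort(key=lambda t: -t[1])
    return reranked
```

With a small `alpha`, a lower-ranked candidate that the classifier labels a substitute can overtake a higher-scoring but unrelated one, which is the behavior the two-stage design is after.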

Bibtex
@inproceedings{JBNU-trec2025-papers-proc-1,
    title = {{JBNU} at {TREC} 2025 Product Search and Recommendations Track},
    author = {Seong-Hyuk Yim and Jae-Young Park and Woo-Seok Choi and Gi-Taek An and Kyung-Soon Lee},
    booktitle = {Proceedings of the 34th Text {REtrieval} Conference ({TREC} 2025)},
    year = {2025},
    address = {Gaithersburg, Maryland},
    series = {NIST SP xxxx}
}