Proceedings - Interactive Knowledge Assistance Track (IKAT) 2025
TREC iKAT 2025: The Interactive Knowledge Assistance Track Overview
Mohammad Aliannejadi, Simon Lupart, Marcel Gohsen, Zahra Abbasiantaeb, Nailia Mirzakhmedova, Johannes Kiesel, Jeffrey Dalton
Abstract
Conversational information seeking has evolved rapidly in the last few years with the development of large language models (LLMs) providing the basis for interpreting and responding in a naturalistic manner to user requests. iKAT emphasizes the creation and research of conversational search agents that adapt responses based on the user’s prior interactions and present context, maintaining a long-term memory of user-system interactions. This means that the same question might yield varied answers, contingent on the user’s profile and preferences. The challenge lies in enabling conversational search agents (CSA) to incorporate personalized context to guide users through the relevant information effectively. iKAT’s third year introduced an interactive conversation task, attracting seven teams and a total of 47 runs. Most of the runs leveraged LLMs in their pipelines for single or multiple query rewriting, and some also adopted agentic pipelines.
Bibtex
@inproceedings{coordinators-trec2025-papers-proc-5,
title = {TREC iKAT 2025: The Interactive Knowledge Assistance Track Overview},
author = {Mohammad Aliannejadi and Simon Lupart and Marcel Gohsen and Zahra Abbasiantaeb and Nailia Mirzakhmedova and Johannes Kiesel and Jeffrey Dalton},
booktitle = {Proceedings of the 34th Text {REtrieval} Conference (TREC 2025)},
year = {2025},
address = {Gaithersburg, Maryland},
series = {NIST SP xxxx}
}
USIIR at TREC 2025 iKAT Track
Lili Lu
- Participant: usiir
- Paper: https://trec.nist.gov/pubs/trec34/papers/usiir.ikat.pdf
- Runs: usiir_run1 | usiir_run2
Abstract
This year’s TREC iKAT track contains several tasks, such as passage ranking, response generation, and Personal Text Knowledge Base (PTKB) statement classification. We focus only on the response generation task due to time and budget limitations. The task is to generate a response based on retrieved passages, given additional context. We submitted two runs for this task, mainly to explore the impact of user profiles on the generation quality of responses. In this short report, we describe the method that was used for generation and present the results.
Bibtex
@inproceedings{usiir-trec2025-papers-proc-1,
title = {USIIR at TREC 2025 iKAT Track},
author = {Lili Lu},
booktitle = {Proceedings of the 34th Text {REtrieval} Conference (TREC 2025)},
year = {2025},
address = {Gaithersburg, Maryland},
series = {NIST SP xxxx}
}
GRILL Lab at TREC 2025: Agentic Iterative Retrieval and Gap-Aware Refinement for TREC IKAT and TREC RAG
Paul Owoicho, Jeff Dalton
- Participant: grilllab
- Paper: https://trec.nist.gov/pubs/trec34/papers/grilllab.ikat.rag.pdf
- Runs: grilllab-larf-finetuned | grilllab-larf-finetuned-10-rounds | grilllab-larf-finetuned-rankllm | grilllab-larf-finetuned-22-rounds | grilllab-agentic-gpt4.1 | grilllab-agentic-gpt4.1-larf | grilllab-agentic-gpt4.1-larf-v2 | grilllab-larf-fine-tuned-judge
Abstract
This paper describes the GRILL Lab’s participation in the TREC 2025 Interactive Knowledge Assistance Track (IKAT) and the Retrieval-Augmented Generation (RAG) track, covering four sub-tasks: IKAT Passage Ranking/Response Generation, IKAT Simulation, RAG Retrieval Only, and RAG Full. Our approach centres on a modular, agentic pipeline that pursues high recall through iterative feedback. The system proceeds in three stages: (1) initial candidate generation via BM25; (2) document expansion using Query-by-Document techniques; and (3) an LLM-driven gap analysis phase in which the model identifies informational gaps and formulates supplementary queries. A key architectural feature is a fine-tuned GPT-4.1 nano binary relevance filter, trained on TREC CAsT 2022 and IKAT 2023 relevance judgments, which prunes irrelevant documents between each stage to contain topic drift.
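The abstract outlines the pipeline only at a high level. As an illustration of the control flow (not the GRILL Lab implementation), a toy version of the retrieve-filter-gap loop might look like the following, where simple term overlap stands in for BM25, the LLM gap analysis, and the fine-tuned GPT-4.1 nano relevance filter, and the Query-by-Document expansion stage is omitted:

```python
# Toy sketch of an iterative, filter-pruned retrieval loop in the spirit
# of the stages described above. Every function here is a hypothetical
# stand-in; a real system would back them with BM25, an LLM gap-analysis
# step, and a fine-tuned binary relevance filter.

def toy_search(query, corpus, top_k=1):
    """Stand-in for stage 1 (BM25): rank documents by term overlap."""
    terms = set(query.lower().split())
    scored = sorted(
        ((len(terms & set(doc.lower().split())), doc) for doc in corpus),
        reverse=True,
    )
    return [doc for score, doc in scored if score > 0][:top_k]

def relevance_filter(query, docs):
    """Stand-in for the binary relevance filter that contains topic drift:
    keep only documents sharing at least one term with the user query."""
    terms = set(query.lower().split())
    return [d for d in docs if terms & set(d.lower().split())]

def gap_queries(query, kept):
    """Stand-in for LLM gap analysis: emit a follow-up query for each
    query term not yet covered by any kept document."""
    covered = set()
    for doc in kept:
        covered |= set(doc.lower().split())
    return [t for t in query.lower().split() if t not in covered]

def iterative_retrieve(query, corpus, rounds=2):
    kept = relevance_filter(query, toy_search(query, corpus))
    for _ in range(rounds):
        for follow_up in gap_queries(query, kept):
            for doc in toy_search(follow_up, corpus):
                if doc not in kept:
                    kept.append(doc)
        kept = relevance_filter(query, kept)  # prune between stages
    return kept
```

With a three-document corpus and the query "apple pie safety", the initial search finds only the "apple pie" document; gap analysis then notices "safety" is uncovered, issues a follow-up query, and the filter keeps the recovered document while rejecting off-topic ones.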
Bibtex
@inproceedings{grilllab-trec2025-papers-proc-1,
title = {GRILL Lab at TREC 2025: Agentic Iterative Retrieval and Gap-Aware Refinement for TREC IKAT and TREC RAG},
author = {Paul Owoicho and Jeff Dalton},
booktitle = {Proceedings of the 34th Text {REtrieval} Conference (TREC 2025)},
year = {2025},
address = {Gaithersburg, Maryland},
series = {NIST SP xxxx}
}
GUIDANCE@TREC iKAT 2025
Ahmed Rayane Kebir, Victor Morand, Pierre-Antoine Lequeu, Zineddine Tighidet, Mitodru Niyogi, Jonah Turner, Rishu Kumar, Benjamin Piwowarski
- Participant: guidance
- Paper: https://trec.nist.gov/pubs/trec34/papers/guidance.ikat.pdf
- Runs: cosine-orconvqa-sum-top10 | agg_true-qrec-mse-sum-top10 | agg_false-qrec-mse-sum-top10 | gpt-clarif-sum-top10
Abstract
This report describes the work conducted by several teams involved in the ANR GUIDANCE project for the iKAT evaluation campaign. The ANR GUIDANCE project aims to advance research in General Purpose Dialogue-assisted Digital Information Access. This involves tackling challenges such as the design and adaptation of large language models (LLMs) for better information access, enhancing LLMs' generalization capabilities to new domains and languages, ensuring the truthfulness of outputs, and addressing the lack of open-access state-of-the-art models. The GUIDANCE project also seeks to unite the French Information Retrieval community and produce open-access resources for model evaluation and development.
Bibtex
@inproceedings{guidance-trec2025-papers-proc-1,
title = {GUIDANCE@TREC iKAT 2025},
author = {Ahmed Rayane Kebir and Victor Morand and Pierre-Antoine Lequeu and Zineddine Tighidet and Mitodru Niyogi and Jonah Turner and Rishu Kumar and Benjamin Piwowarski},
booktitle = {Proceedings of the 34th Text {REtrieval} Conference (TREC 2025)},
year = {2025},
address = {Gaithersburg, Maryland},
series = {NIST SP xxxx}
}
Evaluating Full Dialogue History vs. Summarized Context for Personalized Knowledge Assistance: Findings from the TREC 2025 iKAT Track
Suveyda Yeniterzi, Reyyan Yeniterzi
- Participant: GenAIus
- Paper: https://trec.nist.gov/pubs/trec34/papers/GenAIus.ikat.pdf
- Runs: genaius-genonly-summary-gpt4o | genaius-genonly-full-gpt4o | genaius-full-rewrite | genaius-summary-rewrite
Abstract
We present GenAIus’s participation in the TREC 2025 Interactive Knowledge Assistance Track (iKAT), focusing on personalized and context-aware response generation in both offline and interactive settings. We develop a multi-stage pipeline that integrates conversation summarization, Personal Textual Knowledge Base (PTKB) classification, query rewriting, passage retrieval, and grounded response generation. To study the impact of conversational context modeling, we compare two configurations: conditioning on the full dialogue history versus using an evolving conversation summary updated at each turn. Experimental results show that full-history conditioning yields slightly stronger performance in offline generation and dialogue-level interactive metrics, while summary-based conditioning achieves comparable overall results with improvements in engagement and contextual efficiency. Both approaches rank within the top tier of participating systems, demonstrating the robustness of our pipeline and the viability of structured conversational summarization as a scalable alternative to full-history conditioning.
Bibtex
@inproceedings{GenAIus-trec2025-papers-proc-5,
title = {Evaluating Full Dialogue History vs. Summarized Context for Personalized Knowledge Assistance: Findings from the TREC 2025 iKAT Track},
author = {Suveyda Yeniterzi and Reyyan Yeniterzi},
booktitle = {Proceedings of the 34th Text {REtrieval} Conference (TREC 2025)},
year = {2025},
address = {Gaithersburg, Maryland},
series = {NIST SP xxxx}
}
CFDA & CLIP at TREC iKAT 2025: Enhancing Personalized Conversational Search via Query Reformulation and Rank Fusion
Yu-Cheng Chang, Guan-Wei Yeo, Quah Eugene, Fan-Jie Shih, Yuan-Ching Kuo, Tsung-En Yu, Hung-Chun Hsu, Ming-Feng Tsai, Chuan-Ju Wang
- Participant: cfda
- Paper: https://trec.nist.gov/pubs/trec34/papers/cfda.ikat.pdf
- Runs: cfda-auto-3 | cfda-auto-4 | cfda-auto-1 | cfda-auto-2 | cfda-gen-only-2 | cfda-gen-only-1 | cfda-adarewriter-chiq-llm4cs-splade | cfda-chiq-llm4cs-splade-rrf
Abstract
The 2025 TREC Interactive Knowledge Assistance Track (iKAT) featured both interactive and offline submission tasks. The former requires systems to operate under real-time constraints, making robustness and efficiency as important as accuracy, while the latter enables controlled evaluation of passage ranking and response generation with pre-defined datasets. To address this, we explored query rewriting and retrieval fusion as core strategies. We built our pipelines around Best-of-N selection and Reciprocal Rank Fusion (RRF) strategies to handle different submission tasks. Results show that reranking and fusion improve robustness while revealing trade-offs between effectiveness and efficiency across both tasks.
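The abstract does not spell out the fusion formula. For reference, standard Reciprocal Rank Fusion (Cormack et al., 2009) scores each document as the sum of 1/(k + rank) over the input rankings; a minimal sketch (the function name is ours, not the CFDA & CLIP code) is:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: combine several ranked lists of doc ids.

    Each document's fused score is the sum over lists of 1 / (k + rank),
    with rank starting at 1; k=60 is the constant from the original RRF
    paper. Documents missing from a list simply contribute nothing there.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)
```

For example, fusing ["a", "b", "c"] with ["b", "c", "a"] promotes "b", whose ranks (2 and 1) give it the largest reciprocal-rank sum.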
Bibtex
@inproceedings{cfda-trec2025-papers-proc-1,
title = {CFDA \& CLIP at TREC iKAT 2025: Enhancing Personalized Conversational Search via Query Reformulation and Rank Fusion},
author = {Yu-Cheng Chang and Guan-Wei Yeo and Quah Eugene and Fan-Jie Shih and Yuan-Ching Kuo and Tsung-En Yu and Hung-Chun Hsu and Ming-Feng Tsai and Chuan-Ju Wang},
booktitle = {Proceedings of the 34th Text {REtrieval} Conference (TREC 2025)},
year = {2025},
address = {Gaithersburg, Maryland},
series = {NIST SP xxxx}
}
UvA-IRLab at iKAT25: Exploring Learned Sparse Retrieval and Query Rewriting for Personalized Conversational QA
Simon Lupart, Zahra Abbasiantaeb, Mohammad Aliannejadi
- Participant: uva
- Paper: https://trec.nist.gov/pubs/trec34/papers/uva.ikat.pdf
- Runs: genonly-noptkb | genonly-ptkb | disco-qrecc-norerank | mq4cs-gpt41-bm25 | mq4cs-gpt41-splade | mq4cs-llamaft-splade | uva-gpt5-bm25-debertav3-gpt5 | uva-gpt5-bm25-debertav3-gpt5mini-nopersonal | uva-gpt5mini-bm25-debertav3-gpt5mini | uva-gpt5mini-no-no-gpt5mini
Abstract
The TREC Interactive Knowledge Assistance Track (iKAT) 2025 is the third edition of the iKAT shared task. It focuses on developing conversational assistants that can adapt their responses using personal user knowledge from a Personal Textual Knowledge Base (PTKB). This year’s edition also introduces a new interactive task that evaluates systems using a user simulator. Since query rewriting is an effective way to handle conversational context, we study the use of Large Language Models (LLMs) as query rewriters. In our runs, we generate multiple query aspects using the MQ4CS framework and frontier LLMs (GPT-4.1), as well as open-source LLMs fine-tuned for the task (Llama-8B). We also strengthen the approach with SPLADE-based sparse retrieval and cross-encoder reranking. Finally, we explore a rewrite-free technique based on learned sparse retrieval (LSR) using the DiSCo model. Our results show that multi-aspect query generation improves performance when paired with strong retrieval and reranking models. They also suggest that LLM-based query rewriting can support better personalization in conversational search.
Bibtex
@inproceedings{uva-trec2025-papers-proc-1,
title = {UvA-IRLab at iKAT25: Exploring Learned Sparse Retrieval and Query Rewriting for Personalized Conversational QA},
author = {Simon Lupart and Zahra Abbasiantaeb and Mohammad Aliannejadi},
booktitle = {Proceedings of the 34th Text {REtrieval} Conference (TREC 2025)},
year = {2025},
address = {Gaithersburg, Maryland},
series = {NIST SP xxxx}
}