Meta-Reinforcement Learning with Self-Reflection for Agentic Search
Abstract
MR-Search enables adaptive agentic search through meta-reinforcement learning with self-reflection, improving exploration and generalization across multiple episodes.
This paper introduces MR-Search, an in-context meta-reinforcement learning (RL) formulation for agentic search with self-reflection. Instead of optimizing a policy within a single independent episode with sparse rewards, MR-Search trains a policy that conditions on past episodes and adapts its search strategy across episodes. MR-Search learns to learn a search strategy with self-reflection, allowing search agents to improve in-context exploration at test time. Specifically, MR-Search performs cross-episode exploration by generating explicit self-reflections after each episode and leveraging them as additional context to guide subsequent attempts, thereby promoting more effective exploration at test time. We further introduce a multi-turn RL algorithm that estimates a dense relative advantage at the turn level, enabling fine-grained credit assignment within each episode. Empirical results demonstrate the advantages of MR-Search over RL-based baselines, showing strong generalization and relative improvements of 9.2% to 19.3% across eight benchmarks. Our code and data are available at https://github.com/tengxiao1/MR-Search.
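The two ideas in the abstract can be sketched together: an outer loop that replays the task across episodes, feeding an explicit self-reflection from each attempt back into the next attempt's context, and a turn-level relative advantage that normalizes each turn's reward against the episode. This is a minimal illustrative sketch, not the authors' implementation; every function name (`run_episode`, `reflect`, the toy policy) and the specific normalization are assumptions.

```python
# Hypothetical sketch of MR-Search-style cross-episode self-reflection
# with a turn-level relative advantage. All names and interfaces are
# illustrative assumptions, not the paper's actual code.

from statistics import mean, pstdev


def run_episode(policy, task, context):
    """Stand-in for one multi-turn search episode; returns per-turn rewards.

    A real agent would interleave search queries, retrieved documents,
    and answer attempts; here the policy just emits a reward per turn.
    """
    return policy(task, context)


def reflect(turn_rewards):
    """Stand-in for generating an explicit self-reflection after an episode."""
    return f"reflection: last attempt earned total reward {sum(turn_rewards):.2f}"


def turn_level_relative_advantage(turn_rewards):
    """Normalize each turn's reward against the episode's other turns,
    yielding a dense, turn-level credit signal (one assumed variant)."""
    mu = mean(turn_rewards)
    sigma = pstdev(turn_rewards) or 1.0  # guard against zero variance
    return [(r - mu) / sigma for r in turn_rewards]


def meta_search(policy, task, n_episodes=3):
    """Outer loop: each episode's self-reflection becomes extra context
    for the next attempt, enabling in-context adaptation at test time."""
    context, advantages = [], []
    for _ in range(n_episodes):
        turn_rewards = run_episode(policy, task, context)
        advantages.append(turn_level_relative_advantage(turn_rewards))
        context.append(reflect(turn_rewards))  # cross-episode exploration
    return advantages, context


# Toy policy whose per-turn rewards grow as reflective context accumulates.
toy_policy = lambda task, ctx: [0.1 * (len(ctx) + 1)] * 2 + [1.0]
advs, ctx = meta_search(toy_policy, "who wrote X?")
```

Because the advantage is standardized within the episode, each episode's advantages sum to zero, so credit is assigned relatively across turns rather than from a single sparse terminal reward.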
Community
The following papers were recommended by the Semantic Scholar API
- SE-Search: Self-Evolving Search Agent via Memory and Dense Reward (2026)
- Training Multi-Turn Search Agent via Contrastive Dynamic Branch Sampling (2026)
- How to Train Your Deep Research Agent? Prompt, Reward, and Policy Optimization in Search-R1 (2026)
- Search-P1: Path-Centric Reward Shaping for Stable and Efficient Agentic RAG Training (2026)
- Evaluate-as-Action: Self-Evaluated Process Rewards for Retrieval-Augmented Agents (2026)
- Agentic-R: Learning to Retrieve for Agentic Search (2026)
- SIGHT: Reinforcement Learning with Self-Evidence and Information-Gain Diverse Branching for Search Agent (2026)