# Relace Search

The relace-search model runs 4-12 `view_file` and `grep` tool calls in parallel to explore a codebase and return the files relevant to a user request. In contrast to RAG, relace-search performs agentic multi-step reasoning to produce highly precise results, 4x faster than any frontier model. It is designed to serve as a subagent that passes its findings to an "oracle" coding agent, which orchestrates and performs the rest of the coding task. To use relace-search you need to build an appropriate agent harness and parse the response for relevant information to hand off to the oracle. Read more in the [Relace documentation](https://docs.relace.ai/docs/fast-agentic-search/agent).

## Model Information

- **Organization**: [Relace](/llm.txt)
- **Slug**: relace-search
- **Available at Providers**: 6
- **Release Date**: December 8, 2025

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|-----------------|------------------|------|------|
| [OpenRouter](/llm/openrouter.txt) | Relace Search | 1.00 | 3.00 | | [View](https://openrouter.ai/relace/relace-search-20251208) |
| [ValorGPT](/llm/valorgpt.txt) | RE | | | | [View](https://www.valorgpt.com/models/relace-relace-search) |
| [Kilo Code](/llm/kilocode.txt) | Relace: Relace Search | 1.00 | 3.00 | | [View](https://kilo.ai/models/relace/relace-search) |
| [Blackbox AI](/llm/blackboxai.txt) | blackboxai/relace/relace-search | | | | |
| [LangDB](/llm/langdb.txt) | relace-search | | | | [View](https://langdb.ai/app/models) |
| [WaveSpeed AI](/llm/wavespeed.txt) | relace-search | 1.10 | 3.30 | | |

---

[← Back to all providers](/llm.txt)
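The harness-and-parse workflow described above can be sketched in Python. This is a minimal illustration, not the real contract: the request shape assumes an OpenAI-style chat completions payload (as exposed by providers such as OpenRouter), and the response parser assumes the subagent lists relevant files as plain paths, one per line. Both the prompt wording and the path format are assumptions; consult the Relace documentation for the actual harness interface.

```python
import re


def build_search_request(user_request: str, repo_summary: str) -> dict:
    """Build a chat-completions payload for the relace-search subagent.

    Assumption: an OpenAI-style payload with the OpenRouter model slug;
    the system-prompt wording here is illustrative only.
    """
    return {
        "model": "relace/relace-search-20251208",
        "messages": [
            {
                "role": "system",
                "content": "Find the files relevant to the user request.\n" + repo_summary,
            },
            {"role": "user", "content": user_request},
        ],
    }


def extract_relevant_files(response_text: str) -> list[str]:
    """Parse the subagent's reply for file paths to hand to the oracle agent.

    Assumption: relevant files appear as bare or bulleted paths, one per
    line (e.g. "- src/auth/session.py"); lines without a file extension
    are treated as prose and skipped.
    """
    path_pattern = re.compile(r"^[\w./-]+\.[A-Za-z0-9]+$")
    files = []
    for line in response_text.splitlines():
        candidate = line.strip().lstrip("- ")
        if path_pattern.match(candidate):
            files.append(candidate)
    return files
```

In a real harness, `build_search_request` would be sent to the provider's chat endpoint, and `extract_relevant_files` would run over the final assistant message before forwarding the file list (and any accompanying notes) to the oracle coding agent.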