# vLLora - Debug your agents in realtime

> Your AI Agent Debugger

This file contains links to documentation sections following the llmstxt.org standard.

## Table of Contents

- [Clone and Experiment with Requests](https://vllora.dev/docs/clone-and-experiment): Use **Clone Request** to turn any finished trace into an isolated **Experiment**, so you can safely try new prompts, models, and parameters without...
- [Configuration](https://vllora.dev/docs/configuration): vLLora can be configured via a `config.yaml` file or through command-line arguments. CLI arguments take precedence over config file settings.
- [Custom Endpoints](https://vllora.dev/docs/custom-endpoints): Connect your own endpoint to any provider in vLLora. This allows you to use custom API gateways, self-hosted models, or OpenAI-compatible proxies.
- [Custom Providers and Models](https://vllora.dev/docs/custom-providers): vLLora is designed to be agnostic and flexible, allowing you to register **Custom Providers** (your own API endpoints) and **Custom Models** (speci...
- [Debugging LLM Requests](https://vllora.dev/docs/debug-mode): vLLora supports interactive debugging for LLM requests. When Debug Mode is enabled, vLLora pauses requests before they are sent to the model. You c...
- [Installation](https://vllora.dev/docs/installation): vLLora can be installed via Homebrew, the Rust crate, or by building from source.
- [Introduction](https://vllora.dev/docs/introduction): Debug your AI agents with complete visibility into every request. vLLora works out of the box with OpenAI-compatible endpoints, supports 300+ model...
- [License](https://vllora.dev/docs/license): vLLora is [fair-code](https://faircode.io/) distributed under the **Elastic License 2.0 (ELv2)**.
- [Lucy](https://vllora.dev/docs/lucy): Diagnose agent failures and latency issues directly inside your traces using Lucy.
- [MCP Support](https://vllora.dev/docs/mcp-support): vLLora provides full support for **Model Context Protocol (MCP)** servers, enabling seamless integration with external tools by connecting with MCP...
- [Quickstart](https://vllora.dev/docs/quickstart): Get up and running with vLLora in minutes. This guide will help you install vLLora, set up a provider, and start debugging your AI agents immediately.
- [Roadmap](https://vllora.dev/docs/roadmap): Planned features and improvements for vLLora, including framework support, debugging tools, and API enhancements.
- [vLLora CLI](https://vllora.dev/docs/vllora-cli): Retrieve and inspect LLM traces directly from the terminal using the vLLora CLI. Designed for fast iteration, automation, and local debugging of AI...
- [vllora LLM crate (`vllora_llm`)](https://vllora.dev/docs/vllora-llm): [![Crates.io](https://img.shields.io/crates/v/vllora_llm)](https://crates.io/crates/vllora_llm) [![GitHub](https://img.shields.io/badge/github-repo...
- [Installation](https://vllora.dev/docs/vllora-llm/installation): Install the `vllora_llm` crate from [crates.io](https://crates.io/crates/vllora_llm) using Cargo.
- [License](https://vllora.dev/docs/vllora-llm/license): The `vllora_llm` Rust crate is distributed under the **Apache License 2.0**.
- [Anthropic](https://vllora.dev/docs/vllora-llm/provider-examples/anthropic): Route OpenAI-style requests to Anthropic through `VlloraLLMClient` using `async_openai_compat` request types.
- [Bedrock](https://vllora.dev/docs/vllora-llm/provider-examples/bedrock): Route OpenAI-style requests to AWS Bedrock through `VlloraLLMClient` using `async_openai_compat` request types.
- [Gemini](https://vllora.dev/docs/vllora-llm/provider-examples/gemini): Route OpenAI-style requests to Gemini through `VlloraLLMClient` using `async_openai_compat` request types.
- [Provider Examples](https://vllora.dev/docs/vllora-llm/provider-examples): Runnable examples under `llm/examples/` that mirror the patterns in the quick start and usage guides.
- [OpenAI](https://vllora.dev/docs/vllora-llm/provider-examples/openai): Send both non-streaming and streaming OpenAI-style chat completions through `VlloraLLMClient` using `async_openai_compat` request types.
- [LangDB Proxy](https://vllora.dev/docs/vllora-llm/provider-examples/proxy): Send OpenAI-style requests through the LangDB OpenAI-compatible proxy using `async_openai_compat` request types.
- [Tracing (OTLP)](https://vllora.dev/docs/vllora-llm/provider-examples/tracing-otlp): Export spans/events to an OTLP HTTP endpoint while sending OpenAI-style requests through `VlloraLLMClient` using `async_openai_compat` request types.
- [Tracing (console)](https://vllora.dev/docs/vllora-llm/provider-examples/tracing): Same OpenAI-style flow as the [OpenAI example](./openai.md), but with `tracing_subscriber::fmt()` configured to emit spans and events to the consol...
- [Quick start](https://vllora.dev/docs/vllora-llm/quickstart): Get up and running with the Rust SDK in minutes. This guide shows two approaches: using gateway-native types and using OpenAI-compatible types.
- [Image Generation with Responses API](https://vllora.dev/docs/vllora-llm/responses-api/image-generation): This guide demonstrates how to build an AI-powered application that combines web search and image generation capabilities using the Responses API.
- [Responses API](https://vllora.dev/docs/vllora-llm/responses-api): The vllora Responses API provides a unified interface for building advanced AI agents capable of executing complex tasks autonomously. This API is ...
- [Usage guide](https://vllora.dev/docs/vllora-llm/usage): This guide covers the core concepts and patterns for using the `vllora_llm` crate effectively.
- [vLLora MCP Server](https://vllora.dev/docs/vllora-mcp-server): vLLora's MCP server exposes trace + run inspection as **tools** that coding agents (Claude Code / Cursor / your own agent) can call while you stay ...
- [Google ADK](https://vllora.dev/docs/working-with-agents/google-adk): Enable end-to-end tracing for your Google ADK agents by installing the vLLora Python package with the ADK feature flag.
- [Working with Agent Frameworks](https://vllora.dev/docs/working-with-agents): vLLora works out of the box with any OpenAI-compatible API and provides better tracing locally.
- [OpenAI Agents SDK](https://vllora.dev/docs/working-with-agents/openai-agents): Enable end-to-end tracing for your OpenAI agents by installing the vLLora Python package with the OpenAI feature flag.