Debugging LiveKit Voice Agents with vLLora

· 2 min read
Matteo Pelati

Voice agents built with LiveKit Agents enable real-time, multimodal AI interactions that can handle voice, video, and text. These agents power everything from customer support bots to telehealth assistants, and debugging them requires visibility into the complex pipeline of speech-to-text, language model, and text-to-speech interactions.

In this video, we walk through how to use vLLora to debug voice agents built with LiveKit Agents. You'll see how to trace every model call, tool execution, and response as your agent processes real-time audio streams.
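As a rough sketch of the setup, the snippet below points the voice pipeline's LLM at a local OpenAI-compatible vLLora endpoint. The URL, API key, and the base_url override on the OpenAI plugin are placeholders and assumptions about your local configuration, not part of the video.

```python
# Sketch: route the voice agent's LLM traffic through a local vLLora endpoint.
# The base_url and api_key values are placeholders; point them at your proxy.
from livekit.agents import AgentSession
from livekit.plugins import openai

session = AgentSession(
    llm=openai.LLM(
        model="gpt-4o-mini",
        base_url="http://localhost:9090/v1",  # assumed local vLLora address
        api_key="local-key",                  # placeholder; the proxy may not check it
    ),
    # stt, tts, and vad plugins are configured as usual for a voice pipeline.
)
```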

Using vLLora with OpenAI Agents SDK

· 2 min read
Mrunmay
AI Engineer

The OpenAI Agents SDK makes it easy to build agents with handoffs, streaming, and function calling. The hard part? Seeing what's actually happening when things don't work as expected.

[Image: OpenAI Agents tracing in vLLora]
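For orientation, here is a minimal sketch of pointing the Agents SDK at a local vLLora endpoint so every model call lands in its traces. The URL and key are placeholders, and switching to the Chat Completions API is only needed if your proxy does not implement the Responses API.

```python
# Sketch: send OpenAI Agents SDK model calls through a local vLLora endpoint.
# The base_url and api_key values are placeholders for your local setup.
from openai import AsyncOpenAI
from agents import Agent, Runner, set_default_openai_api, set_default_openai_client

client = AsyncOpenAI(
    base_url="http://localhost:9090/v1",  # assumed local vLLora address
    api_key="local-key",                  # placeholder key
)
set_default_openai_client(client)
set_default_openai_api("chat_completions")  # if the proxy only speaks Chat Completions

agent = Agent(name="Assistant", instructions="Answer briefly.")
result = Runner.run_sync(agent, "What do traces capture?")
print(result.final_output)
```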

Using vLLora with Google ADK

· 2 min read
Mrunmay
AI Engineer

Google ADK (Agent Development Kit) lets you build multi-agent systems across different LLM providers—Gemini, OpenAI, Anthropic, and more. But when your planner agent produces a FunctionCall for an AgentTool that doesn't run correctly, or a nested sub-agent fails silently, debugging what happened across agents and sessions becomes nearly impossible.

[Image: traces of Google ADK on vLLora]
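As a rough sketch, one way to get ADK traffic into vLLora is to route the model through LiteLLM with a local OpenAI-compatible base URL. The model name, URL, and key below are placeholders, and passing api_base/api_key through LiteLlm is an assumption about your setup.

```python
# Sketch: route an ADK agent's model calls through a local vLLora endpoint
# via LiteLLM. Model name, api_base, and api_key are placeholders.
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm

root_agent = Agent(
    name="planner",
    model=LiteLlm(
        model="openai/gpt-4o-mini",
        api_base="http://localhost:9090/v1",  # assumed local vLLora address
        api_key="local-key",                  # placeholder key
    ),
    instruction="Plan the task and call the right tool.",
    # tools=[...] and sub_agents=[...] attach as usual.
)
```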

Using vLLora to Debug Agents

· 3 min read
Mrunmay
AI Engineer

Building AI agents is hard. Debugging them locally across multiple SDKs, tools, and providers feels like flying blind. Logs give you partial visibility. You need to see every call, latency, cost, and output in context without rewriting code.

[Image: debugging demo]
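To illustrate the "without rewriting code" part, here is a minimal sketch that redirects an existing OpenAI SDK client to a local vLLora endpoint purely through environment variables. The URL and key are placeholders for your local proxy.

```python
# Sketch: redirect an existing OpenAI SDK client to a local vLLora endpoint
# without touching call sites. The URL and key are placeholders.
import os

os.environ["OPENAI_BASE_URL"] = "http://localhost:9090/v1"  # assumed proxy URL
os.environ["OPENAI_API_KEY"] = "local-key"                   # placeholder key

from openai import OpenAI

client = OpenAI()  # picks up the env vars above; no per-call changes needed
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello from a traced call"}],
)
print(resp.choices[0].message.content)
```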