LiveKit and Murf

Build voice and multimodal conversational AI applications with Murf TTS

This is the official Murf integration for LiveKit Agents, a framework for building voice and multimodal conversational AI applications. Install the Python package livekit-murf to use Murf as the TTS provider in your LiveKit agents, with high-quality voice synthesis and real-time streaming.

What is livekit-murf?

livekit-murf is the official Python package that integrates Murf’s high-quality text-to-speech (TTS) capabilities with LiveKit Agents. This integration enables you to add natural-sounding voice synthesis to your LiveKit-powered conversational AI applications, supporting real-time streaming and low-latency audio generation.

Installation

You can install the Murf TTS integration for LiveKit Agents using several methods:

Using pip

The recommended way to install the package is using pip:

$pip install livekit-murf

Using uv

If you’re using uv as your Python package manager:

$uv add livekit-murf

From source

To install from source, clone the repository and install it in development mode:

$git clone https://github.com/murf-ai/livekit-murf.git
$cd livekit-murf
$pip install -e .
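
Whichever method you use, you can verify the installation with a quick import check. This is only a minimal sanity check: it confirms that the package resolves, not that an API key is configured.

# Quick import check: if this runs without an ImportError, livekit-murf is installed.
from livekit.plugins import murf

print(murf.TTS)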

For Existing LiveKit Projects

If you already have a LiveKit project, you can integrate Murf TTS by initializing murf.TTS() in your existing AgentSession. Make sure your Murf API key is configured in your environment variables; you can generate one from the Murf API Dashboard:

from livekit.plugins import murf

# Add Murf TTS to your existing session
session = AgentSession(
    stt="your-stt-provider",  # e.g., "deepgram/nova-3"
    llm="your-llm-provider",  # e.g., "openai/gpt-4o"
    tts=murf.TTS(voice="Matthew", style="Conversation", model="FALCON"),
    # ... your existing configuration
)

View all configuration parameters →

Guide to Building Voice Agents with Murf and LiveKit

This guide provides setup instructions and examples for building your first LiveKit Agent with Murf TTS.

Setup & Requirements

Before running the examples in this guide, make sure you have everything configured properly:

Requirements

  • Python >= 3.9
  • livekit-agents >= 1.3.5
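
If you want to confirm these requirements from the environment you plan to run the agent in, a quick check like the following can help (a minimal sketch; it assumes livekit-agents is already installed):

import sys
from importlib.metadata import version

# Requires Python >= 3.9 and livekit-agents >= 1.3.5 (see the requirements above).
assert sys.version_info >= (3, 9), "Python 3.9 or newer is required"
print("livekit-agents:", version("livekit-agents"))  # should report 1.3.5 or newer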

Required Packages

The examples in this guide use the Murf TTS integration along with specific LiveKit plugins for speech-to-text (Deepgram), language models (OpenAI), and voice activity detection (Silero). Install all packages used in the examples:

$pip install livekit-murf livekit-agents livekit-plugins-openai livekit-plugins-deepgram livekit-plugins-silero python-dotenv

API Keys

You’ll need API keys for the services used in your LiveKit Agent:

  • Murf API Key: Sign up at the Murf API Dashboard and generate your API key
  • LiveKit API Credentials: Get your LiveKit server URL, API key, and secret from LiveKit Cloud
  • Additional Services: Depending on your setup, you may need API keys for STT (e.g., Deepgram) and LLM (e.g., OpenAI) services

Environment Variables

To keep your API keys secure, it’s recommended to use environment variables. Create a .env file in your project root:

LIVEKIT_URL=wss://your-livekit-server.livekit.cloud  # Your LiveKit server URL
LIVEKIT_API_KEY=your_livekit_api_key_here  # Your LiveKit API key
LIVEKIT_API_SECRET=your_livekit_api_secret_here  # Your LiveKit API secret
MURF_API_KEY=your_murf_api_key_here
DEEPGRAM_API_KEY=your_deepgram_api_key_here  # Required for STT
OPENAI_API_KEY=your_openai_api_key_here  # Required for LLM
Then load these variables in your Python code using python-dotenv:

from dotenv import load_dotenv

load_dotenv()
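
Optionally, you can fail fast at startup when a required key is missing. Here is a minimal sketch using the variable names from the .env example above; adjust the list to the services you actually use.

import os
from dotenv import load_dotenv

load_dotenv()

# These names match the .env example above.
required = ["LIVEKIT_URL", "LIVEKIT_API_KEY", "LIVEKIT_API_SECRET", "MURF_API_KEY"]
missing = [name for name in required if not os.getenv(name)]
if missing:
    raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")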

Quick Start Example

Here’s a simple example of how to create a LiveKit Agent Worker with Murf TTS:

import logging
import os
from dotenv import load_dotenv

load_dotenv()

from livekit.agents import (
    Agent,
    AgentServer,
    AgentSession,
    JobContext,
    JobProcess,
    MetricsCollectedEvent,
    cli,
    metrics,
    room_io,
)
from livekit.plugins import silero, deepgram, murf
from livekit.plugins import openai as openai_plugin

logger = logging.getLogger("murf-agent")


class MyAgent(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions="You are a voice agent built using Murf TTS. Keep responses short and natural."
        )

    async def on_enter(self):
        await self.session.say(
            "Hi, I am a voice agent powered by Murf, how can I help you?"
        )


server = AgentServer()


def prewarm(proc: JobProcess):
    proc.userdata["vad"] = silero.VAD.load()


server.setup_fnc = prewarm


@server.rtc_session()
async def entrypoint(ctx: JobContext):
    ctx.log_context_fields = {"room": ctx.room.name}

    session = AgentSession(
        stt="deepgram/nova-3",
        llm="openai/gpt-4o",
        tts=murf.TTS(voice="Matthew", style="Conversation"),
        vad=ctx.proc.userdata["vad"],
        preemptive_generation=True,
        resume_false_interruption=True,
        false_interruption_timeout=1.0,
    )

    usage_collector = metrics.UsageCollector()

    @session.on("metrics_collected")
    def on_metrics(ev: MetricsCollectedEvent):
        metrics.log_metrics(ev.metrics)
        usage_collector.collect(ev.metrics)

    async def log_usage():
        logger.info(f"Usage: {usage_collector.get_summary()}")

    ctx.add_shutdown_callback(log_usage)

    await session.start(
        agent=MyAgent(),
        room=ctx.room,
        room_options=room_io.RoomOptions(
            audio_input=room_io.AudioInputOptions()
        ),
    )


if __name__ == "__main__":
    cli.run_app(server)

Save this code as agent.py and run it with:

$python agent.py console

This starts the agent in console mode, where you can speak with it directly in the terminal and hear responses in Murf’s natural voice.

LiveKit terminal session
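
With the LiveKit credentials from your .env file in place, the same worker can also connect to your LiveKit server; the LiveKit Agents CLI provides dev and start modes alongside console:

$python agent.py dev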

Configuration

The murf.TTS class provides extensive configuration options to customize the voice output according to your needs.

TTS Parameters Reference

| Parameter | Type | Default | Range/Options | Description |
| --- | --- | --- | --- | --- |
| voice | str | "Matthew" | Any valid Murf voice ID | Voice identifier for TTS synthesis |
| style | str | "Conversation" | Voice-specific styles | Voice style (e.g., "Conversation", "Narration") |
| speed | int | 0 | -50 to 50 | Speech rate adjustment |
| pitch | int | 0 | -50 to 50 | Pitch adjustment |
| model | str | "FALCON" | "FALCON", "GEN2" | The model to use for audio output |
| sample_rate | int | 24000 | 8000, 24000, 44100, 48000 | Audio sample rate in Hz |
| locale | str | None | Language codes (e.g., "en-US") | Locale for language-specific voice synthesis |

Complete Example with Custom Configuration

Here’s a more advanced example showing how to customize the Murf TTS configuration and collect usage metrics:

import logging
import os
from dotenv import load_dotenv

load_dotenv()

from livekit.agents import (
    Agent,
    AgentServer,
    AgentSession,
    JobContext,
    JobProcess,
    MetricsCollectedEvent,
    cli,
    metrics,
    room_io,
)
from livekit.plugins import silero, deepgram, murf
from livekit.plugins import openai as openai_plugin

logger = logging.getLogger("murf-agent")


class CustomAgent(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions="You are a helpful assistant with a natural speaking style. Provide detailed but concise responses."
        )

    async def on_enter(self):
        await self.session.say(
            "Hello! I'm powered by Murf's high-quality voice synthesis. How can I assist you today?"
        )


server = AgentServer()


def prewarm(proc: JobProcess):
    proc.userdata["vad"] = silero.VAD.load()


server.setup_fnc = prewarm


@server.rtc_session()
async def entrypoint(ctx: JobContext):
    ctx.log_context_fields = {"room": ctx.room.name}

    session = AgentSession(
        stt="deepgram/nova-3",
        llm="openai/gpt-4o",
        tts=murf.TTS(
            voice="Matthew",
            style="Conversation",
            speed=5,
            pitch=0,
            model="FALCON",
            sample_rate=24000,
            locale="en-US",
        ),
        vad=ctx.proc.userdata["vad"],
        preemptive_generation=True,
        resume_false_interruption=True,
        false_interruption_timeout=1.0,
    )

    usage_collector = metrics.UsageCollector()

    @session.on("metrics_collected")
    def on_metrics(ev: MetricsCollectedEvent):
        metrics.log_metrics(ev.metrics)
        usage_collector.collect(ev.metrics)

    async def log_usage():
        summary = usage_collector.get_summary()
        logger.info(f"Session usage: {summary}")

    ctx.add_shutdown_callback(log_usage)

    await session.start(
        agent=CustomAgent(),
        room=ctx.room,
        room_options=room_io.RoomOptions(
            audio_input=room_io.AudioInputOptions()
        ),
    )


if __name__ == "__main__":
    cli.run_app(server)

💡 Try it out: For complete working examples and deployment guides, check out the LiveKit Agents documentation. You can use the examples above as a starting point to build your own voice agents with Murf TTS.

Features

The Murf TTS integration for LiveKit Agents provides a comprehensive set of features for building voice applications:

  • High-Quality Voice Synthesis: Leverage Murf’s advanced TTS technology with access to over 150 voices across 35+ languages
  • Real-time Streaming: WebSocket-based streaming for low-latency audio generation, perfect for interactive conversations
  • Voice Customization: Control voice style, speed, and pitch to match your application’s needs
  • Multi-Language Support: Multiple languages and locales with native speaker quality
  • Agent Framework Integration: Seamless integration with LiveKit’s Agent framework for building conversational AI
  • Flexible Configuration: Comprehensive audio format and quality options including sample rate, channel type, and output formats

Available Voices

Support

If you encounter any issues or have questions about the integration:

Contributing

Contributions to the integration are welcome! If you’d like to contribute, please feel free to submit a Pull Request on the GitHub repository.

License

This project is licensed under the MIT License. See the LICENSE file for details.