Pipecat and Murf

Build voice and multimodal conversational AI applications with Murf TTS

This is the official Murf integration for Pipecat, a framework for building voice and multimodal conversational AI applications. Install the Python package pipecat-murf-tts to use Murf as a TTS service in your Pipecat pipelines, with high-quality voice synthesis and real-time streaming.

Introduction to Pipecat

Pipecat is an open-source framework developed by Daily that simplifies the creation of voice and multimodal conversational AI applications. It provides a flexible pipeline architecture that allows you to connect various components like speech-to-text (STT), large language models (LLMs), and text-to-speech (TTS) services seamlessly.

With the official Murf TTS integration for Pipecat, you can build sophisticated voice applications that combine Murf’s natural-sounding voices with other AI services, creating end-to-end conversational experiences.

Features

The Murf TTS integration for Pipecat provides a comprehensive set of features for building voice applications:

  • High-Quality Voice Synthesis: Leverage Murf’s advanced TTS technology with access to over 150 voices across 35+ languages
  • Real-time Streaming: WebSocket-based streaming for low-latency audio generation, perfect for interactive conversations
  • Voice Customization: Control voice style, rate, pitch, and variation to match your application’s needs
  • Multi-Language Support: Multiple languages and locales with native speaker quality
  • Flexible Configuration: Comprehensive audio format and quality options including sample rate, channel type, and output formats
  • Metrics Support: Built-in performance tracking and monitoring capabilities (see the sketch after this list for enabling them)
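
For example, metrics are switched on at the task level in Pipecat rather than on the TTS service itself. Here is a minimal sketch using Pipecat's standard PipelineParams options; the Murf service construction is simplified to just a voice ID and is only illustrative:

import os
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.task import PipelineParams, PipelineTask
from pipecat_murf_tts import MurfTTSService

# Minimal sketch: enable Pipecat's built-in metrics for a pipeline that
# includes the Murf TTS service.
tts = MurfTTSService(
    api_key=os.getenv("MURF_API_KEY"),
    params=MurfTTSService.InputParams(voice_id="en-UK-ruby"),
)

pipeline = Pipeline([tts])
task = PipelineTask(
    pipeline,
    params=PipelineParams(
        enable_metrics=True,        # processing-time / TTFB metrics frames
        enable_usage_metrics=True,  # usage metrics where the service reports them
    ),
)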

Compatibility

This integration has been tested with Pipecat v0.0.87. For compatibility with other versions, please refer to the Pipecat changelog.

Installation

You can install the Murf TTS integration for Pipecat using several methods:

Using pip

The recommended way to install the package is using pip:

$ pip install pipecat-murf-tts

Using uv

If you’re using uv as your Python package manager:

$ uv add pipecat-murf-tts

From source

To install from source, clone the repository and install it in development mode:

$ git clone https://github.com/murf-ai/pipecat-murf-tts.git
$ cd pipecat-murf-tts
$ pip install -e .

Quick Start

Get Your Murf API Key

Before you can use the integration, you'll need a Murf API key. Sign up at the Murf API Dashboard and generate an API key there.

Basic Usage

Here’s a simple example of how to initialize and use the Murf TTS service in your Pipecat pipeline:

import asyncio
import os
from dotenv import load_dotenv
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineTask
from pipecat_murf_tts import MurfTTSService

load_dotenv()

async def main():
    # Initialize the Murf TTS service
    tts = MurfTTSService(
        api_key=os.getenv("MURF_API_KEY"),
        params=MurfTTSService.InputParams(
            voice_id="en-UK-ruby",
            style="Conversational",
            rate=0,
            pitch=0,
            sample_rate=44100,
            format="PCM",
        ),
    )

    # Create a simple pipeline with just TTS
    pipeline = Pipeline([tts])

    # Create and run the pipeline task
    task = PipelineTask(pipeline)
    runner = PipelineRunner()
    await runner.run(task)

if __name__ == "__main__":
    asyncio.run(main())
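
Note that this minimal pipeline has no audio transport and nothing feeds text into it, so on its own it will not produce speech you can hear. One way to exercise the service is to queue a TTSSpeakFrame (followed by an EndFrame so the runner stops) before running the task. The fragment below is a sketch against the example above using Pipecat's standard frame types; you still need an output transport to actually play the generated audio:

from pipecat.frames.frames import EndFrame, TTSSpeakFrame

# Inside main(), after `task = PipelineTask(pipeline)` and before running:
# queue a line of text for Murf to synthesize, then an EndFrame so the
# pipeline shuts down cleanly once processing finishes.
await task.queue_frames([
    TTSSpeakFrame("Hello! This speech was generated by Murf."),
    EndFrame(),
])
await runner.run(task)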

Complete Example with Pipeline

Here’s a complete example that demonstrates how to build a full conversational AI pipeline with Speech-to-Text (STT), LLM, and Text-to-Speech (TTS) using Deepgram for STT, OpenAI for LLM, and Murf for TTS:

import asyncio
import os
from dotenv import load_dotenv
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineTask
from pipecat.services.deepgram.stt import DeepgramSTTService
from pipecat.services.openai.llm import OpenAILLMService
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat_murf_tts import MurfTTSService

load_dotenv()

async def main():
    # Initialize the Deepgram STT service
    stt = DeepgramSTTService(api_key=os.getenv("DEEPGRAM_API_KEY"))

    # Initialize the Murf TTS service
    tts = MurfTTSService(
        api_key=os.getenv("MURF_API_KEY"),
        params=MurfTTSService.InputParams(
            voice_id="en-UK-ruby",
            style="Conversational",
        ),
    )

    # Initialize the LLM service
    llm = OpenAILLMService(api_key=os.getenv("OPENAI_API_KEY"))

    # Set up the conversation context
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
    ]
    context = OpenAILLMContext(messages)
    context_aggregator = llm.create_context_aggregator(context)

    # Create the pipeline connecting STT, LLM, and TTS
    pipeline = Pipeline([
        stt,
        context_aggregator.user(),  # add user transcriptions to the LLM context
        llm,
        tts,
        context_aggregator.assistant(),  # record assistant responses in the context
    ])

    # Run the pipeline
    task = PipelineTask(pipeline)
    runner = PipelineRunner()
    await runner.run(task)

if __name__ == "__main__":
    asyncio.run(main())

💡 Try it out: For a complete working example with browser-based interaction, check out the Pipecat Quickstart repository. You can clone the repo and replace the TTS service with Murf to see a full STT → LLM → TTS pipeline in action.
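
In a real deployment, the pipeline above is normally wrapped with a transport that captures the user's microphone audio and plays the synthesized speech back. The concrete transport depends on your setup (Daily, WebRTC, telephony, and so on), so the sketch below uses a placeholder `transport` object and only illustrates the usual frame ordering around the Murf TTS service:

# Sketch only: `transport` stands in for whichever Pipecat transport you use;
# its construction is omitted. The other processors are the ones created in
# the complete example above.
pipeline = Pipeline([
    transport.input(),               # incoming user audio
    stt,                             # Deepgram speech-to-text
    context_aggregator.user(),       # add the transcription to the LLM context
    llm,                             # OpenAI response generation
    tts,                             # Murf text-to-speech
    transport.output(),              # outgoing synthesized audio
    context_aggregator.assistant(),  # record the assistant's reply in context
])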

Configuration

The MurfTTSService.InputParams class provides extensive configuration options to customize the voice output according to your needs.

InputParams Reference

| Parameter | Type | Default | Range/Options | Description |
| --- | --- | --- | --- | --- |
| voice_id | str | "en-UK-ruby" | Any valid Murf voice ID | Voice identifier for TTS synthesis |
| style | str | "Conversational" | Voice-specific styles | Voice style (e.g., "Conversational", "Narration") |
| rate | int | 0 | -50 to 50 | Speech rate adjustment |
| pitch | int | 0 | -50 to 50 | Pitch adjustment |
| variation | int | 1 | 0 to 5 | Variation in pause, pitch, and speed (Gen2 only; not available with FALCON) |
| model | str | "FALCON" | "FALCON", "GEN2" | The model to use for audio output |
| sample_rate | int | 44100 | 8000, 24000, 44100, 48000 | Audio sample rate in Hz |
| channel_type | str | "MONO" | "MONO", "STEREO" | Audio channel configuration |
| format | str | "PCM" | "MP3", "WAV", "FLAC", "ALAW", "ULAW", "PCM", "OGG" | Audio output format |
| multi_native_locale | str | None | Language codes (e.g., "en-US") | Language for Gen2 model audio |
| pronunciation_dictionary | dict | None | Custom pronunciation mappings | Dictionary for custom word pronunciations |

Example with Custom Configuration

You can customize various aspects of the voice output to match your application’s requirements:

from pipecat_murf_tts import MurfTTSService

tts = MurfTTSService(
    api_key="your-api-key",
    params=MurfTTSService.InputParams(
        voice_id="en-US-natalie",
        style="Narration",
        rate=10,
        pitch=-5,
        variation=3,  # Only available when model="GEN2"
        model="GEN2",
        sample_rate=48000,
        channel_type="STEREO",
        format="WAV",
        multi_native_locale="en-US",
        pronunciation_dictionary={
            "Pipecat": {"pronunciation": "pipe-cat"},
        },
    ),
)

Available Voices

Murf AI offers a wide variety of voices across different languages and styles. You can explore the complete voice library in the Murf API Dashboard.

Some popular voice IDs include:

  • en-US-natalie - American English, female voice
  • en-UK-ruby - British English, female voice
  • en-US-amara - American English, female voice

For a complete list of available voices, visit the Murf API Dashboard.

Environment Variables

To keep your API keys secure, it’s recommended to use environment variables. Create a .env file in your project root:

MURF_API_KEY=your_murf_api_key_here
DEEPGRAM_API_KEY=your_deepgram_api_key_here  # Required for STT
OPENAI_API_KEY=your_openai_api_key_here      # Required for LLM

Then load these variables in your Python code using python-dotenv:

from dotenv import load_dotenv
load_dotenv()
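
A missing key otherwise only surfaces when the corresponding service makes its first request, so it can be worth failing fast at startup. The check below is illustrative and not part of the package:

import os
from dotenv import load_dotenv

load_dotenv()

# Illustrative startup check: raise a clear error if any required key is
# missing from the environment or the .env file.
REQUIRED_KEYS = ["MURF_API_KEY", "DEEPGRAM_API_KEY", "OPENAI_API_KEY"]
missing = [key for key in REQUIRED_KEYS if not os.getenv(key)]
if missing:
    raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")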

Advanced Features

Dynamic Voice Changes

You can change the voice dynamically during runtime without recreating the service instance:

tts.set_voice("en-US-natalie")

This is particularly useful for applications that need to switch between different voices or languages during a conversation.
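
For instance, you might map a detected locale or user preference to a Murf voice ID and apply it on the fly. The helper and mapping below are hypothetical and only for illustration, assuming `tts` is the MurfTTSService instance from the examples above:

# Hypothetical helper: pick a Murf voice at runtime without recreating
# the MurfTTSService instance.
VOICE_BY_LOCALE = {
    "en-US": "en-US-natalie",
    "en-UK": "en-UK-ruby",
}

def switch_voice(tts, locale: str) -> None:
    # Fall back to the default voice if the locale isn't mapped.
    tts.set_voice(VOICE_BY_LOCALE.get(locale, "en-UK-ruby"))

switch_voice(tts, "en-US")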

Requirements

The Murf TTS integration for Pipecat has the following requirements:

  • Python >= 3.10, < 3.13
  • pipecat-ai >= 0.0.87, < 0.1.0
  • websockets >= 15.0.1, < 16.0
  • loguru >= 0.7.3
  • python-dotenv >= 1.1.1

Make sure you have these dependencies installed before using the integration.
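
If you want to verify this quickly, a small stdlib-only check (illustrative, not part of the package) can confirm the Python version and report which dependencies are installed:

import sys
from importlib.metadata import version, PackageNotFoundError

# Illustrative environment check against the requirements listed above.
assert (3, 10) <= sys.version_info[:2] < (3, 13), "Python >= 3.10, < 3.13 required"

for package in ["pipecat-ai", "websockets", "loguru", "python-dotenv", "pipecat-murf-tts"]:
    try:
        print(f"{package}: {version(package)}")
    except PackageNotFoundError:
        print(f"{package}: NOT INSTALLED")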

Examples

The pipecat-murf-tts repository includes complete working examples that demonstrate various use cases:

  • Basic TTS Pipeline: A foundational example showing how to set up a pipeline with LLM and TTS

To run the examples:

$ uv add pipecat-ai
$ python examples/foundational/murf_tts_basic.py

For a complete end-to-end example with STT, LLM, and TTS, check out the Pipecat Quickstart repository. You can use it as a starting point and replace the TTS service with Murf to build a full conversational AI application.

Support

If you encounter any issues or have questions about the integration, please open an issue on the pipecat-murf-tts GitHub repository.

Contributing

Contributions to the integration are welcome! If you’d like to contribute, please feel free to submit a Pull Request on the GitHub repository.

License

This project is licensed under the MIT License. See the LICENSE file for details.