Set up Falcon TTS Server

Install and configure the Falcon TTS Server on your server.

The Falcon TTS Server is the core engine of Murf’s on-prem text-to-speech system. It processes text input and generates speech locally, delivering fast, high-quality output without relying on external APIs. Running Falcon within your own infrastructure ensures low-latency performance, data privacy, and full operational control.

Trial Deployment

This guide provides step-by-step instructions for deploying the Murf TTS (Text-to-Speech) service on an AWS EC2 instance using Docker with GPU acceleration.

Set Up TTS Server

Prerequisites

Hardware Requirements

Component   Specification
---------   -------------
GPU         NVIDIA GPU with at least 8 GB VRAM (e.g., T4, P3)
CPU         4 vCPUs
RAM         16 GB
Storage     50 GB SSD
CUDA        CUDA 12.4+ compatible GPU
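You can check an instance against these requirements with a few standard Linux commands (a quick sketch; `nvidia-smi` is only available once the NVIDIA driver is installed):

```shell
# Quick hardware check against the table above (run on the instance).
echo "vCPUs: $(nproc)"                              # expect >= 4
free -g | awk '/^Mem:/ {print "RAM (GB): " $2}'     # expect >= 16
df -h / | awk 'NR==2 {print "Disk free: " $4}'      # expect >= 50G free
# GPU details, or a hint if the driver is not installed yet:
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader 2>/dev/null \
  || echo "GPU: nvidia-smi not found (driver not installed yet)"
```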

Software Requirements

Operating System
  • Ubuntu 22.04 LTS (recommended)
  • Amazon Linux 2023 (supported)
Required Software
  • Docker Engine (20.10.0 or later)
  • NVIDIA Container Toolkit (nvidia-docker2)
  • NVIDIA GPU Drivers (version 525.60.13 or later)
  • AWS CLI (configured with appropriate IAM permissions)
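Before proceeding, you can confirm which of these tools are already present (a sketch; each check prints a version or a "missing" marker):

```shell
# Report which prerequisites are already installed.
for cmd in docker nvidia-smi aws; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "found:   $cmd"
  else
    echo "missing: $cmd"
  fi
done
# Version details for the ones that exist:
docker --version 2>/dev/null || true      # expect 20.10.0 or later
nvidia-smi --query-gpu=driver_version --format=csv,noheader 2>/dev/null \
  || true                                 # expect 525.60.13 or later
aws --version 2>/dev/null || true
```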

Pre-deployment Checklist

NVIDIA GPU Drivers Installation

Check if drivers are already installed:

$nvidia-smi

If you see GPU information displayed, drivers are already installed and you can skip ahead to Docker Installation.

If drivers are NOT installed, install them:

$# Add NVIDIA driver repository
>sudo add-apt-repository ppa:graphics-drivers/ppa
>sudo apt-get update
>
># Install latest driver
>
>sudo apt-get install -y nvidia-driver-535
>
># Reboot
>
>sudo reboot

Run nvidia-smi to verify the installation.

AWS GPU instances launched with Deep Learning AMIs or GPU-optimized AMIs usually have drivers pre-installed.

Docker Installation

Install Docker if not already installed:

$# Update package manager (Ubuntu)
>sudo apt-get update
>
># Amazon Linux
>sudo dnf update -y
>
># Install Docker (Ubuntu)
>sudo apt-get install -y docker.io
>
># Amazon Linux
>sudo dnf install -y docker
>
># Start and enable Docker
>sudo systemctl start docker
>sudo systemctl enable docker
>
># Add current user to docker group (optional, to run without sudo)
>sudo usermod -aG docker $USER
>
># Apply the new group membership in the current shell
>newgrp docker

NVIDIA Container Toolkit Installation

The commands below target Ubuntu. On Amazon Linux, install the toolkit from NVIDIA's dnf/yum repository instead.

$# Add NVIDIA package repositories
>distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
>curl -s -L https://nvidia.github.io/libnvidia-container/gpgkey | sudo apt-key add -
>curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
> sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
>
># Install nvidia-docker2
>sudo apt-get update
>sudo apt-get install -y nvidia-docker2
>
># Restart Docker daemon
>sudo systemctl restart docker

Verify GPU Access

Test that Docker can access the GPU:

$sudo docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi

You should see your GPU listed with driver information.

Get the Docker Image

The Murf team will provide the Docker image for the Falcon TTS Server, along with a docker pull command to fetch it.

Deployment Steps

1. Pull the Docker Image

$sudo docker pull <docker_image_url>

2. Set Environment Variables

Create environment variables for your deployment:

$# Required: Master secret for TTS authentication
>export TTS_MASTER_SECRET="your-secure-secret-key-here"
>
># Optional: License Logic Agent (LLA) endpoint
># Default: http://localhost:8000
>export LLA_ENDPOINT="http://localhost:8000"

Important: Replace your-secure-secret-key-here with your actual production secret key.
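If you don't already have a secret, one option is to generate a random value with openssl (preinstalled on both supported operating systems):

```shell
# Generate a 64-character hex secret for TTS_MASTER_SECRET.
export TTS_MASTER_SECRET="$(openssl rand -hex 32)"
echo "secret length: ${#TTS_MASTER_SECRET}"   # 64
```

Store the generated value somewhere safe; you will need the same secret to authenticate requests against the server.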

3. Run the Docker Container

Run the Docker container with GPU support:

$sudo docker run -d \
> --gpus all \
> -e TTS_MASTER_SECRET="${TTS_MASTER_SECRET}" \
> -e LLA_ENDPOINT="${LLA_ENDPOINT}" \
> --name murf-tts \
> -p 80:8000 \
> --restart unless-stopped \
> <docker_image_url>

Command breakdown:

  • -d : Run in detached mode (background)
  • --gpus all : Enable access to all available GPUs
  • -e TTS_MASTER_SECRET : Pass the master secret for authentication
  • -e LLA_ENDPOINT : (Optional) Specify custom LLA server endpoint
  • --name murf-tts : Name the container “murf-tts”
  • -p 80:8000 : Map host port 80 to container port 8000
  • --restart unless-stopped : Automatically restart container unless manually stopped
  • Last parameter: Docker image URI
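If you prefer a declarative setup, the same run command can be expressed as a docker-compose.yml. This is a sketch, assuming Docker Compose v2; the GPU reservation syntax requires the NVIDIA Container Toolkit installed as above:

```yaml
services:
  murf-tts:
    image: <docker_image_url>
    container_name: murf-tts
    restart: unless-stopped
    ports:
      - "80:8000"
    environment:
      - TTS_MASTER_SECRET=${TTS_MASTER_SECRET}
      - LLA_ENDPOINT=${LLA_ENDPOINT}
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Start it with `docker compose up -d` from the directory containing the file.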

4. Verify Container is Running

$sudo docker ps

Expected output:

CONTAINER ID   IMAGE                ...   STATUS          PORTS                  NAMES
1234567890ab   <docker_image_url>   ...   Up 10 seconds   0.0.0.0:80->8000/tcp   murf-tts

Verification

Check the Container Logs

Monitor the startup logs to ensure the service initializes correctly:

$sudo docker logs -f murf-tts

Once the startup logs stop scrolling and no errors appear, the service is ready to accept requests. Press Ctrl+C to stop following the logs; the container keeps running.

Test the TTS Service

Option 1: Health Check via Browser

Open your browser and navigate to:

http://<your-ec2-public-ip>/

You should see:

{"message": "Hello from Murf TTS!"}
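You can run the same health check from the instance itself with curl (assuming the port mapping from the run command above):

```shell
# Query the health endpoint; an empty reply means the service
# is not reachable on port 80 yet.
resp=$(curl -s http://localhost/ || true)
if echo "$resp" | grep -q "Hello from Murf TTS"; then
  echo "service healthy"
else
  echo "service not responding as expected"
fi
```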

Option 2: Interactive Audio Test Page

Navigate to the test page:

http://<your-ec2-public-ip>/test-audio

This provides a web interface to:

  • Paste JSON payloads
  • Generate speech
  • Play audio directly in the browser
  • Download WAV files

Option 3: API Documentation

View the interactive API documentation:

http://<your-ec2-public-ip>/docs

This opens the FastAPI Swagger UI for exploring all available endpoints.
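FastAPI also serves the machine-readable schema behind this UI at /openapi.json (standard FastAPI behavior), which is useful for generating API clients:

```shell
# HOST is a placeholder -- replace with your EC2 public IP or DNS name.
HOST=localhost
curl -s "http://$HOST/openapi.json" -o openapi.json || true
# A well-formed schema contains a "paths" object listing every endpoint:
grep -c '"paths"' openapi.json 2>/dev/null || echo "schema not downloaded"
```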