Rate Limits
Murf provides dedicated request capacity based on your plan. Each plan includes specific limits for concurrency and WebSocket connections. As your application scales, you can upgrade your plan to increase capacity.
Below is a summary of the limits for each plan:
Concurrency for Non-Streaming Requests
Concurrency refers to the maximum number of generation requests that can be processed simultaneously. For all non-streaming endpoints, this is defined as the number of active requests at any given time.
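One common way to stay within a non-streaming concurrency limit is to gate requests client-side with a semaphore. The sketch below is illustrative only: the endpoint URL, header name, and payload fields are placeholders, not the documented Murf request format, so substitute the actual values from the API reference.

```python
import asyncio

import httpx

# Sketch only: the endpoint URL, header name, and payload fields below are
# placeholders, not the documented Murf request format.
API_URL = "https://api.murf.ai/v1/speech/generate"  # placeholder
CONCURRENCY_LIMIT = 5  # replace with your plan's non-streaming limit

semaphore = asyncio.Semaphore(CONCURRENCY_LIMIT)

async def generate(client: httpx.AsyncClient, text: str) -> dict:
    # The semaphore caps in-flight requests so the client never has more
    # active requests than the plan's concurrency limit.
    async with semaphore:
        response = await client.post(
            API_URL,
            headers={"api-key": "YOUR_API_KEY"},  # placeholder auth header
            json={"text": text},
        )
        response.raise_for_status()
        return response.json()

async def main() -> None:
    texts = ["Hello there.", "Welcome back.", "Goodbye for now."]
    async with httpx.AsyncClient(timeout=60) as client:
        results = await asyncio.gather(*(generate(client, t) for t in texts))
    print(f"{len(results)} requests completed")

asyncio.run(main())
```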
Concurrency for Streaming Requests
Our TTS API supports streaming via both HTTP and WebSocket connections. Concurrency for streaming is defined by the number of unique context IDs active at a given time:
- HTTP Streaming: Each request is treated as a unique context ID and counts toward your concurrency limit.
- WebSocket Streaming: Each unique context ID also counts toward your concurrency limit. Sending additional requests with the same context_id does not increase your concurrency usage, because requests within the same context are processed sequentially. If no context ID is provided for a request over a WebSocket connection, a null context ID is created and counted as one active context. A minimal sketch of reusing a single context is shown below.
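The sketch below sends several text chunks over one WebSocket with a single context_id, so together they occupy one concurrency slot. The URL and message fields are assumptions (authentication is omitted); consult the WebSocket API reference for the real schema.

```python
import asyncio
import json

import websockets

# Sketch only: the URL and message fields are placeholders (auth omitted);
# see the WebSocket API reference for the real schema.
WS_URL = "wss://api.murf.ai/v1/speech/stream-input"  # placeholder
CONTEXT_ID = "conversation-42"  # one context -> one concurrency slot

async def stream_sentences(sentences: list[str]) -> None:
    async with websockets.connect(WS_URL) as ws:
        for sentence in sentences:
            # Reusing the same context_id keeps all of these requests in a
            # single context, so they are processed sequentially and count
            # as one active context.
            await ws.send(json.dumps({"context_id": CONTEXT_ID, "text": sentence}))
        async for message in ws:
            frame = json.loads(message)
            if frame.get("final"):  # hypothetical end-of-stream marker
                break

asyncio.run(stream_sentences(["First sentence.", "Second sentence."]))
```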
If the number of active contexts exceeds your concurrency limit, new context IDs will be rejected, and an error message will be returned.
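When a request is rejected because all concurrency slots are in use, a simple client-side strategy is to back off and retry. The sketch below assumes the rejection surfaces as an HTTP 429 status and uses placeholder endpoint and header names; check the documented error format before relying on it.

```python
import time

import httpx

# Sketch only: assumes a concurrency rejection surfaces as HTTP 429; the
# URL and header are placeholders.
API_URL = "https://api.murf.ai/v1/speech/generate"  # placeholder

def generate_with_retry(payload: dict, max_attempts: int = 5) -> httpx.Response:
    delay = 1.0
    for _ in range(max_attempts):
        response = httpx.post(
            API_URL,
            headers={"api-key": "YOUR_API_KEY"},  # placeholder auth header
            json=payload,
            timeout=60,
        )
        if response.status_code != 429:  # not a concurrency rejection
            response.raise_for_status()
            return response
        time.sleep(delay)  # wait for active contexts to drain
        delay *= 2         # exponential backoff
    raise RuntimeError("Concurrency limit still exceeded after retries")
```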
WebSocket Limits
WebSocket limits define the number of parallel WebSocket connections allowed at a given time. Each plan supports up to 10x the streaming concurrency limit in parallel WebSocket connections.
- Each WebSocket connection closes automatically after 3 minutes of inactivity.
- If opening a new WebSocket connection would exceed your limit, an error will be returned.
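One way to respect both rules is to cap the number of simultaneously open sockets with a semaphore and reopen after an inactivity closure. In the sketch below, the URL, the connection cap of 10, and the keepalive interval are assumptions for illustration, not documented values.

```python
import asyncio

import websockets

# Sketch only: the URL, connection cap, and ping interval are assumptions.
WS_URL = "wss://api.murf.ai/v1/speech/stream-input"  # placeholder
MAX_CONNECTIONS = 10  # e.g. 10x a streaming concurrency limit of 1

connection_slots = asyncio.Semaphore(MAX_CONNECTIONS)

async def with_connection(worker) -> None:
    # Acquire a slot before connecting so the client never holds more
    # parallel WebSocket connections than the plan allows.
    async with connection_slots:
        try:
            # ping_interval sends periodic keepalive pings; a connection
            # that stays idle can still be closed by the server.
            async with websockets.connect(WS_URL, ping_interval=20) as ws:
                await worker(ws)
        except websockets.ConnectionClosed:
            # The server closed the socket (for example after 3 minutes of
            # inactivity); callers can open a fresh connection next time.
            pass
```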
Why These Limits Matter
These limits are designed to maintain system performance and ensure a consistent experience for all users. By adhering to them and following best practices, you can integrate the Murf API smoothly and efficiently into your applications. If you have additional questions or need guidance on managing API limits, please drop a message in our Discord channel or contact our support team.