New to LiveKit? This guide assumes you’re familiar with the LiveKit Agent Framework. Start there to learn how agents work, then head back here for Telnyx-specific setup.

Install the plugin

The Telnyx plugin is maintained by Telnyx and updated as new models, voices, and features become available.
pip install "telnyx-livekit-plugin @ git+https://github.com/team-telnyx/telnyx-livekit-plugin.git#subdirectory=telnyx-livekit-plugin"
🔌 Plugin source on GitHub

Complete example

Here’s a fully working voice agent using Telnyx STT, TTS, and LLM:
from livekit.agents import Agent, AgentSession, AutoSubscribe, WorkerOptions, cli
from livekit.plugins import openai, telnyx

async def entrypoint(ctx):
    await ctx.connect(auto_subscribe=AutoSubscribe.AUDIO_ONLY)

    session = AgentSession(
        stt=telnyx.deepgram.STT(model="nova-3", language="en"),
        tts=telnyx.TTS(voice="Rime.ArcanaV3.astra"),
        llm=openai.LLM.with_telnyx(model="meta-llama/Meta-Llama-3.1-70B-Instruct"),
    )

    await session.start(
        room=ctx.room,
        agent=Agent(instructions="You are a helpful voice assistant."),
    )

if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
Standard LiveKit agent code — the only Telnyx-specific parts are your STT, TTS, and LLM configuration.

Models and options

The example above uses our recommended defaults. Here’s how to configure each model with all available options.

Speech-to-Text (STT)

Telnyx hosts Deepgram models on dedicated GPUs. Three models are available.

Nova-3

Latest generation, best accuracy. Uses keyterm boosting.
from livekit.plugins import telnyx

stt = telnyx.deepgram.STT(
    model="nova-3",
    language="en",
    interim_results=True,
    keyterm=["YourBrand", "custom-term"],  # keyword boosting
)

Nova-2

Previous generation, stable and reliable. Uses weighted keyword boosting.
stt = telnyx.deepgram.STT(
    model="nova-2",
    language="en",
    interim_results=True,
    keywords=["YourBrand:2.0", "custom-term:1.5"],  # weighted boosting, term:weight pairs
)
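The `term:weight` format packs a boost weight into each keyword string. A minimal sketch of how such entries break down (`parse_keyword` is a hypothetical illustration, not part of the Telnyx plugin):

```python
# Illustrative only: how Nova-2 "term:weight" boost strings decompose.
def parse_keyword(entry: str) -> tuple[str, float]:
    """Split a 'term:weight' string; assume weight 1.0 if omitted."""
    term, sep, weight = entry.rpartition(":")
    if not sep:  # no colon present, e.g. "YourBrand"
        return entry, 1.0
    return term, float(weight)

print(parse_keyword("YourBrand:2.0"))    # ('YourBrand', 2.0)
print(parse_keyword("custom-term:1.5"))  # ('custom-term', 1.5)
```

Higher weights bias recognition more strongly toward the term.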

Flux

Experimental, with built-in end-of-turn detection.
stt = telnyx.deepgram.STT(
    model="flux",
    language="en",
    interim_results=True,
    keyterm=["YourBrand", "custom-term"],
    eot_threshold=0.5,        # confidence needed to finalize end of turn
    eot_timeout_ms=3000,      # max wait before forcing end of turn
    eager_eot_threshold=0.3,  # lower bar for early (eager) end-of-turn events
)
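Conceptually, the three settings act as tiers on Flux's end-of-turn confidence. This is an illustrative sketch of those semantics as described above, not the plugin's actual internals:

```python
# Illustrative decision logic for Flux end-of-turn (EOT) settings.
def turn_state(eot_confidence: float, silence_ms: int,
               eot_threshold: float = 0.5,
               eot_timeout_ms: int = 3000,
               eager_eot_threshold: float = 0.3) -> str:
    if eot_confidence >= eot_threshold or silence_ms >= eot_timeout_ms:
        return "end_of_turn"        # finalize the turn
    if eot_confidence >= eager_eot_threshold:
        return "eager_end_of_turn"  # speculative: agent may start responding early
    return "speaking"               # caller is assumed to still be talking

print(turn_state(0.6, 200))   # end_of_turn
print(turn_state(0.4, 200))   # eager_end_of_turn
print(turn_state(0.1, 4000))  # end_of_turn (timeout reached)
```

Lowering `eager_eot_threshold` cuts response latency at the cost of more false starts.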

Text-to-Speech (TTS)

from livekit.plugins import telnyx

# Telnyx Natural HD
tts = telnyx.TTS(voice="Telnyx.NaturalHD.astra")

# MiniMax Speech 2.8 Turbo
tts = telnyx.TTS(voice="MiniMax.speech-2.8-turbo.Narrator")
Telnyx offers an extensive library of voices across multiple providers and models, with broad language and accent support. Voice IDs follow the pattern Provider.Model.voice_name. To find a voice:
  1. Browse the voice library
  2. Copy the voice ID (e.g. Telnyx.NaturalHD.astra)
  3. Pass it to telnyx.TTS(voice="...")
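To see how the Provider.Model.voice_name pattern decomposes, here is a small sketch (`parse_voice_id` is a hypothetical helper, not part of the plugin; note that model names such as speech-2.8-turbo can themselves contain dots, so only the first and last segments are safe split points):

```python
# Illustrative only: split a voice ID into (provider, model, voice_name).
def parse_voice_id(voice_id: str) -> tuple[str, str, str]:
    provider, rest = voice_id.split(".", 1)   # provider is the first segment
    model, voice_name = rest.rsplit(".", 1)   # voice name is the last segment
    return provider, model, voice_name

print(parse_voice_id("Telnyx.NaturalHD.astra"))
# ('Telnyx', 'NaturalHD', 'astra')
print(parse_voice_id("MiniMax.speech-2.8-turbo.Narrator"))
# ('MiniMax', 'speech-2.8-turbo', 'Narrator')
```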

Parameters

Parameter    Default  Description
voice        —        Voice ID (e.g. Telnyx.NaturalHD.astra)
sample_rate  24000    Audio sample rate in Hz

LLM

Telnyx hosts models with an OpenAI-compatible API. No concurrency limits. Use the .with_telnyx() helper on the standard OpenAI plugin:
from livekit.plugins import openai

# Hosted open-source model — runs on Telnyx GPUs
llm = openai.LLM.with_telnyx(model="moonshotai/Kimi-K2.5")

# Proprietary model via BYOK (bring your own key)
llm = openai.LLM.with_telnyx(model="openai/gpt-4o-mini")

Hosted models

These run on Telnyx infrastructure — no external API key needed, just your TELNYX_API_KEY:
  • moonshotai/Kimi-K2.5
  • zai-org/GLM-5
  • MiniMaxAI/MiniMax-M2.5

Proprietary models (BYOK)

For models like GPT-4o or Claude, Telnyx proxies the request using your own API key. Add your provider key in the Telnyx Portal under Inference settings. Full models list →

Next steps

  • Deploy — Deploy your agent to the Telnyx LiveKit platform
  • Telephony — Connect your agent to phone numbers