
Audio Transcription

POST /v1/audio/transcriptions

Fully compatible with the OpenAI Audio Transcription API. Send audio as multipart/form-data; the OpenAI SDK does this automatically.

curl https://router-api.0g.ai/v1/audio/transcriptions \
-H "Authorization: Bearer sk-YOUR_API_KEY" \
-F "file=@recording.mp3" \
-F "model=openai/whisper-large-v3" \
-F "response_format=json"
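To make the wire format concrete, here is a minimal sketch of the multipart/form-data body that curl (and the OpenAI SDK) assemble for this request, using only the Python standard library. The helper name build_multipart, the boundary value, and the stand-in file bytes are illustrative, not part of any SDK.

```python
import io
import uuid

def build_multipart(fields: dict, file_name: str, file_bytes: bytes):
    """Assemble a multipart/form-data body like the curl request above."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    # One part per plain form field (model, response_format, ...).
    for name, value in fields.items():
        buf.write(f"--{boundary}\r\n".encode())
        buf.write(f'Content-Disposition: form-data; name="{name}"\r\n\r\n'.encode())
        buf.write(f"{value}\r\n".encode())
    # The audio file part carries a filename and a content type.
    buf.write(f"--{boundary}\r\n".encode())
    buf.write(f'Content-Disposition: form-data; name="file"; filename="{file_name}"\r\n'.encode())
    buf.write(b"Content-Type: audio/mpeg\r\n\r\n")
    buf.write(file_bytes + b"\r\n")
    buf.write(f"--{boundary}--\r\n".encode())
    return buf.getvalue(), f"multipart/form-data; boundary={boundary}"

body, content_type = build_multipart(
    {"model": "openai/whisper-large-v3", "response_format": "json"},
    "recording.mp3",
    b"\x00" * 16,  # stand-in for real MP3 bytes
)
```

In practice you would POST `body` with the returned `content_type` header; the OpenAI SDK hides all of this behind a single method call.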

Fields

| Field | Description |
| --- | --- |
| model | Audio model ID from /v1/models |
| file | Audio file (multipart form) |
| response_format | json, text, srt, verbose_json, vtt |
| language | ISO-639-1 code, e.g. "en" (optional; improves accuracy) |
| prompt | Optional text to guide style and vocabulary |
| temperature | Sampling temperature (0 = deterministic) |

Response

{
"text": "Hello, this is a transcription of the audio file."
}
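A sketch of handling the response on the client side, assuming the shapes described above: json and verbose_json bodies are JSON objects with a "text" field, while text, srt, and vtt bodies are plain strings. The helper name extract_text is illustrative.

```python
import json

def extract_text(body: str, response_format: str) -> str:
    """Pull the transcript out of a transcription response body.

    json / verbose_json responses are JSON objects with a "text" field;
    text, srt, and vtt responses are returned as plain strings.
    """
    if response_format in ("json", "verbose_json"):
        return json.loads(body)["text"]
    return body

raw = '{"text": "Hello, this is a transcription of the audio file."}'
print(extract_text(raw, "json"))
# Hello, this is a transcription of the audio file.
```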

0G Router Extensions

Because this endpoint uses multipart/form-data instead of a JSON body, the only Router extension that can be passed today is verify_tee, as a query parameter:

?verify_tee=true

See Verifiable Execution for what tee_verified means in the response. Provider routing fields (provider.address, provider.sort) are not currently parsed on multipart endpoints; use the default round-robin routing, or pin a provider via Provider Routing on the JSON-body endpoints.
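Because verify_tee must travel in the query string rather than the JSON body, a small helper can append it safely. This is a stdlib sketch; the function name with_verify_tee is illustrative.

```python
from urllib.parse import urlencode, urlparse, urlunparse

def with_verify_tee(url: str) -> str:
    """Append verify_tee=true to an endpoint URL, preserving any existing query."""
    parts = urlparse(url)
    extra = urlencode({"verify_tee": "true"})
    query = f"{parts.query}&{extra}" if parts.query else extra
    return urlunparse(parts._replace(query=query))

print(with_verify_tee("https://router-api.0g.ai/v1/audio/transcriptions"))
# https://router-api.0g.ai/v1/audio/transcriptions?verify_tee=true
```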