Unexpected responses from LLM Gateway using Claude models

Incident Report for AssemblyAI

Resolved

We have continued to see good performance since releasing the earlier fix and are now marking this issue as resolved.
Posted Nov 25, 2025 - 01:45 UTC

Monitoring

We have released a fix that has resolved the unexpected responses we were seeing when Claude models were used. Everything is working normally, but we will continue to monitor traffic to confirm performance remains stable.
Posted Nov 25, 2025 - 01:15 UTC

Identified

We have identified the root cause of this issue and are working to release a fix.
Posted Nov 25, 2025 - 00:22 UTC

Investigating

We are currently investigating an issue with LLM Gateway that is causing unexpected responses when using Claude models. Requests to LLM Gateway using other models are not affected. We are working to identify and resolve the root cause of this issue.
Posted Nov 24, 2025 - 22:39 UTC
This incident affected: APIs (LLM Gateway).