Postmortem -
Jan 12, 17:20 UTC
Resolved -
We have continued to see good performance in processing times and error rates, and all previously throttled jobs have now been completed, so we are marking this issue as resolved.
Jan 3, 17:57 UTC
Monitoring -
We have made changes to address the issues we were seeing earlier. Processing times and error rates are now returning to the normal range. Some requests are still being throttled as a result of the earlier slowdown, but that number is steadily declining as we continue to monitor performance.
Jan 3, 17:46 UTC
Update -
We are continuing to see improvements and will be moving into monitoring status shortly.
Jan 3, 17:43 UTC
Identified -
We are working to reset unhealthy instances in our pipeline and are now seeing an increase in successful requests and a reduction in failed requests. Processing times are still longer than usual, but they are improving.
Jan 3, 17:35 UTC
Update -
We are still seeing issues leading to longer-than-usual processing times. This slowdown is causing throttling for some accounts as well as some failed requests due to timeout errors.
Jan 3, 17:03 UTC
Update -
We are still investigating this issue. Its impact is slower-than-usual processing times for requests to our Async endpoint, as well as some failed requests with "the operation timed out" errors.
Jan 3, 16:42 UTC
Investigating -
We are currently investigating an issue that is resulting in slowdowns in transcription completions. We will share more details as soon as we learn more.
Jan 3, 16:38 UTC