Resolved
We have root-caused the issue. A change to our system resulted in an exceptionally high network traffic load, which led to higher latencies for our internal API requests. This, in turn, slowed down the creation of new runners.
The issue is now resolved and queue times are back to expected levels.
Monitoring
We rolled back a change that may have impacted the creation of new instances. Queue times have recovered. We'll keep monitoring the situation.
Investigating
Our team has been notified of increased queue times for GitHub jobs waiting to be processed by Namespace Runners.