Directory

GitHub Status - Incident History
https://www.githubstatus.com
Feed generated by Statuspage, Sat, 28 Dec 2024 08:07:08 UTC


Disruption with some GitHub services
Resolved: Fri, 20 Dec 2024 16:44:36 UTC
https://www.githubstatus.com/incidents/5hjghvwvqztc

  Dec 20, 16:44 UTC - Resolved - On December 20th, 2024, between 15:57 UTC and 16:39 UTC, some of our marketing pages became inaccessible and users attempting to access the pages would have received 500 errors. There was no impact to any operational product or service area. This issue was due to a partial outage with one of our service providers. At 16:39 UTC the service provider resolved the outage, restoring access to the affected pages. We are investigating methods to improve error handling and gracefully degrade these pages in case of future outages.
  Dec 20, 16:42 UTC - Update - This issue is related to a partner who is working on the problem; they are in partial recovery.
  Dec 20, 16:20 UTC - Update - We're seeing issues related to some of our marketing pages. We are investigating.
  Dec 20, 16:18 UTC - Investigating - We are currently investigating this issue.


Live updates on pages not loading reliably
Resolved: Tue, 17 Dec 2024 16:00:10 UTC
https://www.githubstatus.com/incidents/fnq063tqh7cc

  Dec 17, 16:00 UTC - Resolved - On December 17th, 2024, between 14:33 UTC and 14:50 UTC, many users experienced intermittent errors and timeouts when accessing github.com. The error rate was 8.5% on average and peaked at 44.3% of requests. The increased error rate caused a broad impact across our services, such as the inability to log in, view a repository, open a pull request, and comment on issues. The errors were caused by our web servers being overloaded as a result of planned maintenance that unintentionally caused our live updates service to fail to start. As a result of the live updates service being down, clients reconnected aggressively and overloaded our servers.
    We only marked Issues as affected during this incident despite the broad impact. This oversight was due to a gap in our alerting while our web servers were overloaded. The engineering team's focus on restoring functionality led us to not identify the broad scope of the impact to customers until the incident had already been mitigated.
    We mitigated the incident by rolling back the changes from the planned maintenance to the live updates service and scaling up the service to handle the influx of traffic from WebSocket clients.
    We are working to reduce the impact of the live updates service's availability on github.com to prevent issues like this one in the future. We are also working to improve our alerting to better detect the scope of impact from incidents like this.
  Dec 17, 15:32 UTC - Update - Issues is operating normally.
  Dec 17, 15:29 UTC - Update - We have taken some mitigation steps and are continuing to investigate the issue. There was a period of wider impact on many GitHub services, such as user logins and page loads, which should now be mitigated.
  Dec 17, 15:05 UTC - Update - Issues is experiencing degraded availability. We are continuing to investigate.
  Dec 17, 14:53 UTC - Update - We are currently seeing live updates on some pages not working. This can impact features such as status checks and the merge button for PRs. The current mitigation is to refresh pages manually to see the latest details. We are working to mitigate this and will continue to provide updates as the team makes progress.
  Dec 17, 14:51 UTC - Investigating - We are investigating reports of degraded performance for Issues.
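A note on the reconnect behavior described above: the Dec 17 summary attributes the overload to clients reconnecting aggressively once the live updates service failed to start. As a minimal sketch of the usual countermeasure only (not GitHub's actual client code; connect() below is a hypothetical stand-in for opening a live-updates WebSocket connection), capped exponential backoff with jitter spreads reconnect attempts out so a recovering service is not stampeded:

    import random
    import time

    def connect():
        """Hypothetical stand-in for opening a live-updates WebSocket connection."""
        raise ConnectionError("live updates service unavailable")

    def reconnect_with_backoff(base=1.0, cap=60.0, max_attempts=10):
        """Retry with capped exponential backoff plus full jitter.

        Randomized, growing delays prevent the synchronized reconnect storm
        described in the Dec 17 incident summary.
        """
        for attempt in range(max_attempts):
            try:
                return connect()
            except ConnectionError:
                delay = random.uniform(0, min(cap, base * (2 ** attempt)))
                time.sleep(delay)
        raise RuntimeError(f"giving up after {max_attempts} attempts")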
Disruption with some GitHub services
Resolved: Fri, 06 Dec 2024 17:17:36 UTC
https://www.githubstatus.com/incidents/d33mtmnttgsh

  Dec 6, 17:17 UTC - Resolved - Upon further investigation, the degradation in migrations in the EU was caused by an internal configuration issue, which was promptly identified and resolved. No customer migrations were impacted during this time; the issue only affected GitHub Enterprise Cloud - EU and had no impact on GitHub.com. The service is now fully operational. We are following up by improving our processes for these internal configuration changes to prevent a recurrence, and by ensuring that incidents affecting GitHub Enterprise Cloud - EU are reported on https://eu.githubstatus.com/.
  Dec 6, 17:17 UTC - Update - Migrations are failing for a subset of users in the EU region with data residency. We believe we have resolved the issue and are monitoring for resolution.
  Dec 6, 16:58 UTC - Investigating - We are currently investigating this issue.


Disruption with some GitHub services
Resolved: Wed, 04 Dec 2024 19:27:34 UTC
https://www.githubstatus.com/incidents/4349zxvb8stp

  Dec 4, 19:27 UTC - Resolved - On December 4th, 2024 between 18:52 UTC and 19:11 UTC, several GitHub services were degraded with an average error rate of 8%.
    The incident was caused by a change to a centralized authorization service that contained an unoptimized database query. This led to an increase in overall load on a shared database cluster, resulting in a cascading effect on multiple services and specifically affecting repository access authorization checks. We mitigated the incident after rolling back the change at 19:07 UTC, fully recovering within 4 minutes.
    While this incident was caught and remedied quickly, we are implementing process improvements around recognizing and reducing the risk of changes involving high-volume authorization checks. We are also investing in broad improvements to our safe rollout process, such as improving early detection mechanisms.
  Dec 4, 19:26 UTC - Update - Pull Requests is operating normally.
  Dec 4, 19:21 UTC - Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
  Dec 4, 19:20 UTC - Update - Issues is operating normally.
  Dec 4, 19:18 UTC - Update - API Requests is operating normally.
  Dec 4, 19:17 UTC - Update - Webhooks is operating normally.
  Dec 4, 19:11 UTC - Update - We have identified the cause of timeouts impacting users across multiple services. The change was rolled back and we are seeing recovery. We will continue to monitor for complete recovery.
  Dec 4, 19:07 UTC - Update - Issues is experiencing degraded performance. We are continuing to investigate.
  Dec 4, 19:05 UTC - Update - API Requests is experiencing degraded performance. We are continuing to investigate.
  Dec 4, 19:05 UTC - Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
  Dec 4, 18:58 UTC - Investigating - We are currently investigating this issue.
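The Dec 4 follow-up is investment in a safer rollout process with earlier detection. As a rough, generic sketch of that idea (not GitHub's deployment tooling; every threshold below is illustrative), a canary gate compares the canary's error rate against the baseline and asks for a rollback once it degrades beyond a tolerance:

    def should_roll_back(canary_errors, canary_requests,
                         baseline_errors, baseline_requests,
                         tolerance=0.02, min_requests=500):
        """Return True when the canary's error rate exceeds baseline by more than `tolerance`.

        `min_requests` avoids deciding on too little traffic. All values here
        are illustrative, not GitHub's actual thresholds.
        """
        if canary_requests < min_requests:
            return False  # not enough signal yet; keep watching
        canary_rate = canary_errors / canary_requests
        baseline_rate = baseline_errors / max(baseline_requests, 1)
        return canary_rate > baseline_rate + tolerance

    # Example: an 8% canary error rate against a ~0.5% baseline trips the gate.
    print(should_roll_back(80, 1000, 50, 10000))  # True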
[Retroactive] Incident with Pull Requests
Resolved: Tue, 03 Dec 2024 23:30:00 UTC
https://www.githubstatus.com/incidents/lbdsk3990lz5

  Dec 3, 23:30 UTC - Resolved - On December 3rd, between 23:29 and 23:43 UTC, Pull Requests experienced a brief outage; teams have confirmed the issue to be resolved. Due to the brevity of the incident it was not publicly statused at the time; however, an RCA will be conducted and shared in due course.


Incident with Pull Requests and API Requests
Resolved: Tue, 03 Dec 2024 20:05:05 UTC
https://www.githubstatus.com/incidents/w6g0cmvyx3vm

  Dec 3, 20:05 UTC - Resolved - On December 3, 2024, between 19:35 UTC and 20:05 UTC, API requests, Actions, Pull Requests and Issues were degraded. Web and API requests for Pull Requests experienced a 3.5% error rate and Issues had a 1.2% error rate. The highest impact was for users who experienced errors while creating and commenting on Pull Requests and Issues. Actions had a 3.3% error rate in jobs and delays on some updates during this time.
    This was due to an erroneous database credential change impacting write access to Issues and Pull Requests data. We mitigated the incident by reverting the credential change at 19:52 UTC. We continued to monitor service recovery before resolving the incident at 20:05 UTC.
    There are a few improvements we are making in response to this. We are investing in safeguards to the change management process in order to prevent erroneous database credential changes. Additionally, the initial rollback attempt was unsuccessful, which led to a longer time to mitigate; we were able to revert through an alternative method and are updating our playbooks to document this mitigation strategy.
  Dec 3, 20:05 UTC - Update - Pull Requests is operating normally.
  Dec 3, 20:04 UTC - Update - Actions is operating normally.
  Dec 3, 20:02 UTC - Update - API Requests is operating normally.
  Dec 3, 19:59 UTC - Update - We have taken mitigating actions and are starting to see recovery, but are continuing to monitor and ensure full recovery. Some users may still see errors.
  Dec 3, 19:54 UTC - Update - Some users will experience problems with certain features of pull requests, actions, issues and other areas. We are aware of the issue, know the cause, and are working on a mitigation.
  Dec 3, 19:48 UTC - Investigating - We are investigating reports of degraded performance for API Requests, Actions and Pull Requests.
Disruption with some GitHub services
Resolved: Tue, 03 Dec 2024 04:39:47 UTC
https://www.githubstatus.com/incidents/v4d2jbm842p4

  Dec 3, 04:39 UTC - Resolved - Between Dec 3 03:35 UTC and 04:35 UTC, availability of large hosted runners for Actions was degraded due to failures in background VM provisioning jobs. This was a shorter recurrence of the issue that occurred the previous day. Users would see workflows queued waiting for a large runner. On average, 13.5% of all workflows requiring large runners over the incident time were affected, peaking at 46% of requests. Standard and Mac runners were not affected.
    Following the Dec 1 incident, we had disabled non-critical paths in the provisioning job and believed that would eliminate any impact while we understood and addressed the timeouts. Unfortunately, the timeouts were a symptom of broader job health issues, so those changes did not prevent this second occurrence the following day. We now understand that other jobs on these agents had issues that resulted in them hanging and consuming available job agent capacity. The reduced capacity led to saturation of the remaining agents and significant performance degradation in the running jobs.
    In addition to the immediate improvements shared in the previous incident summary, we immediately initiated regular recycles of all agents in this area while we continue to address the issues in both the jobs themselves and the resiliency of the agents. We also continue to improve our detection to ensure we automatically detect these delays.
  Dec 3, 04:38 UTC - Update - We saw a recurrence of the large hosted runner incident (https://www.githubstatus.com/incidents/qq1m7mqcl6zk) from 12/1/2024. We've applied the same mitigation and see improvements. We will continue to work on a long-term solution.
  Dec 3, 04:16 UTC - Update - We are investigating reports of degraded performance for Hosted Runners.
  Dec 3, 04:11 UTC - Investigating - We are currently investigating this issue.
Disruption with some GitHub services
Resolved: Mon, 02 Dec 2024 01:05:26 UTC
https://www.githubstatus.com/incidents/qq1m7mqcl6zk

  Dec 2, 01:05 UTC - Resolved - Between Dec 1 12:20 UTC and Dec 2 01:05 UTC, availability of large hosted runners for Actions was degraded due to failures in background VM provisioning jobs. Users would see workflows queued waiting for a runner. On average, 8% of all workflows requiring large runners over the incident time were affected, peaking at 37.5% of requests. There were also lower levels of intermittent queuing on Dec 1 beginning around 3:00 UTC. Standard and Mac runners were not affected.
    The job failures were caused by timeouts to a dependent service in the VM provisioning flow and gaps in the jobs' resilience to those timeouts. The incident was mitigated by circumventing the dependency, as it was not in the critical path of VM provisioning.
    There are a few immediate improvements we are making in response to this. We are addressing the causes of the failed calls to improve the availability of calls to that backend service. Even with that impact, the critical flow of large VM provisioning should not have been affected, so we are improving the client behavior to fail fast and circuit-break non-critical calls. Finally, the alerting for this service was not adequate in this particular scenario to ensure a fast response by our team. We are improving our automated detection to reduce our time to detection and mitigation of issues like this one in the future.
  Dec 2, 00:57 UTC - Update - We've applied a mitigation to fix the issues with large runner job processing. We are seeing improvements in telemetry and are monitoring for full recovery.
  Dec 2, 00:14 UTC - Update - We continue to investigate large hosted runners not picking up jobs.
  Dec 1, 23:43 UTC - Update - We continue to investigate issues with large runners.
  Dec 1, 23:24 UTC - Update - We're seeing issues related to large runners not picking up jobs and are investigating.
  Dec 1, 23:18 UTC - Investigating - We are currently investigating this issue.
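The Dec 1-2 summary above says the client-side fix is to "fail fast and circuit break non-critical calls." As a minimal, generic sketch of that pattern (assumptions: the dependency is wrapped in a plain callable, and the thresholds are invented, not GitHub's), a circuit breaker stops calling a flaky non-critical dependency after repeated failures and only retries it after a cool-down:

    import time

    class CircuitBreaker:
        """Minimal circuit breaker: open after `max_failures` consecutive errors,
        then skip calls until `reset_after` seconds have elapsed."""

        def __init__(self, max_failures=3, reset_after=30.0):
            self.max_failures = max_failures
            self.reset_after = reset_after
            self.failures = 0
            self.opened_at = None

        def call(self, func, *args, fallback=None, **kwargs):
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_after:
                    return fallback  # fail fast: skip the non-critical call entirely
                self.opened_at = None  # cool-down elapsed; allow one trial call
            try:
                result = func(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()
                return fallback
            self.failures = 0
            return result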
Incident with Codespaces
Resolved: Thu, 28 Nov 2024 07:01:12 UTC
https://www.githubstatus.com/incidents/fjntvyfzm8kn

  Nov 28, 07:01 UTC - Resolved - This incident has been resolved.
  Nov 28, 07:00 UTC - Update - We identified the issue and applied a mitigation, resulting in the cessation of timeouts. While we are considering this incident resolved for now, we are continuing to investigate the root cause and plan to implement a permanent fix. Updates will follow as we progress.
  Nov 28, 06:36 UTC - Update - We are investigating issues with timeouts in some requests in Codespaces. We will update you on mitigation progress.
  Nov 28, 06:27 UTC - Investigating - We are investigating reports of degraded performance for Codespaces.


Incident with Sporadic Timeouts in Codespaces
Resolved: Thu, 28 Nov 2024 05:11:03 UTC
https://www.githubstatus.com/incidents/y5pb0bxhlcxz

  Nov 28, 05:11 UTC - Resolved - This incident has been resolved.
  Nov 28, 05:10 UTC - Update - We identified and addressed failures in two proxy servers and applied a mitigation. Since then, timeouts have ceased, and we are considering the incident resolved. We will continue to monitor the situation closely and provide updates if any changes occur.
  Nov 28, 04:34 UTC - Update - We are investigating some network proxy issues that may be contributing to the timeouts in a small percentage of requests in Codespaces. We will continue to investigate.
  Nov 28, 04:03 UTC - Update - We are continuing to investigate issues with timeouts in a small percentage of requests in Codespaces. We will update you on mitigation progress.
  Nov 28, 03:32 UTC - Update - We are investigating issues with timeouts in some requests in Codespaces. Some users may not be able to connect to their Codespaces at this time. We will update you on mitigation progress.
  Nov 28, 03:29 UTC - Investigating - We are investigating reports of degraded performance for Codespaces.
Disruption with GitHub Search
Resolved: Mon, 25 Nov 2024 15:25:10 UTC
https://www.githubstatus.com/incidents/ltyqfp67463z

  Nov 25, 15:25 UTC - Resolved - Between 13:30 and 15:00 UTC, repository searches were timing out for most users. The ongoing efforts from the similar incident last week (https://www.githubstatus.com/incidents/dnq4lp93t62f) helped uncover the main contributing factors. We have deployed short-term mitigations and identified longer-term work to proactively identify and limit resource-intensive searches.
  Nov 25, 15:24 UTC - Update - Search is now operating normally. We are declaring this issue resolved.
  Nov 25, 15:10 UTC - Update - We are now observing signs of complete recovery in search. We will continue to monitor and assess.
  Nov 25, 14:40 UTC - Update - We're observing signs of recovery in search; we will continue to monitor. Next update within 15 minutes.
  Nov 25, 14:10 UTC - Update - We are seeing failures in repos, users, discussions, and wikis search. Customers may see failing searches, and searching by topic may fail to load. Code and issues search continue to be available. The team is investigating; next update in 30 minutes.
  Nov 25, 13:58 UTC - Update - We are seeing failures in search. Customers may see failing searches and searching by topic may fail to load.
  Nov 25, 13:57 UTC - Investigating - We are currently investigating this issue.


Disruption with some GitHub services
Resolved: Mon, 25 Nov 2024 12:17:35 UTC
https://www.githubstatus.com/incidents/nbscbkxwxjrx

  Nov 25, 12:17 UTC - Resolved - On November 25th, 2024, between 10:38 UTC and 12:00 UTC, the Claude model for GitHub Copilot Chat experienced degraded performance. During the impact, all requests to Claude would result in an immediate error to the user. This was due to upstream errors with one of our infrastructure providers, which have since been mitigated.
    We are working with our infrastructure providers to reduce time to detection and to implement additional failover options to mitigate issues like this one in the future.
  Nov 25, 12:17 UTC - Update - This incident has been mitigated; we are now seeing requests succeed to the Claude 3.5 Sonnet model in Copilot.
  Nov 25, 11:44 UTC - Update - The team is continuing to investigate errors using the Claude 3.5 Sonnet v2 model and has engaged our partners. All requests to this model are failing, but other Copilot models are functional and can be used as an alternative.
  Nov 25, 11:00 UTC - Update - Users cannot use the Claude 3.5 Sonnet model in GitHub Copilot currently, in both VS Code and GitHub.com chat. The team is investigating.
  Nov 25, 10:51 UTC - Investigating - We are currently investigating this issue.
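During the Nov 25 Claude outage the status updates pointed users at other Copilot models, and the summary mentions adding failover options. Purely as an illustration of that kind of client-side failover (send_chat() and the model list below are hypothetical placeholders, not Copilot's API), a caller can walk an ordered list of models and fall back when the preferred one errors:

    class ModelUnavailable(Exception):
        pass

    def send_chat(model, prompt):
        """Hypothetical chat call; raises ModelUnavailable when the backend errors."""
        raise ModelUnavailable(model)

    def chat_with_fallback(prompt, models=("claude-3.5-sonnet", "fallback-model")):
        """Try each model in preference order; fail only if every model errors."""
        last_error = None
        for model in models:
            try:
                return send_chat(model, prompt)
            except ModelUnavailable as err:
                last_error = err  # degraded, but the next model keeps the user unblocked
        raise RuntimeError(f"all chat models unavailable: {last_error}")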
Repository searches not working for some users
Resolved: Thu, 21 Nov 2024 16:48:37 UTC
https://www.githubstatus.com/incidents/dnq4lp93t62f

  Nov 21, 16:48 UTC - Resolved - On November 21, 2024, between 14:30 UTC and 15:53 UTC, search services at GitHub were degraded and CPU load on some nodes hit 100%. On average, the error rate was 22 requests/second and peaked at 83 requests/second. During this incident Enterprise Profile pages were slow to load and searches may have returned low-quality results.
    The CPU load was mitigated by redeploying portions of our web infrastructure.
    We are still working to identify the cause of the increase in CPU usage and are improving our observability tooling to better expose the cause of an incident like this in the future.
  Nov 21, 16:04 UTC - Update - We are seeing recovery across all searches. The team continues to closely monitor our search system and is working to fully mitigate the cause of the problems.
  Nov 21, 15:33 UTC - Update - Users will notice that loading an organization profile will sometimes not work. Additionally, site-wide search is affected, too. This issue does not affect code, issues, or pull request searches.
  Nov 21, 15:30 UTC - Investigating - We are currently investigating this issue.


Disruption with some GitHub services
Resolved: Tue, 19 Nov 2024 12:03:43 UTC
https://www.githubstatus.com/incidents/b85tj3x6n4vz

  Nov 19, 12:03 UTC - Resolved - On November 19, 2024, between 10:56 UTC and 12:03 UTC, the notifications service was degraded and stopped sending notifications. On average, notification delivery was delayed by about 1 hour. This was due to a database host coming out of a regular maintenance process in read-only mode.
    We mitigated the incident by making the host writable again. After that, notification delivery recovered and any delivery job that had failed during the incident was successfully retried.
    We are working to improve our observability across database clusters to reduce our time to detection and mitigation of issues like this one in the future.
  Nov 19, 11:56 UTC - Update - We have resolved the issue but are waiting for queues to catch up.
  Nov 19, 11:38 UTC - Update - We're investigating an issue with two-factor authentication in the GitHub mobile app.
  Nov 19, 11:36 UTC - Investigating - We are currently investigating this issue.
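The Nov 19 incident boils down to a database host returning from maintenance still in read-only mode. A minimal sketch of the kind of post-maintenance gate that catches this (assuming a MySQL-compatible server exposing @@global.read_only and a standard DB-API cursor; this is not GitHub's tooling):

    def assert_writable(cursor, host_name):
        """Fail loudly if a host returned from maintenance is still read-only.

        Intended as a gate run before the host rejoins the write pool, so the
        condition is caught at maintenance time rather than by failing delivery jobs.
        """
        cursor.execute("SELECT @@global.read_only")
        (read_only,) = cursor.fetchone()
        if read_only:
            raise RuntimeError(
                f"{host_name} is still read-only; do not return it to the write pool"
            )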
[Retroactive] Merge Queues not processing queued Pull Requests in some repositories
Resolved: Wed, 06 Nov 2024 11:00:00 UTC
https://www.githubstatus.com/incidents/17yzcr45rg2x

  Nov 6, 11:00 UTC - Resolved - Between 2024-11-06 11:14 UTC and 2024-11-08 18:15 UTC, pull requests added to merge queues in some repositories were not processed. This was caused by a bug in a new version of the merge queue code, and was mitigated by rolling back a feature flag. Around 1% of enqueued PRs were affected, with around 7% of repositories that use a merge queue being impacted at some time during the incident.
    Queues were impacted if their target branch had the “require status checks” setting enabled but did not have any individual required checks configured. Our monitoring strategy only covered PRs automatically removed from the queue, which was insufficient to detect this issue.
    We are improving our monitors to cover anomalous manual queue entry removal rates, which will allow us to detect this class of issue much sooner.


Incident with Actions
Resolved: Wed, 30 Oct 2024 09:42:30 UTC
https://www.githubstatus.com/incidents/9yk1fbk0qjjc

  Oct 30, 09:42 UTC - Resolved - On October 30, 2024, between 5:45 and 9:42 UTC, the Actions service was degraded, causing run delays. On average, Actions workflow run, job, and step updates were delayed by as much as one hour. The delays were caused by updates in a dependent service that led to failures in Redis connectivity. Delays recovered once Redis cluster connectivity was restored at 8:16 UTC, and the incident was fully mitigated once the job queue had been processed by 9:24 UTC. This incident followed an earlier short period of impact on hosted runners due to a similar issue, which was mitigated by failing over to a healthy cluster.
    In response, we are working to improve our observability across Redis clusters to reduce our time to detection and mitigation of issues like this one, where multiple clusters and services were impacted. We will also be working to reduce the time to mitigate and to improve general resilience to this dependency.
  Oct 30, 08:48 UTC - Update - We are continuing to investigate delays to status updates for Actions Workflow Runs, Workflow Job Runs, and Check Steps. Customers may see that their Actions workflows have completed, but the run appears to be waiting for its status to update. We will continue providing updates on the progress towards mitigation.
  Oct 30, 08:05 UTC - Update - We have identified connectivity issues with an internal service causing delays in Actions Workflow Runs, Workflow Job Runs, and Check Steps. We are continuing to investigate.
  Oct 30, 07:25 UTC - Investigating - We are investigating reports of degraded performance for Actions.
Incident with GitHub Community Discussions
Resolved: Thu, 24 Oct 2024 06:55:06 UTC
https://www.githubstatus.com/incidents/xy3hrpmg4r3k

  Oct 24, 06:55 UTC - Resolved - On Oct 24, 2024 at 06:55 UTC, a syntactically correct but invalid discussion template YAML config file was committed in the community/community repository. This caused all users of that repository who tried to access a discussion template, or who attempted to create a discussion, to receive a 500 error response.
    We mitigated the incident by manually reverting the invalid template changes.
    We are adding support to detect and prevent invalid discussion template YAML from causing user-facing errors in the future.
  Oct 24, 06:13 UTC - Update - We are aware of an issue that is preventing users from creating new posts in Community Discussions (community.github.com). Users may see a 500 error when they attempt to post a new discussion. We are currently working to resolve this.
  Oct 24, 06:12 UTC - Investigating - We are currently investigating this issue.
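The Oct 24 template file parsed as YAML but was not a usable template, and the stated follow-up is to detect and prevent invalid discussion template YAML before it causes user-facing errors. A rough illustration of that kind of pre-merge check (the required keys below are assumptions made for this sketch, not GitHub's actual template schema), using PyYAML:

    import yaml  # PyYAML

    REQUIRED_TOP_LEVEL_KEYS = {"title", "body"}  # assumed schema for this sketch

    def validate_discussion_template(path):
        """Return a list of problems; an empty list means this basic check passed.

        Catches both syntactically broken YAML and the "parses fine but is not a
        valid template" case described in the Oct 24 incident.
        """
        try:
            with open(path) as fh:
                data = yaml.safe_load(fh)
        except yaml.YAMLError as err:
            return [f"YAML syntax error: {err}"]
        if not isinstance(data, dict):
            return ["template must be a YAML mapping"]
        missing = REQUIRED_TOP_LEVEL_KEYS - data.keys()
        return [f"missing required keys: {sorted(missing)}"] if missing else []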
Disruption with some GitHub services
Resolved: Sat, 12 Oct 2024 01:11:28 UTC
https://www.githubstatus.com/incidents/myvz2tsj2dh8

  Oct 12, 01:11 UTC - Resolved - On October 11, 2024, starting at 05:59 UTC, DNS infrastructure in one of our sites started to fail to resolve lookups following a database migration. Attempts to recover the database led to cascading failures that impacted the DNS systems for that site. The team worked to restore the infrastructure, and there was no customer impact until 17:31 UTC.
    During the incident, impact to the following services could be observed:
    - Copilot: degradation in IDE code completions for 4% of active users from 17:31 UTC to 21:45 UTC.
    - Actions: workflow run delays (25% of runs delayed by over 5 minutes) and errors (1%) between 20:28 UTC and 21:30 UTC, plus errors while creating Artifact Attestations.
    - Customer migrations: from 18:16 UTC to 23:12 UTC, running migrations stopped and new ones were not able to start.
    - Support: support.github.com was unavailable from 19:28 UTC to 22:14 UTC.
    - Code search: 100% of queries failed between 2024-10-11 20:16 UTC and 2024-10-12 00:46 UTC.
    Starting at 18:05 UTC, engineering attempted to repoint the degraded site's DNS to a different site to restore DNS functionality. At 18:26 UTC the test system had validated this approach, and a progressive rollout to the affected hosts proceeded over the next hour. While this mitigation was effective at restoring connectivity within the site, it caused issues with connectivity from healthy sites back to the degraded site, and the team proceeded to plan out a different remediation effort.
    At 20:52 UTC, the team finalized a remediation plan and began the next phase of mitigation by deploying temporary DNS resolution capabilities to the degraded site. At 21:46 UTC, DNS resolution in the degraded site began to recover, and it was fully healthy at 22:16 UTC. Lingering issues with code search were resolved at 01:11 UTC on October 12.
    The team continued to restore the original functionality within the site after public service functionality was restored. GitHub is working to harden our resiliency and automation processes around this infrastructure to make diagnosing and resolving issues like this faster in the future.
  Oct 12, 00:46 UTC - Update - We're continuing to work towards recovery of the code search service.
  Oct 12, 00:14 UTC - Update - We've identified the issue with code search and are working towards recovery of service.
  Oct 11, 23:31 UTC - Update - We're continuing to investigate issues with code search.
  Oct 11, 22:57 UTC - Update - We're continuing to investigate issues with code search. Copilot and Actions services are recovered and operating normally.
  Oct 11, 22:16 UTC - Update - Copilot is operating normally.
  Oct 11, 22:14 UTC - Update - We are rolling out a fix to address the network connectivity issues. Copilot is seeing recovery. support.github.com is recovered.
  Oct 11, 21:46 UTC - Update - Actions is operating normally.
  Oct 11, 21:28 UTC - Update - We continue to work on mitigations. Actions is starting to see recovery.
  Oct 11, 20:52 UTC - Update - The mitigation attempt did not resolve the issue and we are working on a different resolution path. In addition to the previously listed impacts, some Actions runs will see delays in starting.
  Oct 11, 20:48 UTC - Update - Actions is experiencing degraded performance. We are continuing to investigate.
  Oct 11, 20:15 UTC - Update - We continue to work on mitigations. In addition to the previously listed impact, code search is also unavailable.
  Oct 11, 20:05 UTC - Update - A mitigation for the network connectivity issues is being tested.
  Oct 11, 19:28 UTC - Update - We continue to work on mitigations to restore network connectivity. In addition to the previously listed impact, access to support.github.com is also impacted.
  Oct 11, 19:05 UTC - Update - We have identified the problem and are working on mitigations. In addition to the previously listed impact, new Artifact Attestations cannot be created.
  Oct 11, 18:41 UTC - Update - We have identified that the problem is related to maintenance performed on our networking infrastructure, and we are working to restore connectivity. Copilot users in organizations or enterprises that have opted into the Content Exclusions feature will experience disabled completions in their editors. Customer migrations remain paused as well.
  Oct 11, 18:25 UTC - Update - We are investigating network connectivity issues. Some Copilot customers will see errors on API calls and experiences. We have also paused the remaining customer migration queue while we investigate, due to an increase in errors.
  Oct 11, 17:58 UTC - Update - We are investigating reports of issues with service(s): Copilot. We will continue to keep users updated on progress towards mitigation.
  Oct 11, 17:56 UTC - Update - Copilot is experiencing degraded availability. We are continuing to investigate.
  Oct 11, 17:53 UTC - Investigating - We are currently investigating this issue.


Isolated Codespaces creation failures in the West Europe region
Resolved: Tue, 08 Oct 2024 23:32:54 UTC
https://www.githubstatus.com/incidents/m17w6x7syhx0

  Oct 8, 23:32 UTC - Resolved - This incident has been resolved.
  Oct 8, 23:32 UTC - Update - Codespaces is operating normally.
  Oct 8, 23:32 UTC - Update - Codespace creation has been remediated in this region.
  Oct 8, 22:54 UTC - Update - We are once again seeing signs of increased latency for Codespace creation in this region, but are at the same time recovering previously unavailable resources.
  Oct 8, 22:10 UTC - Update - Recovery continues slowly, and we are investigating strategies to speed up the recovery process.
  Oct 8, 21:39 UTC - Update - We are continuing to see gradual recovery in the region and continue to validate the persistent fix.
  Oct 8, 21:06 UTC - Update - The persistent fix has been applied, and we are beginning to see improvements in the region. We are still working on follow-on effects, however, and expect recovery to be gradual.
  Oct 8, 20:26 UTC - Update - We are nearing full application of the persistent fix and will provide more updates soon.
  Oct 8, 19:51 UTC - Update - Mitigations we have put in place are yielding improvements in Codespace creation success rates in the affected region. We expect full recovery once the persistent fix fully rolls out.
  Oct 8, 19:17 UTC - Update - We are continuing to work on mitigations while the more persistent fix rolls out.
  Oct 8, 18:44 UTC - Update - We are continuing to apply mitigations while we deploy the more persistent fix. Full recovery is expected in 2 hours or less, but more updates will be coming soon.
  Oct 8, 18:08 UTC - Update - We have applied some mitigations that are improving creation success rates while we work on the more comprehensive fix.
  Oct 8, 17:43 UTC - Update - We have identified a possible root cause and are working on the fix.
  Oct 8, 17:11 UTC - Update - Some Codespaces are failing to create successfully in the Western EU region. Investigation is ongoing.
  Oct 8, 17:08 UTC - Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
  Oct 8, 17:02 UTC - Investigating - We are currently investigating this issue.
Incident with Codespaces
Resolved: Mon, 30 Sep 2024 11:26:53 UTC
https://www.githubstatus.com/incidents/bdzwt47fwg51

  Sep 30, 11:26 UTC - Resolved - On September 30th, 2024 from 10:43 UTC to 11:26 UTC, Codespaces customers in the Central India region were unable to create new Codespaces. Resumes were not impacted. Additionally, there was no impact to customers in other regions.
    The cause was traced to storage capacity constraints in the region and was mitigated by temporarily redirecting create requests to other regions. Afterwards, additional storage capacity was added to the region and traffic was routed back.
    A bug was also identified that caused some available capacity to not be utilized, artificially constraining capacity and halting creations in the region prematurely. We have since fixed this bug as well, so that available capacity scales as expected according to our capacity planning projections.
  Sep 30, 11:26 UTC - Update - Codespaces is operating normally.
  Sep 30, 11:25 UTC - Update - We are seeing signs of recovery in Codespaces creations and starts. We are continuing to monitor for full recovery.
  Sep 30, 11:24 UTC - Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
  Sep 30, 11:09 UTC - Update - We are investigating a high number of errors in Codespaces creation and start.
  Sep 30, 11:08 UTC - Investigating - We are investigating reports of degraded availability for Codespaces.
Disruption with some GitHub services
Resolved: Fri, 27 Sep 2024 15:30:00 UTC
https://www.githubstatus.com/incidents/wlb83pxg009y

  Sep 27, 15:30 UTC - Resolved - Between September 27, 2024, 15:26 UTC and 15:34 UTC, the Repositories Releases service was degraded. During this time 9% of requests to list releases via the API or the webpage received a 500 Internal Server error.
    This was due to a bug in our software rollout strategy. The rollout was reverted starting at 15:30 UTC, which began to restore functionality; the rollback was completed at 15:34 UTC.
    We are continuing to improve our testing infrastructure to ensure that bugs such as this one can be detected before they make their way into production.


Degraded performance for some Copilot users
Resolved: Thu, 26 Sep 2024 05:08:45 UTC
https://www.githubstatus.com/incidents/zp4gzrqfzhrq

  Sep 26, 05:08 UTC - Resolved - Between September 25, 2024, 22:20 UTC and September 26, 2024, 5:00 UTC, the Copilot service was degraded. During this time Copilot chat requests failed at an average rate of 15%.
    This was due to a faulty deployment in a service provider that caused server errors from multiple regions. Traffic was routed away from those regions at 22:28 UTC and 23:39 UTC, which partially restored functionality while the upstream service provider rolled back their change. The rollback was completed at 04:41 UTC.
    We are continuing to improve our ability to respond more quickly to similar issues through faster regional redirection, and we are working with our upstream provider on improved monitoring.
  Sep 26, 05:08 UTC - Update - Monitors continue to see improvements. We are declaring full recovery.
  Sep 26, 05:03 UTC - Update - Copilot is operating normally.
  Sep 26, 03:51 UTC - Update - We've applied a mitigation to fix the issues and are seeing improvements in telemetry. We are monitoring for full recovery.
  Sep 26, 02:34 UTC - Update - We believe we have identified the root cause of the issue and are monitoring to ensure the problem does not recur.
  Sep 26, 01:46 UTC - Update - We are continuing to investigate the root cause of the latency previously observed to ensure there is no recurrence and better stability going forward.
  Sep 26, 01:03 UTC - Update - We are continuing to investigate the root cause of the latency previously observed to ensure there is no recurrence and better stability going forward.
  Sep 26, 00:29 UTC - Update - Copilot users should no longer see request failures. We are still investigating the root cause of the issue to ensure that the experience will remain uninterrupted.
  Sep 25, 23:55 UTC - Update - We are seeing recovery for requests to the Copilot API in affected regions, and are continuing to investigate to ensure the experience remains stable.
  Sep 25, 23:40 UTC - Update - We have noticed a degradation in performance of the Copilot API in some regions. This may result in latency or failed responses to requests to Copilot. We are investigating mitigation options.
  Sep 25, 23:39 UTC - Investigating - We are investigating reports of degraded performance for Copilot.
Incident with Actions Runs
Resolved: Wed, 25 Sep 2024 19:19:01 UTC
https://www.githubstatus.com/incidents/1g9v7rry4z86

  Sep 25, 19:19 UTC - Resolved - On September 25th, 2024 from 18:32 UTC to 19:13 UTC, the Actions service experienced a degradation during a production deployment, leading to actions failing to be downloaded at the start of a job. On average, 21% of Actions workflow runs failed to start during the course of the incident. The issue was traced back to a bug in an internal service responsible for generating the URLs used by the Actions runner to download actions.
    To mitigate the impact, we rolled back the affecting deployment. We are implementing new monitors to improve our detection and response time for this class of issues in the future.
  Sep 25, 19:14 UTC - Update - We're seeing issues related to Actions runs failing to download actions at the start of a job. We're investigating the cause and working on mitigations for customers impacted by this issue.
  Sep 25, 19:11 UTC - Investigating - We are investigating reports of degraded performance for Actions and Pages.
Incident with Git Operations
Resolved: Wed, 25 Sep 2024 16:03:30 UTC
https://www.githubstatus.com/incidents/q3xqwmcxzkqq

  Sep 25, 16:03 UTC - Resolved - On September 25, 2024 from 14:31 UTC to 15:06 UTC, the Git Operations service experienced a degradation, leading to 1,381,993 failed git operations. The overall error rate during this period was 4.2%, with a peak error rate of 12.5%.
    The root cause was traced to a bug in a build script for a component that runs on the file servers that host git repository data. The build script incurred an error that did not cause the overall build process to fail, resulting in a faulty set of artifacts being deployed to production.
    To mitigate the impact, we rolled back the affecting deployment.
    To prevent further occurrences of this cause in the future, we will be addressing the underlying cause of the ignored build failure and improving metrics and alerting for the resulting production failure scenarios.
  Sep 25, 15:34 UTC - Update - We are investigating reports of issues with both Actions and Packages, related to a brief period of time where specific Git Operations were failing. We will continue to keep users updated on progress towards mitigation.
  Sep 25, 15:25 UTC - Investigating - We are investigating reports of degraded performance for Git Operations.
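The Git Operations incident above traces back to a build-script error that did not fail the overall build, so faulty artifacts shipped. As a generic sketch of the fix (a hypothetical two-step build, not GitHub's actual pipeline), the key is letting any step's non-zero exit abort the whole build instead of being silently swallowed:

    import subprocess
    import sys

    def run_step(description, command):
        """Run one build step and abort the whole build if it fails.

        subprocess.run(..., check=True) raises CalledProcessError on a non-zero
        exit code, so a failing step can no longer be ignored by later steps.
        """
        print(f"==> {description}")
        subprocess.run(command, check=True)

    if __name__ == "__main__":
        try:
            run_step("compile component", ["make", "component"])  # hypothetical steps
            run_step("package artifacts", ["make", "package"])
        except subprocess.CalledProcessError as err:
            sys.exit(f"build failed at {err.cmd} (exit code {err.returncode})")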
Incident with Codespaces start and creation
Resolved: Tue, 24 Sep 2024 21:04:59 UTC
https://www.githubstatus.com/incidents/2dxdzc3fdgdx

  Sep 24, 21:04 UTC - Resolved - On September 24th, 2024 from 08:20 UTC to 09:04 UTC, the Codespaces service experienced an interruption in network connectivity, leading to 175 codespaces being unable to be created or resumed. The overall error rate during this period was 25%.
    The cause was traced to an interruption in network connectivity caused by SNAT port exhaustion following a deployment, causing individual Codespaces to lose their connection to the service.
    To mitigate the impact, we increased port allocations to give enough buffer for increased outbound connections shortly after deployments. We will be scaling up our outbound connectivity in the near future, as well as adding improved monitoring of network capacity to prevent future regressions.
  Sep 24, 21:04 UTC - Update - Codespaces is operating normally.
  Sep 24, 21:01 UTC - Update - We have successfully mitigated the issue affecting create and resume requests for Codespaces. Early signs of recovery are being observed in the impacted region.
  Sep 24, 21:00 UTC - Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
  Sep 24, 20:56 UTC - Update - We are investigating issues with Codespaces in the US East geographic area. Some users may not be able to create or start their Codespaces at this time. We will update you on mitigation progress.
  Sep 24, 20:54 UTC - Investigating - We are investigating reports of degraded availability for Codespaces.