Outage
Post-Maintenance Incident
Affected services:
šŸ‡©šŸ‡Ŗ FC [9555P] vulpes
Following the upstream maintenance window that concluded yesterday, we identified an additional incident caused by human error on the part of the datacenter engineers, which resulted in an extended period of degraded performance on one node.

During the power-related work, the engineers were reseating and replacing power supply units (PSUs) on the physical hosts. After one PSU was replaced, the server was powered on; however, the second PSU was reconnected only after the system had already fully booted into the OS. Because of this incorrect power-on sequence, the motherboard applied an automatic TDP cap, significantly limiting CPU power delivery. This caused severe CPU throttling and notable performance degradation on the affected node.

This issue has now been fully resolved. The node has been rebooted under correct power conditions, the TDP limitation has been cleared, and full CPU performance has been restored. All services on the affected node are confirmed to be operating normally.

Only one node (šŸ‡©šŸ‡Ŗ FC [9555P] vulpes) was impacted by this incident. All other nodes remained fully operational and unaffected throughout.

As compensation for this additional disruption, all customers hosted on the affected node will receive three extra days of service, in addition to the three days already credited for the earlier upstream maintenance.

We sincerely apologize for the inconvenience. This incident was caused by an engineering oversight during the post-maintenance restoration process. We are reviewing our procedures to prevent a recurrence and adding monitoring to detect conditions like this.
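For context, a minimal sketch of the kind of throttling check such monitoring could perform on a Linux host, assuming the cpufreq sysfs interface is exposed; the paths and the 60% threshold below are illustrative assumptions, not our production tooling:

```python
#!/usr/bin/env python3
# Illustrative throttling check: compare each core's current frequency
# to its advertised maximum and flag cores that are clamped well below it.
# The sysfs paths exist on standard Linux cpufreq; the 60% threshold is
# an assumption for this sketch, not a production value.
import glob

THRESHOLD = 0.60  # flag cores running below 60% of their max frequency

def read_khz(path: str) -> int:
    with open(path) as f:
        return int(f.read().strip())

throttled = []
for cur_path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq"):
    max_path = cur_path.replace("scaling_cur_freq", "cpuinfo_max_freq")
    cur, mx = read_khz(cur_path), read_khz(max_path)
    if cur < THRESHOLD * mx:
        throttled.append((cur_path.split("/")[5], cur, mx))

if throttled:
    for cpu, cur, mx in throttled:
        print(f"{cpu}: {cur/1000:.0f} MHz of {mx/1000:.0f} MHz max")
    raise SystemExit(1)  # non-zero exit so an alerting agent can page
print("CPU frequencies nominal")
```

In practice such a check would need to run under load, or average frequencies over a window, since idle cores legitimately downclock; it is shown here only to illustrate how a power-cap condition like this one can be surfaced automatically rather than discovered through customer reports.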

Thank you for your patience and understanding.
Apr 03, 8:40 AM