History
Feb 2026 - Apr 2026
April 2026
Outage
Post-Maintenance Incident
Affected services:
šŸ‡©šŸ‡Ŗ FC [9555P] vulpes
Following the upstream maintenance window that concluded yesterday, we identified an additional incident caused by human error on the datacenter engineers' side, which resulted in extended degraded performance on one affected node.

During the power-related works, the engineers were reseating and replacing power supply units (PSUs) on the physical hosts. After one PSU was replaced, the server was powered on; however, the second PSU was reconnected only after the system had already fully booted into the OS. As a result of this incorrect power-on sequence, the motherboard triggered an automatic TDP cap, significantly limiting CPU power delivery. This caused severe CPU throttling and notable performance degradation on the affected node.

This issue has now been fully resolved. The node has been rebooted under correct power conditions, the TDP limitation has been cleared, and full CPU performance has been restored. All services on the affected node are confirmed to be operating normally.

Only one node (šŸ‡©šŸ‡Ŗ FC [9555P] vulpes) was impacted by this incident. All other nodes remained fully operational and unaffected throughout.

As compensation for this additional disruption, all customers hosted on the affected node will receive three extra days of service, on top of the other three days already credited for the earlier upstream maintenance.

We sincerely apologize for the inconvenience. This incident was caused by an engineering oversight during the post-maintenance restoration process. We are reviewing our procedures to prevent recurrence and adding more monitoring for cases like this.
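
To make this concrete, below is a minimal sketch of the kind of throttling check we intend to automate: it compares each core's current frequency against a nominal baseline and flags sustained capping. It assumes a Linux host exposing cpufreq via sysfs; the baseline and threshold values are illustrative, not our production tooling.

```python
#!/usr/bin/env python3
"""Minimal throttling check: warn when CPU cores run far below a baseline.

Assumes a Linux host with cpufreq exposed via sysfs; values are illustrative."""
import glob

BASELINE_KHZ = 3_700_000   # hypothetical nominal frequency for this node's CPU
THRESHOLD = 0.80           # warn if a core sits below 80% of the baseline

def read_khz(path: str) -> int:
    with open(path) as f:
        return int(f.read().strip())

def main() -> None:
    capped = []
    for path in sorted(glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq")):
        khz = read_khz(path)
        if khz < BASELINE_KHZ * THRESHOLD:
            capped.append((path.split("/")[5], khz))  # path component 5 is "cpuN"
    if capped:
        # In production this would page on-call rather than print.
        for core, khz in capped:
            print(f"WARN {core} at {khz / 1000:.0f} MHz (below {THRESHOLD:.0%} of baseline)")
    else:
        print("OK: no cores below threshold")

if __name__ == "__main__":
    main()
```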

Thank you for your patience and understanding.
Apr 03, 8:40 AM
Outage
Upstream Maintenance Notice
Affected services:
šŸ‡©šŸ‡Ŗ SNK Germany, Frankfurt am Main
šŸ‡©šŸ‡Ŗ FC [9555P] vulpes
šŸ‡©šŸ‡Ŗ FC [9950X] lagopus
šŸ‡©šŸ‡Ŗ FC [9950X] ferrilata
šŸ‡³šŸ‡± [9950X] fulva
🌐 DNS Panel
+3 more
All nodes are now back online. Please allow up to 20 minutes for VMs to fully start. As compensation for the inconvenience, we're extending our original offer from two days to three - all affected customers will receive their compensation later today.

Thank you for your patience through this incident.
Apr 02, 2:08 PM
VM Node šŸ‡©šŸ‡Ŗ FC [9950X] lagopus has been back online since 1:39 PM UTC.

The remaining nodes are šŸ‡©šŸ‡Ŗ FC [9555P] vulpes and our internal node that hosts our DNS Panel and ns1.senkodns.net.
Apr 02, 1:41 PM
VM Nodes šŸ‡©šŸ‡Ŗ FC [9950X] pallida, šŸ‡³šŸ‡± [9950X] fulva and šŸ‡©šŸ‡Ŗ FC [9950X] ferrilata have been back online since 1:20 PM UTC.
Apr 02, 1:32 PM
Unfortunately, the maintenance is still being carried out. We apologize for the ongoing delay on our upstream provider's side.
Apr 02, 12:29 PM
šŸ‡³šŸ‡± ams-9575f-macrotis node is back online as of 11:36 AM UTC. Work is ongoing to restore full operation across the remaining nodes.
Apr 02, 11:36 AM
We have just been informed by our upstream provider of planned maintenance currently underway, which is affecting a portion of our virtual server infrastructure.
As a result, 7 of our 69 virtual server nodes, across both the Netherlands and Germany locations, are currently experiencing downtime. All other nodes remain fully operational and unaffected.

Estimated downtime: 30–45 minutes

ā„¹ļø This is an unplanned incident. No action is required from customers at this time. We will automatically provide two extra days of service to all impacted customers as compensation for this incident.
We sincerely apologize for the short notice and the inconvenience caused.

The maintenance is being carried out on the upstream provider's end, and our team is actively monitoring the situation to ensure services are restored as quickly as possible. Affected customers will see their nodes return to full operation automatically once the maintenance window concludes.
We appreciate your patience and understanding, and we will provide a follow-up update once all nodes are confirmed back online.
Apr 02, 11:15 AM
March 2026
Maintenance
Emergency Maintenance - Core Router Update at Upstream
07 Mar 00:00 - 07 Mar 00:16 2026 UTC
Estimated duration: 16 minutes 42 seconds
Affected services:
šŸ‡«šŸ‡® SNK Finland, Helsinki
šŸ‡«šŸ‡® HT [7950X3D] velox
šŸ‡«šŸ‡® HT [7950X3D] bengalensis
šŸ‡«šŸ‡® HT [9454P] zerda
šŸ‡«šŸ‡® HT [7950X3D] cana
šŸ‡«šŸ‡® HT [7950X3D] chama
+13 more

Our upstream provider will be performing an emergency software upgrade on their core router in Helsinki. Our network will be unavailable for the duration of this maintenance window, with an expected downtime of up to 30 minutes.

We apologize for the inconvenience and will post an update once maintenance is complete.

Mar 06, 4:29 PM
Incident
[Finland] Ongoing Network Disruption Due to DDoS Attack
Affected services:
šŸ‡«šŸ‡® HT [7950X3D] cana
šŸ‡«šŸ‡® SNK Route Reflector 2
šŸ‡«šŸ‡® HT [7950X3D] odessana
šŸ‡«šŸ‡® HT [7950X3D] bengalensis
šŸ‡«šŸ‡® HT [7950X3D] rueppellii
šŸ‡«šŸ‡® HT [7950X3D] chama
+15 more
The maintenance has been completed successfully. Our upstream provider's core router is back online as of 00:12 UTC and traffic has shifted back to Helsinki. All services should be operating normally.
Thank you for your patience.
Mar 07, 12:15 AM
Our network has been stable since 03:51 PM UTC. The attack is continuing at a reduced intensity and is being actively filtered, with blackholing applied where necessary.

During mitigation, our upstream provider discovered a software bug in Arista EOS on their core router - a SIGSEGV crash in the flow sampling daemon that caused the flow collector to become unstable. This most likely allowed a portion of attack traffic to slip through to the upstream network, which contributed to the network downtime our customers have experienced. An emergency upgrade has been scheduled to address this.

Please be aware of the following upcoming maintenance: 7th March 2026 - 00:00 to 00:30 UTC

Our upstream provider will perform an Arista EOS upgrade on their core router in Helsinki. The maintenance is expected to take up to 30 minutes, though it may finish sooner.
Our Finland network will be unavailable for the duration of this window.

In parallel, traffic will once again be routed through our Germany filtering PoP to filter the ongoing attack. As a result, latency to services hosted in Finland will be elevated both before and after the maintenance window - this is expected and will normalize once the attack subsides and direct routing is restored.
We apologize for the continued disruption and appreciate your understanding. Further updates will follow.
Mar 06, 4:24 PM
We are observing a recurrence of yesterday's DDoS attack targeting our Finland location. The attack follows the same pattern as before but at a lower volume. Our upstream provider has been notified and is already actively working to mitigate the attack.

We will continue to monitor the situation and provide updates as the mitigation progresses. Thank you for your patience.
Mar 06, 3:44 PM
Traffic has been successfully rerouted back to our Helsinki location and all services have returned to normal. The attack ceased approximately one hour ago and no further malicious traffic is being observed.

We want to thank our customers for their patience throughout this incident. As a follow-up, we will be implementing automated mitigation solutions to enable faster blackholing and traffic filtering in the future - minimizing the impact of any similar events on your services.
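
For context on what such automation might look like, the sketch below shows one common approach: remotely triggered blackholing (RTBH) driven by a traffic threshold, using ExaBGP's text API to announce a /32 tagged with the well-known BLACKHOLE community (RFC 7999). The threshold, next-hop, and traffic-reading source are illustrative assumptions, not our actual configuration.

```python
#!/usr/bin/env python3
"""RTBH sketch: blackhole a destination IP once its inbound traffic crosses
a threshold, via ExaBGP's text API (this script would run as an ExaBGP
process and write announcements to stdout). All values are illustrative."""
import sys

THRESHOLD_GBPS = 50.0               # hypothetical per-IP trigger level
NEXT_HOP = "192.0.2.1"              # documentation address; a real discard next-hop in practice
BLACKHOLE_COMMUNITY = "65535:666"   # well-known BLACKHOLE community (RFC 7999)

def blackhole(ip: str) -> None:
    # ExaBGP text API: announce a host route tagged with the blackhole
    # community, so upstreams drop traffic to this destination at their edge.
    sys.stdout.write(
        f"announce route {ip}/32 next-hop {NEXT_HOP} community [{BLACKHOLE_COMMUNITY}]\n"
    )
    sys.stdout.flush()

if __name__ == "__main__":
    # Demo with simulated per-IP readings; in practice these would come from
    # a flow collector (sFlow/NetFlow) polled on a short interval.
    readings_gbps = {"203.0.113.7": 82.4, "203.0.113.8": 3.1}
    for ip, gbps in readings_gbps.items():
        if gbps > THRESHOLD_GBPS:
            blackhole(ip)
```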

If you experience any lingering issues, please don't hesitate to reach out to our support team.
Mar 05, 9:31 PM
As part of our ongoing mitigation efforts, we have rerouted all traffic from our Finland location through a scrubbing PoP in Frankfurt, Germany. Active filtering is now in place and network connectivity is being restored.

The attack has since escalated to 8.4 Tbit/s and is carpet-bombing our entire /24 subnet. Despite the scale, our team and upstream provider are successfully filtering the malicious traffic.

As a result of the temporary reroute, you may notice higher latency than usual when accessing Finland-hosted services - this is expected behavior while traffic is being routed through Germany for scrubbing. Normal routing will be reinstated once the attack has subsided.

We continue to monitor the situation in real time and will post further updates as conditions change. Thank you for your patience.
Mar 05, 5:50 PM
Since 03:44 PM UTC, we have been experiencing a large-scale DDoS attack targeting customers in our Finland location. The attack is causing significant packet loss, increased latency and intermittent inaccessibility of customer services.

The attack volume has exceeded 7 Tbit/s and originates from a globally distributed botnet. Our network engineering team is actively working with upstream providers to mitigate the attack. Traffic filtering and rerouting measures are currently being applied.

We will continue to post updates as the situation develops. Thank you for your patience.
Mar 05, 3:44 PM
February 2026
Incident
Node Outage
Affected services:
šŸ‡©šŸ‡Ŗ FC [9575F] zerda
We are pleased to confirm that the elevated CPU steal time issue on the fra-9575f-zerda cluster node has been fully resolved and normal operation has been restored.

After thorough analysis, we determined that the root cause was related to virtual machines resuming simultaneously after the initial restart, causing excessive resource contention. To address this, we performed a full power-down and cold start of the node and implemented changes to our virtualization configuration: VMs now start gradually, with properly defined CPU quotas, to prevent resource saturation during boot sequences.
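
As a rough illustration of the staggered-start idea (not our exact configuration - the hypervisor wrappers, batch size, delay, and quota values below are assumptions), the startup loop boots VMs in small batches with a temporary CPU cap, then lifts the cap once the boot storm has passed:

```python
#!/usr/bin/env python3
"""Staggered VM start sketch. `start_vm` and `set_cpu_quota` stand in for
whatever the hypervisor actually exposes (its CLI or API); batch size,
delay, and quota values are illustrative, not our production settings."""
import time

BATCH_SIZE = 5          # VMs started per batch
BATCH_DELAY_S = 30      # pause between batches so boot I/O and CPU settle
BOOT_QUOTA_PCT = 50     # temporary per-VM CPU cap during the boot storm

def set_cpu_quota(vm: str, pct: int | None) -> None:
    """Hypothetical wrapper: apply (or clear, if None) a CPU quota for `vm`."""
    print(f"quota {vm} -> {pct}")

def start_vm(vm: str) -> None:
    """Hypothetical wrapper around the hypervisor's start command."""
    print(f"start {vm}")

def staggered_start(vms: list[str]) -> None:
    for i in range(0, len(vms), BATCH_SIZE):
        batch = vms[i : i + BATCH_SIZE]
        for vm in batch:
            set_cpu_quota(vm, BOOT_QUOTA_PCT)  # cap CPU while many VMs boot at once
            start_vm(vm)
        time.sleep(BATCH_DELAY_S)
    for vm in vms:
        set_cpu_quota(vm, None)  # lift the temporary cap after the boot storm

if __name__ == "__main__":
    staggered_start([f"vm{i}" for i in range(1, 21)])
```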

All virtual machines on the node are back online and operating normally. We will continue monitoring the node closely over the coming hours to ensure sustained stability.

Regarding compensation: we will be providing three additional days of service to all impacted customers, on top of the two days already credited from yesterday's initial outage - totaling five extra days. We recognize this goes well beyond the actual downtime experienced and exceeds our SLA obligations, but we believe it's the right thing to do here.
Our customers put their trust in us, and we understand the disruption this incident caused. This is not the first time we've gone above and beyond our policy in situations like this, and it won't be the last.

We sincerely apologize for the inconvenience and thank you for your patience throughout this incident.

No further action is required from customers. If you experience any remaining issues with your VM, please don't hesitate to reach out to our support team.
Feb 23, 5:19 PM
We are aware of the elevated CPU steal time currently occurring on the fra-9575f-zerda node.

Our team is actively monitoring and analyzing the situation. Prior to the power interruption and subsequent reboot, no abnormal steal time behavior had been observed. Based on this, the issue is most likely related to a software-level condition that manifested after the restart.
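
For reference, CPU steal time is the share of time a guest's virtual CPU waits while the hypervisor runs something else; on Linux it can be sampled from /proc/stat. Below is a minimal sketch of such a check (field positions follow the standard /proc/stat layout; the sampling interval is arbitrary):

```python
#!/usr/bin/env python3
"""Sample aggregate CPU steal percentage from /proc/stat over an interval.

Standard Linux layout: cpu user nice system idle iowait irq softirq steal ..."""
import time

def cpu_counters() -> list[int]:
    with open("/proc/stat") as f:
        # First line aggregates all CPUs; drop the leading "cpu" label.
        return [int(x) for x in f.readline().split()[1:]]

def steal_percent(interval_s: float = 5.0) -> float:
    before = cpu_counters()
    time.sleep(interval_s)
    after = cpu_counters()
    delta = [a - b for a, b in zip(after, before)]
    total = sum(delta)
    steal = delta[7]  # 8th field after the "cpu" label is steal time
    return 100.0 * steal / total if total else 0.0

if __name__ == "__main__":
    print(f"steal: {steal_percent():.2f}%")
```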

In parallel with our investigation, we are preparing a new cluster node and will migrate a portion of the affected virtual machines later today. This will allow us to further distribute the load and mitigate the impact while we continue deeper analysis.
Feb 23, 9:24 AM
Approximately one hour ago, the fra-9575f-zerda cluster node experienced an unexpected outage due to a rack power feed issue within the data center. The interruption affected the entire cluster node.

To safely restore stability, we performed a controlled power-down of the server, re-established stable power to the rack feed, and then brought the node back online.

All affected virtual machines are now booting back up and should be fully operational shortly. Approximately one third of the VMs hosted on the node are already up. No further impact is expected at this time.

We will continue monitoring the node to ensure stability.

This is an unplanned incident. No action is required from customers at this time. We will automatically provide two extra days of service to all customers impacted by this incident. We apologize for any inconvenience caused.
Feb 22, 6:58 PM
Incident
[All locations] Ongoing Network Disruption Due to DDoS Attack
Affected services:
šŸ‡©šŸ‡Ŗ FC Storage 1
šŸ‡«šŸ‡® HT [7950X3D] cana
šŸ‡©šŸ‡Ŗ SNK Germany, Frankfurt am Main
šŸ“š Wiki
šŸ‡«šŸ‡® SNK Route Reflector 2
šŸ‡©šŸ‡Ŗ FC [9950X] corsac
+66 more
As of 16:10 UTC, the DDoS attacks have stopped.

Over the following two hours, we gradually rolled out routing adjustments to direct traffic back through our primary routers and restore full network capacity across all locations.

All services are now operating normally. No further network issues are expected at this time.

We will continue to closely monitor the situation and maintain heightened alertness to ensure ongoing stability.
Feb 21, 5:41 PM
Unfortunately, DDoS attacks have resumed and are once again targeting all of our locations and public services.

This may result in intermittent packet loss, increased latency, or temporary service inaccessibility across multiple regions.

We are actively coordinating with our upstream providers to mitigate the attack and stabilize network connectivity as quickly as possible. Additional filtering and traffic engineering measures are being applied.

Following resolution, we will be implementing enhanced automated countermeasures to improve response times and reduce the impact of similar incidents in the future.

Further updates will be provided as the situation develops.
Feb 21, 3:41 PM
We have manually rerouted traffic to the available routers, and network connectivity has now been restored.

All VPS locations and web services are operational. The earlier disruption was related to upstream null-routing in Germany following the DDoS event. While we continue coordinating with the upstream provider regarding the affected routers, traffic is currently stable and flowing normally through alternative paths.

We will provide further updates once we have more information.
Feb 21, 10:23 AM
Our Finland and Netherlands locations have been fully restored as of 09:25 UTC. The DDoS attack has ceased, and additional mitigation measures have been implemented across our network to reduce the risk of similar incidents in the future.

We are currently awaiting resolution from our upstream provider in Germany. At this time, four of our routers in that location remain unavailable. This appears to be related to null-routing applied upstream, which has not yet been lifted on their end.

We are in active communication with the provider to restore connectivity as soon as possible.

Further updates will follow once additional information becomes available.
Feb 21, 10:07 AM
We are currently experiencing a large-scale DDoS attack targeting all VPS locations and our public web services.

The attack is affecting network connectivity and may cause packet loss, increased latency, or temporary inaccessibility of services across multiple regions.

Our network engineering team is actively working with upstream providers to mitigate the attack. Traffic filtering and rerouting measures are being applied to stabilize connectivity and restore full service availability as quickly as possible.

We will continue to provide updates as mitigation progresses.

Thank you for your patience.
Feb 21, 8:45 AM
Incident
Network Degradation in Netherlands
Affected services:
šŸ‡³šŸ‡± [9575F] vulpes
šŸ‡³šŸ‡± SNK Netherlands, Amsterdam
šŸ‡³šŸ‡± [9555P] velox
šŸ‡³šŸ‡± [9950X] velox
Network connectivity in our Netherlands location has been fully restored as of 10:56 PM UTC.

The upstream provider ZetNet has confirmed that the issue affecting their wavelength carrier and multiple transport waves has been resolved. No further interruptions or instability are expected. The provider also noted that they are continuing internal work to improve resilience and prevent similar incidents in the future.

There was no complete downtime during this incident. However, the upstream fault caused intermittent connectivity degradation and brief packet loss for some services while the issue was ongoing.

We will continue to monitor the location to ensure ongoing stability.
Feb 17, 10:56 PM
We are currently observing intermittent network connectivity issues in our Netherlands location caused by a fault at the upstream provider ZetNet.

According to the provider, their wavelength carrier is experiencing problems, with multiple transport waves currently down. This is impacting overall connectivity within the facility and may result in packet loss, increased latency, or temporary reachability issues for affected services.

The upstream provider is actively working to restore the failed transport waves. We are monitoring the situation closely and will provide further updates as more information becomes available or once stability is fully restored.

We apologize for the disruption and appreciate your patience.
Feb 17, 5:20 PM
Incident
Cluster Node Degradation
Affected services:
šŸ‡©šŸ‡Ŗ FC [9474F] arctic
The issue affecting cluster node fra-9474f-arctic was fully resolved approximately 8 minutes ago.

The root cause was identified as a cable-related fault that prevented the second power supply unit from operating correctly. Resolution required coordination with the data center provider, which unfortunately introduced additional delay.

The cluster node is now operating under normal conditions, and no further impact is expected.

As a goodwill gesture, three days of additional service time have been credited to all affected customers.

Once again, we sincerely apologize for the inconvenience caused.
Feb 11, 2:40 PM
Since last night, we have observed major CPU performance degradation on the cluster node šŸ‡©šŸ‡Ŗ FC [9474F] arctic.

After further investigation, the issue was traced to a failure of one of the server's redundant power supply units (PSUs), causing the system to operate at reduced performance. Due to the redundant power delivery design, the node remained online for an extended period; however, sustained high CPU load ultimately led to a crash.

Service was restored via IP-KVM by temporarily lowering the CPU frequency. The cluster node is currently operational, though users hosted on this node may continue to experience reduced performance.

We are actively working with the data center provider to replace the failed PSU as soon as possible and will provide further updates as the situation progresses.

This is an unplanned incident. No action is required from customers at this time. We will automatically provide three extra days of service to all impacted customers once the incident is fully resolved. We apologize for the inconvenience and appreciate your patience while we resolve the issue.
Feb 11, 7:55 AM
Outage
Multiple nodes outage
Affected services:
šŸ‡©šŸ‡Ŗ FC [9950X] zerda
šŸ‡©šŸ‡Ŗ FC [9950X] pallida
The issue affecting a part of our DE-RZ9 cluster has now been fully resolved, and all services are operating normally again.

Compensation in the form of three additional days of service will be credited automatically to all impacted customers. No action is required on your side.
Feb 05, 11:39 AM
A part of our DE-RZ9 cluster is currently unavailable due to issues that occurred following the planned maintenance. Our team is actively working to restore full service as quickly as possible.

Affected nodes:
- fra-9950x-pallida
- fra-9950x-zerda

All customers impacted by this unplanned downtime will receive compensation in the form of three additional days added to their service period. We sincerely apologize for the inconvenience and appreciate your patience while we resolve the issue.
Feb 05, 9:30 AM
Maintenance
Planned Cluster Maintenance Notice - DE-RZ9 Lineup
05 Feb 07:00 - 05 Feb 09:30 2026 UTC
Estimated duration: 2 hours 30 minutes
Affected services:
šŸ‡©šŸ‡Ŗ FC [9950X] lagopus
šŸ‡©šŸ‡Ŗ FC [9950X] pallida
šŸ‡©šŸ‡Ŗ FC [9950X] zerda
šŸ‡©šŸ‡Ŗ FC [9950X] corsac

Our upstream provider at the Firstcolo Frankfurt facility will be performing scheduled maintenance tomorrow, February 5th, between 08:00 and 10:00 CET (approximately 9.5 hours from the time of this notice).

This maintenance involves the replacement of Power Distribution Units (PDUs) to prevent potential power-related issues caused by a defective PDU model currently installed in the rack.

Affected cluster nodes (DE-RZ9):
- fra-9950x-lagopus
- fra-9950x-pallida
- fra-9950x-zerda
- fra-9950x-corsac

Expected impact

Each affected node may experience downtime of 30-60 minutes during the maintenance window. Servers will automatically return online once the PDU replacement is completed. It is also possible that services will be restored before the end of the planned window.

We apologize for the inconvenience and appreciate your understanding, as this preventive work is necessary to ensure the long-term reliability of our Germany cluster.
Feb 04, 9:35 PM
Outage
Node Outage
Affected services:
šŸ‡©šŸ‡Ŗ FC [BUDGET] EPYC-6
We have now resolved the issue and all affected virtual servers are coming back online.
Feb 03, 6:55 PM
Unfortunately, due to an unexpected issue with the node's network connectivity, 90% of the virtual servers hosted on this cluster node became unavailable. We are currently rebooting the VM node and working to resolve the issue as soon as possible.
Feb 03, 6:49 PM
Maintenance
Planned Network Maintenance Notice (Upstream Provider)
02 Feb 23:00 - 03 Feb 00:00 2026 UTC
Estimated duration: 1 hour
Affected services:
šŸ‡©šŸ‡Ŗ FC [9950X] lagopus
šŸ‡©šŸ‡Ŗ FC Game 1
šŸ‡©šŸ‡Ŗ FC Storage 1
šŸ‡©šŸ‡Ŗ FC [9474F] arctic
šŸ‡©šŸ‡Ŗ FC [9474F] velox
šŸ‡©šŸ‡Ŗ FC [9555P] vulpes
+27 more

We would like to inform you about scheduled maintenance being performed by our upstream provider, affecting network connectivity in the Frankfurt region.

What is happening

Our upstream provider will be interconnecting dedicated server rack pairs via MLAG (Multi-Chassis Link Aggregation) between their top-of-rack switches. This upgrade is being carried out to further improve network resilience and operational flexibility. The maintenance is scheduled for Feb 3rd at 00:00 CET, with an estimated duration of 1 hour.

Expected impact for VPS customers

During the maintenance window, one of our upstream routers will undergo a brief restart/link re-negotiation, which may cause a short network interruption.

Expected downtime is up to ~30 seconds.

This interruption is planned and should not exceed the stated duration.

We will keep the impact as minimal as possible and complete the work as quickly as possible. Thank you for your understanding.

--

The planned network interruption lasted slightly longer than initially expected due to an upstream-side issue. At 00:25 CET, our provider applied a planned change as part of the maintenance at the firstcolo FRA4 location, which immediately caused severe packet loss and impacted a large number of their customers, including us. The incident was escalated internally on their end, and the provider rolled back the configuration change at 00:31 CET, restoring connectivity.

As a result, most of our servers and routers in Germany located in the firstcolo DC experienced a brief ~3-5 minute network disruption. Connectivity has been restored, and the maintenance is still ongoing.

--

Due to the issue caused by the upstream provider, they have cancelled tonight's maintenance window and will reschedule it for another time. We will announce a new maintenance window as soon as we receive further details from our upstream provider.

We apologize for the disruption. On our end, the downtime was kept to an absolute minimum, and the maintenance was fully planned in advance during non-peak hours to reduce customer impact as much as possible.
Jan 26, 8:00 PM