Scaleway
Update - Due to a capacity issue, Dedibox and Elastic Metal servers in DC5 may experience reduced bandwidth and some packet loss.
Mar 28, 2024 - 18:55 CET
Investigating - There is a fiber cut between DC5 and DC3, impacting redundancy.
Teams are on their way to determine the exact location of the cut.

Mar 28, 2024 - 18:21 CET
Update - This URL is the one given to users in DevTools and the Community Slack.
Users are not being redirected to the correct URL.

The developers documentation site is still up at its correct URL: https://www.scaleway.com/en/developers/

Mar 28, 2024 - 11:09 CET
Investigating - This URL is the one given to users in DevTools and the Community Slack.
Users are not being redirected to the correct URL.

Mar 28, 2024 - 11:05 CET
Update - Our main IP has been blacklisted by Spamhaus.
Measures have been taken since detection; the impact is moderate, pending delisting of the IP.

Mar 28, 2024 - 11:00 CET
Update - We are continuing to work on a fix for this issue.
Mar 28, 2024 - 10:59 CET
Identified - Our main IP has been blacklisted by Spamhaus.
Measures have been taken since detection; the impact is moderate, pending delisting of the IP.

Mar 28, 2024 - 10:57 CET
Identified - The ingestion and query paths were impacted by the earlier VPC incident.
Mar 27, 2024 - 16:36 CET
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 27, 2024 - 11:37 CET
Investigating - We are currently investigating this issue.
Mar 27, 2024 - 11:09 CET
Update - We are continuing to monitor for any further issues.
Mar 27, 2024 - 10:54 CET
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 27, 2024 - 10:41 CET
Update - We are continuing to investigate this issue.
Mar 27, 2024 - 10:00 CET
Investigating - DNS resolution for services within a VPC is currently very slow and may time out.
This can notably prevent Kubernetes (k8s) clusters from booting properly when using VPC DNS.

DHCP was also impacted, for a shorter period and with lower latency than DNS.

Mar 27, 2024 - 10:00 CET
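To check whether slow VPC DNS is affecting your workloads, a quick sketch like the following can time successive lookups. The hostname here is illustrative, not from this advisory; inside a VPC you would use your own private service name.

```python
import socket
import time

def time_dns_lookup(hostname: str, attempts: int = 3) -> list[float]:
    """Time successive DNS resolutions of `hostname`, in seconds.

    A failed or timed-out resolution is recorded as infinity.
    """
    timings = []
    for _ in range(attempts):
        start = time.monotonic()
        try:
            socket.getaddrinfo(hostname, None)
        except socket.gaierror:
            timings.append(float("inf"))
            continue
        timings.append(time.monotonic() - start)
    return timings

# Example against a local name; replace with your VPC service name,
# e.g. "my-service.internal" (hypothetical), to test VPC DNS.
print(time_dns_lookup("localhost"))
```

Consistently high or infinite timings for VPC names, while public names resolve quickly, would point at the VPC DNS path rather than your application.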
Investigating - Our service is currently experiencing disruption due to blacklisting by Microsoft.
We are actively working with Microsoft to resolve this issue as soon as possible.

Mar 25, 2024 - 12:51 CET
Investigating - The rack exhibits power instabilities.

We will replace the power distribution units in this rack.

Mar 20, 2024 - 12:31 CET
Identified - Some lifecycle rules did not work as expected from 15/03 to 19/03.
Mar 20, 2024 - 11:40 CET
Identified - The service is stable since 11:30 CET, we are continuing investigation.
Mar 19, 2024 - 17:35 CET
Investigating - We have encountered an increase in latency on the ingestion and query paths. This means you may have problems sending logs and metrics, and querying them.
Our team is already working on a fix.

Mar 19, 2024 - 09:12 CET
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 12, 2024 - 18:17 CET
Investigating - We are currently investigating an issue with the webmail interface, which is currently unreachable:
https://webmail.online.net/

Mar 12, 2024 - 11:21 CET
Investigating - The issue is more widespread: there appears to be a global problem with reachability between Scaleway and several Internet network operators located in Egypt. The same issues reaching Egyptian resources are also observed from Internet operators other than Scaleway. We are still investigating the cause and trying to escalate the problem to the Egyptian network operators, but at this point the data we have indicates that the issue is global and out of our control.
Jan 25, 2024 - 15:35 CET
Update - Our investigation suggests that the issue is located on Orange Egypt's side, and we are trying to get in touch with them to resolve it.
Jan 24, 2024 - 10:45 CET
Identified - The issue has been identified and is located on the provider's side (Orange Egypt).
Jan 19, 2024 - 11:21 CET
Investigating - Our network is unreachable from one (or more) Egyptian Internet providers.
We are currently investigating this issue.

Jan 19, 2024 - 09:03 CET
Update - You can still manage datacenter interventions from your Dedibox console, under Housing.
Dec 19, 2023 - 17:47 CET
Update - We are continuing to investigate this issue.
Dec 19, 2023 - 16:48 CET
Investigating - Ticketing directed to Opcore datacenters is currently unavailable for our Dedirack clients.
Our team is currently investigating.

Dec 19, 2023 - 16:48 CET
Investigating - We have noticed that connection problems with the Dedibackup service can occur.
We will get back to you as soon as we have more information on the situation.

Apr 06, 2023 - 12:23 CEST
Component status (uptime over the last 90 days):

Elements - AZ: Degraded Performance (99.88 % uptime)
  fr-par-1: Degraded Performance (99.78 % uptime)
  fr-par-2: Degraded Performance (99.9 % uptime)
  fr-par-3: Operational (99.9 % uptime)
  nl-ams-1: Operational (99.78 % uptime)
  pl-waw-1: Operational (99.98 % uptime)
  nl-ams-2: Operational (99.78 % uptime)
  pl-waw-2: Operational (100.0 % uptime)
  nl-ams-3: Operational (99.78 % uptime)
  pl-waw-3: Operational (100.0 % uptime)

Elements - Products: Degraded Performance (98.77 % uptime)
  Instances: Operational (94.48 % uptime)
  BMaaS: Operational (100.0 % uptime)
  Object Storage: Operational (100.0 % uptime)
  C14 Cold Storage: Operational (100.0 % uptime)
  Kapsule: Operational (95.91 % uptime)
  DBaaS: Operational (93.97 % uptime)
  LBaaS: Operational (94.48 % uptime)
  Container Registry: Operational (98.01 % uptime)
  Domains: Operational (100.0 % uptime)
  Elements Console: Operational (100.0 % uptime)
  IoT Hub: Operational (100.0 % uptime)
  Account API: Operational (99.98 % uptime)
  Billing API: Operational (100.0 % uptime)
  Functions and Containers: Operational (99.89 % uptime)
  Block Storage: Operational (99.98 % uptime)
  Elastic Metal: Operational (100.0 % uptime)
  Apple Silicon M1: Operational (100.0 % uptime)
  Private Network: Degraded Performance (99.66 % uptime)
  Hosting: Operational (100.0 % uptime)
  Observability: Operational (99.07 % uptime)
  Transactional Email: Degraded Performance (100.0 % uptime)

Dedibox - Datacenters: Degraded Performance (99.06 % uptime)
  DC2: Operational (99.61 % uptime)
  DC3: Degraded Performance (97.23 % uptime)
  DC5: Degraded Performance (99.71 % uptime)
  AMS: Operational (99.7 % uptime)

Dedibox - Products: Major Outage (99.15 % uptime)
  Dedibox: Major Outage (93.27 % uptime)
  Hosting: Degraded Performance (100.0 % uptime)
  SAN: Operational (100.0 % uptime)
  Dedirack: Operational (100.0 % uptime)
  Dedibackup: Operational (100.0 % uptime)
  Dedibox Console: Operational (100.0 % uptime)
  Domains: Operational (100.0 % uptime)
  RPN: Operational (100.0 % uptime)
  Miscellaneous: Operational (100.0 % uptime)
  Excellence: Operational (100.0 % uptime)
Scheduled Maintenance
Update - Scheduled for Apr 1, 2024
Feb 28, 2024 - 11:01 CET
Scheduled - In order to future-proof your infrastructure and harden security, legacy public-only Kapsule clusters (i.e. without any private endpoint) will reach End of Life on 18 March 2024.

To conclude the deprecation cycle, Kapsule clusters that still have public-only endpoints will be migrated to Private Networks over the following weeks. The migrations will happen region by region, in this order:
1. PL-WAW
2. NL-AMS
3. FR-PAR

With the new default isolation configuration offered by Kapsule, your worker nodes will keep their public IPs to access the Internet. After migration, your existing security group configuration won't be overridden, and RR wildcard DNS records will still point to public IPs.

Warning: during the migration, all pods of the CNI are restarted. The pod network of your cluster will thus be temporarily unavailable for 1 to 10 minutes (depending on the size of your cluster and the CNI you are using).

Find our dedicated documentation on Kapsule with Private Networks https://www.scaleway.com/en/docs/containers/kubernetes/reference-content/secure-cluster-with-private-network/#how-can-i-migrate-my-existing-clusters-to-regional-private-networks

Dec 01, 2023 - 10:08 CET
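To find which of your clusters are still public-only ahead of this migration, a sketch like the following filters a cluster listing. The `private_network_id` field name is an assumption about the Kapsule API response shape (verify against the documentation linked above), and the cluster data here is illustrative.

```python
def public_only_clusters(clusters: list[dict]) -> list[str]:
    """Return names of clusters with no Private Network attached.

    Assumes each cluster dict carries a `private_network_id` key,
    empty or None for public-only clusters (assumption; check the
    Kapsule API docs for the actual field name).
    """
    return [c["name"] for c in clusters if not c.get("private_network_id")]

# Illustrative data, not a real cluster listing:
clusters = [
    {"name": "legacy-prod", "private_network_id": None},
    {"name": "new-staging", "private_network_id": "pn-1234"},
]
print(public_only_clusters(clusters))  # ['legacy-prod']
```

Any cluster returned by such a check would be affected by the migration window and its 1-10 minute pod-network interruption.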
Update - Scheduled for Apr 1 to Apr 5, 2024
Feb 28, 2024 - 10:55 CET
Scheduled - Kubernetes Kapsule clusters in the PL-WAW region with public-only endpoints will be migrated to Private Networks.

Network downtime: this migration will result in a temporary network loss of 1 to 10 minutes.

With the new default isolation configuration, worker nodes keep their public IPs to access the Internet. After migration, your existing security group configuration won't be overridden, and RR wildcard DNS records will still point to public IPs.

Find our dedicated documentation on Kapsule with Private Networks https://www.scaleway.com/en/docs/containers/kubernetes/reference-content/secure-cluster-with-private-network/#how-can-i-migrate-my-existing-clusters-to-regional-private-networks

Dec 01, 2023 - 10:12 CET
The default IP type for Instance creation via the API will shift from NAT to Routed IP. If you have any scripts calling the API directly to create Instances, please check that they will continue working after the default switch. If you are using our CLI or Terraform provider, please update to the latest version to support these changes.
Posted on Mar 13, 2024 - 11:16 CET
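One way to make a creation script robust to this default switch is to set the IP mode explicitly rather than relying on the API default. A minimal sketch of building such a payload follows; the `routed_ip_enabled` field name is an assumption and should be verified against the Instance API reference before use.

```python
def build_server_payload(name: str, server_type: str, image: str,
                         routed_ip: bool = True) -> dict:
    """Build an Instance-creation payload with an explicit IP mode.

    Pinning `routed_ip_enabled` (field name assumed; check the API
    docs) keeps the script's behavior stable whichever default the
    API ships after the switch.
    """
    return {
        "name": name,
        "commercial_type": server_type,
        "image": image,
        "routed_ip_enabled": routed_ip,  # explicit, not default-dependent
    }

payload = build_server_payload("web-1", "DEV1-S", "ubuntu_jammy")
print(payload["routed_ip_enabled"])  # True
```

The same principle applies in Terraform or the CLI: state the IP mode explicitly so a changed server-side default cannot alter what your automation creates.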
Update - Scheduled for Apr 15 - Apr 19, 2024
Feb 28, 2024 - 11:02 CET
Scheduled - Kubernetes Kapsule clusters in the NL-AMS region with public-only endpoints will be migrated to Private Networks.

Network downtime: this migration will result in a temporary network loss of 1 to 10 minutes.

With the new default isolation configuration, worker nodes keep their public IPs to access the Internet. After migration, your existing security group configuration won't be overridden, and RR wildcard DNS records will still point to public IPs.

Find our dedicated documentation on Kapsule with Private Networks https://www.scaleway.com/en/docs/containers/kubernetes/reference-content/secure-cluster-with-private-network/#how-can-i-migrate-my-existing-clusters-to-regional-private-networks

Dec 01, 2023 - 10:18 CET
Update - Scheduled for Apr 29 - May 10, 2024
Feb 28, 2024 - 11:04 CET
Scheduled - Kubernetes Kapsule clusters in the FR-PAR region with public-only endpoints will be migrated to Private Networks.

Network downtime: this migration will result in a temporary network loss of 1 to 10 minutes.

With the new default isolation configuration, worker nodes keep their public IPs to access the Internet. After migration, your existing security group configuration won't be overridden, and RR wildcard DNS records will still point to public IPs.

Find our dedicated documentation on Kapsule with Private Networks https://www.scaleway.com/en/docs/containers/kubernetes/reference-content/secure-cluster-with-private-network/#how-can-i-migrate-my-existing-clusters-to-regional-private-networks

Dec 01, 2023 - 10:21 CET
Past Incidents
Mar 28, 2024

Unresolved incidents: [NETWORK]- Fiber cut between DC5 and DC3, [WEBSITE] developers.scaleway.com is unavailable, [TEM] IP blacklisted by Spamhaus.

Mar 27, 2024
Resolved - This incident has been resolved.
Mar 27, 21:59 CET
Identified - Support will be unavailable until 22:00 CET.

We apologize for this inconvenience and thank you for your understanding.

Mar 27, 20:28 CET
Resolved - Some control planes in the NL-AMS location were unreachable due to an etcd issue. This was fixed by our engineering team at 23:15 CET. The incident is currently being monitored.
Mar 27, 20:00 CET
Mar 26, 2024
Resolved - This incident has been resolved.
Mar 26, 13:56 CET
Update - All services are back to a nominal state. We are still monitoring them closely.
Mar 26, 12:29 CET
Monitoring - The service is recovering, and bucket creation is now available.
Mar 26, 12:12 CET
Update - Following a power issue in the AMS3 AZ, some Object Storage servers had trouble returning to a nominal state.
Increased latency can be expected in the NL-AMS region; our teams are working on improving it.
Bucket creation is currently unavailable in AMS as well.

Mar 26, 10:23 CET
Investigating - Following a power issue in the AMS3 AZ, some Object Storage servers had trouble returning to a nominal state.
Increased latency can be expected in the NL-AMS region; our teams are working on improving it.

Mar 26, 09:35 CET
Resolved - This incident has been resolved, and connectivity restored.
Mar 26, 09:10 CET
Monitoring - Around half of all API calls to create, update and delete functions and containers in the Warsaw region were failing or timing out. The remaining API calls were executing normally. Retrying several times ought to have fixed any issues.

The incident was due to a drop in connectivity to a message queue used to connect a subset of hosts running the API in the region. We have now re-established connections to the queue, and events are flowing smoothly. There should be no other issues on the API, but we continue to monitor the situation.

Mar 25, 19:38 CET
Update - Around half of all API calls to create, update and delete functions and containers in the Warsaw region will fail or time out. The remaining API calls should execute normally, and so retrying several times ought to fix any issues.

The incident is due to a drop in connectivity to a message queue used to connect a subset of hosts running the API in the region.

Mar 25, 18:53 CET
Update - Around half of all API calls to create, update and delete functions and containers in the Warsaw region will fail or time out. The remaining API calls should execute normally, and so retrying several times ought to fix any issues.

The incident is due to a drop in connectivity to a message queue used to connect a subset of hosts running the API in the region.

Mar 25, 18:52 CET
Identified - Around half of all API calls to create, update and delete functions and containers in the Warsaw region will fail or time out. The remaining API calls should execute normally, and so retrying several times ought to fix any issues.

The incident is due to a drop in connectivity to a message queue used to connect a subset of hosts running the API in the region.

Mar 25, 18:51 CET
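The "retrying several times ought to fix any issues" guidance above can be sketched as a simple backoff loop. The failing call here is simulated, not the real Functions API:

```python
import time

def retry_with_backoff(call, attempts: int = 5, base_delay: float = 0.1):
    """Retry `call` until it succeeds, sleeping a bit longer each time."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * (2 ** attempt))

# Simulated flaky API call: fails twice, then succeeds.
state = {"calls": 0}
def flaky_create_function():
    state["calls"] += 1
    if state["calls"] < 3:
        raise TimeoutError("API call timed out")
    return "created"

print(retry_with_backoff(flaky_create_function))  # "created"
```

During an incident like this one, where roughly half of calls fail independently, a handful of retries makes overall success very likely; exponential backoff keeps the retries from adding load to the degraded service.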
Mar 25, 2024
Resolved - This incident has been resolved.
Mar 25, 10:01 CET
Investigating - Grafana user creation is unstable.
We are currently investigating this.

Mar 25, 09:44 CET
Resolved - This incident has been resolved.
Mar 25, 09:45 CET
Investigating - Cockpit activation times out due to long provisioning times.
Our team is currently working on a solution.

Mar 21, 11:24 CET
Mar 24, 2024

No incidents reported.

Mar 23, 2024
Resolved - This incident has been resolved.
Mar 23, 20:49 CET
Investigating - Dedibox servers in the rack are unreachable
Mar 23, 19:16 CET
Resolved - This incident has been resolved.
Mar 23, 10:36 CET
Update - Latencies have been back to normal since 09:00 UTC this morning.
We are currently applying fixes to spread out the load of future cleanup operations, in order to make the next one less impactful.

Mar 22, 11:35 CET
Monitoring - Since 21/03 at 20:00 UTC, Block Storage latencies for the BSSD storage class on PAR1 have been rising because of a cleanup process with an above-average workload.
The situation is being closely monitored, and our team is working on it. Latencies are expected to stabilize at their current level for at least the next few hours.
We will keep you informed of any changes.

Low-latency volumes are not impacted.

Mar 22, 08:18 CET
Mar 22, 2024
Mar 21, 2024
Resolved - The fix is working, and no abnormal latencies are observed anymore.
Mar 21, 15:16 CET
Update - A fix is in production to prevent the situation from happening again.
Mar 20, 11:16 CET
Update - Higher latencies from 09:41 UTC+1 to 10:14 UTC+1 in nl-ams
Mar 20, 11:12 CET
Update - Higher latencies from 09:41 UTC+1 to 10:14 UTC+1 in nl-ams
Mar 20, 10:45 CET
Update - We are continuing to monitor for any further issues.
Mar 20, 10:45 CET
Monitoring - Higher latencies from 09:41 UTC+1 to 10:14 UTC+1 in nl-ams
Mar 20, 10:42 CET
Resolved - The switch has been replaced, and the servers are up and running again.
If you have any problems, please open a support ticket.

Mar 21, 07:52 CET
Investigating - We have detected a switch down in DC5, Room 1-1, rack A59.
Servers in that rack currently have no public network access and are unreachable.

===================

21/04/2024 at 5:39 UTC
The issue has been forwarded to our team for resolution.

Mar 21, 06:41 CET
Mar 20, 2024

Unresolved incidents: [Dedibox] DC3 room 4-6 Rack F13-Replacing power distribution units, [STORAGE] Lifecycle rules issue.

Mar 19, 2024
Resolved - This incident has been resolved.
Mar 19, 10:28 CET
Monitoring - A configuration issue impacted one of our devices in DC5 room 1 rack D29 after a reboot. This configuration was fixed today at 10:00 CET. Servers SD-129348 to SD-129395 are reachable.
Mar 19, 10:28 CET
Update - We are continuing to investigate this issue.
Mar 19, 08:41 CET
Investigating - We are currently investigating an issue that impacts servers in DC5 room 1 rack D29 (SD-129348 to SD-129395).
Mar 19, 08:40 CET
Completed - The scheduled maintenance has been completed.
Mar 19, 10:00 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 19, 09:00 CET
Scheduled - Our Database Team has scheduled a maintenance for all Redis instances in the WAW region on Tuesday, March 19th at 9:00 CET. These will be migrated to IP Mobility and might experience up to 10 minutes of unavailability.
Mar 12, 11:53 CET
Mar 18, 2024
Resolved - We encountered an increase in latency on the ingestion and query paths. The affected timeframes are 16:00-17:00 and 18:00-19:30.
Affected product: Serverless Functions & Containers

Mar 18, 22:41 CET
Completed - The scheduled maintenance has been completed.
Mar 18, 18:00 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 18, 09:00 CET
Scheduled - Clusters using this specific version will automatically be upgraded to 1.24. Find more details in our version support policy: https://www.scaleway.com/en/docs/containers/kubernetes/reference-content/version-support-policy/
Mar 14, 12:53 CET
Resolved - This incident has been resolved.
Mar 18, 11:45 CET
Investigating - Dedibox servers in the rack are unreachable
Mar 18, 11:28 CET
Resolved - This incident has been resolved.
Mar 18, 11:32 CET
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 12, 13:49 CET
Investigating - We are currently investigating the issue.

Servers SD-91078 to SD-91095 are currently powered down

Mar 11, 11:00 CET
Resolved - This incident has been resolved.
Mar 18, 10:30 CET
Monitoring - A public switch in Room 1 rack A56 had been unreachable since 06:35 CET.
The device was replaced, and the service has been up and running since 10:06 CET.

Mar 18, 10:29 CET
Investigating - The public switch in s1-a56 is currently unreachable.
Servers in that rack currently have no network access.
Our team is already working on it.

Mar 18, 07:31 CET
Mar 17, 2024

No incidents reported.

Mar 16, 2024

No incidents reported.

Mar 15, 2024

No incidents reported.

Mar 14, 2024
Resolved - This incident has been resolved.
Mar 14, 14:44 CET
Monitoring - The fix has been deployed and functionality is restored, but our team is still monitoring it.
Mar 14, 14:29 CET
Investigating - Email delivery is impacted. Some operations on the console may fail.
Mar 14, 14:12 CET