OVHcloud Network Status

SBG
Incident Report for Network & Infrastructure
Resolved
We have just lost all electrical power to the network equipment.
We are investigating.

Update(s):

Date: 2017-11-18 01:31:41 UTC


Hello,
Here is the post-mortem of the incident.

On Thursday, November 9, at 07:04, the Strasbourg site, hosting 4 datacenters, experienced an electrical power cut. Despite all the security measures put in place, the power outage spread to the other datacenters and caused an electrical shutdown of the 40,386 servers hosted on the site.

At 10:39 electrical power was restored on the site and the services gradually restarted. By 6:00 pm, 71% of the servers were functional again, and on Friday, November 10th at 11:00 pm, 99% of the servers were functional. A minority of services remained impacted until Sunday, November 12th.


Timeline of the incident (Thursday, November 9th):
----------------------------------------------------------
7:04:07 : The power grid of electrical power supplier ESR (Électricité de Strasbourg Réseau) experiences a power failure, leading to a loss of power supply on both lines.
7:04:17 : The High Voltage Power Generators (HV) do not start.
7:12:48 : Inverter 6 (UPS) reaches the end of its battery life.
7:15:48 : Inverter 5 reaches the end of its battery life.
7:17:25 : Inverter 2 reaches the end of its battery life.
7:18:00 : The first manual attempts to restart the HV Generators are unsuccessful.
7:18:39 : Inverter 1 reaches the end of its battery life.
7:19:19 : Inverter 4 reaches the end of its battery life.
7:21:00 : Inverter 3 also reaches the end of its battery life.
7:21:00 : Routing centers are no longer electrically powered.
7:21:03 : New attempt to manually start the #1 HV Generator group.
7:22:42 : New attempt to manually start the #2 HV Generator group.
7:30:00 : Local crisis team is operational.
7:50:00 : Central crisis team at Roubaix HQ is operational.
Between 7:50 and 10:39: multiple manual attempts to restart the power generators with the help of our electrical engineering experts.
10:39 : ESR restores the power supply.
10:58 : The routers are reachable again.
11:00 : Interventions on the servers requiring attention are in progress.
14:00 : Arrival of a first team of reinforcements.
16:00 : Arrival of reinforcements from our sites in Frankfurt (Germany) and Roubaix.
17:30 : A 38-ton truck filled with spare parts arrives on site.
22:00 : 97% of the servers are up and running again, 91% respond to ping.
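
For reference, the holdover time of each UPS unit can be read directly from the timestamps above. Below is a minimal Python sketch of that arithmetic; it uses only the timestamps from the timeline and assumes nothing else.

from datetime import datetime

# Timestamps copied from the timeline above (Thursday, November 9th).
grid_loss = datetime(2017, 11, 9, 7, 4, 7)
ups_depleted = {
    "Inverter 6": datetime(2017, 11, 9, 7, 12, 48),
    "Inverter 5": datetime(2017, 11, 9, 7, 15, 48),
    "Inverter 2": datetime(2017, 11, 9, 7, 17, 25),
    "Inverter 1": datetime(2017, 11, 9, 7, 18, 39),
    "Inverter 4": datetime(2017, 11, 9, 7, 19, 19),
    "Inverter 3": datetime(2017, 11, 9, 7, 21, 0),
}

for name, depleted_at in ups_depleted.items():
    held_minutes = (depleted_at - grid_loss).total_seconds() / 60
    print(f"{name} carried the load for about {held_minutes:.1f} minutes after the grid failure")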


Why did the power supply break down at ESR?
------------------------------------------------
The entire site is powered by a single 20MVA supply delivered over two 20kV cables. The power failure was caused by damage to one of the two underground cables, which ESR repaired quickly. The cause of the damage to this cable has not yet been determined; ESR's investigation is ongoing.


Why did the failure of one cable cause a power cut?
--------------------------------------------------------------------
The Strasbourg site is powered by two cables that together deliver 20MVA and are connected to the same circuit breaker. When the breaker tripped, both lines were cut at once.


Why didn't the high-voltage generators start up?
------------------------------------------------------------------------
SBG1 and SBG4 are powered by two High Voltage (HV) generator groups, each delivering 2MVA, which are meant to take over in case of a power failure. The motorised normal/emergency transfer switch did not perform its function properly and did not start the generator groups.

After investigation, we found that the PLC driving the transfer switch had not sent the command to start the High Voltage (HV) generators.

The manufacturer of this automated device has assessed the failure. It turns out that the PLC was locked in a default "locked automatics" mode, which explains why the command to start the HV generators was never sent. Investigations are underway to understand the origin of this blockage.

The manufacturer's response team returned the PLC to normal operation. As of now, we have no explanation for this error. While we wait for the conclusions of the enquiry, we are keeping a dedicated person on site at all times, 24 hours a day, 7 days a week, to throw the switch manually should the automatic device fail to operate again.

In the coming days, we will be running stress and performance tests on site in order to verify the proper functioning of the automatic device.


Why did the attempts to start the HV Generator groups fail?
----------------------------------------------------------------------
The SBG2 datacentre is powered by two 1.4MVA LV generator groups. One of these two LV units was in "maintenance mode". When one of the units is in "maintenance mode" and an electrical power failure occurs, the 2 HV generator groups of SBG1 also supply SBG2 with power, replacing the LV generator that is under maintenance.

On Thursday, November 9th, when the site experienced the power failure, the motorised normal/emergency transfer switch did not perform its function properly and did not send the signal to start the HV generators.

We therefore made numerous attempts to start them manually.

To carry the electrical load of SBG1, SBG4 and SBG2 when one of the two LV units is in "maintenance mode", it is imperative that the 2 HV units work together in order to provide 4MVA. As the 2 HV generator groups failed to synchronize, we decoupled them in order to operate them separately. But a single group, delivering only 2MVA, cannot carry the required load, and so the generators went into emergency stop. We performed multiple tests in different configurations, without success.
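
To make the order of magnitude explicit, here is a minimal Python sketch of that load check. The 2MVA-per-generator and roughly 4MVA figures come from the report above; everything else is illustrative.

# Illustrative load check based on the figures quoted above.
GENERATOR_CAPACITY_MVA = 2.0   # each HV generator group delivers 2MVA
REQUIRED_LOAD_MVA = 4.0        # SBG1 + SBG4 + SBG2 load with one LV unit in maintenance

def can_hold(load_mva, generators_online):
    # The online generators can carry the load only if their combined capacity covers it.
    return generators_online * GENERATOR_CAPACITY_MVA >= load_mva

print(can_hold(REQUIRED_LOAD_MVA, generators_online=2))  # True  - both HV groups synchronized
print(can_hold(REQUIRED_LOAD_MVA, generators_online=1))  # False - a single 2MVA group overloads and trips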


How long did it take to restore services?
----------------------------------------------------------
Exceptional resources were put in place to restore services as quickly as possible.


General overview:
------------------------
At 22:00 on Thursday, 97% of the servers (hardware) were back up and running and 91% of the services (software) were running again. By midnight on Friday, 99% of the servers were operational again as well as 96.2% of the services.

In detail:

Private Cloud:
----------------
Thursday, November 9th
23:00: 78.59% of vCenters are operational

Friday, November 10th
05:00: 100% of vCenters are operational


Object Storage / Cloud Archive:
-------------------------------
Thursday, November 9th, 13:35: 100% operational


PCS:
-----
Thursday, November 9th, 13:35: PCS / PCA 100% operational

PCI/VPS*: (*PCI zoning: "PCI regions" have a different nomenclature than the datacenters)
------------------------
11:30: API is UP for the regions SBG1/SBG2/SBG3
17:00: 98% of instances OK for region SBG3
20:00: 98% of instances OK for region SBG1
21:00: 92% of instances OK for region SBG2

Friday, November 10th
16:00: 100% of instances OK for region SBG1
16:30: 100% of instances OK for region SBG2

Saturday, November 11th
18:00: 100% of instances OK for region SBG3


SD:
----
Thursday, November 9th
21:00: 93.05% of the dedicated servers are operational

Friday, November 10th
17:00: 99.1% of the dedicated servers are operational


How did you handle the situation?
--------------------------------------
From 7:50 am, a crisis team was activated in Roubaix to coordinate the actions of all the different teams. Octave Klaba, the CEO and founder of OVH, reported in real time on the evolution of the situation via social networks. Detailed explanations were also provided in this work task.
 
In parallel, the French support teams coordinated with their Quebec counterparts to be able to respond to as many client calls as possible. Key account customers were contacted to provide them with quick and effective solutions.
 
In Strasbourg, the datacenter teams were quickly reinforced by technicians from our German (Frankfurt) and French (Roubaix) datacenters. A veritable road and rail shuttle was set up. Around 17:30, a 38-ton truck from the OVH logistics centre in the Lille metropolitan area arrived on site to provide the teams with all the additional material resources needed for the coming hours. Several more trucks arrived over the following days, once a logistics standby system had been established in Roubaix.

These teams worked tirelessly, night and day, to restore the services of all clients, which even justified setting up an airlift between Lille and Strasbourg to speed up the rotation of on-site teams over the weekend and throughout the following week.


What action plan has been implemented following this incident?
---------------------------------------------------------------
As mentioned above, we immediately took measures to prevent this type of incident from happening again, in Strasbourg (SBG) as well as on all our sites.

This action plan will be deployed in 2 phases.

Short term
-------------
We requested a detailed report from the vendor of the automated PLC controller.

Since the automatic switchover of the motorised normal/emergency transfer switch did not work, we now have a dedicated person on site 24/7, in order to be able to throw the switch manually, should the PLC fail again. This 24/7 standby ensures the power security of the site until a series of stress and performance tests confirms the proper functioning of the controller.

As far as the normal/emergency transfer switch is concerned, we are going to quickly replace the automated controller with an "in-house" controller, which will give us full control over its operation and monitoring. An identical system is already in production in Gravelines.

We asked ESR for a detailed report on the origin of the fault.

A feasibility study for the connection of a second 20MVA electrical feed is also underway. In the meantime, we have launched a second study: the installation of two isolated circuit breakers, one for each cable, so that a failure on one of the two cables can no longer take down both.
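
As a rough illustration of what that second study would change, here is a minimal Python sketch. It assumes, per the figures in this report, two 20kV cables of about 10MVA each; the modelling of the breakers is simplified for illustration only.

CABLE_CAPACITY_MVA = 10.0   # each of the two 20kV cables carries about 10MVA

def remaining_capacity_mva(faulty_cables, shared_breaker):
    # With a single shared breaker, any cable fault trips both lines at once.
    if shared_breaker and faulty_cables > 0:
        return 0.0
    healthy_cables = 2 - faulty_cables
    return healthy_cables * CABLE_CAPACITY_MVA

print(remaining_capacity_mva(1, shared_breaker=True))   # 0.0  -> the November 9th situation
print(remaining_capacity_mva(1, shared_breaker=False))  # 10.0 -> with one isolated breaker per cable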

We are going to separate SBG2's electricity network from SBG1/SBG4's as well as separate the future SBG3 from SBG2 and SBG1/SBG4. In this way, each datacenter will have its own independent backup power supply.

An electrical audit is also underway for all of our sites.

Note: Currently, when a client orders a server on the Strasbourg site, it appears by default within the client area as being hosted on SBG1, even if it is hosted on SBG2 or SBG4. This is a bug in the display and will be corrected very quickly in order to indicate the actual datacenter on which the server is hosted.


Long-term
------------
The technology based on shipping containers will no longer be used by OVH. This setup was only used to build SBG1 and SBG4, and those datacenters therefore inherited all the design flaws stemming from the initially modest ambitions we had for the site. Today, we realize that this setup is no longer adapted to the requirements of our business and does not align with OVH standards. We are therefore going to dismantle SBG1 and SBG4.

In order to do this, we will migrate all of our customers' services hosted on SBG1 and SBG4, moving them either to SBG2 and SBG3 or to other OVH datacentres.

We are truly sorry for this breakdown and we are doing everything necessary to ensure that such an incident never happens again.

Sincerely,
Octave
 

Date: 2017-11-17 21:28:34 UTC
All services have been UP since very late Sunday night.

Date: 2017-11-11 21:08:26 UTC
All PCI hosts are UP. We are now looking at any VPS/PCI instances that remain DOWN. If you are still having issues, please send me a Direct Message (DM) on Twitter (@olesovhcom) with the technical details: IP / VPS name.


Date: 2017-11-11 06:40:45 UTC
PCI/VPS: there are 10 hosts left to repair. These hosts are complex and we need about 1 hour per host.

Servers (SYS/OVH):
We have 200 servers with hardware issues that we are working on.

Date: 2017-11-11 02:41:33 UTC
22:11
A new team from RBX has now arrived on site to help the SBG team. The infrastructure has been UP since 11:00 yesterday. The priority is to get the servers restarted. Normally, the servers restart automatically, but there is always a small percentage of servers with various problems: hardware failures, motherboards to be replaced, power supplies that did not survive the power cut, boot problems, disks not mounting properly, kernel issues, incompatibilities between kernel and motherboard, or a client's firewall misconfigured in a way that prevents the client from starting their servers…

We have just under 400 servers left. These servers have all types of hardware problems, and we are replacing the defective parts server by server, thanks to the stock of spare parts that arrived by truck yesterday at the end of the afternoon.

A technician can handle about 25 heavy interventions per day. With 400 issues to solve, the calculation is simple: we need between 15 and 25 technicians to close out the incident. That is why the teams have been taking turns since noon yesterday, thanks to the staff who arrived from the other DCs. #OneTeam
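
The staffing arithmetic behind those numbers can be written out explicitly. A small Python sketch, using only the figures quoted above (about 25 heavy interventions per technician per day, roughly 400 servers left):

INTERVENTIONS_PER_TECH_PER_DAY = 25
SERVERS_REMAINING = 400

technician_days = SERVERS_REMAINING / INTERVENTIONS_PER_TECH_PER_DAY
print(f"about {technician_days:.0f} technician-days of work")   # 16 technician-days

# Clearing the backlog in roughly one day therefore takes on the order of 16 technicians;
# the 15-25 range quoted above adds margin for harder-than-average repairs and shift rotation.
for team_size in (15, 20, 25):
    print(f"{team_size} technicians -> about {technician_days / team_size:.1f} days")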

The RUN teams in Canada have taken over the software issues and are helping the DC teams progress faster. We are seeing IPMI configurations that were lost, boot problems, kernel problems, etc.
Also, for services such as PCI/VPS/PCC, the product teams are managing the infrastructure in order to power clients' virtual machines back up. There are still 64 PCI hosts down and we are replacing parts on these hosts. The VPS and PCI instances that are still down will progressively come back during the night.

We expect to be down to fewer than 100 servers by the start of the day on Saturday the 11th. We have arranged for the DC, RUN and support teams to be able to work this Saturday morning. The war room in RBX will continue to coordinate all actions.

22:27

We now have 30 technicians on site who will work all night long, with 5 managers coordinating the work. We hope to get down to fewer than 100 servers by the morning and then finish those last 100 by noon tomorrow.

Date: 2017-11-10 17:22:12 UTC
100% of the infrastructure has been UP since 11:00 yesterday.
We are working on the hardware issues. We still have:
- dedicated servers:
380 servers

- PCI / VPS:
64 hosts with
1,000 VPS/PCI
no issues on Ceph

- PCC:
88 hosts are down; no issues on storage.
All the VMs are running on vSphere:
no customer is impacted.


Date: 2017-11-10 15:05:22 UTC
We still have 100% of the infrastructure UP.
At this stage, 99% of the servers are up and running.
We are working on the remaining 1%: changing hardware and replacing parts.

All the teams stay mobilized to resolve remaining isolated issues.

Date: 2017-11-10 08:34:46 UTC
Hello,
This morning at 7:23 am, we had a major incident at our Strasbourg site (SBG): a power outage left three datacenters without power for 3.5 hours. SBG1, SBG2 and SBG4 were impacted. This is probably the worst-case scenario that could have happened to us.

The SBG site is powered by a 20kV power line consisting of 2 cables, each delivering 10MVA. The 2 cables work together and are connected to the same source and the same circuit breaker at ELD (Strasbourg Electricity Networks). This morning, one of the two cables was damaged and the circuit breaker cut power to the datacenters.

The SBG site is designed to operate, without time limit, on generators. For SBG1 and SBG4, we set up a first backup system of 2 generators of 2MVA each, configured in N+1 at 20kV. For SBG2, we set up 3 generator groups of 1.4MVA each, in an N+1 configuration. In the event of an external power failure, the high-voltage cells are automatically reconfigured by a motorised failover system. In less than 30 seconds, the SBG1, SBG2 and SBG4 datacenters can have their 20kV power restored. To make this switch-over without cutting power to the servers, we have Uninterruptible Power Supplies (UPS) in place that can maintain power for up to 8 minutes.
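
The intended timing budget described above can be summarized in a minimal Python sketch. The 30-second failover target and the 8-minute UPS bridge are the figures from this report; the rest is illustrative.

UPS_BRIDGE_SECONDS = 8 * 60        # the UPS units can hold the load for up to 8 minutes
FAILOVER_TARGET_SECONDS = 30       # the motorised failover should restore 20kV in under 30 seconds

def servers_stay_up(failover_seconds):
    # The servers stay powered only if the switch-over finishes before the batteries run out.
    return failover_seconds <= UPS_BRIDGE_SECONDS

print(servers_stay_up(FAILOVER_TARGET_SECONDS))  # True  - the design case
print(servers_stay_up(float("inf")))             # False - November 9th: the generators were never started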

This morning, the motorised failover system did not work as expected: the command to start the backup generators was not given by the NSM (motorised normal/emergency switch), which is provided by the supplier of the 20kV high-voltage cells. We are in contact with the manufacturer/supplier to understand the origin of this issue. However, this is a defect that should have been detected during the periodic fault-simulation tests on the external source. SBG's latest backup recovery test was at the end of May 2017. During that test, we powered SBG only from the generators for 8 hours without any issues, and every month we test the backup generators with no load. Despite all of this, these tests were not enough to avoid today's outage.

Around 10 am, we managed to switch the cells manually and started to power the datacenters again from the generators. We asked ELD to disconnect the faulty cable from the high-voltage cells and switch the circuit breaker on again with only 1 of the 2 cables, limiting us to 10MVA. This was carried out by ELD and power was restored at approximately 10:30 am. SBG's routers were back online from 10:58 am onwards.


Since then, we have been working on restarting services. Restoring power to the site allows the servers to be restarted, but the services running on those servers still need to be restarted as well. That is why each service has been coming back gradually since 10:30 am. Our monitoring system gives us the list of servers that have successfully started up and those that still have a problem. We intervene on each of these servers to identify and solve whatever is preventing it from restarting.
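
As a generic illustration of that kind of check (this is not OVH's monitoring system; the addresses and the probed port are placeholders), here is a minimal Python sketch that splits a list of servers into those that came back and those that still need an intervention:

import socket

SERVERS = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]   # placeholder addresses

def is_reachable(host, port=22, timeout=3.0):
    # True if a TCP connection to the host succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

recovered = [host for host in SERVERS if is_reachable(host)]
needs_intervention = [host for host in SERVERS if host not in recovered]
print("recovered:", recovered)
print("needs intervention:", needs_intervention)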

At 7:50 am, we set up a crisis unit in RBX, where we centralized the information and actions of all the teams involved. A truck from RBX was loaded with spare parts for SBG; it arrived at its destination around 5:30 pm. To help our local teams, we sent teams from the LIM datacenter located in Germany and personnel from the RBX datacenter, all of whom have been mobilized on site since 4 pm. Currently, more than 50 technicians are working at SBG to get all services back online. We are preparing to work through the night and, if necessary, into tomorrow morning.

In order to avoid catastrophic scenarios such as this one, over the past 18 years, OVH has developed electrical architectures that can withstand all sorts of power outages. Every test, every flaw, every new idea has enriched our experience allowing us to build reliable datacentres today.

So why this failure? Why didn’t SBG withstand a simple power failure? Why couldn’t all the intelligence that we developed at OVH, prevent this catastrophe?

The quick answer: SBG's power grid inherited all the design flaws that were the result of the small ambitions initially expected for that location.

Now here is the long answer:

Back in 2011, we planned the deployment of new datacenters in Europe. In order to test the appetite for each market, with new cities and new countries, we invented a new datacenter deployment technology. With the help of this internally developed technology, we were hoping to get the flexibility that comes with deploying a datacenter without the time constraints associated with building permits. Originally, we wanted the opportunity to validate our hypotheses before making substantial investments in a particular location.

This is how, at the beginning of 2012, we launched the SBG1 datacenter, made of shipping containers. We deployed 8 shipping containers and SBG1 was operational in less than 2 months. Thanks to this ultra-fast deployment, within less than 6 months we were able to confirm that SBG is indeed a strategic location for OVH. By the end of 2012, we decided to build SBG2, and in 2016 we launched the construction of SBG3. These 2 datacenters were not built from containers, but are based on our "Tower" technology. The construction of SBG2 took 9 months and SBG3 will be put into production within a month. In order to address the issue of space, at the beginning of 2013 we built SBG4 very quickly, based again on the much-discussed shipping containers.

The issue was that, by deploying SBG1 with the technology based on shipping containers, we were unable to prepare the site for a large-scale project.

We made 2 mistakes:

1) We did not bring the SBG site up to our internal standards, which require 2 separate 20kV electrical feeds, as at all our other DC locations, which are equipped with dual electrical feeds. It is a major investment of about 2 to 3 million euros per electrical feed, but we consider it part of our internal standard.

2) We built SBG2's power grid on top of SBG1's power grid, instead of making them independent of each other, as in all our other datacenters. At OVH, each datacenter number normally indicates a power grid that is independent of the other datacenters'. This is true everywhere except at the SBG site.

The technology based on shipping containers was only used to build SBG1 and SBG4. In fact, we realized that the container datacenter does not fit the requirements of our business. Based on SBG's growth rate, the minimum size of a site must be equal to several datacenters, and therefore have a total capacity of 200,000 servers. That is why, in order to deploy a new datacenter today, we only use 2 types of designs that have been widely tested and planned for large-scale projects and reliability (a rough sizing sketch follows the list below):

1) the construction of 5 to 6-story towers (RBX4, SBG2-3, BHS1-2), for 40,000 servers.
2) purchasing buildings (RBX1-3,5-7, P19, GRA1-2, LIM1, ERI1, WAW1, BHS3-7, VIH1, HIL1) for 40,000 or 80,000 servers.
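
A rough sizing sketch for these two designs, using only the capacities quoted above (about 200,000 servers per site, about 40,000 servers per tower, 40,000 to 80,000 per purchased building); the exact mix on a real site will of course differ:

SITE_TARGET_SERVERS = 200_000

for design, capacity in (("towers", 40_000), ("smaller buildings", 40_000), ("larger buildings", 80_000)):
    units_needed = -(-SITE_TARGET_SERVERS // capacity)   # ceiling division
    print(f"about {units_needed} {design} of {capacity:,} servers each to reach {SITE_TARGET_SERVERS:,}")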

Even though this morning's incident was caused by a third-party automated device, we cannot deny our own responsibility for the breakdown. We have some catching up to do at SBG to reach the same level of standards as the other OVH sites.

During the course of the afternoon, we decided on the following action plan:
1) the installation of a second, completely separate 20MVA electrical feed;
2) separating SBG2 power grid from SBG1/SBG4, as well as the separation of the future SBG3 from SBG2 and SBG1/SBG4;
3) migration of SBG1/SBG4 customers to SBG3;
4) closing SBG1/SBG4 and the uninstallation of the shipping containers.

This is a EUR 4-5 million investment plan, which we are launching tomorrow and hope will enable us to restore our customers' confidence in SBG and OVH.

Our teams are still hard at work to restore services to the last of the impacted customers. Once the incident is completely resolved we will apply the SLA under our contracts.

We are deeply sorry for this incident and we thank you for the trust you place in us.

Best,
Octave

Edited on November 11th at 10 PM: unit error corrected (kVA was used instead of kV).

Date: 2017-11-09 11:09:47 UTC
Here is the current status:

- SBG1 is powered again
- SBG2 is powered again, except for racks 73A01 to 73A18
- SBG4 is still down; our teams are working to restore power to it.


Date: 2017-11-09 09:58:36 UTC
ERDF has repaired one of the 2 20kV links. The second is still down.
The generators are UP and the 2 routing rooms are starting up.
SBG2 should be UP within about 20 minutes,
SBG1/SBG4 within 1-2 hours.

Date: 2017-11-09 06:53:03 UTC
Comment by OVH - Thursday, 09 November 2017, 07:55AM

We have an electrical power supply problem; apparently the backup
generators did not work correctly. The site's routing room is located
in SBG1. The routers have been down since 7:14.
Posted Nov 09, 2017 - 06:52 UTC
This incident affected: Infrastructure || SBG (SBG1, SBG3, SBG4).