Over the weekend, Salesforce, a tool used by 150,000 companies and millions of people around the globe, experienced one of every global enterprise’s worst nightmares: a database outage. According to Salesforce, “On May 17, 2019, Salesforce blocked access to certain instances that contained customers affected by a database script deployment that inadvertently gave users broader data access than intended”.

In other words, an error in a manually deployed database script effectively wiped out the authorizations and permissions in place on the affected instances, leaving things wide open for all users to make any and all changes they desired. Salesforce had gone well beyond the leaky-sieve stage and jumped directly to “The door’s wide open, come on in!”
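Salesforce has not published the offending script, but the class of mistake is familiar to anyone who has run database changes by hand. The sketch below is purely hypothetical; the table name, column names and the missing tenant filter are invented to illustrate how a single forgotten parameter in a hand-run permissions script can open access far wider than intended.

```python
# Hypothetical illustration only -- not the actual Salesforce script.
# It shows how a manually run permissions change can silently lose its scope.
from typing import Optional


def build_permission_update(profile_id: str, org_id: Optional[str]) -> str:
    """Build an UPDATE meant to widen access for a single organization."""
    sql = (
        "UPDATE object_permissions "
        "SET can_modify_all = TRUE "
        f"WHERE profile_id = '{profile_id}'"
    )
    if org_id is not None:
        sql += f" AND org_id = '{org_id}'"
    # Bug: if the operator forgets to supply org_id, the tenant filter is
    # silently dropped and the change applies to every organization.
    return sql


# The intended, scoped change:
print(build_permission_update("standard_user", org_id="00D-EXAMPLE"))

# The manual-run mistake -- one missing argument, and access opens up everywhere:
print(build_permission_update("standard_user", org_id=None))
```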

Companies, from global industrial behemoths to small startups, all protect their databases with the ferocity of a mother bear protecting her cub, and for one simple reason. Database downtime, as estimated by Gartner, has an average cost of $5,600 per minute. According to this report by Statista, nearly 60% of companies reported the average hourly downtime cost of their servers as being over $400,000. For nearly one-third of the companies who responded, each hour of downtime cost $1,000,000 or more, and 14% estimated each hour at OVER $5,000,000. Yes, FIVE MILLION dollars for each hour of downtime.
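To put those numbers in context, here is a quick back-of-the-envelope calculation using the figures quoted above, projected over a 15-hour outage, which is roughly the length of Salesforce’s disruption discussed at the end of this post:

```python
# Back-of-the-envelope downtime cost estimates, using the figures cited above.
GARTNER_COST_PER_MINUTE = 5_600     # Gartner's average: $5,600 per minute of downtime
HIGH_COST_PER_HOUR = 1_000_000      # roughly one-third of companies report $1M+ per hour

OUTAGE_HOURS = 15                   # Salesforce's disruption ran just over 15 hours

gartner_per_hour = GARTNER_COST_PER_MINUTE * 60            # $336,000 per hour
conservative_total = gartner_per_hour * OUTAGE_HOURS       # ~$5.0 million
high_total = HIGH_COST_PER_HOUR * OUTAGE_HOURS             # ~$15 million

print(f"Gartner average per hour:      ${gartner_per_hour:,}")
print(f"Conservative 15-hour estimate: ${conservative_total:,}")
print(f"Higher-end 15-hour estimate:   ${high_total:,}")
```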

But surely, if downtime costs so much and companies all were aware of this, they would take steps to ensure this doesn’t happen, right? On the surface, you’d be forced to agree, because the people running global enterprises such as Salesforce didn’t get to where they were by not being smart. If you scratch just a bit deeper, though, you’d see why this assumption turns out to be false.

DevOps took the world by storm earlier this decade, with the drive toward the mainstream helped along by Gene Kim and his seminal book “The Phoenix Project”. Since then, most major companies have adopted DevOps and CI/CD as the way forward. This has held true for software development, application development and a multitude of other fields. The one area DevOps seems not to have penetrated is the database. In the 2018 DBmaestro DevOps Survey, two-thirds of the respondents (66%) replied that manual execution of changes was still the main method of performing changes to the database.

Salesforce seems to have fallen into the same hole, running manual database changes instead of automating them. As we just saw, this left the company open to exactly the kind of massive error that sidelined it for over half a day. Running manual changes exposes databases to configuration drift, coding errors, and generally sloppy handling. Let’s face it, even the best DBAs are still only human, and we all make mistakes. That is exactly why you need a database automation platform: to take the human factor out of routine changes and cut both errors and downtime.
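As a concrete (and deliberately simplified) illustration of what “removing the human factor” looks like in practice, here is a generic pre-run guardrail of the kind an automation platform can enforce before a change script ever reaches production. This is not DBmaestro’s actual API; the rules and the sample script are invented for the example.

```python
# A minimal, generic sketch of an automated pre-run check for database change scripts.
# Not DBmaestro's API -- the rules and sample script below are invented for illustration.
import sys


def pre_run_check(change_script: str) -> list:
    """Return a list of findings; an empty list means the script may proceed."""
    findings = []
    upper = change_script.upper()
    if "GRANT ALL" in upper:
        findings.append("blocked: blanket GRANT ALL detected")
    if "UPDATE" in upper and "PERMISSIONS" in upper and "WHERE" not in upper:
        findings.append("blocked: permissions UPDATE with no WHERE clause")
    if "DROP TABLE" in upper or "DROP SCHEMA" in upper:
        findings.append("blocked: destructive DROP statement")
    return findings


if __name__ == "__main__":
    # Sample change script with the same class of bug described above.
    script = "UPDATE object_permissions SET can_modify_all = TRUE;"
    problems = pre_run_check(script)
    for problem in problems:
        print(problem)
    # A non-zero exit code fails the pipeline stage, so the script never reaches production.
    sys.exit(1 if problems else 0)
```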

With a platform such as the DBmaestro DevOps Platform, which provides a full set of database DevOps tools, such a massive outage could have been avoided. DBmaestro’s visual database pipeline builder, drift detection, pre-run checks, security & governance, version control and business monitoring combine to form a comprehensive DevSecOps platform. Integrations with dozens of other platforms, including Jenkins, Bamboo, TeamCity, jFrog, Sonatype, IBM UrbanCode, Azure DevOps, git, Bitbucket, JIRA, Actifio, CyberArk and numerous others ensure that regardless of what your DevOps environment looks like, DBmaestro can help.

According to Salesforce, their service disruption lasted slightly over 15 hours. Even using extremely conservative estimates, that downtime potentially cost them over $5 million; higher estimates place the cost at over $15 million. If you don’t want a similar catastrophic downtime to take out your databases, request your demo now!