The Question of Multiple Databases & Pre-Production Complexity

You may have heard the familiar story where a team ships a performance bug in production. Then, during a retrospective, the team decides (for good reason) that this kind of bug won’t make it to production again.

So, now they use a dedicated performance testing environment to catch bugs early. Conducting these performance tests reduces the number of performance bugs, but as a result, the entire setup becomes more costly, difficult to manage, and time-consuming.

Does this story sound familiar? A recent survey found that almost one-third of companies have experienced a database crash in the last month.

Tensions will always exist between the conflicting desires to test thoroughly, maintain speed, and avoid excessive complexity. Agile and DevOps methodologies bring their own values and perspectives to the process as well. It becomes exceedingly difficult to satisfy all stakeholders with so many competing goals and ideas in the mix.

So, is dealing with multiple environments in general and multiple databases in particular worth the trouble?

In this post, we explore some of the factors that need to be weighed in order to strike a balance between these many demands while also keeping your team — and environments — happy and productive.

DevOps is Great, But Only if You Can Manage It

Happy and productive teams keep the focus on shipping code and maintaining reliable production environments. DevOps teams achieve this through continuous delivery (CD), which requires automated testing throughout each stage in the promotion process all the way through to automated deployment.

The DEV environment is a playground for all of the engineers to tinker with and test changes. Quality engineers do functional testing on QA before promoting code to PERF. Next, they move on to STAGE, and then finally to PROD. This sounds like a common workflow, but it creates complexity.

Automating everything across these environments is a real challenge. The truth is you may not need all of them. Here are some questions to consider when evaluating how many environments you need:

  • Whose responsibility is it to promote code across environments?
  • Who maintains their infrastructure?
  • How are databases kept in-sync across environments?
  • How are testing environments populated with realistic data?
  • How often are databases built from scratch?
  • How is deployment automated across these environments?
  • How do the environments fit the branching model?
  • Can some environments be eliminated without impacting production?
  • How flexible are you about all of the above?

Naturally, answers will vary from team to team, as they should. The key thing to remember with DevOps is to focus on making continuous, small improvements. Do not get lost trying to create the perfect solution.

Long Pipelines Don’t Mean Big Changes

The lowest hanging fruit is simply shipping smaller changes, more often. This approach removes much of the annoyance around managing multiple environments and all other forms of process machinery.


Working in small batches means breaking up changes into smaller and smaller chunks, which are easier to develop, test, and deploy.

Consider this scenario: Would you rather test a large database schema migration along with new product features, or a large database schema migration that does not impact current functionality?

The second option is clearly the better choice. It’s easier on engineers, product owners, QA, and everyone else involved in shipping the software.
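The second option, a migration that leaves current functionality untouched, can be sketched concretely. Here is a minimal, hypothetical example using Python’s built-in sqlite3 module (the `users` table and `last_login` column are invented for illustration): the migration only adds a nullable column, so code written before the migration keeps working unchanged.

```python
import sqlite3

def migrate(conn: sqlite3.Connection) -> None:
    """Backward-compatible migration: add a nullable column.

    Old code that INSERTs without the new column keeps working; new code
    can start reading and writing the column once it is deployed.
    """
    conn.execute("ALTER TABLE users ADD COLUMN last_login TEXT DEFAULT NULL")
    conn.commit()

# Demo: simulate the "current" code path before and after the migration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")   # old code path

migrate(conn)

conn.execute("INSERT INTO users (name) VALUES ('bob')")     # old code still works
rows = conn.execute("SELECT name, last_login FROM users").fetchall()
print(rows)  # both rows present; last_login stays NULL until new code fills it
```

Because the schema change is independent of any feature, it can be tested, shipped, and rolled back on its own.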

Working in small batches is a powerful balancing force against long release cycles and unreliable deployments. This CD tenet reduces risk and increases production reliability when it is carried out correctly. However, it is only beneficial when used with automation.

Small batches work particularly well with distributed applications. These applications inevitably create more moving parts, databases, and independently deployable components.

As you can imagine, keeping all of these parts under control is extremely difficult. Smaller batches can help combat the complexity that comes along with increasingly complex architectures.

Distributed Applications Bring Distributed Problems

Distributed applications are composed of multiple services. Each service might have multiple databases, and different services within the same application may not use the same database technology. One service may use an RDBMS, while another might use a NoSQL model.

This complexity spills over into other areas. Hopefully, each engineer has their own sandbox that can run versions X, Y, and Z of services A, B, and C. This setup provides each engineer with extreme flexibility, but it comes with trade-offs.

The “laptop build” is one of the more problematic trade-offs. Engineers might produce a working build on their laptop that does not behave correctly when it is deployed (for one reason or another). Common reasons include differences in database setups, subtle differences in versions, different deployment methods, variations in test data, and environmental complexity.

Working with multiple databases is the most problematic issue, since a release may need to coordinate database changes across a number of services. Distributed applications open up a whole new can of worms.

There are a few questions you should consider when it comes to distributed applications:

  • How can engineers run the entire product on their machines?
  • What do we consider “releases” to be (i.e., are they deployments of individual services or the sum of all configurations between services)?
  • How do we roll back a deployment?
  • Who owns and operates individual services?
  • How do the existing approach and infrastructure scale when new services are introduced?
  • Can we deploy multiple services at once, and what happens when we do?
  • How does QA get production-like data?

These questions are not going away. They are, in fact, becoming more prevalent as more organizations move to microservices and other distributed architectures. Your team can succeed in this landscape with purpose-built tooling.

Tipping the Scales with Insight Tooling

Specialized automation and insight tooling tips the scale in your favor. Tooling helps everyone in the team quickly identify the state of all relevant databases and determine whether they match the relevant application. This insight keeps deployments moving since potential problems are identified early.

Such tooling is especially helpful for detecting differences between STAGE and PROD before they impact your uptime. Tooling also clearly communicates how the deployment affects different databases.

This practice makes it easier to review changes and transition them through the pipeline. Insight tooling also helps achieve the DevOps goal of making more changes, more often. These tools enable more frequent and more reliable deployments by giving engineers the insight and automation to catch problems before they derail a release.
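A sliver of what such tooling does can be sketched in a few lines. This hypothetical Python example diffs the column layout of two SQLite databases standing in for STAGE and PROD; real insight tools do far more, but the core idea of comparing environment state is the same.

```python
import sqlite3

def schema_columns(conn):
    """Return {table: set of column names} for every table in the database."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    return {t: {col[1] for col in conn.execute(f"PRAGMA table_info({t})")}
            for t in tables}

def schema_diff(a, b):
    """List human-readable differences between two schema snapshots."""
    diffs = []
    for table in sorted(set(a) | set(b)):
        if table not in a:
            diffs.append(f"table {table} missing in first")
        elif table not in b:
            diffs.append(f"table {table} missing in second")
        else:
            for col in sorted(a[table] ^ b[table]):
                diffs.append(f"column {table}.{col} differs")
    return diffs

# STAGE has a column that never made it to PROD -- classic drift.
stage = sqlite3.connect(":memory:")
prod = sqlite3.connect(":memory:")
stage.execute("CREATE TABLE users (id INTEGER, name TEXT, last_login TEXT)")
prod.execute("CREATE TABLE users (id INTEGER, name TEXT)")

print(schema_diff(schema_columns(stage), schema_columns(prod)))
```

Running a check like this before each promotion surfaces drift while it is still cheap to fix.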

Managing Multiple Databases: A Matter of Art & Science

The balance between a number of database environments, engineering needs, cost, and efficiency will vary from team to team. The ultimate goal is to ensure that all commits are deployable to production. Some teams may need seven environments while others might only need three. One team may be building a monolith while another is working on a distributed application.

All teams, however, accept this fact:

Each environment has its own database, and each database change must be synchronized across the other environments.

They must also be aware of the large amount of engineering effort that is required to maintain multiple environments. Keeping these important considerations in mind, teams should strive to minimize their total number of environments.

If this is not realistically possible, then they can turn to specialized tooling to automate database management across an increasing number of environments.

If you liked reading about Multiple Databases, check out these posts:

3 Release Mismanagement Practices That Can Lead to Disaster

What You’ll Learn

  • Common mismanagement practices that can lead to bad software releases.
  • How Big Bang releases, poor telemetry, and manual processes create risks.
  • Key DevOps strategies for avoiding release failures.
  • Practical ways to incorporate automation and telemetry for smoother releases.

I hate to break it to you, but it’s only a matter of time before your organization experiences a bad release. Unfortunately, they happen far more often than we’d like.

Bad releases can occur for a variety of reasons. Something may have gone wrong during the rollout, or maybe there was a traffic increase during deployment. Perhaps a database migration failed against production data or critical features broke in unexpected ways.

Regardless of why or how they happen, bad releases hurt teams and businesses. Continuous, bad releases drive customers away over time, but one catastrophically bad release can seriously damage — or even destroy — a business.

Release mismanagement occurs at all stages in the software development process. Early problems tend to become more worrisome later on. For example, insufficiently tested software may break under production traffic, engineers may write software that doesn’t pass security compliance, teams may omit telemetry code that makes things harder to debug and resolve, etc.

Unfortunately, these and other troublesome practices are often repeated time and time again, essentially guaranteeing the same, unfavorable results.

One of the goals of the DevOps culture is to root out the mismanagement that causes problems in the first place. The DevOps Handbook offers real-world approaches that have been proven to mitigate bad releases and help companies succeed in the market.

In this post, we examine three common mismanagement practices that set teams up for software release failure. We also offer a few DevOps solutions that will help you avoid bad releases.

What Constitutes a Bad Release?

A bad release is one that causes unexpected technical or business problems. It may include some code that creates 500 error responses, or a script that creates problems by integrating a new, third-party service. Both impact users, but they have different resolutions and different business impacts.

A bad release can quickly turn into a horrible release if problems aren’t identified and resolved promptly. For example, Knight Capital experienced a famously bad release that led to financial ruin for the company. The post-mortem revealed a catastrophic chain of events.

The report reads like a software horror story: A previously unused feature flag was repurposed for new behavior. The previous, nine-year-old code wasn’t removed and wasn’t tested. A poorly executed manual deployment left incorrect code on one of eight servers. Then, the trading system routed orders to that one machine, which triggered the incorrect, nine-year-old code. Actions undertaken to repair the system triggered new bugs, which exacerbated the problem further.

This unfortunate series of events cost Knight Capital $440 million in 30 minutes, plus a $12 million fine to boot.

What Went Wrong?

You know what a bad release looks like, but you’re probably wondering what caused the problems in the first place. Your initial thought might be that the release simply wasn’t tested enough, and that therefore the bad release could have been avoided with more testing. If so, your impression is generally correct, but only to a certain point.

First of all, testing the software is easier said than done. It requires significant infrastructure. Let’s say an organization released software with a performance issue. The team decides that from now on, it will do performance testing to prevent performance issues and potential regressions. Now, testing must happen in a specially designed performance testing environment. The resulting pipeline is longer, more complex, and more costly.

Second, testing software before going to production only identifies the problems that can be found before going to production. In other words, testing prevents shipping known bugs into production, but it doesn’t ensure that production is bug-free.

Testing prior to production doesn’t identify or prevent infrastructure issues, like a server running out of disk space or memory, for example.

Testing also doesn’t protect against changes that are pushed straight to production. These changes are often critical, so teams may not have time to push them through pre-production environments, or it may not even be possible. Such cases lead to environment drift over time. A drifting environment produces inconsistent results: tests run against one configuration, only to break later on a different one in production.

In addition, testing doesn’t eliminate workflow problems caused by pushing directly to production. A fix may be pushed to production, then overwritten by a new version promoted from staging or QA. In this scenario, the problem is unwittingly reintroduced into production.

But if testing practices alone don’t cause bad releases, what else might be contributing? There are three major mismanagement practices that are common sources of release issues.

Three Classic Practices That Augur Release Mismanagement

Practice #1: Big Bang Releases

“Big bang” releases are the culmination of many different features, bug fixes, improvements, or any type of changes intended to go live in a single deployment. This type of release pretty much guarantees that you’ll have design problems.

If you’ve used Big Bang releases, you’ve probably experienced that familiar, unsettling feeling many of us get when something is about to go wrong. That sixth sense is one you should trust — it’s telling you there’s too much to account for.

Some software projects increase in scope over time until they culminate in a horrific, Big Bang release. This type of release makes it harder to test, deploy, monitor, and troubleshoot your software.

Pro Tip: Break down complex releases into smaller, manageable batches and use feature flags to control rollout gradually.

Big Bang releases may even require deploying new services from scratch, creating new infrastructure, integrating untested components, or all of the above. They are especially troublesome because most teams don’t plan for failure.

Big Bang releases can fail spectacularly, and teams are usually unprepared for failure and unsure of how to roll back the release — if a rollback is even possible.

This also creates complications for the business. Consider a release with multiple features. Maybe one feature is broken, but other, new business critical features are working great. With Big Bang releases, there is no option to keep business critical features working while you fix the broken one.

Big Bang releases are hard to develop, test, verify, and operate. Given that we know this to be true, it just makes sense that the opposite is also true: smaller releases are easier to test, verify, and operate. DevOps principles say that teams should work in small batches and deploy as frequently as possible.

Small batches paired with feature flags effectively render Big Bang releases a thing of the past. Feature flags give you fine-grained control over which features are available at any point in time.

So, how do the two work together?

Consider a change that involves a database schema modification, new code, and significant infrastructure impact. First, apply small batches: create one deployment that changes the database so that it works with both the current code and the future code. Next, perform a second deployment with the code change. Now, you can leverage the feature flag to manage the rollout.

Start small by enabling the new feature for your staff and gradually add more users. If something goes wrong, flip the feature flag.
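The rollout step can be sketched with a deterministic percentage flag. This is a minimal, hypothetical example (the hashing scheme and the `new-checkout` flag name are illustrative, not a specific product’s API): each user consistently falls inside or outside the rollout, so ramping from staff to 10% to everyone is just a configuration change.

```python
import hashlib

def flag_enabled(flag_name: str, user_id: str, rollout_percent: float,
                 staff_ids: frozenset = frozenset()) -> bool:
    """Deterministic percentage rollout: a given user always gets the same
    answer for the same flag and percentage, so ramping up never flip-flops
    users in and out of the feature."""
    if user_id in staff_ids:
        return True  # staff always sees the new feature first
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # stable bucket in [0, 100)
    return bucket < rollout_percent

# Ramp: staff only -> 10% of users -> everyone (or back to 0 to flip the flag off).
staff = frozenset({"u-admin"})
print(flag_enabled("new-checkout", "u-admin", 0, staff))  # True: staff sees it
on = sum(flag_enabled("new-checkout", f"u{i}", 10) for i in range(10_000))
print(on)  # roughly 1,000 of 10,000 users land in the 10% rollout
```

Turning the feature off for everyone is a one-line change to `rollout_percent`, with no new deployment required.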

Practice #2: Poor Telemetry

Telemetry data tells the team what’s happening on the ground. This may be data such as CPU usage, memory, 500 error responses, or text logs. It can also be business-level data like total logins, purchases, or sign-ups.

Telemetry data accurately describes what’s happening at any point in time. It’s not enough to exclusively focus on error-related metrics. Teams must use telemetry data to verify that what should be happening, is happening.

Pro Tip: Set up a real-time dashboard for critical business metrics alongside technical metrics to monitor releases comprehensively.

Releases become bad releases when teams don’t realize something is going wrong. Knight Capital could have mitigated their problems if they had telemetry data about the percentage of servers running the correct code.

However, this kind of technical telemetry is, on its own, inadequate. Teams must also consider business telemetry.

Here’s a prescriptive solution from Etsy that is discussed in The DevOps Handbook: Create a real-time deployment dashboard with your top 10 business metrics. Metrics can include new sign ups, logins, purchases, posts, etc. These metrics should be common knowledge within the team. A deployment dashboard can help you see quickly if things are heading in the right direction.

Having this information is critical after each release, since most problems surface during and shortly after releases.

Business-level telemetry can help you identify in a few hours issues that previously took months to surface. If you see a significant change on the deployment dashboard, it is an indicator that the release is bad and must be investigated.
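The check behind such a dashboard can be sketched very simply. This hypothetical Python example compares a post-release business metric against its recent baseline and flags the release if it drops more than a tolerance; the metric, numbers, and 20% threshold are invented for illustration.

```python
def release_looks_bad(baseline: float, observed: float,
                      max_drop: float = 0.20) -> bool:
    """Flag a release when a key business metric (sign-ups, purchases, logins)
    drops more than `max_drop` (as a fraction) below its pre-release baseline."""
    if baseline <= 0:
        return False  # nothing meaningful to compare against
    drop = (baseline - observed) / baseline
    return drop > max_drop

# Sign-ups per hour, before vs. after the deploy:
print(release_looks_bad(baseline=500, observed=480))  # False: normal noise
print(release_looks_bad(baseline=500, observed=300))  # True: investigate!
```

In practice, a team would run a check like this for each of its top business metrics and surface the results on the deployment dashboard.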

Practice #3: Working Manually

The more humans that are involved in a release, the more likely it is to become a bad release. This is not a knock against humans. (To the contrary, I proudly count myself among their rank and file!) It’s simply a realistic acknowledgement that we, as people, can’t complete the same task 10,000 times and do it exactly the same way each time. Bad releases tend to happen because of people more often than because of machines.

You’ve likely seen this play out before. The 30-step release process usually works fine, but a special-case release requires running a one-off script. Somehow, the script doesn’t make it into the instructions and something goes wrong. Or, an engineer makes a fat-fingered change on one of ten servers.

Manual releases create a plethora of problems. Slow, manual release processes tend to permit fewer deployments, which leads to the use of those troublesome Big Bang deployments.

Manual work adds little value, so it’s best to remove as much of it as possible. This is especially true for releasing software, since automated tasks complete much faster and more consistently. Typically, businesses that use database release automation achieve higher release rates. These higher release rates enable faster product development, which gives them an edge over their competitors.

Ideally, software releases should happen at the touch of a button, which requires specially designed release software. This, just like any other piece of software, may be tested and maintained along with the primary business code. More importantly, it segues into the best way to mitigate release mismanagement: continuous delivery (CD).

CD makes all changes deployable to production. It requires automating the software development process. Remember, manual work doesn’t only apply to releasing software; it also includes manual testing. Replacing manual testing requires automated testing of all code changes and deployment automation to put the changes into production.
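The shape of such push-button automation can be sketched as an ordered list of steps with an automatic rollback. This is a simplified, hypothetical sketch (the step names and actions are invented), not a model of any particular release tool:

```python
def run_release(steps, rollback):
    """Run named release steps in order; on any failure, roll back the
    completed steps in reverse order, then re-raise for visibility."""
    completed = []
    try:
        for name, step in steps:
            step()
            completed.append(name)
    except Exception:
        for name in reversed(completed):
            rollback[name]()  # undo what already shipped
        raise

log = []

def failing_smoke_test():
    raise RuntimeError("smoke test failed")

steps = [
    ("migrate", lambda: log.append("migrate db")),
    ("deploy",  lambda: log.append("deploy code")),
    ("smoke",   failing_smoke_test),
]
rollback = {
    "migrate": lambda: log.append("revert migration"),
    "deploy":  lambda: log.append("redeploy previous version"),
}

try:
    run_release(steps, rollback)
except RuntimeError:
    print("release failed; rolled back")

print(log)  # ['migrate db', 'deploy code', 'redeploy previous version', 'revert migration']
```

Encoding the release as data like this is what makes it testable and repeatable: the same steps run the same way 10,000 times, which is exactly what humans can’t do.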

CD is a powerful practice that can transform your team and business, so don’t underestimate it.

Conclusion

There usually isn’t just one cause for a bad release (even though it may feel like it). There may be a single line of code or a person to blame for parts of the problem, but that analysis doesn’t go deep enough. Software development is a complex process. There are many compounding factors like technical debt, business conditions, and past engineering decisions. This melting pot can create the conditions for bad practices to be put in place.

However, these problematic practices can be exchanged for better ones — specifically, the successful practices that guide the DevOps movement.

  • First, replace Big Bang deployments with smaller batches.
  • Next, put key business telemetry on a deployment dashboard to immediately identify issues.
  • Finally, automate as many of the processes as possible.

This approach improves how teams build, develop, and release software. However, the changes shouldn’t stop there. Teams should work to continuously improve their processes. No single solution will completely eliminate bad releases, but using best practices will reduce their occurrence and business impacts.

How can you overcome release management challenges? Read more to find out.

Key Takeaways

  • Big Bang releases increase release risk; small, incremental releases with feature flags reduce this risk.
  • Lack of telemetry leaves teams blind to potential issues; business-level and technical telemetry is essential for visibility.
  • Manual processes slow down releases and increase errors; automation enhances speed and reliability.
  • Following DevOps best practices, including continuous delivery, minimizes bad releases and their impact on the business.

What’s Up Ahead? Anticipated 2018 DevOps Trends

At the end of 2017, we got in touch with those who live and breathe DevOps—DevOps engineers, evangelists, and leads—to get their insights about the DevOps trends most likely to develop in 2018 and beyond.

We’ve collected their insights and organized them by topic for your reading pleasure. Here’s how it breaks down:

Enterprise Adoption of DevOps

DevOps becomes the norm, from day one

“Mature global customers I work with have shifted from ‘should we try DevOps’ to ‘every feature team should have at least one DevOps expert!’ While this mindset still isn’t perfect, it shows major change and demonstrates how software engineering is improving. Over the next 2-3 years, I expect my customers to run projects with questions like ‘should we plan 10 or 20 percent of capacity for continuous improvement?’ instead of ‘should we do automated testing?’”


Uldis Karlovs-Karlovskis, Nordics DevOps Lead @ Accenture

Bigger companies are struggling to adopt new norms

“There are some big companies out there, making billions in annual revenue, that haven’t adopted DevOps practices. More and more such companies, which have ‘old ops’ views on application development/delivery, are starting to look in the direction of adopting DevOps practices. Even if company leads and management understand that trend, it’s going to take a lot of resources, energy, and time to convince their old-fashioned colleagues to implement… Big ships are harder and slower to steer in the right direction.”


Dmitry Mihailov, DevOps Engineer @ Accenture

DevOps is only just beginning to be understood

“In the past years, we’ve just scratched the surface of the DevOps world; we’ve only started learning and moulding new tools. In 2018, the word ‘DevOps’ will carry much more weight and it will be a very important piece of the technological industry, showing its effectiveness and the true power of combining development and operations.”


Jalil Bonilla Gasca, DevOps Team Lead @ IBM

Achieving DevOps success will require a mindset shift

“In order to be truly successful at DevOps in 2018, one must be open-minded to a new culture that stimulates people to get better constantly by providing meaningful learning moments and challenging opportunities where failing fast is accepted. This culture should be promoted at all levels within an organization. Only then can true success be achieved.”


Kim Uyttebrouck, DevOps Engineer @ IBM

DevOps foundations a key part of enterprise success roadmap

“As digital transformation becomes a key success factor for many organizations, it’s imperative that they have a strong foundation of DevOps built into the roadmap for this journey. Faster delivery of IT services ties into top priority initiatives for 2018, and more organizations are now breathing DOpAir (DevOps Air). I’m seeing C-level execs increasingly ask for strategic initiatives to transform their delivery pipelines to support their digital strategy, which is a testament to DevOps winds in motion.”


Fawwaz Sayed, MEA DevOps Practice Leader @ IBM

The year of agility

“2018 will be the year where the real value of being Agile becomes mainstream. There is a growing realization that in order to amplify responsiveness, operational resilience, and faster time-to-market throughout the software delivery lifecycle, you need to synergistically/holistically connect development with IT operations (mainly through the introduction of automation).”


Russell Pannone, Senior Business Consultant/Agile-DevOps Guide @ Tata Consultancy Services

Automation is the name of the game

“In DevOps, there’s a lot of talk about automation of deployment, testing, release, monitoring and feedback. If made possible, zero-touch automation is what the future is going to be. Application of automation in all stages is the key and this is the main goal to strive for in 2018.”


Jayaprakash Jayabalan, DevOps Advisory Consultant @ IBM

Methods will continue to improve

“The DevOps movement will continue to improve. Many companies will embrace DevOps and agile development as their mainstream methodologies. For those who are already using DevOps, they will continue to improve their methods and try to standardize simpler CI/CD pipelines and tools.”


Noor Fairoza, DevOps Engineer @ IBM


The DevOps momentum — bigger than Agile

“I see substantial change in how people perceive the concept of DevOps and how it can change the way they develop and maintain systems. What I hear from my customers is that we are in the midst of the DevOps momentum, and it will be much more significant than the Agile concept was on its own; it is a complete game-changer.”


Bartosz Chrabski, World Wide Hybrid Cloud Competitive Sales Team – DevOps and BlueMix Leader @ IBM

Testing: The Next Frontier

The dawning of DevTestOps

“We have done a lot of Dev as well as Ops, but where are we heading from the testing perspective? There has been a lot of development of automated testing frameworks, but testing hasn’t had the same importance as Dev and Ops have. DevTestOps might be the biggest shift in the DevOps community in 2018.”


Hemanth Babu, Senior Software Engineer @ Visa

Testing at the core of DevOps

“From 2018 onwards, more and more enterprises will shift left with hypothesis testing, dark launches, canary releases, and experimental flags to predict user behavior. The rise of false information—in the form of crawlers, bots, AI, and machine learning algorithms—will require organizations to be cautious and daring at the same time; QA folks will become pioneers, rather than gatekeepers.”


Aditya Chourasiya, DevOps Technical QA Lead @ Tata Consultancy Services

Security

Security will become central earlier in the process

“The current trend is toward embedding security and infrastructure considerations earlier in the timeline: into coding, architecture, and pre-production systems. Moving forward, DevSecOps and DevNetOps will be the DevOps trends to watch in 2018.”


Abhishek Roy, DevOps Engineer @ Accenture

Cyberattacks will force a shift in DevOps focus

“DevOps as a corporate culture is still in its maturing phase. As cyberattacks increase and we see a move to much more subtle, stealthy threats, security practices across all phases of the SDLC will rapidly grow.”


Oren Ashkenazy, DevOps Expert @ CyberArk

Security automation is next

“When companies started to implement DevOps, they started with deployment automation in low-risk environments, then they moved to CI practices; after that, full CI/CD was the goal. Last year, the topic was testing automation. I think this year organizations are going to start moving into UAT testing and security automation.”


Guillermo Martinez, Technology Architect Consultant @ Accenture

Secure deployments are key

“In 2018, we will talk a lot about DevSecOps: a new focus on security for DevOps programs that ensures fully secure development projects for professional customers.

The question of security in the digital world is very important, and we must be ready to ensure fully secure deployments for tomorrow’s projects and new technology models like IoT, AI, database storage, and web apps.”


Slim Triki, Platform Engineer – DevOps @ AXA

DevOps Trends and Organizational Shifts

The demands are changing for IT skills

“The remarkable increase in demand for DevOps expertise has led to the reshaping and broadening of the profile and skill set of the modern IT professional, now spanning an ever-growing set of diverse sub-specializations. It is clear to me that both these DevOps trends will accelerate over the upcoming years.”


Salvador Verde Rodríguez, Sr. DevOps Engineer @ Accenture Germany

The dawning of the Delivery Trinity

“In 2018, we will see the meteoric rise of the ‘Delivery Trinity’. This dream team will consist of three primary roles: the ‘Anchor Developer’, the ‘Site Reliability Engineer (SRE)’, and the ‘End User’ (with the Product Owner as proxy).

The End User/Product Owner drives the feature content, the Anchor Developer implements, tests, and deploys, and the SRE relentlessly ensures the deployed features exploit the operational environment so that service characteristics meet End User expectations. All other squad members (designers, testers, analysts, architects) will decorate that core team of three.”


Andrea Crawford, DevOps & Cloud Native Development @ IBM

Startups will adopt DevOps from the get-go

“Based on my experience over the last few years working closely with different customers, from startups to big enterprises, I think that 2018 will see the DevOps methodology widely adopted by startups. In fact, startups that do not follow DevOps practices will not succeed as expected, or they will simply fail.”


Victor Varza, DevOps Engineer @ IBM

New Technologies and Tools

Increasing efficiency and productivity

“2018 will be the year of continuing progress in DevOps tooling, such as Kubernetes, the cloud-native space, and serverless technologies and ideals, bringing greater advances in usability and usefulness within the DevOps space.”


Lee Clench, DevOps Engineer @ Capgemini

Improvements throughout the pipeline

“2018 will see more utilization of available DevOps tools, reducing the rework process or resulting in zero rework. I’d like to see a customizable tool for creating a trend in % format from continuous feedback metrics, which can be published across the team to identify critical checkpoints. This will assist in improving tracking and archiving trends across all releases, thus improving the quality of a product. I also expect we’ll see a more stable Docker swarm.”


Albert Sebastian, Sr. Analyst, DevOps @ Accenture

New buzzwords in the field

“PaaS and SaaS will become keywords for the DevOps world, even more so than they have been until now.

Docker and Docker-like structures will dominate the markets, as microservices become easier to configure and develop on.”


Flavio D’Uffizi, DevOps Engineer and Systems Administrator @ Sourcesense

Microservices, containers, and the DevOps between them

“Microservices are currently the hottest topic in software development, and many companies have embraced the microservices approach when developing their products.

A change in thinking is needed when it comes to adopting best practices and developing a workflow that fits microservices and all of their aspects.

In order to adopt the microservices approach, it isn’t enough for developers to step forward and drive this transition; DevOps teams will also need to change their approach and state of mind, and equip themselves with the right technologies, tools, techniques, and processes for the success of the whole product.”


Shahar Rajuan, DevOps Leader @ mPrest

Containers and containers and containers. Oh my!

“I feel containerization is a very hot trend in the current market. Organizations moving to hybrid cloud, instead of relying on a single public cloud, is another trend I’ve been watching closely these days.”


Avinash Reddy, DevOps/Site Reliability Engineer (AWS) @ Tata Consultancy Services

The cloud and open source lead the way

“It seems that the mainstream is currently trending toward open source technology, and there is a great migration to the cloud; offloading the infrastructure burden makes it easy and reachable. Many security matters will arise, yet the flexibility is very welcome.”


Ilya Gurenko, CM/DevOps @ Varonis

Measurement tools and higher adoption rates

“In 2018, we will see an increased focus on using data to analyse the results of DevOps. We might even see vendors move into the DevOps dashboarding space. Early in the year, Forsgren and Humble will be publishing a book called ‘Accelerate’, discussing measurements in DevOps.

I also expect more organizations to adopt cloud vendors’ native products, like Amazon’s DevOps pipeline tools, and a further consolidation of vendors on the one side, with new tools becoming prominent on the other.”


Mirco Hering, APAC Agile & DevOps Lead @ Accenture

What are your predictions and insights for 2018? Comment below to take part in the discussion!