The Top 5 Blockers to Successfully Implementing DataOps in 2020

By John Welch on August 8, 2020

DataOps is a methodology and set of practices to improve the quality and speed of delivery for data and data analytics. It incorporates agile development methodologies, as well as approaches from DevOps, to improve the lifecycle of data initiatives, from preparation through making data available to end users for reporting and analysis. As organizations try to adopt these approaches, they often encounter challenges that slow or stop their progress toward a full DataOps approach. I’d like to share the most common DataOps implementation challenges that I have experienced and some thoughts on how to successfully remove these roadblocks.

Why Does DataOps Matter?

Data estates are growing more and more complex. At the same time, users are having their expectations set by the pace of change they see in software and applications. It’s not uncommon for the apps on your phone to be updated weekly or even daily. However, in many organizations, the processes around data haven’t kept up with the pace set on the application side. DataOps is a way to address that disparity.

The Blockers

As you read through the list below, you’ll notice that some of these issues could apply to any attempt to change process or culture. That’s because moving to DataOps is a change to process and culture, so it faces similar challenges.  

So, what are the impediments to applying DataOps to an organization, and how do we work around them?

#5—Bad Training

Bad training can take several forms. Sometimes, organizations don’t invest in enough training or, in some cases, any training at all. In other cases, there’s plenty of training, but it’s delivered at the wrong time or to the wrong people.

It’s important to provide a consistent level of understanding across the organization through shared training. This should set common goals, terminology, and general guidelines for how DataOps will be applied. With this level of training as a baseline, groups or individuals can do more targeted training for their role, while still understanding how it fits into the larger context. This also lets you tailor the targeted training for individual learning styles—classroom training might be appropriate for some, while others might want to crack open a book or learn by doing. 

Training also shouldn’t come to a halt after the initial stages. Reinforcement of the concepts and principles is important, even more so as you start using DataOps on real problems. The training can help promote better outcomes when you encounter challenges.

#4—Not Respecting the Differences Between Code and Data

I’m defining code here as “a set of instructions that can be provided as input to a program, and will produce a deterministic output.” For example, if I input the instructions to add 4 to 6 (4+6) into a calculator, I will get back a sum of 10 every time. At least, I will if the calculator isn’t broken or otherwise flawed—which is why we test.
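To make that concrete, here’s a minimal sketch of the calculator example as deterministic code with a test (the function is my own illustration, not from any particular library):

```python
def add(a: int, b: int) -> int:
    """Deterministic: the same inputs always produce the same output."""
    return a + b

# We still test, because the implementation (the "calculator")
# could be broken or otherwise flawed.
assert add(4, 6) == 10
```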

All code is data, but not all data is code. (At a very abstract level, all data is potentially code, but you have to have a program that understands the instructions that data represents.) The important thing with DataOps is to distinguish which parts of your data estate are code and which parts are data because those are handled differently.

Code is generally anything defining schema or structure (SQL Data Definition Language for relational databases, partition schemes for non-relational stores) or instructions for manipulating data (SQL Data Manipulation Language for relational databases, Python or R scripts for non-relational). The data is the information that resides in our data stores. Note that because we can have code that manipulates or creates data (think static data sets or code tables), the lines can get a little blurry.
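To make the distinction concrete, here’s a hypothetical sketch (the table and column names are my own). Both statements below are code and belong in source control; the rows that end up in the table are data:

```python
# DDL: defines schema/structure -- this is code.
CREATE_CUSTOMERS = """
CREATE TABLE customers (
    customer_id INT PRIMARY KEY,
    name        VARCHAR(100)
);
"""

# DML: instructions for manipulating data -- also code.
LOAD_CUSTOMERS = """
INSERT INTO customers (customer_id, name)
SELECT id, full_name
FROM staging_customers;
"""

# The rows these statements create and move are the data.
```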

We want to treat the things created by code (schema, structure, or static data) as a herd and the data as a pet. What does that mean? Well, if you have a herd of dairy cows, each cow is important, but you manage the herd and the cows are replaceable. In the same way, you can recreate the same database schema over and over from the appropriate code. Any individual instance of the database schema is replaceable.

However, a family pet gets individualized attention and is treated as a unique member of the family. Data is often irreplaceable, and you want to treat it as such. This would include backup and recovery strategies, and rollback / roll forward considerations for updates or changes.

These two approaches might seem mutually exclusive, but as long as they are applied appropriately, they can work well together. Use the herd approach for everything that falls into the code category. It should all be source controlled, and the product of that code should be reproducible easily, ideally in an automated way (see #2 below). The data needs more specialized attention and careful consideration for handling changes.
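As a sketch of the herd approach, here’s one way an automated rebuild might look, assuming your DDL scripts live in a source-controlled schema folder (I’m using sqlite3 purely as a stand-in for your actual database platform):

```python
import pathlib
import sqlite3  # stand-in for your actual database platform

def rebuild_schema(db_path: str, ddl_dir: str = "schema") -> None:
    """Recreate a database schema from versioned DDL scripts."""
    conn = sqlite3.connect(db_path)
    # Apply scripts in a deterministic order (e.g., 001_*.sql, 002_*.sql).
    for script in sorted(pathlib.Path(ddl_dir).glob("*.sql")):
        conn.executescript(script.read_text())
    conn.commit()
    conn.close()

# Any instance is replaceable -- rebuilding a disposable dev copy is trivial:
rebuild_schema("dev_copy.db")
```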

#3—Not Having Clear Objectives

“If you don’t know where you are going, any road will get you there.” – Lewis Carroll

I’ve often seen initiatives around DataOps (and many other types of improvement projects) with long mission statements and objective lists. More often than not, these can be summed up as “make it better.” That’s a start—but it’s no substitute for having objectives and metrics that clearly define what “make it better” means. 

The following are a few core metrics that I find useful (a sketch of how you might calculate the first two follows the list):

  • Cycle Time—How long does it take from the time your data engineers start working on a request to when you get it into the hands of the users?
  • Lead Time—Similar to cycle time but includes the time from when the user requests something to when the data engineers start working on it.
  • Availability—Is the availability of data improving over time? This is more than just measuring uptime on your data warehouse. It’s also measuring how long it takes to make refreshed or updated data available to users.
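Here’s a minimal sketch of measuring cycle time and lead time from request-tracking timestamps. The field names are hypothetical; map them to whatever your ticketing system actually records:

```python
from datetime import datetime

request = {
    "requested_at": datetime(2020, 7, 1),   # user asked for it
    "work_started": datetime(2020, 7, 6),   # engineer picked it up
    "delivered_at": datetime(2020, 7, 13),  # in the users' hands
}

cycle_time = request["delivered_at"] - request["work_started"]
lead_time = request["delivered_at"] - request["requested_at"]

print(f"Cycle time: {cycle_time.days} days")  # 7 days
print(f"Lead time: {lead_time.days} days")    # 12 days
```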

A small amount of time spent at the beginning agreeing on how success will be measured, and actually measuring the current state, can make a world of difference when implementing DataOps. It gives you a clear test for “better” or “worse.” It also helps filter and prioritize what you will focus on—if it doesn’t address the metrics established, should you be working on it?

#2—Not Automating Enough

Automating as much as possible is important to successful DataOps. Automation speeds things up, but just as importantly, it reduces risk and makes processes repeatable. Computers execute operations consistently, unlike people, who can make mistakes or miss steps. That’s not to say that there’s no need for humans—but you want them focused on making sure the automation works consistently, not on executing the operations themselves.

A lack of automation often surfaces in manual testing, manual deployment steps, manual monitoring of the deployed solutions, or updates being applied manually instead of automatically. These aren’t easy to automate—they will take time and effort. But it is possible, and there are tools available that can help.
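As a simple illustration, here’s a sketch of one automated gate: run the tests, and only deploy if they pass. The pytest and deploy.py commands are hypothetical placeholders for whatever tools your pipeline actually uses:

```python
import subprocess
import sys

def run(step, command):
    """Run one pipeline step, failing fast on any error."""
    print(f"--- {step} ---")
    result = subprocess.run(command)
    if result.returncode != 0:
        sys.exit(f"{step} failed; deployment aborted.")

run("Test", ["pytest", "tests/"])        # automated testing
run("Deploy", ["python", "deploy.py"])   # automated deployment
```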

Another benefit is that the more you automate, the easier it is to deploy, which takes me to our next point…

#1—Not Deploying Frequently

When talking about deploying, I’m referring to taking changes that are being made in development and making them available to users, whether that be all users or a subset of users (one way to stage a rollout to a subset is sketched after the list below). The less frequently you deploy, the longer each deployment will take. It will also be much more stressful and risky. Conversely, the more frequently you deploy, the less stressful and risky it will be. There are multiple reasons for this, including:

  • Smaller increments of change are less risky.
  • Frequent deployments force you to resolve ongoing problems.
  • Frequent deployments mean more opportunities to adjust your direction based on feedback.
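And here’s the promised sketch of deploying to a subset of users first: deterministically bucket users, route a small percentage to the new version, and widen the rollout as confidence grows. The dataset names and rollout mechanism are hypothetical:

```python
import hashlib

ROLLOUT_PERCENT = 10  # start small; raise it with each successful deploy

def use_new_version(user_id: str) -> bool:
    """Deterministically bucket users so each one gets a stable answer."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return bucket < ROLLOUT_PERCENT

# For example, point a user's queries at the new or old dataset:
dataset = "sales_v2" if use_new_version("user-42") else "sales_v1"
```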

Frequent deployments change deploying from a scary, epic event into “just part of the job.” It can be challenging to get there, but it truly changes the dynamic of delivering data solutions.

Successful DataOps Implementations

There can be many challenges to implementing DataOps, and this is not an all-inclusive list. However, the challenges can be overcome, and the results can be transformative for your organization. If your organization relies on data (and it should), the need to keep pace with your users’ expectations should encourage you to look closely at DataOps and how you can implement it. If you have tried to implement DataOps and the effort has stalled, I hope this article encourages you to resume it. Like all change, it requires work, but the end result of a good DataOps implementation can positively impact your organization in many ways.

John Welch

John Welch is the Chief Technology Officer at SentryOne, where he leads a team in the development of a suite of data and BI products that make monitoring, building, testing, and documenting data solutions faster and more efficient. John has been working with data, business intelligence, and data warehousing technologies since 2001. He was named a Microsoft Most Valuable Professional (MVP) each year from 2009 to 2016 for his commitment to sharing his knowledge with the IT community, and he is an SSAS Maestro. John is an experienced speaker, having presented at Professional Association for SQL Server (PASS) conferences, the Microsoft Business Intelligence Conference, Software Development West (SD West), the Software Management Conference (ASM/SM), SQLBits, and others. He has also contributed to multiple books on SQL Server and business intelligence.
