DataOps: Analytics Revolution or Operational Burden?


2019 will bring a shift towards DataOps, with new tools empowering business users to put data to work in exciting new ways. The key will be to get the right balance of access, control and cost.

In 2018, DataOps made its debut appearance in Gartner’s data management Hype Cycle as an ‘innovation trigger’. In its December Innovation Insight paper, Gartner went on to describe DataOps as a way to “drive organizational change and predictability for using data without massive investment”.

If you trust the analysts, it looks like 2019 is going to be the year for DataOps.

I recently explored the rise of DataOps in my Big Data London recap, where it also cropped up as a key trend for 2019. But what does it mean in practical terms for organisations keen to implement it?

Data has more value when everyone can access it

Firstly, it’s useful to clarify that, while the term ‘DataOps’ suggests an extension of DevOps, it’s really its own separate evolution. Think of it more as a ‘spiritual successor’ – organisations are applying the principles of DevOps, such as agile techniques and continuous delivery, to drive transformation in the way they store, process and use their data.

As the technologies and techniques of DevOps become more firmly established in businesses – and familiar to their leaders – organisations are starting to look for ways to extend their value. A recent survey found that 73% of IT and data professionals plan to invest in DataOps in 2019 through new resources and hires.

That's because as awareness of data's importance grows, so does awareness of its versatility. Data has value far beyond supporting planning and reporting in individual business units, and many businesses are starting to understand the strategic potential of making it available organisation-wide.

When business users have access to all of the organisation’s data, and the right tools to work with it, they can build a better understanding of what’s going on now and what the organisation could be doing next.

Data lakes make data widely accessible

That might sound like a nightmare for IT. But in our experience, far from adding to IT’s burden, DataOps is a key opportunity to take the pressure off specialist resources.

In a DataOps world, business users and analysts are empowered with technologies like Power BI, Tableau and Python to experiment with structured and unstructured data, and uncover new insights that can drive the business forwards.

When data is made available to the business by means of a data lake and a catalogue of what the lake contains, users can prototype and put it to work: interrogating the data, visualising it, or developing models and running algorithms. They no longer have to rely on IT to do this work for them.
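To make that concrete, here's a minimal Python sketch of the self-service pattern – the path, dataset and column names are all hypothetical, and it assumes the lake exposes curated data as Parquet files that pandas can read (via pyarrow, with a driver such as adlfs if the data sits in cloud storage):

```python
import pandas as pd

# Hypothetical path to a curated dataset, discovered via the lake's catalogue.
# A cloud URL (e.g. abfss://...) works the same way if the relevant
# fsspec storage driver is installed.
PATH = "curated/meter_readings.parquet"

# Load the data directly – no ticket to IT required
readings = pd.read_parquet(PATH)
readings["reading_date"] = pd.to_datetime(readings["reading_date"])

# Interrogate it: total daily consumption per region (columns are illustrative)
daily = (
    readings.groupby([pd.Grouper(key="reading_date", freq="D"), "region"])["litres"]
    .sum()
    .unstack("region")
)

# Visualise it with a quick line chart (requires matplotlib)
daily.plot(title="Daily water consumption by region")
```

The point isn't the specific libraries – a user might do the same in Power BI or Tableau – it's that the catalogue tells them where the data lives and the lake lets them query it directly.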

For example, at one of our customers, a UK water utility, team members had no way to use data for reporting and data discovery without relying on specialists. We've worked with them to build a data lake-based platform with reusable templates and patterns that enable faster delivery. The increased pace has allowed business analysts to provide insights more rapidly, in turn improving decision-making across the organisation.

Data lakes won’t replace IT-intensive tools like data warehouses – many critical processes rely on data being stored and reported on from tightly controlled sources. But for widespread access to data, data lakes really come into their own.

Governance, cost control and forward planning will be critical

A DataOps approach can support agility and innovation – but it's vital that businesses don't let things get out of hand in their quest to build a more sandbox-like data environment.

Strong governance is still essential: business users must still comply with company data policy and external regulations such as GDPR. There's no consensus yet on where this responsibility lies – it will likely sit with the Chief Data Officer, but it's up to each business to decide exactly what its governance framework and policies are, and who's responsible for them.
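Whoever owns the framework, some controls can be baked into the pipeline itself. One common pattern – sketched below with hypothetical column names, not a prescription – is to pseudonymise personal data before it ever lands in the shared lake:

```python
import hashlib
import pandas as pd

# Columns your policy classifies as personal data (illustrative list)
PII_COLUMNS = ["customer_name", "email", "phone"]

def pseudonymise(df: pd.DataFrame, salt: str) -> pd.DataFrame:
    """Replace PII values with salted SHA-256 digests before landing in the lake."""
    out = df.copy()
    for col in PII_COLUMNS:
        if col in out.columns:
            out[col] = out[col].astype(str).map(
                lambda v: hashlib.sha256((salt + v).encode()).hexdigest()
            )
    return out
```

Business users can still join and count on the hashed values, but the raw identifiers never reach the sandbox.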

Cost control is another essential. Like DevOps, DataOps has become synonymous with cloud, but compute requirements can quickly get expensive without careful monitoring and management. With a data lake, multiple people across the organisation may be working with the same data at the same time – racking up compute costs that can go unnoticed until it’s too late.
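As a sketch of the kind of monitoring that helps – assuming your cloud provider's billing export is available as a CSV, that resources are tagged by team, and with wholly illustrative column names and budget figures:

```python
import pandas as pd

# Hypothetical billing export from the cloud provider's cost tooling
billing = pd.read_csv("billing_export.csv", parse_dates=["usage_date"])

DAILY_BUDGET = 500.0  # illustrative per-team daily compute budget in GBP

# Aggregate spend per team per day (assumes resources carry a 'team' tag)
spend = billing.groupby(["team", pd.Grouper(key="usage_date", freq="D")])["cost"].sum()

# Flag teams over budget before the monthly bill arrives
for (team, day), cost in spend[spend > DAILY_BUDGET].items():
    print(f"ALERT: {team} spent £{cost:,.2f} on {day:%Y-%m-%d} (budget £{DAILY_BUDGET:,.2f})")
```

A daily check like this, or the equivalent alerting built into your cloud platform's own cost management tools, turns surprise bills into routine signals.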

Finally, a successful DataOps approach means always keeping an eye to the future. Business needs and current technologies evolve continuously, so the DataOps environment must evolve too – otherwise you risk falling behind and having to make major changes again in a few years.

A managed service can help

Getting the full value from DataOps means finding the right approach for your business. And that doesn’t always mean doing everything in-house. With a managed service from an experienced third party, you can rely on external experts to help with things like building and maintaining your data lake, and monitoring and optimising your cloud compute costs – while you focus on building your DataOps specialisms internally.

If you’d like more guidance on what DataOps could look like for your organisation, Adatis offers free half-day workshops to help you focus on your business needs and which models and solutions suit you best. We are also growing our specialist Managed Services team. If you’re interested in either, don’t hesitate to email me at dan.perrin@adatis.co.uk.