
Using DataOps to create business value from big data

Source: searchdatamanagement.techtarget.com

Data is not only “big,” it’s also unruly. It populates every pocket of the enterprise. Every information system, every cloud, is dripping with it. And not unlike Jed Clampett’s “bubbling crude (oil that is),” it takes a lot of machinery, a lot of refining, to make it useful.

It took the 2010s to build out the infrastructure of big data, the constellation of platforms and applications that store, tag, govern, manage and deliver it. In that regard, I see the past decade as a test lab focused on developing, implementing and integrating a large swath of heterogeneous solutions to streamline turning data into real business intelligence.

That period of testing has paid off. In 2020, we've not only turned the page on a new decade, we've turned the corner on making data the true currency of value creation. And the good news for every enterprise, whether large or small, brimming with IT staff or run by a hardy few, is that turning data into insights at speed and scale is now available to everyone. (And it doesn't cost a lot or take years to implement.)

The rise of DataOps

Two complementary developments have delivered this transformation. One is the evolution of an Agile mindset called DataOps: a framework for approaching, implementing and demanding more from data management solutions.

The other development is a series of technological breakthroughs, perhaps not obvious amid the sheer volume of data management solutions in the market, that make end-to-end data management not only possible, but safer, faster and more useful than ever before. 

First up is DataOps, which Gartner calls “a collaborative data management practice focused on improving the communication, integration, and automation of data flows between data managers and data consumers across an organization.” 

I can say from firsthand experience that the rise of DataOps, modeled after the success of DevOps (an Agile engineering framework for enterprise IT that streamlined application development, deployment and continuous improvement), is the result of reorienting data management around value creation. It's a get-to-value-first, get-to-value-fast philosophy that lets the enterprise fail or succeed quickly and rapidly build on what works.

A lot is riding on this shift. In Getting DataOps Right, O’Reilly’s authors summarized: 

“The necessity of DataOps has emerged as individuals in large traditional enterprises realize that they should be using all the data generated in their company as a strategic asset to make better decisions every day.” They concluded, “Just like the internet companies needed DevOps to provide a high-quality, consistent framework for feature development, enterprises need a high-quality, consistent framework for rapid data engineering and analytic development.”

For the purposes of this article, suffice it to say that a DataOps mentality, one that emphasizes cross-functional collaboration in data management, learning by doing, rapid deployment and building on what works, is beginning to sweep the enterprise, and early adopters have strong results to show for it.

While the rise of DataOps may prove to be the tip of the spear in 2020’s data management, the heft behind it, which is making it so effective, is a new generation of great technology.

DataOps technology drivers

Without the best tools, great teams can only go so far. We now have at our disposal a new generation of platforms and applications that 1) make all the data management solutions amassed by the enterprise over the last decade work better together and 2) offer a quick-to-implement, low-cost alternative for smaller enterprises looking to play and win the data management game at scale.

Here are three key technology drivers enabling DataOps excellence and the pursuit and attainment of rapid time to value:

Extensible platforms. Enterprise data can live anywhere: on premises, in the cloud and, as is often the case, across multiple clouds. For many enterprises, this data sprawl across siloed systems has seemed insurmountable. It isn't. Extensible platforms, which can easily pull data from myriad sources and align it in a metadata catalog, solve sprawl without requiring the construction of a data lake. That is a win for agility, for internet-native technology and for all enterprise users.

Augmented data catalog. Just as an extensible platform enables companies to leverage data regardless of where it resides, next-generation metadata catalogs, where data is easily accessed, tagged, annotated, enriched and shared, allow companies to orchestrate their current data management systems and turbocharge their performance. As Gartner urges companies to evolve from “storage-centric” to “streaming-centric” data management solutions to speed time to value, metadata catalogs, which greatly reduce administrative costs through machine learning, hold the key. Consider this the new cockpit for end-to-end data visibility and management.

Self-service. Drawing on the advances of extensible platforms and augmented metadata catalogs, today's data management systems provide a breakthrough capability once only dreamed of by most enterprises — true self-service data provisioning. In the past, analysts might have had to wait weeks or months for IT to find, pull and perform jujitsu on the required data sets to empower better decision making. Now, analysts, data scientists and business intelligence users can “shop” for the data they need at data marts, sparing valuable IT resources. As an added bonus, that data will be pre-commissioned, quality-checked, tokenized and enriched by collaborative DataOps efforts through the platform. Today, through self-service, the Amazonification of data is upon us, and everyone is invited.
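The three drivers above can be pictured with a toy sketch. All class and method names below are hypothetical, not any vendor's API: an extensible catalog registers datasets wherever they physically live, enriches the entries with tags, and lets analysts search and provision data for themselves instead of filing IT tickets.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetEntry:
    """One record in the metadata catalog (hypothetical schema)."""
    name: str
    location: str                      # e.g. a cloud URI or an on-prem path
    tags: set = field(default_factory=set)

class MetadataCatalog:
    """Toy catalog: register data from any source, tag it, self-serve it."""
    def __init__(self):
        self._entries = {}

    def register(self, name, location):
        # Extensibility: only metadata is stored here, so the data itself
        # can stay on premises or in any cloud.
        self._entries[name] = DatasetEntry(name, location)

    def tag(self, name, *tags):
        # Augmentation: enrich an entry with business context.
        self._entries[name].tags.update(tags)

    def search(self, tag):
        # Self-service: analysts "shop" by tag rather than waiting on IT.
        return [e for e in self._entries.values() if tag in e.tags]

catalog = MetadataCatalog()
catalog.register("orders", "s3://sales-bucket/orders.parquet")
catalog.register("crm_contacts", "onprem://crm/contacts")
catalog.tag("orders", "sales", "quality-checked")
catalog.tag("crm_contacts", "sales")

hits = catalog.search("sales")
print(sorted(e.name for e in hits))    # ['crm_contacts', 'orders']
```

Real augmented catalogs add machine learning for auto-tagging and lineage tracking; the point of the sketch is only the division of labor between registration, enrichment and self-service lookup.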

How to succeed with DataOps

The best part of these developments is that they are happening now. I am not describing a “future state.” Peer companies are taking advantage of these tools today as part of successful DataOps initiatives that are delivering business value as we speak, more quickly and at less cost than ever before.

One such success, recognizable to any business with a loyalty rewards program, is occurring at a regional financial services company. Leveraging an extensible data management platform, the firm integrated data that had been siloed across five lines of business and third-party sources to create a 360-degree view, or “golden record,” of its customers. It then made that data available through a metadata catalog, giving its data scientists self-service access that saved eight hours of data engineering work per day and delivered the ultimate prize: increased revenue through personalized, golden-record-driven sales.
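The core of that golden-record step can be sketched in a few lines. The data below is invented and the merge is deliberately naive (last write wins per field); production pipelines add entity resolution and survivorship rules on top of this basic fold.

```python
from collections import defaultdict

# Hypothetical extracts from three silos, already keyed by a shared customer ID.
cards = [{"cust_id": "C1", "name": "Ada King", "card_tier": "gold"}]
loans = [{"cust_id": "C1", "loan_balance": 12000},
         {"cust_id": "C2", "loan_balance": 3500}]
rewards = [{"cust_id": "C1", "points": 980}]

def build_golden_records(*sources):
    """Fold every silo's fields into one record per customer (last write wins)."""
    golden = defaultdict(dict)
    for source in sources:
        for row in source:
            golden[row["cust_id"]].update(row)
    return dict(golden)

records = build_golden_records(cards, loans, rewards)
print(records["C1"])
```

For customer C1, the result combines card tier, loan balance and rewards points into a single record, which is exactly the 360-degree view the firm then exposed through its catalog.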

Oh, and the firm started seeing great results in less than six months.

Democratizing data gains with DataOps

Data, measured in petabytes across the enterprise, the cloud and third-party sources, has the potential to be one of every company’s most valuable resources. The Agile approach of DataOps, turbocharged by a powerful new generation of data management platforms and tools, is flipping the script on who gets to benefit most from data. 

Big, small or anywhere in between, enterprises today have at their disposal the methodology and technology needed to tap data at scale, turn on their data pipelines and deliver their people and businesses a game-changing intelligence advantage.
