On a recent commute to TechTarget’s Newton, Mass., office, I passed by an Apple Maps vehicle — a white SUV topped with cameras and spinning light detection and ranging equipment that collects vast amounts of data about streets all over the world.
I have a terrible sense of direction and rely on map apps more often than I’d care to admit — so much so that Google Maps knows my routine inside and out. It proactively updates me on the time it will take to get from home to work or lets me know that I’m only 10 minutes from a Target store. I don’t take for granted what a technological feat this level of personalization is; it requires massive amounts of data to be collected in real time, stored and validated before it can be put to use in a handy iPhone app.
Commanding lakes of data
Yet, with so many connected devices and widely available sensor technologies, access to data isn't the problem. Indeed, most companies are swimming in data. The real challenge is managing all that data and putting it to use. Once that hard part is solved, companies can improve their products, services and business operations by leaps and bounds.
As we learn in our October feature story, inexpensive sensors and internet of things connectivity capable of collecting big data have improved supply chain visibility. Manufacturers can see not only where their goods are, but also precisely when their goods will arrive — and that’s just the beginning.
In our cover story, we explain how big data systems have allowed vendors to make significant strides in software development. One example is iPass, which launched SmartConnect software about two years ago. The software uses algorithms to identify Wi-Fi access points and rank their performance so that mobile users can connect to the fastest, most reliable hotspots nearby. Previously, iPass could provide only static lists of hotspots. The software advances in SmartConnect were developed thanks to a data management system built around the Spark data processing platform, which crunches data in real time.
There are many other examples in this month’s issue of how big data systems underpin technology advances. But to make further strides, industry experts say data management systems must also evolve.
Managing data in the AI era
The demand for instant data access, whether by mobile applications or back-end machine learning systems, means data management systems must be agile. Once viewed as platforms for data controls, big data management systems also need to be viewed as delivery systems, and the data they deliver must be valid for the models to work. That requires data engineers to spend a significant portion of their time analyzing raw data before feeding it to machine learning algorithms. And that, too, is expected to change.
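To make the idea concrete, here is a minimal sketch of the kind of validation pass a data engineer might run before handing raw records to a machine learning pipeline. The function name, field names and thresholds are all hypothetical, chosen purely for illustration; real pipelines would use dedicated data-quality tooling.

```python
def validate_records(records, required_fields, ranges):
    """Return (clean, rejected) lists. A record is clean only if every
    required field is present and every ranged field falls within bounds."""
    clean, rejected = [], []
    for rec in records:
        missing = [f for f in required_fields if rec.get(f) is None]
        out_of_range = [
            f for f, (lo, hi) in ranges.items()
            if rec.get(f) is not None and not (lo <= rec[f] <= hi)
        ]
        if missing or out_of_range:
            rejected.append((rec, missing + out_of_range))
        else:
            clean.append(rec)
    return clean, rejected


raw = [
    {"sensor_id": "a1", "latency_ms": 42},
    {"sensor_id": "a2", "latency_ms": None},  # missing measurement
    {"sensor_id": "a3", "latency_ms": -7},    # physically impossible value
]
clean, rejected = validate_records(
    raw, ["sensor_id", "latency_ms"], {"latency_ms": (0, 10_000)}
)
# Only the first record survives; the other two are rejected with reasons.
```

Even a simple gate like this illustrates why validation eats so much engineering time: the rules are easy to write but must be maintained for every field, source and model the data feeds.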
The same data-hungry algorithms that will transform the data management game are being applied to more quickly find data in the first place. Smart data management systems that integrate machine learning are used to move data from management platforms to their destinations faster than ever.
As exciting as all that is, I'm fairly certain the mere thought of an automated, hands-off approach to data management is enough to give data management and governance professionals an all-out anxiety attack. With so much riding on data, companies need to ensure that the data they collect and analyze meets a defined level of quality and reliability before it can be trusted.
My prediction — based on instinct, not data — is that many companies will maintain their traditional data management practices for years to come, while making compromises for access and speed where it’s safe to do so. But like so many predictions these days, this one could be wrong.