In retail, supply chain efficiency is essential. Inventory management, picking, packing and shipping are all time- and resource-intensive processes that can have a dramatic impact on a business's bottom line.
The problem is that these are complex processes, particularly in large-scale operations covering multiple outlets and territories. The fact that they often depend on outside forces – suppliers, service providers and even the weather – makes getting it right even more difficult.
This is why retailers – both big and, increasingly, smaller operations too – are keen adopters of Big Data-driven analytics technology. Creating efficiencies in complex systems which involve multiple, often compartmentalized processes is an area where this technology excels. In short, it’s about the ability of machines to make lots of little savings and efficiencies, which together add up to very large ones.
Monte Zweben, CEO of Splice Machine, which provides predictive systems for industry, talked me through three key areas where retailers are increasingly turning to data-driven analytics to drive efficiencies in their supply chains. We also talked about why this approach is going to become increasingly important for businesses in all sectors that want to stay ahead of the pack and foster innovation.
Filling your customers’ needs more quickly
Today's Internet of Things means that almost everything is connected and capable of collecting and sharing data on how it is operating. This means that everything can be measured and – through the use of advanced analytics tools such as machine learning – rigorously interrogated until it gives up all its secrets on how it works and, crucially, how it interacts with every other part of an operation.
All of that data can be collected on inventory – its origins, its transit routes, and the times when it is scanned or when its location and status are reported by RFID (Radio Frequency Identification) tags.
“So, now you can build a machine learning model,” Zweben says, “and that model could make a prediction about any aspect of the operation based on the data it’s got.
“What’s the likelihood you’re not going to be late with this order? What’s the likelihood you’ll be a day late? Five days? It’s basically a classification problem.”
This means that in-depth simulations can be run, allowing the implications and knock-on effects of lateness or missed deadlines to be assessed before they become an issue, even if they can't be entirely eliminated because of the reliance on external forces. Where that is the case, remedial action can be taken before customers are inconvenienced – and customers are far more likely to appreciate an apologetic email warning of a probable delay than to be kept waiting without explanation.
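To make the idea concrete, here is a minimal sketch of the kind of lateness classifier Zweben describes, built with scikit-learn on synthetic shipment data. The features (distance, RFID scan events, supplier reliability, weather severity) and the three lateness classes are illustrative assumptions, not details of Splice Machine's actual system.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical per-shipment features: transit distance, number of RFID
# scan events logged, the supplier's historical on-time rate, and a
# crude weather-severity score.
X = np.column_stack([
    rng.uniform(50, 3_000, n),   # distance_km
    rng.integers(2, 20, n),      # scan_events
    rng.uniform(0.6, 1.0, n),    # supplier_on_time_rate
    rng.uniform(0.0, 1.0, n),    # weather_severity
])

# Synthetic labels: 0 = on time, 1 = about a day late, 2 = five-plus days late.
risk = 0.001 * X[:, 0] + 2.0 * X[:, 3] - 2.0 * X[:, 2] + rng.normal(0, 0.5, n)
y = np.digitize(risk, bins=[1.0, 2.5])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The class probabilities answer Zweben's questions directly: what is
# the likelihood this order is on time, a day late, five days late?
print(model.predict_proba(X_test[:3]).round(2))
print("holdout accuracy:", round(model.score(X_test, y_test), 3))

The point is that "how late will this order be?" becomes an ordinary classification problem once every shipment leaves a data trail.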
Reducing downtime due to faults and breakage
Technology will always go wrong, wear out or run down – and this is especially true of industrial applications that rely on complex machinery with moving parts carrying out specialized tasks.
“If you can build machine learning models that predict the mean time between failure of parts in large scale engineered networks, and learn the true lead time of replacing those parts, you can get a real-time dashboard of what you should be buying, based on those predictions of what’s going to break, and how long it’s going to take to replace it,” Zweben tells me.
This kind of predictive maintenance was pioneered in heavy industry, where downtime can have catastrophic cost implications. In supply chain logistics, it is starting to be used in technology-driven "picking and packing" operations as well as across fleets of trucks and ships.
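As a back-of-the-envelope illustration of the dashboard Zweben outlines – with made-up part names, failure histories and lead times rather than anything learned from real sensor data – the sketch below estimates mean time between failures (MTBF) and replacement lead time per part, then flags parts whose predicted failure falls inside the lead-time window:

from dataclasses import dataclass

@dataclass
class PartHistory:
    name: str
    failure_gaps: list[float]    # days between successive failures
    lead_times: list[float]      # days each past replacement took to arrive
    days_since_last_failure: float

def reorder_report(parts: list[PartHistory]) -> None:
    for p in parts:
        mtbf = sum(p.failure_gaps) / len(p.failure_gaps)
        lead = sum(p.lead_times) / len(p.lead_times)
        # Expected days until the next failure under a simple MTBF model.
        days_left = mtbf - p.days_since_last_failure
        if days_left <= lead:
            print(f"ORDER NOW  {p.name}: ~{days_left:.0f} days to predicted "
                  f"failure, but replacement takes ~{lead:.0f} days")
        else:
            print(f"ok         {p.name}: ~{days_left - lead:.0f} days of slack")

reorder_report([
    PartHistory("conveyor belt motor", [90, 110, 100], [21, 25], 85),
    PartHistory("sorter arm bearing", [200, 180], [10, 12], 40),
])

A production system would replace the naive MTBF average with a learned survival model, but the ordering logic – compare predicted time to failure against replacement lead time – stays the same.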
Cutting shrinkage and maximizing stock
In retail, not all stock that comes in will end up being sold to customers – it's a given that a certain amount will be lost to damage, inventory mismanagement, stocktaking errors, fraud and theft.
In a supply chain operation being effectively monitored for the purposes of data-driven predictive analytics, there are a multitude of opportunities to reduce – and perhaps in some areas eliminate – this “shrinkage.”
“If you’re constantly ordering and there’s a particular situation where you order a hundred units of something, it’s highly likely there’s damaged product.
“Only 90 units get delivered and you have to call your supplier and say you need another 10 units – you can ‘learn’ this shrinkage through your supply line. This means you can predict how much you really need to order,” Zweben tells me.
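Zweben's hundred-units example boils down to a simple estimator: learn the fraction of ordered stock that actually survives the supply line, then inflate future orders so the expected delivery still meets your needs. A minimal sketch, with illustrative figures:

import math

ordered   = [100, 100, 50, 200]   # units ordered from the supplier
delivered = [ 90,  92, 46, 182]   # units that arrived in sellable condition

# Observed survival rate across the order history (here roughly 0.91).
survival_rate = sum(delivered) / sum(ordered)

def units_to_order(units_needed: int) -> int:
    """Order enough that, after expected shrinkage, needs are still met."""
    return math.ceil(units_needed / survival_rate)

# Needing 100 sellable units, order ~110 up front instead of discovering
# the shortfall later and phoning the supplier for 10 more.
print(units_to_order(100))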
The future
Going forward, Big Data-driven techniques such as machine learning will play an increasingly prominent role in supply chain optimization. Today they remain largely the domain of big national and international networks, given the volumes of up-to-the-minute data required and the cost and complexity of pulling it all together.
Increasingly, however, infrastructure provided "as-a-service" and out-of-the-box analytical platforms, combined with new markets for purchasing external data, will open it up to smaller-scale operations. This means that an organization's appetite for innovation, rather than the size of its analytics budget, will increasingly determine which businesses can leverage this game-changing technology and which cannot.
Bernard Marr is a best-selling author & keynote speaker on business, technology and big data. His new book is Data Strategy. To read his future posts simply join his network here.