Source – https://www.forbes.com/
Deep learning is the current darling of AI. Used by behemoths such as Microsoft, Google and Amazon, it leverages artificial neural networks that “learn” through exposure to immense amounts of data. By immense we mean internet-scale amounts — or billions of documents at a minimum.
If your project draws upon publicly available data, deep learning can be a valuable tool. The same is true if budget isn’t an issue.
But depending on your project, the data you need might be behind a wall, or there simply might not be billions of data points in your dataset. If this is the case, deep learning probably isn’t the solution you need, but you can still draw on machine learning to get results.
Non-Deep-Learning Solutions: High Value And High Efficacy
Let’s assume you work in the pharmaceutical industry. The data volumes in this domain are enormous but are often protected and difficult to get at en masse. It’s also an area with rigorous regulatory requirements that necessitate detailed content classification and auditable results. These factors make it a bad fit for a deep learning solution. But other machine learning approaches can still provide high-value outcomes.
In the pharmaceutical industry, everything is tracked and categorized, so an all-knowing deep learning model isn’t really needed. A more basic type of model (a maximum entropy, or MaxEnt, classifier, for example) is sufficient for matching content against a known taxonomy or identifying new patterns and trends in drug research data.
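To make this concrete, here is a minimal sketch of that kind of taxonomy matching, using scikit-learn’s logistic regression (the standard implementation of a MaxEnt classifier) over TF-IDF features. The taxonomy labels and training snippets are hypothetical placeholders; a production model would be trained on a much larger labeled corpus.

```python
# Minimal sketch: a MaxEnt (multinomial logistic regression) classifier
# that maps documents onto a known taxonomy. Labels and texts below are
# hypothetical placeholders, not real pharma data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; a real one would hold hundreds of
# thousands of labeled documents.
train_texts = [
    "Phase III trial results for the monoclonal antibody ...",
    "Updated wholesale acquisition cost for the oncology line ...",
    "Adverse event report: patient experienced mild nausea ...",
]
train_labels = ["clinical-trials", "pricing", "safety"]

# LogisticRegression over TF-IDF features is equivalent to a
# maximum entropy model for text classification.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

print(model.predict(["Price update for the antibody therapy"])[0])
```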
Why is this a better solution than deep learning? Unlike generalized deep learning models, these models are narrowly targeted, so they can be trained on far smaller datasets: hundreds of thousands or millions of documents rather than billions. They are easier and cheaper to build, and therefore much easier to maintain and update as new data becomes available. Beyond this, the sheer size and hardware demands of a deep learning solution make it the wrong hammer for many of the common problems you encounter in pharma.
A Case In Point: Compendia
Let’s look at the specific case of drug compendia in the pharmaceutical industry. Drug compendia, historically known as “price books,” are essentially summaries of drug information for a specific condition, shared with pharmacy retail chains, government databases, distributors and EHR databases. They outline which drugs insurers most prefer, which are approved for off-label uses, how drugs are priced, and which combinations of drugs do and don’t interact well together.
Compendia are watched incredibly closely because they significantly affect the revenues of pharmaceutical companies. If one drug moves up the list to become the favored drug of providers, that shift can mean millions of dollars in profit for the drug’s producer.
The challenge is that compendia are published frequently, and changes aren’t uncommon. This means a significant amount of human time is currently spent tracking changes to compendia and analyzing what these changes mean for a company’s bottom line. Given the number of recognized medical conditions and number of drugs available to treat them, staying on top of these changes is a massive chore for any pharma company.
However, a relatively simple ML differencing model can track and report on changes to compendia over time, significantly reducing the human cost and effort involved while improving the accuracy of the process. Sure, you could solve the problem with a deep learning model, but it wouldn’t be any more accurate and would be dramatically more expensive.
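The differencing step itself can be remarkably small. The sketch below assumes an upstream model has already parsed each compendium release into structured records; the field names (drug, tier, price) and the sample entries are hypothetical, and a real pipeline would feed far richer records into the same comparison logic.

```python
# Minimal sketch of differencing two compendium releases, assuming an
# upstream model has already extracted structured records. All field
# names and entries are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Entry:
    drug: str
    tier: int     # preference ranking; lower = more preferred
    price: float  # list price in USD

def diff_compendia(old: dict[str, Entry], new: dict[str, Entry]) -> list[str]:
    """Report drugs added, dropped, or changed between two releases."""
    changes = []
    for name in new.keys() - old.keys():
        changes.append(f"ADDED   {name}")
    for name in old.keys() - new.keys():
        changes.append(f"DROPPED {name}")
    for name in old.keys() & new.keys():
        o, n = old[name], new[name]
        if o.tier != n.tier:
            changes.append(f"TIER    {name}: {o.tier} -> {n.tier}")
        if o.price != n.price:
            changes.append(f"PRICE   {name}: {o.price} -> {n.price}")
    return sorted(changes)

jan = {"drugA": Entry("drugA", 2, 120.0), "drugB": Entry("drugB", 1, 85.0)}
feb = {"drugA": Entry("drugA", 1, 120.0), "drugC": Entry("drugC", 3, 40.0)}
print("\n".join(diff_compendia(jan, feb)))
```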
Another example of an ML solution in pharma that doesn’t require internet-scale deep learning comes from work that our company, Lexalytics, has done with Biogen Japan and its Medical Information Department (MID). In this instance, we configured Biogen’s core NLP to identify relevant conditions, ailments, drugs, issues, therapies, and other entities and products within its FAQ and other resources. We used Biogen’s data to train and deploy custom machine learning models into the underlying NLP; the resulting system now understands complex relationships among all of these entities. MID operators can type in keywords or exact questions and get back best-fit answers and related resources in just seconds. The system provides faster, more accurate customer service and reduces costs by minimizing the number of calls that must be escalated to costlier higher-ups within the organization.
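A minimal sketch of this kind of best-fit retrieval might use TF-IDF similarity over indexed FAQ questions, as below. This is an illustrative stand-in rather than the actual Lexalytics system, and the FAQ entries are invented.

```python
# Minimal sketch: best-fit answer retrieval over an FAQ via TF-IDF
# cosine similarity. An illustrative stand-in, not the production
# system; FAQ entries are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = [
    ("Can drug X be taken with anticoagulants?",
     "Avoid co-administration; see interaction guidance."),
    ("What are the storage requirements for drug X?",
     "Store at 2-8 C; do not freeze."),
]

# Index the FAQ questions once; queries are matched against this index.
questions = [q for q, _ in faq]
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
index = vectorizer.fit_transform(questions)

def best_fit_answer(query: str) -> str:
    scores = cosine_similarity(vectorizer.transform([query]), index)[0]
    return faq[scores.argmax()][1]

print(best_fit_answer("storage temperature for drug X"))
```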
Sometimes It Pays To Think Small
While deep learning is the technology du jour, it’s not always the right solution. Deep learning techniques require phenomenal investment and access to enormous amounts of data, neither of which is feasible for most business problems. But for targeted problems with smaller content volumes, less cutting-edge but long-established machine learning techniques, or the use of multiple small models, can and will improve business outcomes and, with them, bottom lines.