Source: cmswire.com
Organizations deploying artificial intelligence (AI) in the enterprise should start with a small use case that solves a specific business problem and ties back to the organization’s core values, according to a Google AI executive.
Tracy Frey, director of strategy for Google Cloud AI, shared these thoughts with the crowd at the MIT Technology Review’s EmTech conference last week at the Massachusetts Institute of Technology (MIT) in Cambridge, Mass.
“What I tell companies, and what I think is really important about this space, is that the most important thing is to start with a business problem,” Frey said. “Identify what the problem is that you’re trying to solve.”
You’re Google, You Tell Us
Frey gave attendees an inside look at how the search giant is living up to its promise to be an AI-first company. But she also discussed problems she sees with organizations that want to leverage AI in the enterprise. Namely, the trouble begins at the starting gate: too often, companies set off in the wrong direction.
“There’s an extraordinary amount of hype about AI in enterprises around the world,” Frey said. “And a lot of the experience that we have in Google Cloud AI is that companies come to us and they say, ‘We really, really, really want AI.’ And we say, ‘Great, we would love to help you. Tell us what problem you’re trying to solve, so that we know what products we can help you deploy.’ And usually the next thing that companies say is, ‘I don’t know. You’re Google. You tell us what we should be doing.’”
Do You Know Your Organization’s Core Values?
Naturally, pinning an entire project’s direction on a vendor is not healthy. AI projects should begin with knowing your organization’s core values and “cultural pillars,” according to Frey. Spend time identifying those, and understand how you want your company to operate.
“Because if you don’t start there, then if you start deploying things like AI and new technologies, you run the risk of everything being called into question,” Frey said. “Build your own principles, or whatever process speaks to you and feels like the right thing for your organization, and then identify one or a set of business problems. And start working with how AI can solve those business problems.”
Talent, Change Management
Frey recognizes she’s fortunate to work at a company loaded with data scientists across the world, one that runs its own AI Residency Program. She also recognizes that deploying AI in the enterprise is not only about technology and strategy but also about talent and change management practices. Data scientists are out there, but it’s not exactly easy, or cheap, to get good ones through your front door. IBM predicted US demand for an additional 700,000 data scientists by 2020, but talented data scientists “remain hard to find and expensive,” according to a report from IDG.
“AI has been around for a long time, but for the most part, enterprises that have been able to adopt AI are doing so because they have the ability to hire in top talent,” Frey said. “They are going to be likely only working on things that are really unique and customized to them and built in house and completely proprietary.”
That’s partly why investing in AI is a “giant leap of faith.” It’s also “easy to underestimate the amount of change management that organizations should invest in when they are undertaking any AI project.” With so much still unknown in the space, organizations without a change-management program will see employees react with a wide range of feelings to AI becoming part of their day-to-day work.
AI Needs to Be Built on Trust
No discussion of AI comes without ethics. Google has its own AI Principles manifesto “because we fundamentally believe that you cannot have successful AI without being responsible and careful,” Frey said.
According to a Capgemini report, executives in nine out of 10 organizations believe ethical issues have arisen from the use of AI systems over the last two to three years. Examples include collecting patients’ personal data without consent in healthcare and relying on machine-led decisions without disclosure in banking and insurance.
Trust needs to be the foundation of any new technology, Frey said. Without it, there’s a “great risk of stopping progress in making this incredibly beneficial technology available.”
Playing AI Defense
No matter how organizations feel about AI’s potential to advance the enterprise, machine learning deployments can leave them vulnerable if they don’t play sound AI defense, according to Tim Grance, senior computer scientist at the National Institute of Standards and Technology (NIST).
NIST, a division of the US Department of Commerce, has its own take on AI technology development. Last month it released a plan for prioritizing federal agency engagement in the development of standards for AI. The plan recommends that the federal government “commit to deeper, consistent, long-term engagement” in activities to help the US “speed the pace of reliable, robust and trustworthy AI technology development.”
Organizations must be aware of the potential vulnerabilities AI exposes in their enterprise and adopt an “attack and defend” mentality. Recognize, though, that once you fix something, people are going to try something else, according to Grance. “If you’re betting the enterprise on some particular solution, especially around AI, you want to address those questions of: can people attack the data on which the system is training?” Grance said. “Can they attack our assumptions? Does it give us a real business advantage that we can maintain?”
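Grance’s question about attacks on training data is, at its core, a data-integrity problem. As a minimal illustration (not something presented at the conference), the Python sketch below shows one basic “defend” step: verifying a training dataset against a previously audited checksum before training begins, so that silent tampering, one vector for data poisoning, fails loudly instead of slipping into a model. The file path and expected hash here are hypothetical placeholders.

```python
# Illustrative sketch only: verify a training dataset has not been
# altered between audit time and training time. Paths and the expected
# hash are hypothetical placeholders, not values from the article.
import hashlib
from pathlib import Path


def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_training_data(path: Path, expected_hash: str) -> None:
    """Refuse to proceed with training if the dataset differs from the audited snapshot."""
    actual = sha256_of_file(path)
    if actual != expected_hash:
        raise RuntimeError(
            f"Training data at {path} does not match the audited snapshot; "
            "possible tampering or poisoning. Halt and investigate."
        )


if __name__ == "__main__":
    # Hypothetical invocation: the hash would come from a secured audit record.
    verify_training_data(Path("data/train.csv"), expected_hash="0" * 64)
```

A checksum gate like this obviously doesn’t address subtler poisoning (for example, adversarial examples injected upstream of the audit), but it makes the “can people attack the data?” question concrete: you can only detect tampering if you decided in advance what untampered data looks like.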
Grance cites high-quality data, the right people in place, a well-defined business problem and executive buy-in as key pillars in building a sound machine learning strategy.
“Everybody thinks about bias and can you protect the system so there are not some unintended side effects that would cause problems,” Grance said. “AI is just another cold-hearted, hard business decision you have to make. Is putting in this much worth it?”