Source – scientificamerican.com
Every day we read of some new area where artificial intelligence has matched or exceeded the proficiency of human experts on some well-defined task. Beyond the well-publicized successes of go- and poker-playing AI agents, machines have shown superiority in complex real-world tasks like interpreting x-ray images and assessing the cancer risk of dermatological lesions. The combination of cheap computing power and memory, connected devices, abundant data and advances in algorithm design creates such successes, which then attract further attention and investment, driving further progress. Many speculate that machines will replace humans in a wide range of roles and that we will need to reinvent the nature of work itself as a result.
Yet a newly published report by MIT Sloan Management Review and The Boston Consulting Group shows there is an enormous gap between these expectations and the current reality for most organizations: Whereas 85 percent of the 3,000 executives polled expect AI to deliver a competitive advantage within five years, only 5 percent engage in substantial AI-centric activities and only 20 percent use any AI at all. How can we reconcile this gap between potential and reality?
The pattern is typical for an emerging technology: Expectations run ahead of reality and align only later, as they are met, exceeded or disappointed. This bubble effect may even serve a social purpose, lowering the cost of capital for uncertain emerging technologies and thereby speeding their diffusion; still, the MIT/BCG report cites several specific factors that explain the current gap between knowing and doing.
There are huge differences in experience and understanding between pioneer companies and the rest, even within individual sectors like insurance. Most companies underestimate the importance of rich and diverse data sets for training algorithms, and especially the value of “negative data” associated with failed attempts to execute a task. Talent shortages and unequal access to data engineers and AI experts compound matters. Privacy and other regulations, as well as consumer mistrust, also temper progress.
Whereas such barriers can be expected to diminish over time, there are also subtler barriers to AI’s adoption that will need to be overcome to unlock its full potential. Algorithmic prowess is often deployed locally, on discrete tasks, but improved learning and execution for one step of a process does not usually improve the effectiveness of the entire process. And improving the effectiveness of a single process does not necessarily translate into overall organizational effectiveness. Although this may not hold for a few operations that can be easily disentangled from others, such as loan approval, most business activities are embedded in larger processes and systems.
Will automating the classification of dermatological lesions, or any number of similar applications, improve the efficiency or effectiveness of the health care system, with its complex organizational silos, constraining regulations and misaligned incentives? Doing so will depend on organizational and institutional reengineering, not just the optimization of specialized inference. To argue from precedent: technological advances like the electric motor did not result in overall productivity improvements until factory layouts were changed to exploit them fully. What, then, is the nature and scope of the organizational intelligence required to realize the full potential of artificial intelligence?
Key to a reconceptualization of collective intelligence in the post-AI age is the design of the interface between humans and machines so as to leverage the comparative advantages of both. Humans hold the advantage in areas such as solving problems with little or no data, switching levels of abstraction, and learning through emulation and empathic attunement rather than through simulation and systematic inference.
Designing the right task boundaries and the human–machine interfaces that delineate them will be critical to achieving synergies between brains and machines. Doing so will require a far more precise understanding of what machines and humans can each do uniquely. Current rules of thumb—“machine learning can do whatever it takes humans less than a second to do”; “machines can be used for prediction, humans for judgment”; “machines make calculations, humans produce interpretations”—are both simplistic and factually wrong.
Machines can outperform humans on tasks that require hours or even days, like chess and go matches. Humans can outperform machines on emotional judgments made in less than 800 milliseconds. “Judgment” can be modeled as a series of predictions that can, in principle, be subcontracted to machines. Prediction, as Karl Friston argues in Cell, is what most brains can be inferred to be doing most of the time: modulating synaptic gains to minimize informational free energy. And deep-learning networks can generate sophisticated visual and textual interpretations, even as humans outperform machines in the calculations required to balance trays of teacups optimally while navigating a crowded café.
AI likely holds much promise for organizations, but we must first get far more precise about what that promise is in order to understand how AI can be deployed to fulfill it. Doing so requires that we think through both human and artificial intelligence from foundational principles rather than from the empirics of past data, not least because we are entering an age in which the future will likely not resemble the past. And doing that will certainly require advances in our understanding of organizational intelligence as much as in algorithmic efficiency.