The potential — and hype — surrounding machine learning, artificial intelligence, and especially generative AI is everywhere. Some are predicting a full suite of “this changes everything” advances in all industries, for all professions, and for people in their public and private lives. This technology is unmatched at recognizing patterns in data, and its proponents argue it has the potential to be an enormous research laboratory that never stops working, a paradigm-buster that unlocks human creativity, an accelerator for human ingenuity, and a window into reality that is currently beyond reach. Sundar Pichai of Google likens its potential to fire and electricity.
I too am genuinely excited, if somewhat more reservedly. Today, AI offers opportunities to improve productivity, which has remained flat for a long time, and to tackle heretofore intractable problems, such as the search for new antibiotics to fight drug-resistant bacteria, an understanding of how proteins fold, and finding materials with properties needed to build better batteries. Impressive successes ranging from Amazon’s recommendation engine, to Callaway Golf’s design of its next-generation drivers, to PepsiCo’s efforts to manufacture more consistent Cheetos help justify the excitement.
But I also think that progress will take longer and prove far tougher than most expect, especially in commercial settings. As I’ll explain, success with AI demands concerted efforts that extend far beyond the technology. Those efforts, in turn, demand the full commitment of a company’s most senior leadership.
It is important to note that AI has generated considerable excitement in the past, only to be quelled by the AI winters of the mid-1970s and early 1990s. And just three years ago, in 2020, The Economist noted that “Another full-blown winter is unlikely. But an autumnal breeze is picking up.” As one example, self-driving cars have benefited from considerable investment, and always seem to be “just around the corner,” but are more probably decades away. Further, during the pandemic, when insights were at a premium, none of the hundreds of AI tools built to catch Covid passed muster. Indeed, the failure rate of AI projects appears to be north of 80%. Finally, a recent study by Meta (formerly Facebook) researchers under controlled conditions suggests that large language models don’t get the facts right two-thirds of the time.
Still, I’m less concerned about the technology per se and more concerned about the other advances that must accompany AI. For history suggests it takes a wide range of related technologies, organizational innovations, and accommodations between the new technologies and society for any new technology to flower. I’ll use electrification and the printing press to illustrate these points, then explore how they apply to AI.
As the Austrian political economist Joseph Schumpeter pointed out, successful technologies tend to arrive in clusters. With electrification came dynamos, generators, switchgear, and power-distribution systems. With the printing press came the technologies to make large quantities of cheap paper and ink. And, though not a technology per se, new written material beyond the Bible and other classics was needed to fuel demand for printed matter.
A single missing component can impede the adoption of the new technology. Today, for example, the lack of enough fast-charging stations is slowing the advance of electric cars.
Next, new technologies require new organizational capabilities. While the benefits of electricity and electric motors were easy to see, they required that factories be redesigned. It took 40 years of learning, experimentation, and investment along multiple fronts to fully electrify the factory. Similarly, it took about that long for a publishing industry, which helped match supply, demand, and price, to emerge.
Sooner or later, new technologies and societies must come to accommodate one another. In the beginning, electricity was dangerous — mistakenly touching a live wire could prove fatal! Over time, standard sockets and plugs helped ease that concern. Few people could read when the printing press was invented. But as societies became increasingly literate, the benefits of the printing press grew. Looking at these past clusters can help us understand the moment we’re in now, and what the future of AI might really look like.
Barriers to the Adoption of AI
Each of these areas presents formidable barriers to the widespread adoption of AI. (In using the term artificial intelligence, or AI, I am casting a wide net, including machine learning, deep learning, computer vision, natural language processing, robotics, expert systems, fuzzy logic, and other tasks typically associated with human intelligence.)
First, consider the technologies related to AI. The two most important are massive computing power and large quantities of high-quality data. Increasingly sophisticated models require more computing power, which, as predicted by Moore’s Law, appears to be growing apace. Probably not fast enough for those on the leading edge, but plenty fast for most commercial uses.
The situation is more complex when it comes to “large quantities of high-quality data.” In teaching an AI to play chess, for example, researchers provided the algorithm with the rules and instructed it to play games against itself, obviating real-world quality issues. In contrast, poor data quality sank IBM Watson’s initial entry into health care. To further underscore the importance of high-quality data, researchers generally acknowledge that today’s AI models would not be possible without the highly curated, trusted data scraped from Wikipedia.
All this stands to reason: Even a technology as remarkable as AI can produce results no better than its inputs. And in most commercial settings, quality issues resemble the Watson case far more closely than the pristine circumstances of chess. The quality requirements for training, updating, and operating AI models are broad, deep, and, in some cases, poorly understood. For example, while it appears that Microsoft’s Tay chatbot was well trained in normal discourse, user-generated content quickly taught it to mimic hate speech.
Exacerbating this, many companies’ data and systems architectures are increasingly messy and chaotic. This makes getting the data in shape to apply AI tools much harder, compounding quality issues.
I find this situation paradoxical. As best I can tell, everyone involved understands the importance of high-quality data. Some practitioners privately admit data is the most important part of any AI project. But companies haven’t even put someone in charge, never mind sorted out the complete program they’ll need to ensure they have the high-quality data they need. Perhaps, as Google researchers note, “Everyone wants to do the model work, not the data work.” But until that data work gets done, it’s hard to see good models emerging and AI succeeding.
Now consider the issues associated with organizational capabilities needed to put AI to work. These may well prove as difficult as the quality issues. I find the penetration of basic, though powerful, analytical techniques broadly instructive here because both it and AI depend so heavily on data. Such tools have been around for at least two generations.
But other than basic reporting, most organizations still don’t use them much or well. And these tools are far easier to understand than AI models. The unfortunate reality is that analytics is bolted onto, rather than built into, organizations, practically guaranteeing a perpetual fight for relevance. It is hard to see a bright future for AI until this structural issue is resolved.
The second big organizational issue is that employees resist innovations that threaten their jobs. And while AI can reduce the drudgery in work, make employees more productive, and lead to a growth in employment, most are right to be concerned. After all, a drudge job beats no job when it helps feed a family. So, how will employees and AI work together? Should employees be expected to help create an AI that will replace them? To explain and/or clean up training data? In situations where AI replaces them, how should they be compensated? In situations where AI makes people more effective and/or efficient, should people receive higher compensation? There are no easy answers.
One final point regarding staff. Even though generative AI is easy to use, developing and deploying most AI requires top talent. Yet too many data scientists feel their talents are wasted. As one veteran put it, “I find that most data scientists are incredibly frustrated that they aren’t having much impact.” No surprise! Finding something novel that the business actually uses and deploying a model in practice is incredibly difficult. Even worse is dealing with bureaucracy, quality issues, and the fears of those impacted by their work. Companies hoping to attract and retain top talent must address these frustrations.
Third, and lastly, consider the issues of humans, society, and AI coming to accommodate one another. As noted above, some believe AI will elevate humanity while others believe it threatens civilization. So, these issues are already generating enormous attention, though so far little agreement, on national and international levels. Though only a few companies will have much influence at those levels, most will find they have to take a stand on some, such as:
- What are the standards against which companies should compare their AIs? There are numerous reports of deaths caused by self-driving cars. But people are far from perfect drivers and their mistakes also kill lots of people. Is “zero accidents” or “fewer accidents” the objective?
- Should companies compensate individuals for providing their data? Should they pay more for data guaranteed correct?
- How should national security issues, such as the competition between the U.S. and China, affect companies’ plans?
- AI is much like a “black box” — it does not give good explanations for the ways it reached its conclusions. Thus, companies should begin to think through how they will address legitimate questions such as “Why did you deny my loan?”
What to do?
This is probably a good time to take stock. Is your company fully embracing the data, organization, and societal challenges that AI requires and is your senior leadership fully committed? If so, you’ve raised your chances of success. Good luck!
Or, does the vast majority of your work focus on the technology, perhaps led by your chief analytics officer and/or your tech group? If so, ask yourself whether you’re up for the full range of work on the data, organization, and society/people issues needed to earn significant business results. If not, best to keep your powder dry. If you are, the really hard and important work still lies ahead. You simply must get your most senior leadership invested in the effort. There is no set script for doing so.
You must also begin work on the data and organizational issues described above. I recommend two steps to help you get started, anticipating a decision to plunge headlong into the effort:
- Scour the company for proprietary data, stuff you have that others cannot match. This is important because you’re unlikely to create sustained advantage with data your competitors can also access.
- Beef up your change management capabilities. No matter what works best, putting AI to work will require massive change. Start to build the professional help you will need.
AI Isn’t Easy
The hype surrounding AI makes it seem much easier than it is. Don’t get sucked in! It is easy to “play” in the AI sandbox. But actually making it pay off, at scale, requires serious work, top to bottom and across the company. The technology is almost certainly the most manageable of the required efforts. You also need high-quality data and some tough organizational changes, all against a backdrop of societal acceptance. AI is only for those in it for the long haul!