A new book chronicles the battle over AI, but fails to question whether AI is worth battling over

Of all the technologies that have created buzz over the last few years, by far the buzziest is what’s known as artificial intelligence — AI for short.
It’s buzzy because the chatbots and data crunchers it has produced have startled users with their human-like dialogues and test-taking skills, and also because its critics, and even some of its proponents, have raised the specter of devices that can take over human endeavors and threaten human existence.
That’s what makes a new book by Bloomberg columnist Parmy Olson so exquisitely timely. “Supremacy: AI, ChatGPT, and the Race That Will Change the World” covers the corporate maneuvering underlying the development of AI in its current iteration, which is chiefly a battle between Google, the owner of the laboratory DeepMind, and Microsoft, a key investor in OpenAI, a prominent merchandiser of the technology.
Olson deserves praise for the remarkable journalistic accomplishment of chronicling a business battle while it is still taking place — indeed, still in its infancy. For all the timeliness of “Supremacy,” the question may be whether it has arrived too soon. How the battle will shake out is unknown, as is whether the current iterations of AI are genuinely world-changing, as her subtitle asserts, or destined to fizzle out.
If the latter, it would not be the first time that venture investors, who have showered AI development labs with billions of dollars, all marched off a cliff together. Over the last few decades, other novel technologies have come to market riding a wave of hype — the would-be dot-com revolution of the late 1990s and the cryptocurrency/blockchain revolution already showing its raggedness come to mind.
For much of her book, Olson seems overly captivated by the potential of AI; in her prologue, she writes of never having seen a field “move as quickly as artificial intelligence has in just the last two years.” According to her bio, however, she has been covering technology for “more than 13 years.” That may not have been enough to give her the historical perspective needed to assess the situation.
The core of “Supremacy” is a “Parallel Lives”-style dual biography of AI entrepreneurs Demis Hassabis and Sam Altman. The first, the founder of DeepMind, is a London-born game designer and chess champion who dreamed of building software “so powerful that it could make profound discoveries about science and even God,” writes Olson. Altman grew up in St. Louis and became marinated in the Silicon Valley entrepreneur culture, largely through his relationship with Y Combinator, a startup accelerator of which he would become a partner and eventually president.
Olson is a skillful biographer. Hassabis and Altman fairly leap off the page. So do several other figures involved with the AI “race,” such as Elon Musk, who co-founded OpenAI with Altman and several others, and whose fundamental jerkitude comes across much more vividly in her pages than in those of Walter Isaacson, Musk’s adoring biographer.
Readers fascinated by high-stakes corporate maneuvering will find much to keep them enthralled in Olson’s account of the ups and downs of the relationship between Google and DeepMind on the one hand, and Microsoft and OpenAI on the other. In both cases those relationships are strained by the conflict between AI engineers focused on safely developing AI technologies and the big companies’ desires to exploit them for profit as quickly as possible.
Yet what gets short shrift in the book is the long history of AI hype. Not until about halfway through “Supremacy” does Olson seriously grapple with the possibility that there is less to what is promoted today as “artificial intelligence” than meets the eye. The term itself is an artifact of hype, for there’s no evidence that the machines being promoted today are “intelligent” in any reasonable sense.
“Overconfident predictions about AI are as old as the field itself,” Melanie Mitchell of the Santa Fe Institute perceptively observed a few years ago. From the 1950s on, AI researchers asserted that exponential improvements in computing power would bridge the last gaps between human and machine intelligence.
Seven decades later, that’s still the dream; the computing power of smartphones today, not to mention desktops and laptops, would be unimaginable to engineers of the ’50s, yet the goal of true machine intelligence still recedes beyond the horizon.
What all that power has given us are machines that can be fed more data and can spit it out in phrases that resemble English or other languages, but only the generic variety, such as PR statements, news clips, greeting card doggerel and student essays.
As for the impression that today’s AI bots give of a sentient entity at the other end of a conversation — fooling even experienced researchers — that’s not new, either.
In 1976, the AI pioneer Joseph Weizenbaum, inventor of the chatbot ELIZA, wrote of his realization that exposure to “a relatively simple computer program could induce powerful delusional thinking in quite normal people,” and warned that the “reckless anthropomorphization of the computer” — that is, treating it as some sort of thinking companion — had produced a “simpleminded view … of intelligence.”
The truth is that the inputs on which today’s AI products are “trained” — vast “scrapings” from the internet and published works — are all the products of human intelligence, and the outputs are algorithmic recapitulations of that data, not sui generis creations of the machines. It’s humans all the way down. Neurologists today can’t even define the roots of human intelligence, so ascribing “intelligence” to an AI device is a fool’s errand.
Olson knows this. “One of the most powerful features of artificial intelligence isn’t so much what it can do,” she writes, “but how it exists in the human imagination.” The public, goaded by AI entrepreneurs, may be fooled into thinking that a bot is “a new, living being.”
Yet as Olson reports, the researchers themselves are aware that large language models — the systems that appear to be truly intelligent — have been “trained on so much text that they could infer the likelihood of one word or phrase following another. … These [are] giant prediction machines, or as some researchers described, ‘autocomplete on steroids.’”
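The “autocomplete on steroids” idea the researchers describe can be made concrete with a toy sketch. What follows is an illustrative bigram model, not the actual mechanism of any commercial chatbot: real large language models use neural networks trained on vastly more text, but the underlying statistical principle, predicting the likeliest next token from what came before, is the same. The corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word most often follows
# each word in a tiny corpus, then "autocomplete" with that word.
corpus = (
    "the cat sat on the mat and the cat ate the fish "
    "and the dog sat on the rug"
).split()

# Tally how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" most often here
```

Scaled up from word pairs to billions of parameters and trillions of tokens, this is the sense in which an LLM’s fluent output is inference over patterns in its training data rather than understanding.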
AI entrepreneurs such as Altman and Musk have warned that the very products they are marketing may threaten human civilization in the future, but such warnings, drawn largely from science fiction, are really meant to distract us from the commercial threats nearer at hand: the infringement of creative copyrights by AI developers training their chatbots on published works, for example, and the tendency of bots flummoxed by a question to simply make up an answer (a phenomenon known as “hallucinating”).
Olson concludes “Supremacy” by quite properly asking whether Hassabis and Altman, and Google and Microsoft, deserve our “trust” as they “build our AI future.” By way of an answer, she asserts that what they have built already is “some of the most transformative technology we have ever seen.” But that’s not the first time such a presumptuous claim has been made for AI, or indeed for many other technologies that ultimately fell by the wayside.
Michael Hiltzik is the business columnist for The Times. His latest book is “Iron Empires: Robber Barons, Railroads, and the Making of Modern America.”