The Real Reason to Be Nervous About AI


In recent weeks, an unlikely drama has unfolded in the media. At its center is not a celebrity or a politician but a sprawling computational system created by Google called LaMDA (Language Model for Dialogue Applications). A Google engineer, Blake Lemoine, was suspended for declaring on Medium that LaMDA, which he interacted with via text, was “sentient.” This declaration (and a subsequent Washington Post article) sparked a debate between people who think Lemoine is merely stating an obvious truth (that machines can now, or soon will, show the qualities of intelligence, autonomy, and sentience) and those who reject this claim as naive at best and deliberate misinformation at worst. Before explaining why I think the skeptics are right, and why the sentience narrative serves the power interests of the tech industry, let’s define what we’re talking about.

LaMDA is a Large Language Model (LLM). LLMs ingest vast amounts of text, almost always from Internet sources such as Wikipedia and Reddit, and, by iteratively applying statistical and probabilistic analysis, identify patterns in that text. This is the input. These patterns, once “learned” (a loaded word in artificial intelligence, or AI), can be used to produce plausible text as output. The ELIZA program, created in the mid-1960s by the MIT computer scientist Joseph Weizenbaum, was a famous early example. ELIZA did not have access to a vast ocean of text or high-speed processing as LaMDA does, and it relied on simple pattern-matching rules rather than statistics, but the basic principle of producing plausible responses without any understanding was the same. One way to get a better sense of LLMs is to note that the AI researchers Emily M. Bender and Timnit Gebru call them “stochastic parrots.”
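To make the “stochastic parrot” idea concrete, here is a minimal sketch of the underlying principle: tally which words tend to follow which in a body of text, then generate new text by sampling from those tallies. This is a toy bigram counter, not Google’s code; LaMDA uses neural networks trained on billions of words, and the corpus and function names below are purely illustrative.

```python
import random
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each word, how often each possible next word follows it."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1
    return counts

def generate(model, seed_word, length=10):
    """Emit words by repeatedly sampling a likely next word from the counts."""
    word = seed_word
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # no observed continuation; stop "parroting"
        candidates, weights = zip(*followers.items())
        word = random.choices(candidates, weights=weights)[0]
        output.append(word)
    return " ".join(output)

# Tiny illustrative corpus; a real LLM ingests billions of words.
corpus = "the cat sat on the mat and the cat looked at the dog and the dog sat on the rug"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The output reads like plausible fragments of the training text because it reproduces observed word-to-word statistics; nothing in the program understands cats, dogs, or mats. Scaling the same basic move up by many orders of magnitude does not, by itself, introduce understanding.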

There are many troubling aspects to the growing use of LLMs. Computation on the scale of LLMs requires massive amounts of electrical power; most of this comes from fossil sources, adding to climate change. The supply chains that feed these systems and the human cost of mining the raw materials for computer components are also concerns. And there are urgent questions about what such systems are to be used for — and for whose benefit.

The goal of most AI (which began as a pure research aspiration announced at a Dartmouth conference in 1956 but is now dominated by the directives of Silicon Valley) is to replace human effort and skill with thinking machines. So, every time you hear about self-driving trucks or cars, instead of marveling at the technical feat, you should detect the outlines of an anti-labor program.

The futuristic promises about thinking machines do not hold up. This is hype, yes, but it is also a propaganda campaign waged by the tech industry to convince us that it has created, or is very close to creating, systems that can serve as doctors, chefs, and even life companions.

A simple Google search for the phrase “AI will replace” returns millions of results, usually accompanied by images of ominous sci-fi-style robots, suggesting that AI will soon replace human beings in a dizzying array of areas. What’s missing is any examination of how these systems might actually work and what their limitations are. Once you part the curtain and see the wizard pulling levers, straining to keep the illusion going, you’re left wondering: Why are we being told this?
