A Warning From the Past: When Machines Began to Think


Before “AI” Had a Name

Long before we started arguing over whether ChatGPT could write poetry or pass the bar exam, people were already worried about “artificial brains.” The year was 1958: Eisenhower was still president, Sputnik was spinning above the Earth, and The Nation ran a piece that might, in hindsight, feel eerily prophetic.

The article wasn’t about robot overlords or sentient machines plotting revenge. It was a book review, of all things: a quiet reflection on The Computer and the Brain by John von Neumann, the Hungarian-born mathematician whose fingerprints were all over both the atomic bomb and the birth of computing. He’d died the year before, leaving behind this slim but startling volume comparing the workings of the human mind to those of early computers.

The reviewer, Cornell philosopher Max Black, saw in von Neumann’s ideas something monumental. He called the scientist’s earlier work in game theory “one of the intellectual monuments of our time.” And yet, even amid his praise, Black hinted at unease. He imagined a world transformed by “artificial brains,” though he admitted, back then, that our understanding of them was still “crude and unsystematic.”

He wasn’t wrong. It still is.

The Genius With a Bomb in His Pocket

What Black didn’t mention, maybe because it was too close to the surface to see, was that von Neumann’s fascination with the power of machines didn’t stop at theory. He had worked on the Manhattan Project, helped design nuclear weapons, and later pushed for intercontinental ballistic missiles, the kind that could carry hydrogen bombs across oceans.

Von Neumann wasn’t some neutral thinker tinkering in abstraction. He believed, fiercely, in the logic of deterrence. He thought that if we could make machines think faster, calculate better, and remove the mess of human hesitation, maybe we could prevent catastrophe.

But that faith in machinery, that cold, mathematical ideal, has always carried a dangerous allure. It’s easy to imagine how von Neumann, had he lived longer, would’ve applied the idea of “artificial brains” to military use. After all, if you trust computers to think, why not let them protect you too?

That’s the exact thought that has haunted us ever since.

Cold War Fantasies and Silicon Nightmares




By 1983, that uneasy thought had grown teeth. In another Nation piece, “Previewing the Latest High Tech,” a defense researcher named Stan Norris laid out what sounded like a techno-thriller plot but was, in fact, U.S. military reality.

The CIA, Norris wrote, was already working on systems that could “process information and formulate hypotheses.” Essentially, computers that could reason, or at least mimic reasoning, fast enough to make battlefield decisions. Other projects aimed to build autonomous robots for combat.

It’s chilling to read that now, forty years later, when we’ve actually seen prototypes of AI drones, predictive targeting systems, and automated kill lists. Norris saw the outline of that future and didn’t like what he saw.

“New technology continues to create new forms of terror,” he wrote. Each new advance didn’t make the world safer; it only accelerated the arms race, replacing diplomacy with code. “Weapons have outrun politics,” he warned. “The search for common security lies not in the laboratory but at the negotiating table.”

That line still hits hard. Maybe harder now.

The Ghost in the Machine

Two years later, The Nation returned to the subject, this time with an article by a young graduate student named Paul N. Edwards. His focus was DARPA, the Defense Advanced Research Projects Agency, and its effort to automate parts of the U.S. nuclear command system.

In other words, to let machines make, or at least prepare, decisions about launching nuclear weapons.

Edwards didn’t mince words. The idea of placing “a key element of the nuclear trigger in the ghostly hands of a machine,” he argued, was not just foolish; it was suicidal. Computers could be faster and more logical, sure, but that didn’t make them wiser. Humans program them, and humans are fallible; they cannot anticipate every possible glitch, misreading, or false alarm.

He saw the seduction clearly: the fantasy that a hyper-rational AI could protect us from ourselves, preventing the irrational human impulse that might start a war. But in reality, such faith only transferred our most dangerous decisions to systems incapable of compassion, doubt, or context.

“The solution,” Edwards wrote, “lies, as it always has, in reducing the danger of war by putting weapons aside and expanding the possibilities for peaceful interchanges.”

It’s striking, reading that line now, in a moment when generals talk openly about “AI-driven deterrence” and algorithms decide who sees what online. The ghostly hands, it seems, have only multiplied.

The Endless Seduction of Logic



There’s something intoxicating about the idea that machines can think better than we can. It promises safety, control, even redemption: a way to fix the chaos of human nature with code.

But history keeps proving otherwise. From the earliest computers to modern neural networks, every system we’ve built reflects not just our intelligence but our blind spots. Bias, fear, ambition: they leak into the code. Every algorithm is a mirror, and the reflection isn’t always flattering.

The military, of course, finds this logic irresistible. It offers a clean, calculable battlefield, one where decisions happen in microseconds, untainted by hesitation or empathy. But that’s exactly the danger. Once machines begin to act where only moral reasoning should, the distinction between defense and annihilation blurs.

Edwards and Norris saw that decades ago. They warned that our pursuit of machine intelligence was inseparable from our appetite for dominance, and that both might end up consuming us.

Echoes That Haven’t Faded

Now, nearly seventy years after The Nation first mentioned “artificial brains,” we’re still asking the same questions. Can intelligence without conscience be trusted? Can a system built for efficiency ever be ethical?

Michael Klare, writing in the same issue that revived these earlier warnings, argues that the temptation remains the same: to offload human responsibility onto silicon logic. It feels safer. Cleaner. But it’s a mirage.

The truth is, every time we hand over a decision to an algorithm, whether it’s in a self-driving car, a social media feed, or a drone, we surrender a piece of human judgment. And that, more than any science-fiction apocalypse, may be the real danger.

Perhaps the real lesson from those old Nation writers is that the threat of AI isn’t new at all. It’s just the latest chapter in an old story: our longing to escape the burden of thinking, and the fear of what happens when we finally do.



Open Your Mind !!!

Source: The Nation

