Our quest for perfect machines that make perfect decisions may be removing the most important element from the most important decisions; the human element.
In the early morning hours of September 26th 1983 the fate of the world hung in the balance between man and computer. Were it not for the human element of intuition and empathy the world today would have been a very different place.
Humans aren’t perfect. We make mistakes; it’s what we do. Yet it may be in our imperfection–the soft and fuzzy space of compassion, empathy, and love–that the greatest value of our decisions takes shape.
In 2006 Dow Chemical launched a campaign dubbed “The Human Element.” At a time when intelligent machines and algorithms are increasingly taking over for the work that humans once did it may be worth revisiting the role and importance of the imperfect human element in our organizations.
Don’t get me wrong, I’m fascinated by Artificial Intelligence and its ability to do many things so much better than its human counterparts. I’d much rather see my 15-year-old grandchildren being driven by an autonomous vehicle than by a 17-year-old driver.
I also see incredible promise for what AI can do to solve some of the greatest challenges that humanity faces, from curing disease to reversing climate change to even helping us become an interplanetary species.
But I fear that our idolization of intelligent machines and our quest for perfection may come with the very high price of discounting the compassion, empathy, judgment, and simple imperfect humanity that often goes into making the right decisions.
Which is why the story of former USSR Lt. Col. Stanislav Petrov is one that I often recount when the topic of replacing imperfect humans with perfect machines comes up; his singular act of humanity may well be the reason you are alive and reading this today.
The Sum Of All Fears
Petrov was an unremarkable Soviet officer who on Sept 26, 1983 was tasked with the awesome responsibility of monitoring the Soviet Union’s radar for incoming nuclear ballistic missiles launched from the USA.
Just three weeks prior, the Soviets had shot down a Korean Air Lines passenger plane, killing 269 people. Tension between the two superpowers couldn’t have been under more strain. The Doomsday Clock had probably never been closer to midnight since the Cuban missile crisis.
In the early morning of September 26th Stanislav Petrov walked onto that stage as he took command of the Soviet Union’s Air Defense Forces monitoring for nuclear missile launches by the US. Shortly after starting his shift, Petrov’s radar lit up with one blip after another. Five US Minuteman intercontinental ballistic missiles were undeniably heading toward his homeland. The USSR was under attack.
Red Army protocol was clear on what Petrov had to do; call in the strike, and quickly. In less than thirty minutes, the time it would take a ballistic missile to reach its target from launch, Russia would lose any ability to mount a land-based counterattack. The sum of all fears was suddenly an unimaginable reality.
Petrov recounted the anxious seconds, “The siren howled, but I just sat there for a few seconds, staring at the big, back-lit, red screen with the word ‘LAUNCH’ on it,” he told the BBC’s Russian Service in 2013. The large backlit letters LAUNCH indicated that the missiles had indeed been launched from the USA. As Petrov watched and counted the seconds a second warning replaced the first, MISSILE STRIKE. “All I had to do was to reach for the phone; to raise the direct line to our top commanders.”
Had Petrov made that call, we would–at least those of us who were left to talk about it–be remembering September 26th as the day the world nuked itself back into the stone age.
Clearly Petrov didn’t pick up the phone. He sat there listening to his gut. A gut churning with the struggle between doing what he’d been relentlessly programmed to do without thinking and the very human intuition to prevent a nuclear holocaust. A responsibility unfathomable for any of us to comprehend. In Petrov’s own words, “Seconds felt like minutes, and minutes stretched for eternity.”
Questioning The Machine
Petrov’s gut told him the information he was seeing couldn’t be accurate, the infallible array of sensors, satellites, radar, and computers had to be mistaken. And so, he looked at the undeniable data, listened to the screeching of the nuclear warning alarms, looked at the glaring red letters “MISSILE STRIKE” on the large monitor, and waited; questioning the machines.
He defied protocol and contacted the USSR satellite tracking stations to see if they had picked up anything. Nothing. Seven minutes had passed since the launch alarm. The weight of the world was quite literally on Petrov’s shoulders as a cacophony of alarms blared and those under his command screamed at him to decide.
Twenty-three minutes later, Petrov knew his intuition had been right and the computers had been wrong. (It would later be determined that satellites had interpreted sunlight hitting a layer of clouds as the five incoming ICBMs.)
Could AI have done a better job of correlating the necessary sensory data from the satellites? Would it have had the intuitive human reaction to freeze and consider the real human implications of following its stated objectives? Would an intelligent machine have had the ability to think to itself, as Petrov later said,
“I refused to be guilty of starting World War III … If I made the wrong decision, a lot of people will die. A lot of people will die.”
Years later, even Petrov conceded that his decision was based to a large degree on the fact that his background was not entirely military. Had AI been programmed by the military to use a military ethic, would it have decided differently than AI developed by a non-military contractor? Would another officer in Petrov’s place have done the same?
All impossible questions to answer, but that’s precisely the point. There is no definitive linear or logical set of circumstances and no singular set of outcomes in cases where the very human capacity for judgment and intuition based on compassion and empathy play such a pivotal role.
Eternal Vigilance Is The Price of Technology
I don’t claim to have the answers to these questions or that there even are definitive answers. What I do have is an understanding of how each new generation of technology becomes simultaneously more powerful in its ability to construct and to destroy. Never before have we needed to keep pace with such a rapidly increasing burden of human vigilance; it is the price that this balance of power will demand.
For the first time in history we will soon have the ability to not only replace the tedium of physical work with machines but to potentially replace many of those things that we consider to be uniquely human decisions–those with life and death consequences–with AI-driven machines.
There is no formula or set of rules for how to manage this change. And we will no more stop the trajectory of AI than we will stare down a tsunami. Yet, in our zeal to automate and replace people, we desperately need to also acknowledge that there are decisions that only the very human elements of compassion, empathy, and love can make and should make; imperfect decisions made by imperfect people in an imperfect world.
Parts of this article are excerpted from my upcoming book Revealing The Invisible.
This article was originally published on Inc.
Tom Koulopoulos is the author of 10 books and founder of the Delphi Group, a 25-year-old Boston-based think tank and a past Inc. 500 company that focuses on innovation and the future of business. He tweets from @tkspeaks.