Mary Shelley’s 1818 depiction of Victor Frankenstein and his abomination has been synonymous with the creation of monsters ever since, both literal and figurative. Innovations often take on lives of their own, independent of their innovators’ wishes and intentions, and even when they are created with the best of intentions, the old adage warns that “the road to hell is paved with good intentions”.
Rebecca J. Rosen, writing for The Atlantic, brings up two specific inventors who came to regret their inventions — or rather, who came to detest how they were used. The first is Kamran Loghman, who worked with the FBI in the 1980s to help develop pepper spray into a weapons-grade agent. Reacting to a 2011 incident at UC Davis in which protesters were pepper sprayed by police, Loghman told The New York Times, “I have never seen such an inappropriate and improper use of chemical agents.”
Albert Einstein is probably the more famous of the two examples — a man who, Rosen points out, “played almost no role in the development of the atomic bomb but whose discoveries led to it.” While Einstein never played a direct role in the Manhattan Project, he was one of the initial voices urging Roosevelt to support research into what would become the most destructive weapon known to mankind. Years later, Einstein reportedly came to regret ever sending that letter to Roosevelt.
“Had I known that the Germans would not succeed in producing an atomic bomb,” he reportedly said, “I would have never lifted a finger.”
With more innovations being dreamt up today than ever before, it’s important that we tread with caution. Our ethical imperative should be to innovate without unintended and detrimental consequences. Much like healthcare professionals, we should strive to first do no harm.
The Hippocratic Oath is sworn by medical graduates the world over, and is generally boiled down to that phrase: “first do no harm”. Obviously, this doesn’t extend to technological innovations in the health field or even the legislation surrounding health care, but maybe it should.
Don’t get it twisted: the introduction of major technological advances to the healthcare field has absolutely proven beneficial. The simple addition of dynamic digital signage has contributed to improved patient happiness as well as operational and safety efficiency, for example — but the larger implications of turning a hospital into a technologically fueled machine are equally disconcerting, if not downright dangerous.
One of Ohio University’s online programs maintains a list of ethical dilemmas faced by today’s health care administrators, and among them are the ethics of privacy. Electronic health records (EHRs) are regulated by the Health Insurance Portability and Accountability Act (HIPAA), but they don’t always work optimally. Sometimes these records aren’t consistent between insurance companies, health care providers, and the third parties that operate between them, and a lack of information can be just as deadly as wrong information. On top of that, health care professionals must sometimes be prepared to make tough calls concerning patient confidentiality, especially when a patient or outside party is at risk — and especially in an age when patient data is sought after by outside forces such as hackers.
The digitization of hospitals has caused a surge in illegal cyber activity, with ransomware attacks shutting down a large number of hospitals in both 2016 and 2017. This goes beyond privacy: holding critical healthcare infrastructure for ransom puts lives at risk. The only silver lining is that the FDA is introducing new regulations for medical applications, which could lead to improved cyber security in the field.
Internet of Things & Big Data
The rise of ransomware highlights an issue that involves more than just hospitals. The Internet of Things (IoT), fueled by Big Data, means that almost everything — from cars to refrigerators to homes — is connected to the internet in some way or another.
If there’s one thing that people need to be aware of in the modern era, it’s that anything connected to the internet is vulnerable to cyber attack. The WannaCry ransomware attack proved that most recently, echoing the earlier Dyn DDoS attack, which involved the infamous Mirai botnet.
If WannaCry was an indicator that users need to be more up-to-date and vigilant regarding system updates and their own security, the Dyn attack showed just how little regard IoT manufacturers have toward their consumers’ security.
A report published by the Institute for Critical Infrastructure Technology focuses on manufacturer negligence as a primary reason that the Dyn DDoS attack ever occurred in the first place. The report points out “the lack of security by design in devices such as DVRs and IP-enabled closed circuit TV cameras that are protected by weak or known default credentials as the root cause for the emergence of these attacks,” says Michael Mimoso, writing for Threatpost. “Further, they caution that the availability of the Mirai source code has brought these large-scale attacks within reach of script kiddies, criminals and nation-states alike.”
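The root cause the report names — factory-default credentials left unchanged — is depressingly simple to exploit. As a rough illustration (the credential list and function names here are hypothetical, not taken from the Mirai source), a Mirai-style scanner does little more than try a short, hard-coded dictionary of well-known defaults against exposed devices:

```python
# Hypothetical sketch of why default credentials are the weak link.
# Mirai-style scanners simply try a short list of factory defaults;
# any device whose owner never changed them is instantly compromised.

# A small, illustrative sample of common factory-default pairs.
DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("root", "root"),
    ("root", "12345"),
    ("admin", "password"),
]

def is_factory_default(username: str, password: str) -> bool:
    """Return True if the supplied pair matches a known default."""
    return (username, password) in DEFAULT_CREDENTIALS

# A device still running factory credentials is a trivial target;
# one that forced a password change at setup is not.
print(is_factory_default("root", "12345"))   # still on defaults
print(is_factory_default("root", "x7!kQ9"))  # unique password
```

The defensive takeaway is the mirror image: a device that refuses to operate until its default credentials are changed fails this check by design, which is exactly the “security by design” the report says these products lacked.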
The IoT is an awesome thing. The potential applications are truly awe-inspiring. However, unless we truly invest in better security, we’re building a castle without defenses, hoping enemies will show mercy instead of invading. Praying that hackers don’t run our self-driving cars off the road while we’re in them. Trusting that our homes aren’t electronically locked while we’re gone and held for ransom, like hospitals already have been.
One idea that cyber security experts have been considering doesn’t rely on humans protecting computer infrastructure, but computers protecting human infrastructure…
AI affects all aspects of everything. The healthcare field, which we’ve previously discussed, has been phasing in AI since the beginning of 2017, and IBM’s Watson has been helping oncologists diagnose cancer for a while now. Some even think that AI is the future of cybersecurity, considering machines will almost always out-perform humans. This is a great thing in one regard, and a very risky, potentially catastrophic thing in another.
What needs to be understood is that there’s a difference between what’s called “strong AI” and “weak AI”. Weak AI is the type of AI that makes “smart” systems like smart homes or smart cars work. It can’t think for itself, but it is autonomous to a degree, able to make decisions in the context of its operational environment. Even this type of AI will be tricky to define our relationships with, especially once the ethics of things like self-driving cars get involved.
Strong AI, on the other hand, will require major ethical consideration before implementation. The day that we actually create real, decision-making AI is the day that many have dubbed “The Singularity”. Nobody knows what will happen when this day comes — some think that humanity will join with machine, organic beings with digital modifications, allowing us to live forever. Others think in more “Skynet” terms, certain that the machines will turn on us in a nuclear holocaust. Nobody knows what could actually happen… but everything is on the line.
In the end, we need to be vigilant and consider the long-term effects and multiple possible outcomes of innovation, good or bad.
If we operate as innovators believing that our work will better the world, we need to realize that our work could also very well hurt the world in ways we never intended — and from there, develop safeguards and solutions to ensure that it doesn’t.
image credit: 20th Century Fox