Are We Losing the Will to Innovate?

Are we really achieving anything? Or are we squandering the ingenuity of our predecessors on a trivial consumer culture of tweets and likes? Where is our moon landing, our theory of relativity, our decisive breakthrough that future generations will remember?

That’s the challenge Justin Fox raised in a provocative HBR post. It’s a valid question, one that should be asked and answered. After thinking about it for a while, though, I realized that much of the hand-wringing in this area has been misplaced.

Digital technology has become superficial because we’ve gotten so very good at it. It’s a 60-year-old paradigm that achieved little in its first few decades, then sprinted forward, devouring everything in its path, and will soon come to an end. That’s what innovation theorists call the S-curve, and it’s how technology works. The future is bright.

The Present Paradigm

The technology cycle that everybody talks about today really began in 1948, with two developments at Bell Labs.  The first and more famous was the invention of the transistor. The second, less well known but just as important, was Claude Shannon’s groundbreaking paper that launched information theory.

Since then, we’ve learned how to squeeze billions of transistors onto a single chip, creating devices that, although they fit comfortably in our pockets, have more computing power than the entire Apollo program. And we unconsciously invoke Shannon’s obscure paper every time we choose a data plan from our cable company or download a movie measured in megabytes.
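To get a feel for how directly Shannon’s math still governs those everyday choices, here is a minimal sketch of the Shannon-Hartley capacity formula from that 1948 paper. The bandwidth and signal-to-noise figures are invented for illustration, not taken from any real cable plan:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley theorem: the maximum error-free bit rate of a noisy channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers only: a 100 MHz channel at 30 dB signal-to-noise.
snr = 10 ** (30 / 10)                    # 30 dB -> linear ratio of 1,000
capacity = shannon_capacity(100e6, snr)  # bits per second

movie_bytes = 1.5e9                      # a roughly 1.5 GB movie file
seconds = movie_bytes * 8 / capacity
print(f"Capacity: {capacity / 1e6:.0f} Mbit/s, download time: {seconds:.0f} s")
```

Every data plan and download progress bar is, in effect, an application of that one formula.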

The sublime has paved the way for the ridiculous and that’s exactly as it should be.  We’ve become so good at the present technology that we use it to enrich our everyday lives.  That might seem trivial, but it’s fun and we like it.

On the other hand, there are some truly new paradigms that seem more like science fiction than real research.  Nevertheless, they are advancing quickly and are either starting to manifest themselves in real products or probably will within the next decade.

Nanotechnology

In December 1959, a young scientist named Richard Feynman got up to address the American Physical Society. His lecture was not the usual fare of decaying subatomic particles and formulas strewn with obscure Greek letters. Nevertheless, it would prove to be one of the most significant and consequential scientific events the world has ever known.

It was entitled “There’s Plenty of Room at the Bottom,” and it is an absolute delight. In that room, speaking at roughly a high school level, Feynman asked why we could not print the Encyclopedia Britannica on the head of a pin, and in doing so introduced the world to nanotechnology. Half a century later, it’s just getting started.

The next step is nanocomputing. We will create devices out of microscopic components, and there will be computers as small as a grain of sand. In the future, we will literally be able to spray on information technology (and possibly, in the case of cosmetics, rub it on). This isn’t science fiction; as this article shows, the effort is already well underway.

But nanotech goes far beyond computing.  Today, it’s being deployed to build the next generation of solar panels.  New materials such as super-strong carbon structures called fullerenes are revolutionizing materials science and may also provide the key to unlocking superconductivity, while self-replicating nanorobots will change manufacturing forever.

Genomics

While the moon landing captured the world’s imagination, this generation’s great achievement, the mapping of the human genome, which took 13 years and $3 billion, is probably more significant. Since then, scientists have cut the cost to less than $1,000, and the price will fall to under $100 in another decade.
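A quick back-of-the-envelope calculation shows just how steep that decline is. Using the figures above, roughly $3 billion for the original project and about $1,000 today (the nine-year span between them is my own rough assumption), the implied yearly rate of decline dramatically outpaces Moore’s law:

```python
# Back-of-the-envelope: implied annual cost decline for genome sequencing.
# Figures from the text: ~$3 billion originally, ~$1,000 now; the ~9-year
# span is an assumption for illustration.
start_cost, end_cost, years = 3e9, 1e3, 9

annual_factor = (end_cost / start_cost) ** (1 / years)
print(f"Cost falls to {annual_factor:.2f}x of itself each year "
      f"(about {(1 - annual_factor) * 100:.0f}% annually)")

# Moore's law for comparison: cost halves roughly every two years.
moore_factor = 0.5 ** (1 / 2)
print(f"Moore's-law pace would be {moore_factor:.2f}x per year")
```

By that rough math, sequencing costs have been falling around 80% a year, against roughly 30% a year for transistors.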

That’s about the cost of a basic blood test today, so it is not surprising that genomics is becoming one of the hottest areas of investment around.  From using personal genomes to better diagnose illnesses to using gene therapies for diseases like cancer and Alzheimer’s, this new field will revolutionize medicine as we know it.

Yet, the impact will go far beyond health services.  This article about Craig Venter, one of the pioneers of the field, shows how genetic engineering can be used to solve a wide range of thorny problems, like energy.  He and his team are converting microorganisms such as algae and bacteria into organic factories that will produce 17% of our fuel by 2022.

Artificial Intelligence

In 1956, a group of luminaries including Claude Shannon and Marvin Minsky convened for a conference at Dartmouth College. The purpose was to launch the new field of artificial intelligence. The organizers boldly predicted that within 20 years the problem could be solved and a computer could do anything a human could do.

Alas, the prediction turned out to be wildly optimistic, and in the early 1970s DARPA pulled its funding from artificial intelligence, inaugurating a period now called the AI Winter. The stigma lasted until Deep Blue’s defeat of reigning world chess champion Garry Kasparov in a highly publicized match in 1997.

Since then, artificial intelligence methods such as Markov chains, genetic algorithms and neural nets have become widely deployed for purposes such as facial recognition, natural language processing, logistics and, of course, video games.
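To make the first item on that list concrete, here is a minimal sketch of a Markov chain text generator: it learns which word tends to follow which, then produces new text by random walk. The toy corpus is invented purely for illustration:

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=8):
    """Walk the chain, picking each next word at random from its successors."""
    word, output = start, [start]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

# Toy corpus, purely illustrative.
text = "the cat sat on the mat and the cat ran".split()
print(generate(build_chain(text), start="the"))
```

The same learn-the-transition-probabilities idea, scaled up enormously, sits behind classic speech recognition and the predictive text on your phone.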

Probably the best indication of the impact of artificial intelligence is how many tasks we used to consider uniquely human but now routinely hand to computers. We think nothing of having Expedia calculate a multi-flight itinerary or of Amazon and Netflix recommending our media choices.

In the not so distant future, self-driving cars and computerized medical diagnostics will become the norm.

Quantum Teleportation

In the late 1920s, Einstein and Bohr engaged in a series of famous debates about the future of physics, in which Einstein declared, “God does not play dice with the universe.” Bohr retorted, “Einstein, stop telling God what to do.” Einstein lost the argument and was so pissed off that he squandered the rest of his career trying to prove himself right.

At issue was the new field of quantum mechanics. Einstein’s objection was that this probabilistic view gave rise to some seriously wacky ideas. He proposed a thought experiment, dubbed the EPR paradox, which he felt would redeem him. He pointed out that, if quantum mechanics were valid, the experiment would result in instantaneous teleportation, an apparent impossibility.

He was proved wrong. In 1993, researchers at IBM showed that quantum teleportation was in fact possible, and experimenters have since teleported photons a short distance. More recently, scientists in Europe have achieved teleportation of nearly 100 miles.
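For the mathematically inclined, the heart of the trick is an entangled pair of particles. In standard textbook notation (not specific to either experiment), the canonical entangled “Bell state” shared between locations A and B is

$$|\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle_{A}|0\rangle_{B} + |1\rangle_{A}|1\rangle_{B}\right)$$

Measure either particle and the other’s state is instantly fixed, however far apart they are. (Teleportation protocols still need an ordinary classical channel to complete the transfer, so no usable information travels faster than light.)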

This same principle of quantum entanglement may provide a way past the limits of the present computing paradigm, which is expected to reach its physical limits around 2020. The first commercial quantum computer was sold just last year.

The Life Cycle of a Paradigm and the End of the Computer Age

We are coming to the end of the computer age. It’s not unusual for a family today to own a few laptops, a bunch of smartphones and maybe a tablet or two. That’s an enormous amount of computing power, far more than we really need. In ten years, the power of our technology will multiply 100 times; in 15 years, 1,000 times.
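Those multiples aren’t pulled out of the air; they follow directly from Moore’s-law-style doubling. A quick check, assuming capability doubles roughly every 18 months (a common rule of thumb, not a figure from this article), reproduces them:

```python
# Quick check of the 100x / 1,000x claims, assuming capability
# doubles every 18 months (a common Moore's-law rule of thumb).
doubling_years = 1.5
for horizon_years in (10, 15):
    growth = 2 ** (horizon_years / doubling_years)
    print(f"In {horizon_years} years: ~{growth:,.0f}x")
```

Ten years of doublings gives roughly 100x; fifteen gives about 1,000x, just as the paragraph above says.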

With capabilities so cheap and plentiful, it shouldn’t be surprising that they are put to use in seemingly trivial ways.  There’s nothing wrong with that.  If people like to spend time on Facebook or toying with virtual reality on their Xbox Kinect, that’s a perfectly acceptable deployment of technology.

The next generation of technology paradigms is much earlier in its life cycle. Just as few people knew what a transistor was until small transistor radios appeared in the 1950s, many of the most exciting breakthroughs today don’t grab big headlines. As they improve and become more productive, they will find more frivolous uses and enter the public consciousness.

As Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.” After we learn how to use it, it becomes just “stuff.”

And that, after all, is why we do it in the first place.

image credit: en.wikipedia.org

Greg Satell is a consultant who concentrates on media, marketing and innovation. Check out his site, Digital Tonto, and follow him on Twitter @digitaltonto.
