This is a post by LinkedIn influencer Peter Diamandis, Cofounder of Human Longevity, Inc.
Unexpected convergent consequences… this is what happens when eight different exponential technologies all explode onto the scene at once.
This blog (the second of seven) is a look at artificial intelligence. Future blogs will look at other tech areas.
An expert might be reasonably good at predicting the growth of a single exponential technology (e.g., the Internet of Things), but try to predict the future when AI, Robotics, VR, Synthetic Biology and Computation are all doubling, morphing and recombining, and you have a very exciting (read: unpredictable) future. This year at my Abundance 360 Summit I decided to explore this concept in sessions I called “Convergence Catalyzers.”
For each technology, I brought in an industry expert to identify their Top 5 Recent Breakthroughs (2012-2015) and their Top 5 Anticipated Breakthroughs (2016-2018). Then, we explored the patterns that emerged.
Artificial Intelligence – Context
At A360 this year, my expert on AI was Stephen Gold, the CMO and VP of Business Development and Partner Programs at IBM Watson. Here’s some context before we dive in.
Artificial intelligence is the ability of a computer to understand what you’re asking and then infer the best possible answer from all the available evidence.
You may think of AI as Siri or Google Now on your iPhone, Jarvis from Iron Man or IBM’s Watson.
Progress of late is furious — an AI R&D arms race is underway among the world’s top technology giants.
Soon AI will become the most important human collaboration tool ever created, amplifying our abilities and providing a simple user interface to all exponential technologies. Ultimately, it’s helping us speed toward a world of abundance.
The implications of true AI are staggering, and I asked Stephen to share his top five breakthroughs from the past three years to illustrate some of them.
Recent Top 5 Breakthroughs in AI: 2011 – 2015
“It’s amazing,” said Gold. “For 50 years, we’ve ideated about this idea of artificial intelligence. But it’s only been in the last few years that we’ve seen a fundamental transformation in this technology.”
Here are the breakthroughs Stephen identified in artificial intelligence research from 2011-2015:
1. IBM Watson's Jeopardy win demos the integration of natural language processing, machine learning (ML), and big data.
In 2011, IBM’s AI system, dubbed “Watson,” won a game of Jeopardy against the top two all-time champions.
This was a historic moment, the “Kitty Hawk moment” for artificial intelligence.
“It was really the first substantial, commercial demonstration of the power of this technology,” explained Gold. “We wanted to prove a point that you could bring together some very unique technologies: natural language technologies, artificial intelligence, the context, the machine learning and deep learning, analytics and data and do something purposeful that ideally could be commercialized.”
2. Siri/Google Now redefine human-data interaction.
In the past few years, systems like Siri and Google Now opened our minds to the idea that we don’t have to be tethered to a laptop to have seamless interaction with information.
In this model, AIs will move from speech recognition to natural language interaction, to natural language generation, and eventually to an ability to write as well as receive information.
3. Deep Learning demonstrates how machines learn on their own, advance and adapt.
“Machine learning is about man assisting computers. Deep learning is about systems beginning to progress and learn on their own,” says Gold. “Historically, systems have always been trained. They’ve been programmed. And, over time, the programming languages changed. We certainly moved beyond FORTRAN and BASIC, but we’ve always been limited to this idea of conventional rules and logic and structured data.”
As we move into the area of AI and cognitive computing, we’re exploring the ability of computers to do more unaided/unassisted learning.
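The contrast Gold draws, between systems that follow hand-written rules and systems that find structure on their own, can be made concrete with a toy example of unsupervised learning. The sketch below is purely illustrative (it is not IBM's technology): a minimal k-means clustering routine that discovers two groups in a dataset without ever being given a label.

```python
# Toy illustration of unsupervised learning: k-means clustering.
# No labels are provided; the algorithm discovers the groups itself.
# (Illustrative sketch only; not how Watson or any product works.)

def kmeans(points, k, iterations=20):
    # Start with the first k points as initial centroids.
    centroids = points[:k]
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for x in points:
            nearest = min(range(k), key=lambda i: abs(x - centroids[i]))
            clusters[nearest].append(x)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two obvious groups of 1-D data, but nothing tells the algorithm so.
data = [1.0, 1.2, 0.8, 9.7, 10.1, 10.3]
centroids, clusters = kmeans(data, k=2)
print(sorted(round(c, 1) for c in centroids))  # → [1.0, 10.0]
```

A supervised ("trained") system would instead need every point tagged with its group in advance; the shift Gold describes is toward systems that need less and less of that hand-holding.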
4. Image Recognition and interpretation now rival what humans can do, allowing for image interpretation and anomaly detection.
Image recognition has exploded over the last few years. Facebook and Google Photos, for example, each host tens of billions of images on their platforms. With this dataset, they (and many others) are developing technologies that go beyond facial recognition, providing algorithms that can label what is in an image: a boat, plane, car, cat, dog, and so on.
The crazy part is that, on some benchmarks, the algorithms are now better than humans at recognizing images. The implications are enormous. “Imagine,” says Gold, “an AI able to examine an X-ray or CAT scan or MRI to report what looks abnormal.”
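In practice, these labeling systems are deep neural networks trained on enormous image sets; the sketch below shows only their final step, turning a network's raw class scores (logits) into a label with a confidence. The logit values and label list here are made up for illustration; in a real system they would come from a trained model processing the image pixels.

```python
import math

# Final step of a typical image classifier: convert raw class scores
# (logits) into a predicted label plus a confidence via softmax.
# The labels and logits below are hypothetical, for illustration only.

LABELS = ["boat", "plane", "car", "cat", "dog"]

def softmax(logits):
    # Subtract the max for numerical stability, then normalize.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, labels=LABELS):
    probs = softmax(logits)
    best = max(range(len(labels)), key=lambda i: probs[i])
    return labels[best], probs[best]

# Pretend these scores came out of a trained network for one photo.
label, confidence = classify([0.5, 0.1, 0.3, 4.2, 2.0])
print(label, round(confidence, 2))  # → cat 0.85
```

The anomaly-detection use Gold describes works the same way at the end: the interesting engineering is in the network that produces the scores, not in this bookkeeping.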
5. AI Apps Proliferate: Universities scramble to adopt AI curriculum
As AI begins to impact every industry and every profession, schools and universities are responding by ramping up their AI and machine learning curricula. IBM, for example, is working with over 150 partners to present both business and technology-oriented students with cognitive computing curricula.
So what’s in store for the near future?
Anticipated Top AI Breakthroughs: 2016 – 2018
Here are Gold’s predictions for the most exciting, disruptive developments coming in AI in the next three years. As entrepreneurs and investors, these are the areas you should be focusing on, as the business opportunities are tremendous.
1. Next-gen AI systems will beat the Turing Test
Alan Turing proposed the Turing Test more than half a century ago as a way to determine a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human.
Loosely, if an artificial system passed the Turing Test, it could be considered “AI.”
Gold believes, “that for all practical purposes, these systems will pass the Turing Test” in the next three-year period.
Perhaps more importantly, when a system does pass, the event will accelerate the conversation about the proper use of these technologies and their applications.
2. All five human senses (yes, including taste, smell and touch) will become part of the normal computing experience.
AIs will begin to sense and use all five senses. “The sense of touch, smell, and hearing will become prominent in the use of AI,” explained Gold. “It will begin to process all that additional incremental information.”
When applied to our computing experience, we will engage in a much more intuitive and natural ecosystem that appeals to all of our senses.
3. Solving Big Problems: Detect & Deter Terrorism, Manage Global Climate Change.
AI will help solve some of society’s most daunting challenges.
Gold continues, “We’ve discussed AI’s impact on healthcare. We’re already seeing this technology being deployed in governments to assist in the understanding and preemptive discovery of terrorist activity.”
We’ll see revolutions in how we manage climate change, redesign and democratize education, make scientific discoveries, leverage energy resources, and develop solutions to difficult problems.
4. Leverage ALL health data (genomic, phenotypic, social) to redefine the practice of medicine.
“I think AI’s effect on healthcare will be far more pervasive and far quicker than anyone anticipates,” says Gold. “Even today, AI/Machine Learning is being used in oncology to identify optimal treatment patterns.”
But it goes far beyond this. AI is being used to match clinical trials with patients, drive robotic surgeons, read radiological findings and analyze genomic sequences.
5. AI will be woven into the very fabric of our lives – physically and virtually.
Ultimately, during the AI revolution taking place over the next three years, AIs will be integrated into everything around us, combining sensors and networks and making all systems “smart.”
AIs will push forward the ideas of transparency and seamless interaction with devices and information, making everything personalized and easy to use. We’ll be able to harness that sensor data and put it into an actionable form at the moment when we need to make a decision.
Disclaimer: This is an Influencer post. The statements, opinions and data contained in these publications are solely those of the individual authors and contributors and not of iamwire and the editor(s). This article was initially published here