AI Methods, Around For Decades, Now Move To Dominate Everything
More than 350 years ago, Gottfried Leibniz theorized that all rational thought could be broken down into a series of binary expressions, and that whatever went on in a person's head could be transferred to a mechanical device. Leibniz believed this theory could lead the world away from wars and arguments: logic and rational solutions, he forecast, could be arrived at through a non-partisan device.
The German mathematician-philosopher went on to devise a calculating language composed of just two symbols: 1 and 0. We now know Leibniz's breakthrough as the binary computing language, of course: the language of machines.
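Leibniz's two symbols still underpin everything a machine handles, text included. A quick illustration in Python (the language choice and the sample string are ours, purely for demonstration):

```python
# Every character a person types reduces to Leibniz's 1s and 0s: here,
# each letter's numeric code point is written out as eight binary digits.
message = "AI"
bits = [format(ord(ch), "08b") for ch in message]
print(bits)  # ['01000001', '01001001']
```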
Leibniz might be disappointed to learn that petty disagreements and grudges can still flare into gun battles and country-on-country wars, even in this technology-enabled world. But he'd be buoyed to see the recent progress in the field we refer to as artificial intelligence, which has become a keystone in the tech zeitgeist. Companies employing AI techniques such as machine learning, neural nets, natural language processing and deep learning have come to dominate the lists of recently funded startups.
Some of the solutions that the world's technologists have been pursuing for decades, if not centuries, are now realities in the current phases of AI development. In the near future, AI disciplines will play a role in curing diseases, aiding high-level creativity, making travel safer, and dozens of other applications across a bevy of disciplines. Some AI algorithms may even be saving tens of thousands of human lives in the very near future.
To be clear, artificial intelligence, in the modern tech parlance, isn't a moniker to be taken literally. Success, to most, isn't defined as creating a machine that's truly intelligent and self-aware. It's about solving problems and sniffing out relationships that humans, on their own, can't. The best AI solutions of the moment can do that with aplomb.
"If the cost of computing continues to fall, as does the cost of data, while we increase our digital footprint, it means more and more capabilities will exist for AI to take over digital aspects of our lives," says Trevor Orsztynowicz, VP of engineering at Bench, an automation platform for accounting.
This has been a mission of many of the best minds in computer science for decades. In fact, many of the techniques we consider to be cutting edge have been in the quiver of computer scientists for a generation or two. It's stacks of raw computing power, plus the deluge of data the modern web has given us, that have pushed AI back into the innovation wheelhouses of Silicon Valley.
AI techniques have been around for decades
Deep learning techniques were first developed in the 1960s and received cycles of attention, brightening and fading, into the 1990s. Computer scientists would grow excited over small breakthroughs, but their work remained confined to tightly controlled laboratory scenarios; real-world applications were scarce because data and computing power were in short supply.
Academics with access to large banks of computing power were operating in a vacuum. That kind of power simply wasn't available or affordable for most programmers and companies.
Neural nets were first theorized in the 1940s, and work on them advanced into the 1970s, but at that point, theory was largely ahead of hardware. The pursuit of solutions using neural nets became a waiting game until computing power caught up with the thought work humans had already done.
Machine learning followed a similar path, as it's closely related to these other methods. Theory development began in the 1950s, but the field saw lulls and fits of progress for decades, as academics advanced it with demonstrations but ran up against the constraints of the hardware and data of the day.
In the 1980s, many of these techniques found themselves in the backseat to expert systems, a kind of rule-based AI that generated a lot of excitement at the time, and consumed a lot of bandwidth among businesses that actually had data and were trying to apply these systems. By the 1990s, however, the promise of expert systems had faded, and the modern disciplines of AI crept back to the fore, empowered by more and cheaper parallel processing power.
The real breakthrough, however, came from the web and the trove of data that the world's computers began to produce once they were all drawing from the same network.
Computing power + data = AI renaissance
AI in practice stalled in the past because the breadth of data required to make it work was in short supply. On the rare occasions when a practitioner would find enough data, she would often be foiled by a lack of computing power. Yes, the work could be done, and, yes, people knew how to do it, but getting the kinds of resources necessary required a five-figure or six-figure grant—just to analyze one data set.
As most of the machinations of the world have migrated online, humanity has become awash in data. Some of it is valuable, some of it is worthless, but all of it can be passed through filters, brought to heel by statistical examinations and poured through the great siphon of AI.
Advances in parallel computing, and in the chips built to handle it, have continually brought down the cost and time needed to perform deep learning and neural network data examinations. The 'AI winters' of the past were brought on, quite simply, by methods and theory being ahead of hardware and available data.
"I was in college in the 1980s and it was the height of the hype curve for neural networks," says Derek Collison, the CEO of Apcera, a container management platform for cloud software. "I remember building a model to identify numbers from hand-written notes. It felt very powerful, but we quickly hit a wall with how brittle the models were and the lack of massive amounts of compute and data being available. The NNs of today of course do have some nice upgrades, but essentially are very similar to those we worked on in the late 1980s."
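The kind of model Collison describes can be sketched in a few lines. Below is a toy single-layer perceptron in the spirit of those 1980s networks, trained with the classic perceptron learning rule; the task (learning the logical AND function) and all names are our own illustration, not drawn from his work:

```python
def predict(weights, bias, x):
    """Fire (return 1) if the weighted sum of inputs exceeds zero."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=10, lr=1.0):
    """Perceptron learning rule: nudge the weights toward each mistake."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Truth table for AND: output 1 only when both inputs are 1.
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(samples)
print([predict(weights, bias, x) for x, _ in samples])  # [0, 0, 0, 1]
```

The brittleness Collison mentions shows up quickly with such models: a single layer like this can only separate data with a straight line, which is exactly why deeper networks, and far more data, were needed.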
The other big change is that most of this work is now done with a purpose; it's no longer just researchers and those pursuing PhDs.
"When you apply any kind of analytics, it needs to be done for a reason," says Fiona McNeill, SAS' global product marketing manager. "When analytics solves the problem at hand, or provides the answer to a complex question - there isn’t disillusionment, there’s productivity gains, newfound opportunities, and insight. Few organizations today can afford to only experiment."
With masses of data now available in the real world, neural networks evolved into deep learning neural nets, with thousands of layers, rather than the few layers held by their former, shallow counterparts, explains McNeill.
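Structurally, the difference between a shallow net and a deep one is simply how many layers the input passes through. A minimal sketch in plain Python (the ReLU activation and the toy one-neuron weights are our own illustration, not from McNeill):

```python
def dense(weights, biases, x):
    # One fully connected layer: weighted sums followed by a ReLU activation.
    return [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def forward(layers, x):
    # A "deep" network is just this loop running over many stacked layers.
    for weights, biases in layers:
        x = dense(weights, biases, x)
    return x

# Three stacked layers, each doubling its single input: 1.0 becomes 8.0.
layers = [([[2.0]], [0.0])] * 3
print(forward(layers, [1.0]))  # [8.0]
```

Going from a shallow net to a deep one means lengthening that list of layers; what made thousands of layers practical was the data and parallel hardware described above.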
All of this has led to the current situation, in which AI startups have become the toast of tech, drawing $5.4 billion in funding during 2016, up from just $88 million in 2009, a 61x increase in seven years.
And as AI techniques become more popular and standardized, more tools have emerged for developers to leverage them. The shovels and pickaxes of the AI world are now being fashioned by new startups and big companies like Amazon.
We are close to having a world where any kind of AI can be summoned via API, supplied by a large swath of competitors at ever-lowering prices. Amazon already offers machine learning on tap through AWS. Just as cloud servers, storage and computing power have become commodities, available from a spectrum of companies, AI will follow the same route.
The commoditization of AI is afoot, and its impact on our world is only beginning to be felt.
Next in our AI reports: AI Can Make Anything A Commodity, Even Creativity