Based on the sheer number of articles about deep learning published every day, one could be forgiven for thinking that deep learning and neural networks make up the bulk of artificial intelligence innovation. Here is why, when it comes to neural networks, we’ve only just scratched the surface.
Despite the incredible technological advances made possible through these deep learning techniques, relatively few organizations have opted to implement them.
According to Mary Beth Moore, an artificial intelligence and language analytics strategist for SAS, those who do use deep learning tend to do so for specific use cases, such as image recognition with convolutional neural networks (CNNs).
Even when neural networks can be applied to other spaces, such as text analysis, they tend to be less popular than conventional machine learning approaches.
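To make the image-recognition use case concrete, here is an illustrative sketch (not from the article) of the core operation inside a CNN: sliding a small filter over an image to produce a feature map. The image, kernel, and values below are made up for demonstration; real networks learn the kernel weights from data.

```python
def conv2d(image, kernel):
    """Slide the kernel over the image (valid mode, no padding)
    and sum the element-wise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A toy 4x4 "image": dark left half (0), bright right half (1).
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

# A hand-made vertical-edge detector: it responds only where
# brightness changes from left to right.
kernel = [
    [-1, 1],
    [-1, 1],
]

feature_map = conv2d(image, kernel)
# The middle column lights up where the edge is: [[0, 2, 0], ...]
```

A trained CNN stacks many such learned filters, which is why it excels at image tasks but demands so much labeled data to fit all those weights.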
Why? For one thing, neural networks require a large amount of clean, labeled data. Training on that data, in turn, requires processors capable of handling substantial training sets, as well as engineers who are familiar with applying deep learning frameworks, both of which can impose extra costs on companies that can ill afford them.
What about the issue of transparency?
Somewhat paradoxically, the more accurate a neural network becomes, the less transparent it is; in other words, as the neural network develops, it becomes harder and harder to pinpoint how it arrives at a particular solution.
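To illustrate why attribution gets harder as models grow (a toy sketch with made-up weights, not from the paper the article cites): in a linear model, each weight directly states a feature's contribution, while even a tiny two-layer network routes every input through multiple hidden units and a nonlinearity, so no single weight answers "how much did this feature matter?"

```python
# Linear model: each weight is a readable statement of influence.
w = [2.0, -1.0]

def linear(x):
    return w[0] * x[0] + w[1] * x[1]

x = [3.0, 4.0]
# Feature 0's contribution is simply w[0] * x[0] = 6.0 -- fully transparent.

# Tiny 2-layer network with ReLU: the same question has no direct answer.
W1 = [[0.5, -0.3], [0.8, 0.2]]   # hidden-layer weights (illustrative)
W2 = [1.0, -2.0]                 # output weights (illustrative)

def relu(v):
    return max(0.0, v)

def net(x):
    # Each input feeds every hidden unit, then passes through a
    # nonlinearity before being recombined at the output.
    h = [relu(sum(W1[i][j] * x[j] for j in range(2))) for i in range(2)]
    return sum(W2[i] * h[i] for i in range(2))
```

With dozens of layers and millions of weights, this entanglement is what makes a production network's reasoning so hard to explain.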
Naturally, this makes some companies reluctant to embrace a technology whose results, no matter how accurate, are difficult to explain fully to clients and investors.
However, a recent paper published by researchers from MIT Lincoln Laboratory explored ways to design a neural network that would make it easier to interpret results while maintaining a high level of accuracy. As the authors note, while neural networks “were initially designed with a degree of model transparency, their performance on complex visual reasoning benchmarks was lacking.”
Most current iterations of neural networks “do not provide an effective mechanism for understanding the reasoning process.”
The researchers’ solution was to create “Transparency by Design networks,” which are capable of “directly [evaluating] the model’s learning process.” This helps to lessen the mystique surrounding neural networks and provides more accountability.
While the development of such techniques will hopefully hasten the adoption of neural networks across a more significant number of industries, it must also be noted that neural networks themselves still have a long way to develop.
The difficulty in adoption is due, in part, to the aforementioned need for extensive training data sets, which require companies to undertake the arduous process of collecting, cleaning, and labeling data.
It is estimated that, for a deep learning algorithm to reach or exceed a human’s performance, the training set should contain at least 10 million labeled data examples. That much data is a reasonably high bar to clear, especially for smaller companies that lack the means or the opportunity to gather so much information.
The opportunities that deep learning can offer businesses are enormous.
For instance, deep learning can help companies reduce their manufacturing costs by increasing accuracy and efficiency. It can also identify new business opportunities, personalize interactions between customer and company, and enable businesses to better respond to shifts in supply and demand.
Neural networks are transforming the world of healthcare by pinpointing effective treatment options, analyzing research, and finding patterns that would otherwise have gone unnoticed.
Neural networks underpin several of the most widely used AI technologies: image recognition, voice recognition, and translation.
These neural networks are also capable of creating art, composing music, and teaching themselves how to solve a Rubik’s cube, along with other functions that only humans were previously capable of performing at a high level.
Whether we can create sentient AI or not, the fact remains that neural networks are capable of doing much more than carrying out basic analytical tasks. More than any other technology, neural networks can demonstrate, or, to some critics, mimic, human intuition and creativity.
Jeremy Fain is the CEO and co-founder of Cognitiv. With over 20 years of interactive experience across agency, publisher, and ad tech management, Jeremy led North American Accounts for Rubicon Project before founding Cognitiv. At Rubicon Project, Jeremy was responsible for the global market success of over 400 media companies and 500 demand partners through real-time bidding, new product development, and other revenue strategies, ensuring interactive buyers and sellers could take full advantage of automated transactions. Prior to Rubicon Project, Jeremy served as Director of Network Solutions for CBS Interactive. With oversight of a $30 million+ P&L, Jeremy was responsible for the development, execution, and management of data-driven solutions across CBS Interactive’s network of branded sites, including audience targeting, private exchange, and custom audience solutions. Prior to CBS, Jeremy served as Vice President of Industry Services for the IAB, where he shaped interactive industry policy, standards, and best practices, such as the first VAST standard and the Ts&Cs 3.0, by working on a daily basis with all the major media companies as well as all the agency holding companies.