Simplifying the artificial neural network, from theory to efficient reality
Thanks to new technological advances, potential savings in computer memory storage and execution times are staggering.
Multi-stage computer vision algorithms have now matured into practical use—more than 100 years after Alexander Bain and William James laid the conceptual groundwork for the “neural network,” and some 40 years after Paul Werbos’ dissertation showed how to numerically simulate an artificial version of these processes. With that maturity, the focus may be shifting to simplifying the process down to a single stage.
Modeled on research into how the brain identifies a visual object (through a hierarchical ordering of edge detection, color changes, visual attention, shape recognition, and object identification), the algorithm contains an analogous sequence of stages that passes data along a chain of layers. It starts with the object on display and ends with a computer output that identifies the object as belonging to one of several classes. The Universal Approximation Theorem (UAT) states that a network with a single hidden layer can, in principle, approximate the same input-to-output mapping—suggesting that this multi-stage algorithm can ultimately be consolidated into a single stage.
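To make the “chain of layers” concrete, here is a minimal sketch of a multi-stage classifier in Python with NumPy. The layer sizes, random weights, and four object classes are purely hypothetical choices for illustration; they do not come from the article or any real system.

```python
import numpy as np

# Hypothetical layer sizes: 64 input features pass through two hidden
# stages and end at scores for 4 object classes.
rng = np.random.default_rng(0)
sizes = [64, 32, 16, 4]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(sizes[:-1], sizes[1:])]

def multi_stage(x):
    """Pass data along the chain of layers, one stage at a time."""
    for W in weights:
        x = np.maximum(x @ W, 0.0)  # linear step followed by a ReLU nonlinearity
    return x

x = rng.standard_normal(64)              # stand-in for extracted image features
scores = multi_stage(x)                  # one score per candidate class
predicted_class = int(np.argmax(scores)) # the class the network "identifies"
```

Each pass through the loop is one stage of the pipeline; the UAT’s promise is that the same input-to-output behavior could, in principle, be reproduced without the chain.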
Learning from Past Setbacks
However, from a commercial standpoint, we are not there yet. The work of Yann LeCun and his Bell Labs colleagues in the 1990s—training a computer to classify more than 10 classes of objects—was ultimately stalled by the computer processing speeds of the era. It was not until 2012 that a group from the University of Toronto, led by Alex Krizhevsky, harnessed the power of the graphics processing unit to enable a deeper design of the LeCun version.
This new deep artificial neural network had enough computing capacity to expand the number and complexity of its layers and to overcome the earlier processing-speed limitations; it regained favor as the algorithm of choice for computer vision and object detection.
The wide-ranging applications of this deep version include face detection, merchandise detection, and, most recently, medical diagnosis of Alzheimer’s disease from MRI images. But as you might notice, this expansion of layers and parameters may one day reverse itself if we adhere to the UAT’s call for a single layer.
From Theory to Proven Practicality
That’s where James LaRue enters the picture. As ICF’s technical director for cybersecurity services, LaRue presented a newly allowed patent at the April 2018 SPIE Disruptive Technologies conference in Orlando, Florida. It may be the first proven and practical solution to deliver on the UAT promise. It is important to note that a theorem may assert that something is true without constructing an actual example, leaving it to a practitioner to create one.
Such is the case with the UAT and with LaRue’s patent. His idea was to use a technique that condenses the multiple computations within each successive neural network layer into a single processing step per layer. An additional multiplication then combines the series of single steps into one final single-step process, which is then executed.
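The patent’s exact construction is not spelled out in this article, but the combine-by-multiplication idea is easy to see in the purely linear case, where it is exact: successive matrix multiplications can be pre-multiplied into one matrix, so the chain collapses to a single step. The sketch below is only an illustration of that principle—real networks include nonlinearities, which is why the patented method is an approximation rather than this literal collapse.

```python
import numpy as np

# Three successive linear layer steps with hypothetical sizes (illustration only).
rng = np.random.default_rng(1)
W1 = rng.standard_normal((64, 32))
W2 = rng.standard_normal((32, 16))
W3 = rng.standard_normal((16, 4))

x = rng.standard_normal(64)

# Multi-step execution: three matrix multiplications applied in sequence.
multi_step = ((x @ W1) @ W2) @ W3

# Single-step execution: combine the series of steps by multiplication once,
# ahead of time, then apply the one resulting matrix.
W_combined = W1 @ W2 @ W3        # shape (64, 4)
single_step = x @ W_combined

# In the linear case the two executions agree to numerical precision.
assert np.allclose(multi_step, single_step)
```

The savings come from doing the combining once, offline: at run time only one matrix multiplication remains, no matter how many layers were folded in.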
The technique is the culmination of lessons learned from numerical association, submarine detection, and speaker identification in the context of multiple conversations. Hence, the solution presented in the patent is more about applying conceptual ideas to the problem than about directing computational resources at it.
The patent was benchmarked against the LeCun solution from the 1990s. As reported to DARPA at the Innovation House project in 2012, the patent (then still an application) achieved 97% of the LeCun solution’s accuracy while operating 10x faster, which makes sense, since only a single-layer approximation had to be executed.
From Proven Practicality to Potential Implementation
Now that LaRue’s patent has been approved—lending even more credibility to the solution—he will use the machine learning enterprise developed at ICF to promote the UAT one-step solution for implementation alongside present multi-stage implementations. Imagine an unmanned aerial vehicle (UAV) performing on-board computer vision tasks with limited fuel that must be rationed between flight and other complex tasks and computations.
Now imagine a UAT algorithm that consumes one-tenth of the power required for those computer vision tasks. It’s another win-win solution!