DarwinAI unveils next-gen tech for artificial intelligence development
EP&T Magazine
Team to present research findings at NeurIPS as company prepares to release latest explainability platform
DarwinAI, a Waterloo-based startup creating next-gen tech for artificial intelligence development, has announced the next milestone in its product roadmap with the release of its explainability toolkit for network performance diagnostics.
Based on the company’s Generative Synthesis technology, this first iteration of the tool provides granular insights into neural network performance. Specifically, the platform provides a detailed breakdown of how a model performs for specific tasks at the layer or neuron level. This deep understanding of the network’s components and their involvement in specific tasks enables a developer to fine-tune model designs for efficiency and accuracy.
The introduction of explainability comes two months after the company announced its emergence from stealth, its Generative Synthesis platform, and $3 million in seed funding, co-led by Obvious Ventures and iNovia Capital, as well as angels from the Creative Destruction Lab accelerator in Toronto.
“Understanding network efficiencies at such a granular level is how our platform is able to achieve such fantastic optimization results,” said Sheldon Fernandez, CEO of DarwinAI. “With our explainability tool, we are now surfacing this information to our clients, which allows them to better fine-tune their networks for specific tasks. All this is made possible by our patented Generative Synthesis technology, which scrutinizes deep neural networks using AI itself.”
Explainability’s Initial Frontier: Neural Network Performance
Explainability is key in addressing the “black box” problem at the heart of deep learning. Given the tremendous complexity of neural networks (hundreds of layers with millions of parameters), it is virtually impossible for a human to understand how such a network makes a decision and, more generally, what makes a good network. Generative Synthesis, DarwinAI’s core technology and the product of years of academic research, uses AI itself to understand the capacity of each neuron and its impact on network performance. This data yields valuable insights into how developers can improve the neural network for specific tasks.
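DarwinAI has not published the internals of Generative Synthesis, but the general idea of measuring each neuron’s impact on network performance can be illustrated with a simple ablation study: silence one hidden unit at a time and observe how much the output changes. The toy network, weights, and `forward` function below are purely hypothetical, a minimal sketch of the concept rather than DarwinAI’s method.

```python
import numpy as np

# Hypothetical toy network: 4 inputs -> 3 hidden units -> 1 output.
# Weights are fixed at random purely for illustration.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 1))

def forward(x, ablate=None):
    """Run the network, optionally zeroing one hidden unit."""
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    if ablate is not None:
        h = h.copy()
        h[:, ablate] = 0.0               # silence one neuron
    return h @ W2

X = rng.normal(size=(100, 4))            # a batch of sample inputs
baseline = forward(X)

# Importance of each hidden neuron = mean output change when it is removed.
importance = [float(np.abs(forward(X, ablate=j) - baseline).mean())
              for j in range(3)]
print(importance)
```

A neuron whose removal barely moves the output is a candidate for pruning; one whose removal changes the output sharply is critical to the task, which is the kind of insight the article describes surfacing to developers.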
“It is essential for organizations to understand a neural network’s behavior to improve deep learning models, eliminate functional problems and triggers during model development, and ensure a safer, more accurate neural network,” said Mike Leone, senior analyst, Enterprise Strategy Group. “DarwinAI’s initial explainability offering for network performance and the Generative Synthesis platform address these industry pain points and underscore the benefits of leveraging AI to analyze and build AI.”
DarwinAI’s Generative Synthesis technology is premised on the power of human-machine collaboration. Whereas the company’s initial platform focuses on the generation of highly optimized neural networks, in which developers work with AI to rapidly develop efficient deep neural networks, this second release provides crucial insights into how the network can be improved for particular tasks and low-level behaviors under specific conditions.
“Generative Synthesis is a powerful technology that accelerates deep learning and enables it at the edge,” said Drew Grey, CTO of Voyage and a respected thought leader in the autonomous vehicle space. “Moreover, Darwin’s explainability toolkit gives our team key insights into why and how a network behaves the way it does. Such insights allow us to construct more robust and efficient models. The technology also has the potential to reduce the amount of labelled data we need to purchase by pinpointing a model’s inefficiencies in certain areas. Our work as an early adopter of Darwin’s technology has been extremely encouraging.”
DarwinAI’s initial offerings – deep learning design optimization and explainability for network performance – prefigure an ambitious product roadmap.
In the coming months, the company is scheduled to release additional explainability features that will have important implications for the ethical, regulatory and reliability elements of deep learning. These upcoming features can be applied to numerous vertical industries.
One upcoming feature is root-cause analysis, which will allow an engineer to identify the inputs that most influence a particular network decision. Such information can, in turn, highlight problematic correlations, biases in data, and edge and boundary cases, and identify ways to rectify these shortcomings.
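Ranking inputs by their influence on a decision is commonly done with sensitivity analysis. The sketch below illustrates the generic idea with finite differences against a stand-in scoring function; the `score` function and its weights are hypothetical and stand in for any trained model, not for DarwinAI’s actual root-cause tooling.

```python
import numpy as np

# A stand-in "model": any scoring function of the inputs would do here.
rng = np.random.default_rng(1)
W = rng.normal(size=(5,))

def score(x):
    return float(np.tanh(x @ W))

def input_influence(x, eps=1e-4):
    """Finite-difference sensitivity of the score to each input feature."""
    base = score(x)
    grads = np.zeros_like(x)
    for i in range(len(x)):
        bumped = x.copy()
        bumped[i] += eps                 # perturb one feature
        grads[i] = (score(bumped) - base) / eps
    return np.abs(grads)                 # magnitude = influence

x = rng.normal(size=(5,))
influence = input_influence(x)
top = int(np.argmax(influence))          # most influential input feature
print(top, influence)
```

An engineer inspecting the top-ranked features can then check whether the model is relying on a spurious correlation or a biased field in the data, which is the use case the article outlines.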
“Peering into the black box of deep learning is imperative to unlocking its potential in highly regulated industries,” said Professor Alexander Wong, co-founder and chief scientist at DarwinAI. “Our initial offerings provide small glimpses into this box, but we are working towards full illumination.”
Product Availability and Details
DarwinAI will release the first version of its explainability toolkit – enabled by the company’s Generative Synthesis technology – to enterprise customers, beginning with the neural network performance toolkit at the end of this year. The user interface for neural network performance explainability will include new features for profiling and explaining the performance of deep neural networks.