Electronic Products & Technology

DarwinAI explains explainability

EP&T Magazine   


Waterloo team publishes key explainability paper, works to improve industry-wide trust in AI

DarwinAI, a Waterloo, Canada startup creating next-generation technologies for artificial intelligence development, has announced academic research that addresses a key industry question: “How can enterprises trust AI-generated explanations?”

Explainability is central to addressing AI’s ‘black box’ problem: it is all but impossible for a human to follow how a deep neural network arrives at its decisions. To date, explainability methods have seen limited assessment within the still-nascent deep learning field, and most existing evaluations rely on subjective visual interpretation.

The paper, authored by the DarwinAI team, espouses a machine-centric strategy to quantify the performance of explainability algorithms. DarwinAI, which was also recently named a Gartner ‘Cool Vendor,’ is working on a new version of its explainability platform that will offer additional features to bolster enterprises’ understanding of and trust in AI.

“The question of how deep neural networks make decisions has plagued researchers and enterprises alike and is a significant roadblock to the widespread adoption of this particular form of AI,” says Sheldon Fernandez, CEO, DarwinAI. “It is critical that enterprises obtain some understanding of how a neural network reaches its decisions in order to design robust models with a certain level of trust.”


“Explainability in neural networks has been a core concept for deep learning engineers – a necessary and crucial goal for our industry,” adds Drew Gray, CTO of Voyage, an autonomous driving company working with DarwinAI. “With this research, the DarwinAI team has introduced concrete performance metrics for explainability that also highlight the benefits of their own approach. Their toolset takes the concept to the next level by translating explainable insights into recommendations for both model design and dataset augmentation. The latter is particularly exciting for us.”

DarwinAI Research Momentum 

DarwinAI’s paper, “Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms,” explores a machine-centric strategy for quantifying the performance of explainability methods on deep neural networks.

Essentially, the team subjected a deep neural network to a kind of psychology test: the explanatory variables identified by an explainability algorithm are removed from the input, and the network re-evaluates the modified input, revealing how strongly its decision actually depended on those variables and, in turn, the efficacy of the algorithm. The team conducted a comprehensive analysis using this approach on several state-of-the-art explainability methods, including LIME, SHAP, Expected Gradients and GSInquire, their own proprietary technique.
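The general idea of such a machine-centric test can be illustrated with a short sketch. The snippet below is not DarwinAI’s implementation; it is a minimal, hypothetical example (the `model`, `image` and `explanation_mask` inputs are assumptions) showing how one might erase the input regions an explainability method flags as critical and measure how much the network’s confidence drops. A larger drop suggests the explanation better reflects what the model actually relied on.

```python
# Minimal sketch of a machine-centric check for an explainability method.
# Hypothetical inputs: `model` is any trained classifier, `image` is a
# (1, C, H, W) tensor, and `explanation_mask` is an (H, W) tensor in [0, 1]
# produced by an explainability method (e.g. LIME, SHAP or a gradient method).
import torch


def decision_impact(model, image, explanation_mask, threshold=0.5):
    model.eval()
    with torch.no_grad():
        # Original prediction and confidence.
        probs = torch.softmax(model(image), dim=1)
        confidence, label = probs.max(dim=1)

        # Remove (zero out) the pixels the explanation marks as critical.
        keep = (explanation_mask < threshold).to(image.dtype)  # 1 = keep, 0 = erase
        perturbed = image * keep

        # Re-evaluate: how much confidence in the original label is lost?
        new_probs = torch.softmax(model(perturbed), dim=1)
        new_confidence = new_probs[0, label]

    # A larger drop implies the explanation points at variables the
    # network genuinely used for its decision.
    return (confidence - new_confidence).item()
```

Scores of this kind can then be averaged over a dataset to compare explainability algorithms quantitatively rather than by visual inspection.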

In one study, the team introduced YOLO Nano, a highly compact deep convolutional neural network designed for embedded object detection on edge and mobile devices. The model was generated with the company’s Generative Synthesis platform, which uses AI itself to optimize neural networks and reduce their computational requirements. The technology is also complementary to hardware toolkits that improve performance on specific chipsets: for example, engineers were able to dramatically accelerate the inference performance of models produced by Generative Synthesis by leveraging the Intel Deep Learning Boost technology embedded in 2nd Gen Intel Xeon Scalable processors, a combination that constitutes a powerful offering for deep learning practitioners.
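For context, Intel Deep Learning Boost accelerates low-precision (int8) arithmetic through VNNI instructions. The sketch below is not Generative Synthesis; it is a generic, hypothetical example using PyTorch’s post-training dynamic quantization to show the kind of int8 model reduction that such hardware support makes fast on supporting Xeon processors.

```python
# Generic illustration only -- not DarwinAI's Generative Synthesis.
# Post-training dynamic quantization converts weights to int8, the precision
# that Intel DL Boost (VNNI) accelerates on 2nd Gen Intel Xeon Scalable CPUs.
import torch
import torch.nn as nn

# A small stand-in model (assumption: any model with Linear layers works here).
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# The quantized model is a drop-in replacement for CPU inference.
with torch.no_grad():
    out = quantized(torch.randn(1, 1, 28, 28))
print(out.shape)  # torch.Size([1, 10])
```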

“Intel and DarwinAI frequently work together to optimize and accelerate artificial intelligence performance on a variety of Intel hardware,” said Wei Li, vice president and general manager of Machine Learning Performance at Intel. “But in addition to performance, we are very supportive of their research into algorithm transparency and explainability, which will help make AI deployments fair, auditable and ethical.”

About DarwinAI

Founded by renowned academics at the University of Waterloo, DarwinAI builds Generative Synthesis technology as the next evolution in AI development, demystifying deep neural networks and their opaque decision-making. Based on years of distinguished scholarship, the company’s patented AI-assisted platform enables deep learning design, optimization and explainability, with a special emphasis on enabling AI at the edge, where computational and energy resources are limited.

 
