Electronic Products & Technology

NuEnergy.ai leads on the Governance of AI

Artificial Intelligence is powerful. Trust in AI is critical.

October 22, 2020  EP&T Magazine

The rapid progress of artificial intelligence (AI) technologies over the last 5-10 years has led many to wonder where it could ultimately take us. From Siri and self-driving cars to facial recognition, AI is advancing rapidly. This revolutionary technology offers numerous opportunities to advance most existing tech platforms; however, with regulation still lagging, achieving its ethical use looms as a significant challenge as AI matures.

This is where NuEnergy.ai comes in. The Ottawa-based AI management software and professional services firm partners with governments, technical experts, industry and entrepreneurs to provide solutions that help make AI trustworthy for clients and for global society.

Niraj Bhargava is co-founder and CEO of NuEnergy.ai, a Canadian artificial intelligence management software and professional services firm.

EP&T recently sat down (virtually) with Niraj Bhargava, co-founder and CEO of NuEnergy.ai, and asked him to delve into issues of governance, ethics and trust. A Canadian leader in AI governance, NuEnergy.ai is the only company on the Government of Canada's AI Source List whose sole focus is the governance of AI. It has also teamed with the federal government on ethical AI guardrails.

With more than 30 years of technology, business creation and leadership experience under his belt, Bhargava has a unique profile spanning entrepreneurship, corporate business expertise and academia. He holds a degree in systems design engineering from the University of Waterloo and an MBA from the Ivey School of Business. He has served as a university professor and dean, as well as CEO and founder of technology companies focused on energy demand management for the smart grid and on deep neural network AI/ML technology for high-accuracy voice recognition.

At NuEnergy.ai Bhargava leads a team of expert associates who work with clients in defining ethical and cutting-edge AI-enabled solutions and with innovators in the creation, launch and scale up of AI trust measurement techniques. Here is what he had to share.

AI governance is about AI being explainable, transparent, and ethical.

How do you ensure those qualities are built into the AI solutions that NuEnergy.ai is supporting for its clients?

While NuEnergy.ai is qualified to develop algorithms, we have decided to focus our efforts on the governance of AI. We act as a third-party partner to make sure AI ‘guardrails’ are in place for organizations developing, procuring or deploying AI. We do this through education, co-creation of organization-specific frameworks, and monitoring software via our Machine Trust Platform.

There are many use cases that illustrate why AI needs governance. One is facial recognition technology. While AI that recognizes faces can be helpful in preventing violent crimes, for example, some of these technologies have recently been banned in Canada. One concern is that such technologies can be biased based on skin colour – a misstep that has cost organizations and citizens, and could have been avoided.

A governance policy needs to exist and interact with the world around it and within the organization it is intended to serve.

Who or what bodies should be directly involved with the development of such a governance policy? Who is accountable?

Every organization has a governance body that is accountable – often that is the Board, and the Board may assign these accountabilities to senior management. While legislation is often required, it typically lags behind technology innovations like AI. In 2020, governance bodies need to govern not only people, but also machines. AI now has the ability to influence and augment decisions, as well as to continuously learn and perhaps make its own decisions.

It is the responsibility of any governing body to oversee its AI.

NuEnergy.ai describes itself as having a passion for contributing to building an AI-enabled world that everyone can trust.

As an end-consumer, how do I know I can trust your AI solutions?

We provide transparent measurements. NuEnergy.ai advocates measuring against ethical and trust questions that are developed with our clients. We facilitate the framework and the measurement using the best-known techniques, provided in a way that is auditable and transparent.

As AI becomes increasingly prevalent in more and more tech devices, how do consumers safeguard themselves from ‘bad’ AI?

Be demanding of your suppliers and selective about those you trust. Look for transparency and informed consent.

How can those that create electronic devices, with AI incorporated into the designs, ensure that it will be developed and marketed ethically?

NuEnergy.ai and others are helping develop not only standards but also common measures and Trust Index questions. NuEnergy.ai can provide a transparent trust index to measure and monitor the AI. I would add that electronic sensors and the Internet of Things often yield devices that gather data without users being fully informed about how that data is used. Transparency toward electronics customers will become more important in safeguarding those who create electronic devices.

How big a role does training and education of AI use play in its acceptance and eventual global adoption?

Very important. There are high levels of interest in our NuEnergy.ai AI Governance Education Series. In 2020 and beyond, no one should develop or deploy AI without education in, and the practice of, ethical checks and balances.

We most recently delivered a novel program to Transport Canada where over 50 senior leaders were educated on the need for governance and how to create an ethical roadmap for the organization as they transport people and goods across the country.

Describe how NuEnergy.ai is helping the world “put on ethical guardrails” amidst the rapid change that AI is bringing about.

  • Custom education;
  • Organization specific AI governance frameworks;
  • Configurable measurement software. If you don’t measure AI, you cannot manage AI.

More than ever, organizations are using artificial intelligence to customize and prioritize their business and commercial actions. While the public and the average consumer are getting used to the idea of AI affecting their everyday transactions, AI is also capturing broad public attention. The latest developments in facial recognition technology for policing, contact tracing tech (COVID-19), virtual health care, the use of AI in the financial sector (i.e. to administer loan applications) and autonomous vehicles are front and centre today.

However, even obvious ethical failures in AI have not fully entrenched the need to govern AI from the perspective of the developer or deployer of such technologies. So whose problem is this anyway?

The debate on whose responsibility it is to worry about the risks of AI is taking place in many boardrooms, and among many board directors. Do we need strong regulations from our governments to manage the risks of AI? Or do we allow manufacturers and creators of AI to self-govern in the areas of transparency, bias, explainability and safety? How does the average company deploying innovative tech/AI tackle such a problem? Who is accountable for the impact of rogue or opaque AI that creates societal risks, or even causes harm to citizens?

The task of governing AI can seem daunting and requires unique frameworks and resourcing. Unfortunately, most companies wait for a crisis to get this right, and often risk losing their reputation and public trust as a result of ethical failures in AI. A recent example has turned facial recognition technology upside down: the Government of Canada recently stopped the use of Clearview AI's facial recognition technology, with giant tech companies (IBM, Amazon and Microsoft) halting development of such tools altogether.

Governments need to start playing a pivotal role in AI governance. Canada and a handful of other countries have led this effort; most notably, Canada has introduced the Algorithmic Impact Assessment (AIA) tool, which allows deployers of AI to assess and mitigate the risks associated with deploying an automated decision system.

But, AI can in fact be trustworthy. A framework that is co-created by the developer/deployer and a third-party company can be the solution that is unbiased and allows guardrails around new innovations that can co-exist with society without endangering lives or damaging trust of the public. As a pioneer in the area of AI Governance, NuEnergy.ai, an Ottawa start-up in the business of AI governance, does exactly this.

“If you can’t measure the trust level of AI, you can’t manage it.”

—————————-

Niraj Bhargava is co-founder and CEO of NuEnergy.ai, a Canadian artificial intelligence management software and professional services firm that partners with governments, technical experts, industry and entrepreneurs in providing solutions that help make AI trustworthy for clients and for global society.      https://www.nuenergy.ai

