How Explainable AI Can Benefit Your Business

Artificial Intelligence (AI) has taken centre stage during COVID-19, supplementing the work of scientific and medical experts in fighting this pandemic. There are many global examples of AI technologies solving problems across all stages of this crisis. An Australian-developed AI diagnostic tool, for example, is helping hospital staff around the world accurately detect COVID-19 and assist in its containment. An AI-powered research database developed in the US is enabling scientists to discover coronavirus vaccine and treatment literature resources at unprecedented speed. In the UK, University of Cambridge researchers are using AI to analyse patient information in order to predict the risk of COVID-19 patients developing more severe disease and needing respirator support. 

So too, AI is expected to play a vital role in post-pandemic recovery as businesses seek to advance their efficiency and adaptability. In Australia’s lending sector, AI has already begun to replace many steps in the credit assessment and decisioning process. With these changes come a range of challenges for lenders. The biggest of these is accountability – making sure that decisions made by AI are fair, ethical and unbiased.

This accountability is hard to establish if AI technology is allowed to become a “black box”, whereby information goes in and a decision comes out without any detail on what informed that decision. The ability to explain how AI systems assess data and form decisions is critical to overcoming this challenge. Opening the black box and shedding light on how these decisions are made is what Explainable AI (xAI) is all about.
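
To make the contrast concrete, the sketch below shows what “opening the black box” can look like in practice: a toy scoring function that returns not just a decision but the factors that drove it. Everything here – the feature names, weights and threshold – is hypothetical and invented purely for illustration; it is not an Equifax model.

```python
# A toy, fully transparent credit score: every input's contribution to the
# final score is visible, so a decline can be explained in plain terms.
# All names, weights and thresholds are hypothetical, for illustration only.

FEATURE_WEIGHTS = {
    "on_time_payment_rate": 40.0,   # higher is better
    "credit_utilisation": -25.0,    # higher utilisation lowers the score
    "recent_defaults": -35.0,       # defaults lower the score
}
BASE_SCORE = 50.0
APPROVAL_THRESHOLD = 60.0

def score_with_reasons(applicant: dict) -> tuple[float, list[str]]:
    """Return a score plus the features that counted against the applicant."""
    contributions = {name: w * applicant[name]
                     for name, w in FEATURE_WEIGHTS.items()}
    score = BASE_SCORE + sum(contributions.values())
    # Negative contributions, worst first: these become the "reasons"
    # a lender could disclose alongside a decline.
    reasons = sorted((n for n, c in contributions.items() if c < 0),
                     key=lambda n: contributions[n])
    return score, reasons

score, reasons = score_with_reasons(
    {"on_time_payment_rate": 0.6, "credit_utilisation": 0.9, "recent_defaults": 1.0})
decision = "approve" if score >= APPROVAL_THRESHOLD else "decline"
print(f"score={score:.1f} decision={decision} key_factors={reasons}")
# score=16.5 decision=decline key_factors=['recent_defaults', 'credit_utilisation']
```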

What is required for AI to be explainable?

Explainability in AI is a hot topic, and something industry, regulatory and government groups are focusing their attention on. The Department of Industry, Innovation and Science has recently established ethical principles for AI development and deployment and is developing guidance to help organisations put these into practice. The principles cover a range of issues, including privacy protection and security, fairness, human-centred values and explainability. For AI to be explainable, the technology must be able to disclose the following information promptly:  

  • For users – what the system is doing and why
  • For creators and operators – the processes and input data
  • For accident investigators – why accidents occur
  • For regulators – the detail needed to conduct a deep dive into the system
  • For legal professionals – to inform evidence and decision-making
  • For the public – to build confidence in the technology

Why is explainability so important?

Algorithms are deciding more about our lives than ever before. More than just enabling credit assessments, they’re making friend suggestions for us on Facebook and deciding what we might like to watch on Netflix. From legal systems to employment and medicine, there are few parts of our lives untouched by some level of automated decisions. 

Whether made by a human or by AI, any decision can be unfair or wrong. What we’re working towards with automation is building ways to identify and address these issues, and the starting point is to make these decisions explainable. As we build explainable decision systems, we can better understand the basis for a determination and, in doing so, develop processes to make those decisions fairer, more accurate and more transparent.

There is also a human dimension: we are people having decisions made about us by processes we can’t see. Decisions made by humans can be biased and flawed too, yet we’re more comfortable with them because someone, rather than something, is accountable. Automating decisions is part of the broader digital transformation taking place right now, and people rightly ask: with more and more machines making more and more decisions, how do we ensure accountability for those decisions? Explainability is therefore an integral part of building comfort in this new world.

When can businesses start seeing the benefits of explainable AI?

Equifax business customers around the globe are already benefiting from explainable AI, thanks to the introduction of NeuroDecision® Technology (NDT). Developed by the Equifax Data & Analytics Lab, this ground-breaking patented service is already being deployed in sectors as diverse as auto, mortgage, utility, credit card, wireless, insurance, and commercial and consumer banking.

With explainability goals built in at the design stage, NeuroDecision® Technology enables the development of high-performing, explainable neural network models. This game-changing ability to justify highly accurate machine predictions empowers businesses to make informed decisions when assessing risk.
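
As a general illustration of how an explainable score can be justified (a sketch of one common, generic industry technique, not a description of NeuroDecision® Technology’s internals), reason codes can be derived by comparing an applicant’s score with the score they would receive if each input took its most favourable value – the model, feature names and values below are all hypothetical:

```python
# Generic "reason code" extraction for any scoring model: measure how many
# points an applicant loses on each feature relative to that feature's most
# favourable value, then report the biggest shortfalls. Shown for
# illustration only; this is not NDT's internal method.

from typing import Callable

def reason_codes(model: Callable[[dict], float], applicant: dict,
                 best_values: dict, top_n: int = 2) -> list[str]:
    base = model(applicant)
    shortfalls = {}
    for feature, best in best_values.items():
        improved = dict(applicant, **{feature: best})
        shortfalls[feature] = model(improved) - base  # points lost to this feature
    return sorted(shortfalls, key=shortfalls.get, reverse=True)[:top_n]

# Hypothetical model and applicant, for demonstration only:
model = lambda a: 700 - 80 * a["utilisation"] - 120 * a["missed_payments"]
applicant = {"utilisation": 0.9, "missed_payments": 2}
best = {"utilisation": 0.0, "missed_payments": 0}
print(reason_codes(model, applicant, best))  # ['missed_payments', 'utilisation']
```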

NeuroDecision® Technology is proving its value in helping Equifax clients reliably predict the future behaviour of specific customers. Highly adaptable, it applies to any new or existing problem where high-performance, explainable models are critical, helping you get the most out of your modelling.

We are using NeuroDecision® Technology right now in the custom and configurable solutions we build for our customers on our Ignite® platform.

To see the power of Ignite®, request a demo or contact your Equifax Account Manager to find out how our AI-enabled technology can help you serve your customers and grow your business.

