INTRODUCTION: RE-BOOTING THE FINANCIAL SERVICES SECTOR

Technology continues to drive transformation, shifting manual processes to automated solutions, and the ongoing evolution towards digitised lending will bring greater accuracy and deeper insights to credit risk.

Artificial intelligence complements a suite of evolving credit risk tools (comprehensive credit data, repayment history, income and employment data, and bank transaction data) that assist a lender to meet the key responsible lending criteria sitting at the heart of the National Consumer Credit Protection Act 2009:

1. make reasonable inquiries about the consumer’s requirements and objectives

2. make reasonable inquiries about the consumer’s financial situation

3. take reasonable steps to verify the consumer’s financial situation.

These three criteria have been the focus of the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry; the Australian Securities and Investments Commission’s (ASIC) review of Regulatory Guide 209 Credit licensing: Responsible lending; and the Australian Prudential Regulation Authority’s (APRA) proposed Prudential Standard APS 220 Credit Risk Management.

No single tool will meet all compliance needs, but taken together these tools can narrow the compliance gap.

The digitised lending experience continues to evolve as consumers change the way they engage with credit providers and manage their financial wellbeing. While lenders with legacy systems will struggle to keep up with the ever-changing digital landscape, more nimble lenders are taking advantage of the opportunities that evolving intelligence brings.

1. “Make reasonable inquiries about the consumer's requirements and objectives”

Many credit providers are focused on improving the way they verify the consumer’s financial situation, but the first step in the assessment is to make reasonable inquiries about the consumer’s requirements and objectives. This involves capturing a detailed explanation of the purpose of the loan, then recording this information as part of the application.

To comply with this obligation, lenders are faced with two challenges.

Capturing the consumer’s story

The first challenge is establishing processes to extract information from consumers about why they are applying for credit. General descriptions such as ‘personal expenses’ or ‘business investments’ are no longer enough to comply with responsible lending regulations. A checklist or questionnaire may be convenient, but best practice is likely to involve:

  • asking the consumer to provide a concise narrative of their requirements and objectives;
  • paraphrasing the consumer’s objectives back to them in relation to the features of the product they’ve selected.

Understanding the consumer’s story

The second challenge for lenders is what to do with the consumer’s story. Responsible lending obligations do not end with simply recording this information. Credit providers need to connect the dots between the purpose of the loan and the consumer’s financial situation. With that understood, lenders can determine a level of credit appropriate for an individual.

As humans, we can read a free-text field and quickly understand what level of credit is reasonable for an individual in a particular situation. But this kind of manual process is neither scalable nor capable of generating broader insights and learnings.

Connecting the dots at scale

Having a machine absorb, interpret and make decisions based on free-text information is significantly more complex, but Artificial Intelligence (AI) and machine learning are making enormous leaps through Natural Language Processing (NLP), which can help credit providers connect the dots at scale by:

  • automating the process of reading and interpreting consumers’ requirements and objectives
  • using smart analytics and predictive attributes to detect red flags and inconsistencies (see the sketch below).
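To make the idea concrete, the following is a minimal, illustrative sketch (in Python) of how free-text loan-purpose statements might be interpreted automatically and flagged for follow-up. The categories, training examples, generic phrases and thresholds are assumptions for illustration only, not a production model.

```python
# A minimal sketch of classifying free-text loan-purpose statements and screening
# them for red flags. The categories, training examples and thresholds are
# illustrative only; a production system would use far richer data and models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples of consumer-stated requirements and objectives.
TRAINING_TEXTS = [
    "buy a second-hand car to get to work",
    "consolidate three credit cards into one repayment",
    "renovate the kitchen and bathroom",
    "pay for a family holiday overseas",
    "cover day-to-day personal expenses",
]
TRAINING_LABELS = ["vehicle", "debt_consolidation", "home_improvement", "travel", "generic"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(TRAINING_TEXTS, TRAINING_LABELS)

# Phrases too generic to satisfy the 'requirements and objectives' inquiry on their own.
GENERIC_PHRASES = {"personal expenses", "business investments", "general purposes"}

def interpret_purpose(statement: str) -> dict:
    """Classify a stated loan purpose and flag responses too vague to rely on."""
    predicted = classifier.predict([statement])[0]
    confidence = float(classifier.predict_proba([statement]).max())
    too_generic = any(phrase in statement.lower() for phrase in GENERIC_PHRASES)
    return {
        "category": predicted,
        "confidence": round(confidence, 2),
        "needs_follow_up": too_generic or confidence < 0.4,  # ask for more detail
    }

print(interpret_purpose("I want to consolidate my credit card debts"))
```

In practice, a lender would train on much larger labelled datasets and route low-confidence or generic responses to a human for further inquiry.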

The fast-evolving nature of the technology means that, in the near future, NLP techniques may also be used to:

  • intelligently compare the consumer’s objectives to the product features (see the sketch after this list)
  • paraphrase the consumer’s objectives back to them, to confirm they are accurate
  • educate the consumer about more complex product features where appropriate
  • identify applications that require additional actions or manual intervention
  • make a formal record of each step in the process, for compliance purposes
  • identify machine-generated explanations that may indicate a fraudulent application
  • generate an understandable explanation of the assessment in the event of a dispute.
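As a simple illustration of the first item, one possible (hypothetical) approach is to score the textual overlap between the consumer’s stated objectives and the selected product’s feature description, and refer low-scoring applications for confirmation. The product descriptions and threshold below are invented for the example.

```python
# A rough sketch of an "objectives vs product features" check: score the overlap
# between a consumer's stated objectives and the description of the product they
# selected, and route low-scoring applications for manual review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative product feature descriptions; not an actual product catalogue.
PRODUCT_FEATURES = {
    "fixed_rate_personal_loan": "fixed repayments over a set term for a one-off purchase",
    "line_of_credit": "flexible ongoing access to funds up to an approved limit",
    "car_loan": "secured loan for purchasing a new or used vehicle",
}

def objective_product_alignment(objective: str, product_id: str) -> dict:
    """Return a similarity score between stated objectives and product features."""
    description = PRODUCT_FEATURES[product_id]
    vectoriser = TfidfVectorizer().fit([objective, description])
    vectors = vectoriser.transform([objective, description])
    score = float(cosine_similarity(vectors[0], vectors[1])[0][0])
    return {
        "product": product_id,
        "alignment_score": round(score, 2),
        "refer_for_review": score < 0.1,  # likely mismatch; confirm with the consumer
    }

print(objective_product_alignment(
    "I need a loan to buy a used car for commuting", "car_loan"))
```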

More remains to be done, particularly around the challenge of enabling AI to detect and assess more nuanced aspects, such as:

  • Is there a mismatch between the consumer’s objectives and the features of the product they’ve chosen?
  • Is the consumer confused or unsure about their objectives?
  • Does the consumer have a limited capacity to understand the contract?

2. “Make reasonable inquiries about the consumer’s financial situation”

Over the past 18 months there has been close scrutiny of the Melbourne Institute's Household Expenditure Measure (HEM) benchmark, a tool based on ABS data that estimates household spending using the median spend on critical basics such as food and on common discretionary basics such as takeaway food, alcohol, childcare and entertainment.

While HEM continues to be scrutinised, the broader question of how benchmarks fit within a larger, more sophisticated AI decision flow offers new opportunity. ASIC’s regulatory guidance notes that ‘[benchmarks] are not a replacement to making enquiries...nor a replacement for an assessment based on...verified income and expenses’, while the responsible lending guidance proposed by ASIC also suggests that ‘benchmarks can be useful as a tool to test the plausibility of consumer-provided information, but do not give a positive confirmation’.

Essentially, it’s clear that lenders need to move from linear ‘yes/no’ decision rules built on population-level benchmarks such as HEM to a more nuanced and consumer-centred decision flow.
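To illustrate the distinction, below is a small, hypothetical sketch of a benchmark used as a plausibility test that triggers further inquiry, rather than as a yes/no substitute for the consumer’s actual expenses. The household categories, benchmark figures and the 80% tolerance are placeholders, not actual HEM values.

```python
# A minimal sketch of using a HEM-style benchmark as a plausibility test rather than
# a substitute for inquiries. The figures below are placeholders, not real HEM values.
ILLUSTRATIVE_MONTHLY_BENCHMARK = {
    ("single", 0): 1450,
    ("single", 1): 2100,
    ("couple", 0): 2500,
    ("couple", 2): 3400,
}

def plausibility_check(household: tuple, declared_monthly_expenses: float) -> str:
    """Compare declared expenses with a benchmark and decide on follow-up, not outcome."""
    benchmark = ILLUSTRATIVE_MONTHLY_BENCHMARK.get(household)
    if benchmark is None:
        return "no_benchmark_available: rely on verified expense data"
    if declared_monthly_expenses < 0.8 * benchmark:
        # Declared figure looks implausibly low: ask further questions and verify,
        # rather than substituting the benchmark for the consumer's actual expenses.
        return "implausibly_low: make further inquiries and verify expenses"
    return "plausible: continue assessment using declared and verified figures"

print(plausibility_check(("couple", 2), 2400))
```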

Rethinking decision flows

To do this at scale requires AI solutions that consider a broad range of data sources and follow many different lines of enquiry before making a decision. As each line of enquiry will be specific to the consumer, the AI will need to communicate with them to request additional information or explanations, ensuring there are no unreasonable gaps or red flags in their financial records.

For example, it may be helpful if the application process is a cascading series of engagement points that differs depending on the information provided at each stage of the process. Whether this is achievable will depend on whether an AI can be created that doesn’t follow linear decision waterfalls or static benchmark triggers, but instead makes decisions that are both fair and explainable.
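A highly simplified sketch of such a cascading flow is shown below: the next line of inquiry depends on what is already known about the applicant, rather than on a fixed waterfall. The field names, rules and thresholds are illustrative assumptions only.

```python
# A simplified sketch of a cascading, consumer-specific inquiry flow: instead of a
# fixed decision waterfall, the next question depends on what is already known.
from typing import Optional

def next_inquiry(application: dict) -> Optional[str]:
    """Return the next line of inquiry for this applicant, or None when complete."""
    if "loan_purpose" not in application:
        return "Ask the consumer to describe their requirements and objectives."
    if "declared_expenses" not in application:
        return "Ask for the consumer's regular living expenses."
    if application["declared_expenses"] < application.get("benchmark_expenses", 0):
        return "Declared expenses look low for this household: ask about specific categories."
    if application.get("employment_type") == "casual" and "income_history" not in application:
        return "Casual employment: request a longer income history before verification."
    return None  # no unreasonable gaps or red flags remain; proceed to assessment

# Example: a partially completed application triggers a tailored follow-up question.
application = {"loan_purpose": "car purchase",
               "declared_expenses": 1800, "benchmark_expenses": 2400}
print(next_inquiry(application))
```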

These are not small problems, particularly where lenders rely on AI alone.

3. “Take reasonable steps to verify the consumer’s financial situation”

The need for verification arises because of information asymmetry—only the consumer knows their true situation in detail and the credit provider can only surmise and infer. Verifying the consumer’s financial situation is therefore important but can quickly devolve into questions about the veracity of available data sources, unobservable data points and the fairness of automated decision-making processes.

Balancing regulations and privacy

To address these challenges, lenders need to go well beyond checking a consumer’s income and expenses. The existing regulatory guidelines state that some applications may require verification of:

  • employment circumstances (full time, part time or casual) or any foreseeable change in circumstances, such as impending retirement
  • the nature of any discretionary expenses
  • credit history, including the use of small credit contracts and credit recycling
  • the consumer’s age and number of dependants
  • the consumer’s assets, including their nature and value.

This desire to solve the problem of information asymmetry must be carefully balanced with:

  • protecting consumers’ privacy
  • detecting bias in automated decision-making processes
  • staying in front of fraudulent actors who quickly learn and adapt to new processes
  • minimising the costs of processing applications
  • delivering a positive customer experience across all channels.

Balancing these obligations will not be achieved by simply using a different method to obtain the data. It must start with assessing the four ‘V’s of data (volume, variety, velocity and veracity) and understanding how the data—and the analytics and AI driven by the data—balance the needs of credit providers and consumers. At this point, the solution appears to be a combination of comprehensive credit data, repayment history, income and employment data, and bank transaction data—potentially available under the proposed Consumer Data Right (CDR) framework.

There are three compelling advantages of credit data over other data sources:

  • It’s independent and not limited to what the consumer chooses to disclose, solving the problem of information asymmetry.
  • There are well-established, reliable mechanisms to obtain the data quickly and at scale.
  • There are strong privacy safeguards and controls over the use of data.

Contrast that with transaction data from a consumer’s bank account. Regardless of what technical mechanism is used to obtain the data, there remain two fundamental challenges:

  • It requires a consumer’s consent and so doesn’t solve the problem of information asymmetry.
  • The credit provider may obtain more information than strictly necessary to verify the consumer’s financial situation.

How much data is too much?

This brings us to a key challenge for credit providers when verifying the consumer’s financial situation: how much data is too much? ASIC and APRA’s proposed approach starts with an ‘if not, why not’ attitude, so if the information is readily available, then credit providers may need to justify why they didn’t use that information. However, there is a difference between data that is:

  • readily available (such as via standardised APIs)
  • useful in meeting the competing objectives of collecting the data
  • necessary to balance the risks and benefits for consumers.

This raises issues well beyond protecting consumers’ privacy. Obtaining data beyond what is strictly necessary for verification can place more responsibility on credit providers, effectively shifting their responsible lending obligations from avoiding harm to making decisions based on a biased judgement of the consumer’s lifestyle and behaviours.

Under Open Banking, credit providers will soon be absorbing more consumer financial information than ever before. Whether or not this information is used in the decision-making process is beside the point: the lender will still be liable under responsible lending guidelines because they have accessed the data.

Balancing the need for verification with ethical and unbiased collection methods requires a three-pronged solution that:

  • provides access to the right type of data
  • uses fair algorithms that offer explainable outcomes (see the sketch after this list)
  • enables human intervention where appropriate.
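One hypothetical way to support the second and third prongs is to attach plain-language reason codes and a human-review flag to every automated outcome, as in the sketch below. The field names, reasons and rules are illustrative only, not a prescribed format.

```python
# A minimal sketch of making automated outcomes explainable and reviewable: every
# decision carries plain-language reason codes and a flag for human intervention.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecisionRecord:
    application_id: str
    outcome: str                      # e.g. "approve", "decline", "refer"
    reason_codes: List[str] = field(default_factory=list)
    requires_human_review: bool = False

def assess(application_id: str, verified_surplus: float, data_gaps: List[str]) -> DecisionRecord:
    """Produce an explainable outcome; refer to a human where data is incomplete."""
    if data_gaps:
        return DecisionRecord(application_id, "refer",
                              [f"unverified item: {gap}" for gap in data_gaps], True)
    if verified_surplus <= 0:
        return DecisionRecord(application_id, "decline",
                              ["verified expenses exceed verified income"])
    return DecisionRecord(application_id, "approve",
                          ["verified surplus supports the requested repayments"])

print(assess("APP-1042", verified_surplus=650.0, data_gaps=[]))
```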

Detecting application fraud

There is also a link between the verification of a consumer’s financial situation and the detection of application fraud. This is most prevalent among applications by third parties such as mortgage brokers, who may deliberately or mistakenly submit fraudulent or misleading applications.

By using an independent, credible source to verify information (such as verifying income with an employer), credit providers may be able to meet their responsible lending obligations. However, this doesn’t necessarily help the lender identify doubtful applications that warrant additional steps to detect potential fraud.

By the time a credit provider identifies a trend in fraudulent applications, the fraudsters have moved on to the next green field. Detecting fraud at the point of application requires predictive analytics driven by the whole industry. Fraud forums and industry collaborations do exist, but they need to take a step forward so the focus expands from application channel to application location, at a level granular enough to detect fraud without compromising privacy.
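As a simple illustration of a point-of-application check along these lines, the sketch below counts recent applications per coarse location and channel bucket and flags unusual spikes for closer review. The bucket definitions, time window and threshold are assumptions for the example.

```python
# A rough sketch of a point-of-application velocity check: count recent applications
# per coarse location bucket (e.g. postcode) and channel, and flag unusual spikes.
from collections import Counter
from datetime import datetime, timedelta

def velocity_flags(applications: list, window_hours: int = 24, threshold: int = 5) -> set:
    """Return (postcode, channel) buckets with unusually many recent applications."""
    cutoff = datetime.now() - timedelta(hours=window_hours)
    recent = [(a["postcode"], a["channel"])
              for a in applications if a["submitted_at"] >= cutoff]
    counts = Counter(recent)
    return {bucket for bucket, count in counts.items() if count >= threshold}

# Example: six broker-channel applications from the same postcode within a day.
applications = [
    {"postcode": "3000", "channel": "broker", "submitted_at": datetime.now()}
    for _ in range(6)
]
print(velocity_flags(applications))   # {('3000', 'broker')}
```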

Capturing, structuring and interpreting data to verify a consumer’s financial situation is more than a question of technology. No single data source will be superior in all cases and across all dimensions. To maintain compliance, avoid bias and detect potential fraud, credit providers may need to consider implementing ‘privacy screens’ which offer a physical, legal and ethical barrier between the data recipient and the decision maker.

Where to next?

Rebuilding confidence and ensuring compliance in a rapidly evolving digital landscape will require ongoing and incremental change. Credit providers must start planning now for the shift towards an AI-powered, digitised, seamless and personalised environment that will fundamentally transform banking as we know it.

A starting point for credit providers may be a review of existing portfolios for indicators of financial stress or hardship. This will require a comprehensive analysis of representative data using industry-adapted analytics. Early detection today may help the consumer avoid serious harm to their financial wellbeing and highlight the processes and policies that credit providers should tackle first.
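By way of illustration, the sketch below scans accounts in an existing portfolio for a few common indicators of financial stress. The indicators and thresholds are assumptions for the example, not an industry-standard definition of hardship.

```python
# An illustrative sketch of scanning an existing portfolio for early indicators of
# financial stress; the indicators and thresholds below are assumptions only.
def stress_indicators(account: dict) -> list:
    """Return a list of stress indicators found on a single account."""
    indicators = []
    missed = account.get("missed_payments_last_6m", 0)
    if missed >= 2:
        indicators.append(f"{missed} missed payments in the last 6 months")
    if account.get("credit_utilisation", 0.0) > 0.9:
        indicators.append("credit utilisation above 90% of the limit")
    if account.get("minimum_payments_only_months", 0) >= 3:
        indicators.append("minimum repayments only for 3+ consecutive months")
    return indicators

# Example portfolio scan: print accounts with at least one indicator of stress.
portfolio = [
    {"account_id": "A-001", "missed_payments_last_6m": 3, "credit_utilisation": 0.95},
    {"account_id": "A-002", "missed_payments_last_6m": 0, "credit_utilisation": 0.40},
]
for account in portfolio:
    flags = stress_indicators(account)
    if flags:
        print(account["account_id"], flags)
```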

It’s likely that the short-term scrutiny of responsible lending in the media will soon evolve into a deeper reflection on the role of technology in decision making. The next whitepaper in this series will delve further into the shift from linear to interactive decisioning, how this might work operationally for lenders and how it might look for consumers experientially.

Find out more

If you would like to discuss these topics in more detail, please email Kevin James.
