The draft EU Artificial Intelligence Act, when in force, will classify AI systems that evaluate the creditworthiness of people or establish their credit score as ‘high risk’ and will require fintech companies in Ireland and across the EU to comply with the regulation’s requirements for high-risk AI systems.
Traditional credit-checking methodologies have tended to rely on a ‘check the box’ approach to assessing a person’s creditworthiness, based primarily on the subject’s assets, earnings and prior debt-servicing behaviour. Advances in AI mean that technology can now provide a much more nuanced and accurate assessment of the credit risk of individuals and companies, particularly with regard to income forecasting and spending behaviour, drawing (at least in theory) on data as granular as a person’s browsing history or Google footprint.
While this gives lenders greater certainty and speed, and has the potential to free up funding opportunities that benefit both lenders and borrowers, the technology carries inherent risks both for those who use it and for those who are subjected to it.
After its introduction in April 2021, the Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (the AI Act) was discussed for the eighth time in the Council of the European Union, and several amendments to it were proposed in December 2022. The AI Act may be passed by the European Parliament this year.
As a Member State, Ireland will be obliged to apply the AI Act and to adapt its legislation to conform with the AI Act’s requirements once it is enacted. For many of the hundreds of fintech companies operating in Ireland, this means adhering to the AI Act’s risk-based approach to the regulation of AI, which categorises the risks posed by AI systems as unacceptable, high, or low/minimal, with different requirements applying to each category.
“A key feature of AI is its potential to detract from the freedom and autonomy of those who use it.”
AI Systems
In essence, the AI Act defines an AI system as a category of software, developed using one or more of a defined set of techniques and approaches, which, for a given set of human-provided objectives, generates outputs capable of influencing the environments with which the software interacts.
A degree of autonomy is envisaged, recognising that a key feature of AI is its potential to detract from the freedom and autonomy of those who use it or – in the case of people whose creditworthiness is checked – those who are subjected to it. Such systems effectively take or inform decisions, producing results generated artificially by software.
High-Risk AI Systems
An AI system that is intended to evaluate the creditworthiness of natural persons or establish their credit score is one of several AI systems that are specifically categorised as ‘high risk’.
This means that Irish fintech companies (as well as those in other EU Member States) that use an AI system to undertake creditworthiness checks would be required to comply with the requirements set out in Chapter 2 of the AI Act. While several of these requirements may already be met by responsible operators under their existing obligations (under data protection, anti-money-laundering and consumer protection regulations), most existing practices will not be sufficient and will require adjustment.
Fintech companies that use AI systems to undertake creditworthiness checks would have to abide by the requirements outlined below:
• A comprehensive risk management system will be required. It will have to be regularly and systematically updated, and it must identify, assess and manage all risks, including residual risks associated with the use (or foreseeable misuse) of the system. Residual risks must be communicated to users.
• Where the AI system involves training models with data, those systems will need to be developed based on data sets that comply with specified criteria. This requires that the data sets be ‘relevant, representative, free of errors and complete’, and take into account anything particular to the specific geographical, behavioural or functional setting within which the high-risk AI system will be used.
• Technical documentation related to the AI system must demonstrate that the system complies with the requirements imposed on high-risk AI systems. At a minimum, it must include all of the information specified in Annex IV of the AI Act.
• The capacity for record keeping and event logging while the system is operating will be required, and logging capabilities will have to conform with recognised standards or common specifications.
• The operation of all high-risk AI systems must be ‘transparent’ and be accompanied by instructions for use, the requirements of which are comprehensively set out in the AI Act. This includes a requirement to disclose their level of accuracy, robustness and cybersecurity.
• Human oversight will always be required, and the AI Act sets out the terms on which it must be exercised, including that a person may decide not to use the system or may disregard, override or reverse the output it gives (a minimal sketch of such an override-and-logging mechanism follows this list).
• High-risk AI systems must provide ‘appropriate’ accuracy, robustness and cybersecurity, and perform consistently throughout their lifecycle.
Chapter 3 of the AI Act also imposes a range of additional obligations on providers, users, importers and distributors of high-risk AI systems. It will require a documented system covering:
• Strategy for regulatory compliance
• Thorough design and design verification measures
• Quality control measures
• Testing, specifications, and standards
• Systems for data use and management
• Post-market monitoring and reporting
• Resource management and accountability
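To make the record-keeping and human-oversight requirements more concrete, the minimal Python sketch below shows one way a lender might wrap a credit-scoring model so that every automated score produces a timestamped audit log entry and a human reviewer can disregard, override or reverse the output before a decision issues. The `score_applicant` function, the `CreditDecision` structure and the `decide` workflow are illustrative assumptions for this article, not a format or API prescribed by the AI Act.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Event logging: a persistent, timestamped record of each scoring run
# (illustrating the record-keeping requirement; the real log format would
# need to conform with recognised standards or common specifications).
logging.basicConfig(filename="credit_ai_audit.log", level=logging.INFO)
logger = logging.getLogger("credit_ai")

@dataclass
class CreditDecision:
    applicant_id: str
    model_score: float       # raw output of the AI system
    final_score: float       # score after any human intervention
    overridden: bool         # True if a reviewer changed the output
    reviewer_id: str | None  # who exercised oversight, if anyone
    timestamp: str

def score_applicant(features: dict) -> float:
    """Hypothetical stand-in for the AI system's scoring model."""
    return min(1.0, 0.2 + 0.6 * features.get("income_stability", 0.5))

def decide(applicant_id: str, features: dict,
           reviewer_id: str | None = None,
           override_score: float | None = None) -> CreditDecision:
    """Run the model, allow a human to override its output, log the event."""
    model_score = score_applicant(features)
    overridden = override_score is not None
    decision = CreditDecision(
        applicant_id=applicant_id,
        model_score=model_score,
        final_score=override_score if overridden else model_score,
        overridden=overridden,
        reviewer_id=reviewer_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    logger.info(json.dumps(asdict(decision)))  # audit trail entry
    return decision

# A reviewer disregards the model and substitutes their own assessment.
decision = decide("APP-1042", {"income_stability": 0.9},
                  reviewer_id="analyst-07", override_score=0.35)
```

In a real deployment, this kind of wrapper would sit inside the broader documented risk management and quality control systems described above, rather than standing alone.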
“A person may decide not to use the system or disregard, override or reverse the output it gives.”
Ireland
In Ireland, where several hundred fintech companies operate, the government has prepared a National Artificial Intelligence Strategy, which recognises both the advantages to be gained from the implementation and use of effective AI systems in Ireland and the potential pitfalls inherent in AI: under-regulation, intrusiveness and low trustworthiness.
The Irish strategy document largely replicates the risk-based approach apparent in the AI Act. Institutions that undertake creditworthiness evaluations using AI systems will be among the many that should be preparing for further regulation.