Jus Civile (ISSN 2421-2563)
G. Giappichelli Editore

Artificial Intelligence applied to the provision of banking services. Latest updates to the European acquis on automated consumer credit scoring (by Camilla Tabarrini, PhD – Università Ca’ Foscari Venezia, and Gioia Caldarelli, PhD – Università Ca’ Foscari Venezia)



SUMMARY:

1. Consumer creditworthiness assessment in the era of Industry 5.0: a foreword. - 2. The EU acquis on automated decision-making: the regulatory journey towards algorithmic transparency. - 2.2. The EUCJ judgment on algorithmic transparency in automated credit scoring practices: the Schufa Case. - 2.3. AI-based creditworthiness assessment under the consumer and mortgage credit Directives: a focus on the CCD2. - 2.4. AI-based CWA and credit scoring under the Artificial Intelligence Act. - 3. Creditors’ obligations and consumers’ rights concerning the use of AI-based credit-granting processes: an initial mapping of regulatory overlaps. - 3.1. Overlapping obligations of providers and deployers of AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score. - 3.2. Regulatory overlaps on the right to an explanation of AI-based credit decisions. - 4. Conclusions and open questions. - 4.1. Which consequences for violations of provisions regulating automated creditworthiness assessment? - 4.2. Addressing regulatory overlaps and interplay with existing and future regulations: which way forward? - Notes


1. Consumer creditworthiness assessment in the era of Industry 5.0: a foreword.

Originally, the creditworthiness assessment (CWA) had only prudential relevance, as it was conceived as instrumental to the sound and prudent management of banking and financial institutions. [1] Directives 2008/48/EC on credit agreements for consumers (“CCD”) and 2014/17/EU on credit agreements for consumers relating to residential immovable property (“MCD”) were the first legal acts to look at the CWA also as a tool of consumer protection aimed at preventing over-indebtedness. This extended regulatory perspective was later shared by the EBA in its Guidelines on loan origination and monitoring (EBA/GL/2020/06 – “LOGL”). [2] Over recent years, rapid technological developments have further changed the credit market, confronting both creditors and borrowers with new opportunities and risks. [3] A pivotal role in this ongoing market evolution is played by the ability of the latest machine-based systems to show unprecedented levels of autonomy, adaptiveness and capacity to infer knowledge from the input they receive, systems also referred to as Artificial Intelligence (AI) systems under the recently adopted Artificial Intelligence Act (hereinafter “AIA”). [4] In fact, although the business of assembling or evaluating consumer credit information has existed for decades, [5] Big Data and AI systems have sharply enhanced creditors’ capabilities both to draw creditworthiness inferences from traditional and alternative data (e.g. data on geolocation, POS transactions and digital footprints) [6] and to translate into a numerical expression the probability that a consumer or business will repay a loan regularly and in full (i.e. the credit score). It was therefore in the context of the latest rise of so-called “cheap & fast” decision-making solutions that concepts such as algocracy [7] and algorithmic surveillance [8] began to surface. It is by now well known that algorithmic credit scoring could exacerbate the risks of discrimination and distributional unfairness inherent in any statistical system. [9] This is especially true if AI’s opacity, complexity, dependency on data and the risk of human overreliance are not properly addressed. [10] Hence, it is not surprising that the necessity to set boundaries to safeguard a number of fundamental rights potentially threatened by AI (e.g. human dignity (Article 1 EUCHR), respect for private life and protection [continua ..]
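To make the scoring mechanics just described more concrete, the brief Python sketch below shows how a logistic model can turn applicant data into a repayment probability and rescale it into a familiar score band. It is a purely illustrative toy: the feature names, weights and 300-850 scale are assumptions made for exposition, not elements of any actual scoring system discussed in this article.

```python
# Illustrative toy only: a minimal logistic "credit score".
# Feature names, weights and the 300-850 band are hypothetical.
import math

# Hypothetical coefficients a model might learn from repayment histories.
WEIGHTS = {
    "income_to_debt_ratio": 1.8,        # traditional data
    "late_payments_last_year": -0.9,    # traditional data
    "avg_daily_pos_transactions": 0.3,  # alternative data (POS activity)
}
BIAS = -0.5

def repayment_probability(applicant: dict) -> float:
    """Probability of regular and full repayment via a logistic function."""
    z = BIAS + sum(w * applicant[name] for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def credit_score(applicant: dict) -> int:
    """Rescale the probability onto a conventional 300-850 band."""
    return round(300 + 550 * repayment_probability(applicant))

applicant = {
    "income_to_debt_ratio": 1.2,
    "late_payments_last_year": 2,
    "avg_daily_pos_transactions": 0.8,
}
print(credit_score(applicant))  # ~589 for this toy applicant
```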


2. The EU acquis on automated decision-making: the regulatory journey towards algorithmic transparency.

2.1. – The notion of algorithmic transparency in automated decision-making was originally conceived as a data protection issue and, from this perspective, has been thoroughly discussed in recent years. [13] Indeed, the so-called “black box problem”, earlier also discussed in terms of the “scored society”, [14] was first raised in 2015 from a philosophical perspective by Frank Pasquale, who used this expression to indicate “a system whose workings are mysterious; [as] we can observe its inputs and outputs, but we cannot tell how one becomes the other”. [15] This notwithstanding, the first legislative provision on automated decisions dates back almost forty years earlier, as it can be found in Article 2 of the French law of 6 January 1978 relative à l’informatique, aux fichiers et aux libertés, stating that “[a]ucune décision de justice impliquant une appréciation sur un comportement humain ne peut avoir pour fondement un traitement automatisé d’informations donnant une définition du profil ou de la personnalité de l’intéressé” (no court decision involving an assessment of human conduct may be based on the automated processing of information providing a definition of the profile or personality of the person concerned). At the international level, a first legislative attempt to address automated data processing was made on 28 January 1981 with the original Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108). Indeed, Article 8 thereof provided for additional safeguards, including the ability of any person “to establish the existence of an automated personal data file, its main purposes, as well as the identity and habitual residence or principal place of business of the controller of the file”. [16] In 1995 the European legislator introduced a similar provision in the first Directive on the protection of individuals with regard to the processing of personal data and on the free movement of such data (Directive 95/46/EC). Namely, Article 15 on “automated individual decisions” provided for data subjects’ right “not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc.”. In 2016 this right flowed into Recital 71 and Article 22 GDPR. These legislative attempts to [continua ..]


2.2. The EUCJ judgment on algorithmic transparency in automated credit scoring practices: the Schufa Case.

It is undisputed that a decision on a credit application based on the automated processing of personal data is potentially covered by Article 22 GDPR, as it significantly affects data subjects’ access to fundamental services. Indeed, both Recital 71 GDPR and the explanatory note to Convention 108+ [23] exemplify the risks posed by the automated decisions regulated therein with those affecting data subjects’ ability to access credit. This notwithstanding, the legal debate continued on the extent to which automated CWA and credit scoring practices actually fall within the scope of application of the general prohibition set by Article 22(1) GDPR. [24] Indeed, for Article 22(1) GDPR to apply, three cumulative conditions must be met: (i) there must be a ‘decision’; (ii) the decision must be based solely on automated data processing or profiling; and (iii) it must legally or at least significantly affect the data subject. The EUCJ recently addressed this interpretative question by deciding a case brought by a consumer (OQ) who was refused a loan by a financial company following an insufficient credit score determined by a German third-party private company (Schufa). [25] First, both the Advocate General (AG) and the EUCJ stated that a decision within the meaning of Article 22 GDPR encompasses any public or private opinion statement with a binding character producing factual or legal consequences. Hence, by affecting a data subject’s ability to be granted credit, Schufa’s score qualifies as a private decision producing significant effects on data subjects under Article 22(1) GDPR. Second, the EUCJ clarified that the loan refusal amounts to a decision based solely on Schufa’s score pursuant to Article 22(1) GDPR if the creditor’s decision on the loan application drew strongly on the credit score. Although this is generally a question of fact that the EUCJ leaves for national courts to answer on a case-by-case basis, it offered interpretative guidance based on the factual information provided by the referring court. Namely, the EUCJ deemed the Schufa score covered by Article 22 GDPR by adopting a substantive interpretative approach aimed at safeguarding data subjects’ rights to be informed about the existence of automated data processing and to obtain meaningful information regarding its underlying algorithmic logic, its significance and its envisaged consequences (Articles 13(2)(f), 14(2)(g) and 15(1)(h) [continua ..]


2.3. AI-based creditworthiness assessment under the consumer and mortgage credit Directives: a focus on the CCD2.

When the CCD originally entered into force in 2008, Article 8 dealt with the obligation to assess the creditworthiness of the consumer by generally providing for the creditworthiness assessment to be carried out on the basis of sufficient information obtained from the consumer (where appropriate) and from the consultation of relevant databases (where necessary). In this case, Article 9(2) CCD also required creditors to promptly provide the consumer, free of charge, with information on the database consulted. Six years later, more detailed CWA provisions were introduced in the MCD. Indeed, pursuant to Recital 55 and Article 18 MCD, the CWA shall be carried out on the basis of information on the consumer’s income and expenses and other financial and economic circumstances that is necessary, sufficient and proportionate. Moreover, unlike the CCD, under Article 18(5) MCD creditors are required to inform the consumer both ahead of the consultation of a database and once the data is obtained. Furthermore, the MCD expressly requires Member States to ensure that credit is only granted to consumers when the CWA indicates that the obligations stemming from the credit agreement are likely to be fully met. Overall, neither the CCD, the MCD nor the MCD’s original implementing guidelines on CWA [31] addressed the use of automated tools in the credit-granting process. The first reference to algorithmic CWA was indeed introduced in 2020, when the EBA published the LOGL as part of the Action Plan to tackle the high level of non-performing loans. Namely, the LOGL expressly allow the use of automated models in decision-making processes complying with the governance framework set therein. [32] In 2023 a recast text of the CCD was published (CCD2) [33] and the provision on consumer CWA was extensively amended to introduce more detailed requirements. For example, as opposed to the former Article 8(1) CCD, Article 18 CCD2 now clarifies that the consumer CWA has to be “thorough”, thus setting clear quality standards for creditors to meet before concluding a credit agreement. [34] Also, Article 18(1) CCD2 now stresses that “[…] [CWA] shall be carried out in the interest of the consumer, to prevent irresponsible lending practices and over-indebtedness […]”. Furthermore, while maintaining the possibility for creditors to consult databases, Article 18 CCD2 goes further by prohibiting social networks from being [continua ..]


2.4. AI-based CWA and credit scoring under the Artificial Intelligence Act.

The AIA was adopted in pursuit of the “EU Strategy to shape Europe’s next decade and make our future fit for the coming digital age”. In fact, the objective of the Regulation is to promote the uptake of transparent, non-discriminatory and human-centric AI (so-called trustworthy AI). [38] To this end, the AIA introduces a proportionate risk-based AI governance framework that provides stakeholders with entrepreneurial guidance towards the development, placing on the market and use of AI systems that can stand the test of European fundamental values. [39] Following the Brussels Effect, the AIA applies worldwide to providers of AI systems (including general-purpose AI models) that are either: (i) placed on the market or put into service in the Union; or (ii) producing outputs that are used in the Union. The AIA also applies to all deployers of AI systems located within the Union, as well as to deployers located in third countries whose AI outputs are used in the Union. [40] Overall, the AIA sets four different regulatory regimes for AI systems, [41] depending on the intensity of the risks posed to persons’ fundamental rights. Namely: (i) minimal-risk AI (e.g. spam filters) is allowed without specific restrictions; [42] (ii) limited-risk AI (e.g. generative AI, [43] chatbots and deep fakes) is allowed subject to minimum transparency obligations; [44] (iii) high-risk AI (e.g. credit scoring) is allowed subject to a set of mandatory requirements over its entire lifecycle; (iv) unacceptable-risk AI (e.g. social scoring) is prohibited because it contravenes Union values. [45] It follows that, with specific regard to AI-based scoring practices, the AIA sets a preliminary distinction between social and credit scoring. Social scoring (i.e. the estimation of personal or personality characteristics over certain periods of time) [46] is prohibited inasmuch as it leads to either (or both): “(i) detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity”. [47] Consequently, social scoring practices are apparently allowed if: (i) producing favourable effects [continua ..]


3. Creditors’ obligations and consumers’ rights concerning the use of AI-based credit-granting processes: an initial mapping of regulatory overlaps.

In light of the above, financial institutions as well as credit reporting agencies that are providers or deployers of AI systems used to evaluate the creditworthiness of natural persons or establish their credit score are subject to (at least) the requirements set by the AIA, the GDPR, the MCD and the CCD (including its recast version). Such a multi-layered regulatory framework makes it necessary, especially in highly regulated sectors such as the financial one, to assess and identify regulatory overlaps. Indeed, a clear outline of the interplay of all the relevant legal acts is pivotal to reaching the AIA’s declared objective “to ensure consistency, avoid duplication and minimise additional burdens” on both providers and deployers of AI systems. [57] Along these lines, the next paragraphs will draw an initial map of the main regulatory implications and overlaps in terms of both consumers’ rights and creditors’ obligations concerning the use of AI-based credit-granting processes.


3.1. Overlapping obligations of providers and deployers of AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score.

Chapter III, Section 3 AIA provides for a set of obligations placed on all actors across the high-risk AI value chain, namely providers (Articles 16-22), importers (Article 23), distributors (Article 24) and deployers (Article 26). These obligations have a horizontal nature, as they are meant to apply across sectors, but are also complementary inasmuch as their application “should be without prejudice to existing Union law, notably on data protection, consumer protection, fundamental rights, employment, and protection of workers, and product safety”. [58] More specifically, every high-risk AI system shall be designed and developed: (i) on the basis of training, validation and testing data sets that meet the quality standards set by Article 10 AIA; (ii) to ensure sufficient operative transparency, thus enabling deployers to interpret the system’s output and use it appropriately; [59] (iii) to allow for effective human oversight of its use in order to minimise the risks to fundamental rights pursuant to Article 14 AIA; (iv) to achieve an appropriate level of accuracy, robustness and cybersecurity, and to perform consistently throughout its lifecycle. [60] Providers of high-risk AI systems shall ensure (and prove upon request of a national competent authority) that their high-risk AI systems comply with these requirements or otherwise immediately take any necessary corrective action to withdraw, disable or recall the AI system. [61] In addition, for each high-risk AI system providers shall: (i) affix the CE marking; [62] (ii) register it in the EU database for high-risk AI systems listed in Annex III set up and maintained by the Commission; [63] (iii) draw up a declaration of conformity and keep it at the disposal of the national competent authorities for 10 years after the system’s placing on the market or putting into service. [64] Against this background, the AIA expressly addresses regulatory overlaps stemming from the obligations placed on providers and deployers that are financial institutions subject to requirements regarding internal governance, arrangements or processes established pursuant to the relevant Union financial services legislation. [65] First, at the governance level, the AIA states that the competent authorities as defined in Regulation (EU) No 575/2013 (CRR), the CCD, the MCD, and Directives 2009/138/EC (Solvency II), 2013/36/EU (CRD) and (EU) 2016/97 (IDD) should be preferably [continua ..]


3.2. Regulatory overlaps on the right to an explanation of AI-based credit decisions.

With specific regard to automated credit-granting processes, another layer of regulatory complexity stems from the partially overlapping scope of application of the individual right to an explanation set by Articles 22 GDPR, 18 CCD2 and 86 AIA. [82] Namely, pursuant to Recital 171 and Article 86 AIA, natural persons affected by decisions based mainly upon the output of a high-risk AI system should have the right to obtain from the deployer clear and meaningful explanations if such decisions produce legal effects or similarly significantly and adversely affect their health, safety or fundamental rights. Such an explanation shall convey a description of the role of the AI system in the decision-making procedure and the main elements of the decision taken. Furthermore, like Recital 71 GDPR, Recital 58 AIA recognizes that credit scoring and creditworthiness assessment practices significantly affect the fundamental rights of natural persons “since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services”. [83] This provision extensively overlaps with the rights enshrined in Articles 13(2)(f), 14(2)(g), 15(1)(h) and 22 GDPR [84] as well as in Article 18(8)(a) CCD2. Indeed, as mentioned, both the CCD2 and the GDPR address automated decision-making processes affecting natural persons by providing for consumers’/data subjects’ right to obtain human intervention, to express their view and to obtain an explanation of the automated decision. [85] However, while Article 18 CCD2 and Article 86 AIA respectively refer to creditworthiness assessments and decisions that “involve the use of automated data processing” or are “mainly based” on automated outputs, the wording of Article 22 GDPR only refers to “solely” automated decisions. As mentioned, in the Schufa judgment the EUCJ nonetheless excluded a formalistic approach by clarifying that Article 22 GDPR also applies to credit decisions that draw strongly on the automated score even if the final decision formally rests with a third human party. Hence, the GDPR, the CCD2 and the AIA convey a similar explainability standard: the right to obtain clear, comprehensible and meaningful information on the logic, the risks, the significance, the effects, the functioning and the main elements (i.e. the main parameters and their respective weight) of the automated [continua ..]
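To illustrate the kind of disclosure this shared standard points to, the sketch below extracts, for a single automated decision of a simple linear scoring model, the main parameters and their respective weights, i.e. per-feature contributions ranked by magnitude. This is a hedged sketch under strong assumptions: the model, features and approval threshold are hypothetical, and real high-risk AI systems are rarely linear, so production-grade explanations typically rely on dedicated attribution techniques rather than a direct read-off of coefficients.

```python
# Hedged sketch: a "main parameters and respective weight" explanation
# for a hypothetical linear scoring model. Real AI-based CWA systems
# usually require dedicated attribution methods (e.g. SHAP-style tooling).

def explain_decision(applicant, weights, bias, threshold=0.0):
    """Return the decision plus each feature's signed contribution,
    ranked by absolute magnitude (largest influence first)."""
    contributions = {name: w * applicant[name] for name, w in weights.items()}
    z = bias + sum(contributions.values())
    decision = "approve" if z >= threshold else "refuse"
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return decision, ranked

# Hypothetical model and applicant, mirroring the earlier toy example.
weights = {"income_to_debt_ratio": 1.8,
           "late_payments_last_year": -0.9,
           "avg_daily_pos_transactions": 0.3}
decision, ranked = explain_decision(
    {"income_to_debt_ratio": 1.2,
     "late_payments_last_year": 2,
     "avg_daily_pos_transactions": 0.8},
    weights, bias=-0.5)
print(decision)  # "approve" (z = 0.1 >= 0)
for name, contribution in ranked:
    print(f"{name}: {contribution:+.2f}")
```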


4. Conclusions and open questions.

In light of the above, AI development, testing and ongoing monitoring need to be carefully designed and implemented to detect and correct bias, algorithmic opacity, accountability gaps and, more generally, infringements of all the relevant legislative acts. This results in the multiplication of compliance burdens on the industry, especially for providers and deployers that are already subject to stringent regulations, such as financial institutions carrying out AI-based CWAs. [102] For example, under the AIA, data quality and bias detection safeguards will need to be thoroughly documented via the FRIA, as part of the DPIA. [103] At the same time, the internal prudential risk management and governance setup will need to be integrated with risk monitoring and reporting functions focused on AI-related risks to the fundamental rights of clients. This will include, for example, documenting and monitoring the AI explainability standards applied to automated CWAs. [104] Similarly, human oversight of AI will require providers’ and deployers’ staff to be numerically sufficient and adequately skilled and trained. Moreover, to comply with their customer information and protection duties, credit institutions developing or deploying AI systems for CWA purposes could be required to integrate their current paperwork with AI-related information at both the pre-contractual and contractual stage, as well as to set up ongoing review processes of automated decisions contested by clients. As mentioned, AI conformity assessments can be carried out by AI providers based on their ‘internal control’, which, in turn, could reflect harmonised European standards. To this end, on 22 May 2023 the European Commission issued a Standardisation request [105] in support of Union policy on AI to the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC), requesting harmonised European standards on ten new technical areas [106] linked to the AIA requirements for ‘high-risk AI systems’ to be covered in conformity assessments and quality management systems. These standards are expected to be delivered by 30 April 2025 and will therefore potentially address many auditing challenges of automated CWA models.


4.1. Which consequences for violations of provisions regulating automated creditworthiness assessment?

According to Article 23 CCD, national law must provide effective, proportionate and dissuasive sanctions for violations of the provisions on consumer CWA. Member States are therefore allowed to set the penalty they deem preferable as long as it complies with these standards. This margin of discretion has however led to interpretative uncertainties under the CCD, recently addressed also by the EUCJ. Indeed, on 11 January 2024 the EUCJ decided case C-755/22, concerning whether the CCD allows national law to penalize a creditor where its failure to fulfil the obligation to examine a consumer’s creditworthiness has not resulted in any harmful consequences for the consumer. More specifically, in the case at hand the national law provided for the credit agreement to be void and for interest to be forfeited. As mentioned, the CCD does not currently make any express reference to consumers’ best interests when regulating CWAs. However, in its settled case-law, the EUCJ has stressed that the obligation to examine the consumer’s creditworthiness contributes to ensuring high and equivalent levels of consumer protection across the EU. [107] Hence, in deciding case C-755/22, the Court reaffirmed that the objective of Article 8 CCD is also to protect individual customers from the risk of over-indebtedness, which can occur even if the debt has been repaid and after a long period has elapsed. Therefore, the EUCJ found the nullity of the contract compliant with Article 23 CCD even when applied to agreements fully performed by both parties and where consumers did not suffer any harmful consequences as a result of the creditor’s failure to carry out a proper CWA. [108] This topic is however undoubtedly destined to raise further discussion, especially in the context of the use of AI systems for CWA purposes under both the AIA and the new CCD2 provision on CWA. [109] Indeed, as discussed above, it is currently not clear which regulatory regime will apply to automated CWAs given the overlapping obligations set by the AIA, the GDPR and the CCD2. In turn, this raises the question of which penalty regime should apply. For example, should Article 18 CCD2 take precedence over the application of Article 86 AIA, consumers could potentially be precluded from lodging a complaint before the relevant MSA if they have grounds to believe that a violation of their explanation rights under the AIA occurred. [110] In this case, they could however [continua ..]


4.2. Addressing regulatory overlaps and interplay with existing and future regulations: which way forward?

As stressed in the Draghi Report, [112] ensuring full regulatory compliance across the EU is of pivotal importance to prevent another notorious obstacle to European competitiveness: gold-plating. This is even more crucial in relation to Union laws subject to the Brussels Effect, such as the AIA. Indeed, credit information agencies or creditors developing in-house AI-based scoring tools qualify as providers pursuant to Article 3(3) AIA, irrespective of their location, inasmuch as their AI system is commercialized in the EU or the credit score produced is in any case used in the EU. Hence, for example, a US-based credit information agency producing credit scores that a bank established in the EU sources under an outsourcing arrangement and uses would qualify as a provider pursuant to the AIA. [113] Similarly, a bank established in the EU carrying out creditworthiness assessments via outsourced AI-based tools or based on outsourced credit scores would qualify as a deployer within the meaning of Article 3(4) AIA, irrespective of whether the provider of the AI system or the credit score is located in or outside the Union. [114] As mentioned, however, to reach the goal of ensuring legislative coherence, avoiding duplications and ensuring full regulatory compliance, many operational challenges will need to be faced by both authorities and businesses adopting AI-based creditworthiness procedures. Indeed, providers and deployers that are financial institutions will be simultaneously subject to multiple legislative acts, such as: i) the CRR/CRD and the EBA Guidelines on internal governance, for internal and model governance requirements; ii) the GDPR, especially with regard to the automated processing of consumers’ personal data; iii) the CCD/CCD2/MCD, with respect to consumer CWAs involving automated data processing; and iv) the AIA, with respect to the risk management, internal governance and explainability standards of high-risk AI systems. Looking ahead, other important regulatory interplays may concern the revision of the MCD and the new FIDAR (Financial Data Access Regulation). [115] On the one hand, in December 2021 the EC sent the EBA a call for advice regarding the MCD review. On that occasion, the EBA pointed out that the EC should consider addressing AI-related risks potentially posed to consumer protection (e.g. risks of financial exclusion and discrimination) in the AIA as a sector-specific regulation. Since the CCD2 now regulates these [continua ..]


Notes