Updated: October 2019
ESBG is supportive of the EU approach on AI, and in our view any policy should be technology-neutral: the same activities should be subject to the same regulation irrespective of the way the service is delivered, so that innovation is enabled and a level playing field preserved. Therefore, no specific regulation on AI should apply to financial institutions, as it could unfairly harm the financial services industry if such regulation implied stricter requirements for the use of the technology than in other industries.
Should policymaking nevertheless go in the opposite direction, it is of foremost importance in ESBG’s view that responses coming from the financial sector, as they emerge in the Piloting Process (in both its quantitative and its qualitative parts), be taken into account. If regulation or supervision is to be put in place, supervision should combine a set of minimal rules with ongoing assessment.
From the point of view of transparency, ESBG does not believe it is appropriate to always disclose how algorithms work, as they are a source of competitive advantage for entities’ business models and this advantage could be put at risk. Additionally, policymakers must bear in mind that the content of an algorithm cannot be disclosed without harming trade secrecy: intellectual property needs to be protected, regardless of the technology used for delivering the service.
ESBG opposes the idea of having to undertake independent audits of every algorithm behind AI systems, not least because of the burden and cost this would entail for the banking sector. We propose a different type of approach to balance trade secrecy and consumer protection:
One alternative would be, for instance, to adopt a “risk-based approach”: for algorithms involved in operations that heavily affect human functioning (and imply, for instance, a risk of harm), it would be important for the algorithms to be explicable.
Another valid alternative could be an approach whereby supervision of algorithms is undertaken following alleged failures to safeguard the protection of consumers. Ex-post supervisory actions could be taken on algorithms, based on suspicions that AI systems are unfairly discriminating against consumers, or unintentionally incorporating biases that harm them.
AI has been addressed in some way by all the EU institutions, including the Commission Communication on AI of April 2018, the European Parliament Resolution on a comprehensive European industrial policy on artificial intelligence and robotics of February 2019, and the Council Conclusions on the Coordinated Plan on the development and use of Artificial Intelligence Made in Europe of February 2019.
To boost the EU’s commitment to AI, the Commission set up a High-Level Expert Group on AI (‘HLEG’), which delivered in early April 2019 the ‘Ethics Guidelines for Trustworthy AI’ (the ‘Guidelines’). The Guidelines give guidance on how to achieve ‘Trustworthy AI’, which, according to the HLEG, is made up of three components, (a) Lawful AI, (b) Ethical AI, and (c) Robust AI, and is based on the following four principles: (i) Respect for human autonomy, (ii) Prevention of harm, (iii) Fairness, and (iv) Explicability.
Based on the stakeholders’ feedback, the HLEG is expected to propose a revised version of the Guidelines to
the Commission in early 2020.
As shown in the document “Europe in May 2019”, published by the European Commission, no European firm ranks among the top 15 digital firms worldwide. This fact underscores the urgency of boosting European competitiveness in the face of American and Chinese AI strategies.
Thus, concerning the AI industry in general, research and development should be promoted and financed by the EU, as this is an important field for the EU to grow in. At the same time, notwithstanding the importance of competing on the international scene, European regulators and supervisors have agreed to enable a “Trustworthy and human-centric AI” to emerge.
Still concerning competitiveness, it emerges from the Commission’s Communication of April 2019 on the “AI Ethics Guidelines” that the principles developed by the AI HLEG will form the groundwork for the future European AI regulatory framework.
European policymakers therefore have to keep in mind that a choice must be made when establishing this regulatory framework: if it comes too early, it might put in place constraints hindering Europe’s ability to catch up with developments in the US and China; if it comes too late, serious ethical issues could arise.
Turning specifically to the financial services industry, ESBG is of the view that most fears of risks arising from deploying AI systems stem from potential biases or discrimination against consumers, including unintentional ones. However, we consider that the current financial services legislative framework already provides robust safeguards for consumer protection to be ensured and enforced. Regulations such as the GDPR and e-Privacy rules, together with requirements regarding responsible lending or the selection of target markets for financial services, all of which apply to AI systems, already provide a comprehensive framework and deliver appropriate safeguards.
The effort and will of the EU institutions to adopt an EU policy on AI and to foster EU commitment in this area are highly welcome, as these steps are necessary to enable EU operators to compete with US and Asian operators. A global level playing field needs to be ensured. Furthermore, ESBG encourages European policymakers to manage the complex trade-off between competitiveness on the one side and ethics on the other, while considering the already available regulatory frameworks relating to consumer and data protection. A continued dialogue needs to be fostered between policymakers and industry in order to ensure appropriate measures in line with a technology-neutral and sector-neutral approach.