Artificial Intelligence (AI)
Updated: October 2020
PROPOSED SOLUTIONS AND ACTIONS
ESBG supports the EU approach on AI. In our view, policy should be technology-neutral: the same
activities should be subject to the same regulation irrespective of how the service is delivered, so that
innovation is enabled and a level playing field preserved. Therefore, no AI regulation specific to
financial institutions should exist, as stricter requirements could unfairly harm the financial
services industry.
In our view, strict regulation could hinder the development of AI in the banking sector, which is still in a
phase of exploring and adopting the technology. In addition, human expertise (data scientists,
compliance and legal officers, client managers, etc.) remains essential to guaranteeing the quality and security
of AI-related processing. Existing regulatory frameworks should nevertheless be taken into account and
reviewed in order to identify potential gaps (e.g. concerning liability regimes). ESBG's position is that
no additional regulation is currently needed; instead, regulators and NCAs should work closely with the
industry to elaborate guidelines on how the current framework applies.
The European Data Protection Board has recalled that the applicable EU legislation already allows
these risks to be addressed: any processing of personal data through an algorithm falls within the scope of the
GDPR. ESBG agrees that AI should comply with the rules in force, in particular the GDPR. Moreover, the
banking industry is already subject to legal and regulatory obligations that address the risks mentioned above.
As a result, banks have already developed, and continue to adapt, their risk models when implementing AI
applications in their processes and services.
ESBG would like to make the following recommendations:
- Voluntary commitment by stakeholders to adopt an ethical attitude towards AI is just as important as
regulation in ensuring the trust of individuals.
- It could be appropriate to make available to stakeholders a self-assessment mechanism for algorithms,
allowing them to determine the level of risk of each AI application according to criteria defined by the
Commission, and whether their AI application is subject to the mandatory requirements to be introduced by the
Commission. This could follow the model of the ALTAI portal recently launched by the EU AI Alliance,
which offers a dynamic checklist for developers and deployers to self-assess their AI models.
IDENTIFIED CONCERNS
European policymakers should keep in mind that the timing of this regulatory framework involves a
trade-off: if it comes too early, it might impose constraints that hinder Europe's ability to catch up
with developments in the US and China; if it comes too late, serious ethical issues could arise.
Thus, concerning the AI industry in general, research and development should be promoted and financed by
the EU, given how important this field is for Europe's growth, while European regulators and supervisors
have agreed to enable a "trustworthy and human-centric AI" to emerge.
Regarding the financial services industry, ESBG believes that most fears of risks arising from deploying
AI systems concern potential biases or unintentional harm to consumers. However, we consider that the current
financial services legislative framework already provides robust safeguards to ensure and enforce
consumer protection. Regulations such as the GDPR, e-Privacy rules, and requirements regarding responsible
lending or the selection of the target market for financial services, all of which apply to AI systems, already
provide a comprehensive framework and deliver appropriate safeguards.
WHY POLICYMAKERS SHOULD ACT
The effort and will of the EU institutions to adopt an EU policy on AI and to foster EU
commitment in this area are highly welcome, as these steps are necessary to encourage
competition with US and Asian counterparts. A global level playing field needs to be ensured.
Furthermore, ESBG encourages European policymakers to manage the complex trade-off
between competitiveness on the one hand and ethics on the other, while considering already
available regulatory frameworks relating to consumer and data protection. A continued
dialogue needs to be fostered between policymakers and the industry in order to ensure
appropriate measures in line with a technology-neutral and sector-neutral approach. At the
same time, talented individuals should be encouraged to use their skills in the development of AI
solutions in Europe. This applies to European talent as well as to scientists, programmers and AI
experts from outside Europe.
BACKGROUND
On 19 February 2020, the European Commission presented its White Paper on Artificial Intelligence (AI) – A European approach to excellence and trust. The purpose of the White Paper is to set out policy options on how to achieve these objectives of excellence and trust.
The European Commission supports a regulatory and investment-oriented approach with the twin objectives of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology. The European Commission advocates a solid European regulatory framework for trustworthy AI that will protect all European citizens, help create a frictionless internal market for the further development and uptake of AI, and strengthen Europe’s industrial base in AI.
The main building blocks of this White Paper are:
- Measures that will streamline research, foster collaboration between Member States and increase investment in AI development and deployment;
- Policy options for a future EU regulatory framework that would determine the types of legal requirements applying to relevant actors, with a particular focus on high-risk applications.
In addition, the European Commission published its Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics.
Further (legislative) steps by the Commission are expected in Q1 2021.