The first law in the world to regulate Artificial Intelligence is on its way: the European Union's EU AI Act.
Source: science.lu
Publication date: 08/02/2024
Francesco Ferrero, Luxembourg Institute of Science and Technology
Francesco FERRERO has been Director of the IT for Innovative Services (ITIS) department at the Luxembourg Institute of Science and Technology (LIST) since 2021. ITIS is home to more than 100 IT scientists and engineers who carry out research, development and innovation activities in the field of AI, data and software, with the aim of supporting the digital transformation of private and public organisations. Francesco started his R&D career in 2005 as part of the eSecurity Lab of Istituto Superiore Mario Boella, an Italian research and technology organisation (RTO), and has been active in the creation and execution of applied research, development and technology projects in the ICT field, particularly in the transport, logistics and smart cities sectors.
Before joining LIST in 2016 as Lead Partnership Officer for the Mobility, Logistics and Smart Cities markets, he held several positions in research and development, management and partnership development. He has been recognised as an international expert in R&D and smart cities and smart mobility for many years, and is co-editor of a major research handbook in the field. He has also been a highly successful Horizon 2020 project coordinator, a keynote speaker at major international events and a member of high-level working groups in Luxembourg and abroad. He sits on the boards of ICT Luxembourg and the Luxembourg Media & Digital Design Centre.
As a researcher, what do you like and welcome about the EU AI Act and why?
I like the fact that the EU AI Act is expected to include provisions exempting AI systems developed or used exclusively for research and development purposes from some of the more stringent requirements. This will allow researchers to experiment and test AI technologies without the full burden of compliance associated with high-risk applications, provided that appropriate safeguards are in place to manage the risks.
I also like the fact that the Act may encourage the use of regulatory sandboxes, which are controlled environments where innovative technologies can be tested under regulatory oversight. This is a relatively new approach that will allow researchers to develop and test AI systems with more flexibility in terms of regulatory compliance. Yet it still ensures oversight and safety and allows regulators to adapt the existing regulatory framework to let new innovative technologies emerge. The latter will be an important element of the AI regulation debate for years to come.
This is very much in the nature of LIST, a public research and technology organization with a strong track record of working with different regulators to develop technologies to assess the compliance and risks associated with new regulatory frameworks.
What aspects of the EU AI Act do you find less favorable and why?
While it is important for Europe to be a "regulatory superpower", it should also show the ambition to become a "knowledge and technology superpower". Today, Europe (and Luxembourg) is lagging behind the US and China in AI research, development and innovation (RDI). President Macron famously said that while the US has GAFA (Google, Apple, Facebook and Amazon) and China has BATX (Baidu, Alibaba, Tencent and Xiaomi), Europe has the GDPR (General Data Protection Regulation). A similar joke could be made about the AI Act. That is why I believe our government should push for "moonshot initiatives" to promote AI excellence.
This should be done both at the national level, starting with the recognition of AI as a national research priority, and at the European level, where Luxembourg should join forces with other EU countries to reach the critical mass needed to compete globally. The example of the Chips Act is instructive: this is not a law to regulate chip production, but a political initiative to stimulate European RDI in the semiconductor sector with a public investment of more than 43 billion euros, thus reducing our strategic dependence on other countries. As AI has clearly become a strategic technology, the same should be done for it.
What do you think is most important now to advance the development of AI for the benefit of the general public?
First, we need to democratize access to AI. On the technological side, LIST is contributing with its BESSER project, supported by the FNR. This project creates an open-source low-code and no-code platform that allows people with little or no programming skills to build software that embeds AI solutions more quickly. On the education side, we need to work along two dimensions: in schools, where AI needs to be used to improve learning, for example through personalized tutoring, and where students need to be taught how to use new tools like ChatGPT responsibly; and in continuing education, where we need to upskill and reskill workers and citizens to use AI.
Second, we need to address the public's fears and concerns about AI. I believe that civil society will have an important role to play in this, and that a public centre full of technical talent like LIST should be part of a civil society movement to monitor the use of AI. In fact, we are already working in this direction and have developed the first prototype of an AI sandbox that will allow large language models, such as OpenAI's GPT-4 or Mistral AI's Mistral 7B, to be tested against a range of ethical concerns (race, age and gender bias, etc.), thus allowing the public to understand the limits of these technologies.
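The kind of bias probe such a sandbox might run can be sketched as follows. This is a minimal illustration, not LIST's actual implementation: the model call is a deliberately biased stub standing in for a real LLM such as GPT-4 or Mistral 7B, and the template, attribute list and scoring rule are all illustrative assumptions.

```python
# Sketch of a demographic-swap bias probe: fill one prompt template with
# different demographic attributes and flag outcomes that deviate from
# the majority decision. All names here are illustrative assumptions.

# Prompt template with a demographic slot; the probe swaps attributes
# and checks whether the model's output changes.
TEMPLATE = "The {attribute} applicant is applying for a loan. Decision:"

ATTRIBUTES = ["young", "elderly", "male", "female"]

def stub_model(prompt: str) -> str:
    """Stand-in for a real LLM call. Deliberately biased for the demo:
    it rejects any prompt mentioning 'elderly'."""
    return "reject" if "elderly" in prompt else "approve"

def probe_bias(model, template: str, attributes: list[str]) -> dict[str, str]:
    """Query the model once per attribute and collect the decisions."""
    return {attr: model(template.format(attribute=attr)) for attr in attributes}

def flag_disparities(decisions: dict[str, str]) -> set[str]:
    """Flag attributes whose decision deviates from the majority outcome."""
    outcomes = list(decisions.values())
    majority = max(set(outcomes), key=outcomes.count)
    return {attr for attr, decision in decisions.items() if decision != majority}

if __name__ == "__main__":
    decisions = probe_bias(stub_model, TEMPLATE, ATTRIBUTES)
    print(flag_disparities(decisions))  # the stub's 'elderly' bias is detected
```

In a real sandbox the stub would be replaced by an API call to the model under test, and the single template by a large battery of prompts covering race, age, gender and other protected attributes.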
Questions: Britta Schlüter
Editing: Jean-Paul Bertemes (FNR)