How to Cope with the Future EU AI Act

Published on 28/06/2023

The European Union has taken a significant step towards shaping the future of artificial intelligence (AI) by drafting the “Proposal for a regulation laying down harmonized rules on Artificial Intelligence” (EU AI Act), which stands as the world's first comprehensive legislation dedicated to regulating the use of AI. The Luxembourg Institute of Science and Technology (LIST) is committed to raising awareness among its partners, drawing on its scientific research in the field, so that they can comply with the EU AI Act, which is expected to be approved by the end of 2023 and fully enacted in 2026.

With the rapid advancements in AI technology, the EU recognises the importance of maintaining ethical standards and safeguarding the rights and safety of its citizens. The AI Act is expected to establish a robust framework that strikes a balance between promoting innovation and ensuring responsible AI development. This ground-breaking legislation sets clear rules and obligations for AI systems, addressing critical areas such as transparency, accountability, data governance, and fundamental rights. To do so, the Act takes a risk-based approach, categorising AI systems into different levels of risk.

What’s in it for providers and users of AI systems?

The EU AI Act will apply to all providers and users of AI systems, whether they are established inside or outside the EU, as long as the results produced by the system are used in the Union or impact EU citizens. Several levels of risk are in the spotlight. Unacceptable-risk AI systems are those considered a threat to people and will be banned. These include, for instance, cognitive behavioural manipulation of people or of specific vulnerable groups; social scoring; and real-time and remote biometric identification systems. High-risk AI systems are those that can pose a significant risk of harm to health, safety, or fundamental rights, in particular when such systems operate as digital components of products.

Samuel Renault, Head of AI & Data Analytics lab ad interim at LIST, said:

“The AI Act sets strict requirements for high-risk AI systems with regard to the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. At LIST, we are developing methods and technologies to support organisations in addressing these requirements.”

Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, the AI Act prescribes specific responsibilities for users of high-risk systems as well. Typical examples include informing the provider or distributor of any risks or incidents involved in the use of AI systems, suspending the use of the system, and using such systems in accordance with the instructions.

Taking ChatGPT as an example, generative AI systems would have to comply with transparency requirements.

Samuel Renault added:

“According to the AI Act, generative foundation models would have to disclose that the content was generated by AI, design the model to prevent it from producing illegal content and publish summaries of copyrighted data used for training. This poses a challenge to AI system designers that LIST can help to solve.”

Finally, limited-risk AI systems would have to comply with minimal transparency requirements. Such systems include those that generate or manipulate image, audio or video content, for example deepfakes.

Towards human-centric and trustworthy AI

The AI Act aims to promote human-centric and trustworthy AI, introducing obligations for providers and those deploying AI systems while proposing bans on any intrusive, manipulative and discriminatory use of the technology.

Alexandru Tantar, Head of the Trustworthy AI Research Group at LIST, explained:

“Trustworthy AI refers to the development and deployment of AI systems that are designed to operate in a reliable, accountable, transparent, and ethical manner, with the goal of earning and maintaining the trust of users and the general public. The Act highlights the importance of human control and oversight over AI systems, while calling on dimensions such as resilience and AI explainability. It recognizes that humans should have the final decision-making authority and that AI should be designed to augment human capabilities and not replace human judgment.”

Under such circumstances, AI systems are expected to be transparent and explainable, so that users and data subjects can understand how AI works and the reasoning behind its decisions. LIST has several running and future projects that cover the explainability of AI models.

Reducing bias in AI algorithms

We all have biases that subtly affect our way of thinking. Traditional hiring processes can be prone to various biases, such as unconscious biases related to race, gender, age, and other protected characteristics, as well as the affinity bias, which encourages us to seek out people who look, think, and act like us. There is growing interest in new ways of reducing bias, and the use of AI for this purpose is becoming more widespread. Yet, in their current forms and developments, these tools cannot be trained to identify only job-related characteristics and eliminate gender and race from the hiring process. Recently, some companies have found that these tools could detect gender from CVs and discriminate against female applicants.

Many of the AI tools currently used for recruitment have flaws, but these can be addressed. The great thing about AI is that we can design it to meet certain beneficial specifications. Case in point, LIST recently developed Amanda.

Marie Gallais, Leader of the Human Modelling and Knowledge & Engineering Group at LIST, stated:

“LIST’s unique technology not only breaks down bias risks by identifying their building blocks with the help of so-called FAIR models, but also gives all end-users the means to make informed decisions thanks to an innovative Explainable AI solution, giving detailed feedback on the results obtained. This move towards more ethical and inclusive technologies will be unveiled to you through the use case of mass recruitment processes.”

As it did previously with the General Data Protection Regulation (GDPR), for which LIST played an active role in raising awareness and providing tooling, companies will have to comply with this upcoming EU regulation in connection with their use of AI-assisted systems. LIST is committed to supporting Luxembourg players by raising awareness through dedicated training courses and by developing technologies that address the requirements of the Act through collaborative projects with partner companies.

Contact

Samuel RENAULT
Dr Alexandru TANTAR
Dr Marie GALLAIS