Astrid Wagner, Partner at Arendt. Marie Russillo/ Maison Moderne

As the use of artificial intelligence becomes more widespread, the challenge of fully exploiting its potential while ensuring it is suitably regulated and individuals' data rights are protected is the subject of much discussion. In this episode of the Arendt We Live #9 podcast, "2023 Accelerating digital trends", Astrid Wagner, Partner in the IP, Communication & Technology practice area at Arendt, looks at the issues shaping the development of AI technologies.

Data is king in the modern marketplace. What opportunities are there to fully unleash data's capabilities?

If we make the link directly with artificial intelligence, AI needs to be trained. So you obviously need a massive amount of data input, but you must ensure that the data is of high quality, is not biased, and does not violate third-party rights, such as IP rights. I think that many companies are sitting on a goldmine of data, but they're just not fully exploiting it.

The opportunities would be to have your own AI tools, where you can leverage your past activities and your experience in your field of expertise.

AI is becoming more and more advanced and sophisticated. What challenges does it pose in terms of data protection?

You need to apply the principle of transparency. That means you need to inform data subjects about how you process their personal data, and you need to do so in a way that is clear, transparent and easily understandable. That is a bit of a challenge with AI, because it's often a kind of black box and hence not always easy to explain in a GDPR-compliant way. You would proceed by way of a layered approach: you have a first layer of information, and for each item of information in that layer you provide a second or even a third layer that goes into more detail. Somebody who is not interested in the nitty-gritty details will just stop at the first layer, but you need to meet the information needs of those who can understand layers two and three. That covers the information obligation.

And again, as foreseen under the GDPR, you need a legal basis to process the data. That can be the consent of the data subject. However, you are not going to get that from individuals if your contacts are B2B. You can also rely on the GDPR's "legitimate interest" ground, but then you need to undertake a balancing exercise between the legitimate interest of the data controller and the rights and freedoms of the individuals, ensuring the latter are not seriously impacted in a negative way. Legitimate interest only works if the data contains no financial or health information, no data revealing sexual orientation, and so on. So it is only applicable to very basic personal information. One solution would be to use only non-personal data, should that be practically feasible.

There is the example of ChatGPT being suspended in Italy for a month because the Italian watchdog said the AI tool had no way to verify the age of users or mechanisms to control the content that minors were able to access.

Can we expect more AI related legislation coming down the pipeline, and how far reaching could, or should, it be?

For the moment, there is no existing AI regulation at EU level. I think the proposed Artificial Intelligence Act aims to set EU standards that could become the gold standard for AI elsewhere in the world.

Under that proposal you have different obligations, and various levels of requirements applicable, depending on the risk represented by the AI tool.

Once finally adopted, the regulation as it stands will, in my eyes, go far enough. I mean, it's super important that we are adequately protected, because the risks do not just apply to the financial sector, but also to the health sector, to decision-making processes in recruitment, and so forth.

We see this being taken seriously by the European Commission with its proposed AI Liability Directive, which intends to ensure that persons harmed by AI systems enjoy the same level of protection as persons harmed by other technologies in the EU.

To what extent is having a resource such as the MeluXina supercomputer an asset for Luxembourg?

MeluXina is important, as are the relatively large number of state-of-the-art data centres here in Luxembourg, because we need this huge computing power. That is what has allowed AI to make such significant progress. If you couple the huge amount of data available with a high-performance computer, AI can become an extremely powerful tool.

Luxprovide, which is in charge of operating MeluXina, serves private and institutional customers both here and abroad, focusing on highly innovative areas such as fintech. It is a real advantage for the Luxembourg finance sector.

Listen to the podcast Arendt We Live #9 "2023 Accelerating digital trends. The increasing impact of digital assets and digitalisation within financial services".