Fundamental Rights Protection and Artificial Intelligence

    Fundamental rights play a key role in the debate on the regulation of Artificial Intelligence. Technology is frequently claimed to generate new threats to rights such as privacy, data protection and freedom of expression, among others. However, Artificial Intelligence systems can also enhance and support the protection of fundamental rights, as in the case of privacy-preserving technologies, which accommodate various societal needs (facilitating the dissemination of information while safeguarding privacy). Against this background, scholars intensely debate whether a fundamental rights impact assessment should be required for some applications of AI systems, particularly given the spread of generative models. The panel aims to investigate how to reconcile innovation and fundamental rights in light of the mutual shaping of regulation and artificial intelligence.

    • Can privacy-preserving technology support fundamental rights protection?
    • How can regulators bridge the gap between fundamental rights and the evolution of AI?
    • What is the role of technology as a regulatory factor?
    • What remedies are available in the current state of the art?

    Organised by ENCRYPT (NL)

    Moderator Giovanni De Gregorio, Católica Global School of Law (PT)

    Speakers Marco Bassini, Tilburg University (NL); Simona Demkova, Universiteit Leiden (NL); Michèle Finck, University of Tübingen (DE); Andreea Serban, Future of Privacy Forum (BE)

[Music] [Applause]

Okay, good morning. It is so nice to see you, and of course thank you for being here at 8:45. We understand it is still the first day, but it is also nice to see that you have found your way to this new building this year at CPDP. Thank you so much for coming. I will be quite short. I will be the moderator for this session; for those of you who do not know me, I am Giovanni De Gregorio, an associate professor at Católica Global School of Law in Lisbon, and I have the pleasure of introducing this very interesting panel, which brings together speakers from different places and areas of expertise. I will introduce each speaker just before they speak, but let me first tell you what we are going to talk about today, together with some house rules, while people are still joining.

First of all, the title is quite self-explanatory, and it is also a big title: it creates a lot of expectations for this panel in terms of talking about fundamental rights and artificial intelligence, for so many reasons, for all the attention that has been put on the AI Act and fundamental rights, but also for so many other issues related to artificial intelligence and the protection of rights and freedoms in the digital age. We will hear different speakers presenting on different issues, and I will introduce all of them. And, by the way, there are still some seats here at the front; not the best spot sometimes, but still.

Some house rules: the speakers will each present for about ten to twelve minutes, and then we will open the floor. Having said that, we have the pleasure of hosting Michèle Finck, who will be the first speaker; then we will have Simona Demkova, Andreea Serban and Marco Bassini. The panel is supported by the ENCRYPT project, and Marco will have the chance to tell you a little more about the project during the panel.

So, without further ado, while everyone takes a seat, thanks again for coming. We open this first panel on fundamental rights and AI with Professor Michèle Finck, who is a professor of law and artificial intelligence at the University of Tübingen in Germany. She is going to speak about fundamental rights and AI, with a particular focus, if I am not wrong, on the fundamental rights impact assessment and, more broadly, on framing the most important challenges when it comes to the protection of rights and artificial intelligence. I will leave the floor directly to Michèle, because we also want to have time for conversation, and then we will move to the other speakers. Michèle, please, the floor is yours, and thank you so much for being here again.

Thank you so much, Giovanni. What I will try to do is kick off our discussion this morning by talking about the fundamental rights aspects of the Artificial Intelligence Act. One could not possibly cover everything that falls within this broad topic in the twelve minutes I have, so I will essentially give an overview of the role of fundamental rights in the Artificial Intelligence Act, and also talk a bit about the legislative history behind it, which I think is key to understanding the final text that we now finally have.
These are the three parts of my presentation. I will start with the legislative history of the Artificial Intelligence Act, with a focus on the discussions around the role of fundamental rights within this legislative framework. I will then go over some key provisions of the final text from a fundamental rights perspective, and finally I will close by introducing the fundamental rights impact assessment, which is of course a key mechanism in this regard.

Why do I start with legislative history? First of all, this is generally where we start when we talk about this new legislative instrument, because it was only just finalised. But I think it is a perspective that is particularly important when talking about the role of fundamental rights, because the role of fundamental rights in the AI Act was one of the most debated and controversial aspects of this regulation as it was being negotiated over the last couple of years.

The point to start with is a reminder of what the Artificial Intelligence Act is. We are really talking about, by and large, a replication of EU product law: it strongly resembles EU law instruments that have existed for decades under the New Legislative Framework. This essentially means secondary legislation that outlines broad principles, which are then translated into harmonised standards, which in turn give rise to a presumption of conformity for those that adhere to them. This way of regulating is very familiar in EU law; it is, for instance, how the EU has regulated toys, washing machines and dishwashers. And you will notice that these are products that generally do not raise fundamental rights concerns.

In relation to AI, the situation is of course very different, and this was acknowledged all along the legislative process. If you look, for instance, at the support study to the impact assessment from 2021, there is a very clear recognition that AI is a technology that raises fundamental rights concerns: the support study concluded that there is strong evidence that certain uses of AI systems can significantly impact all fundamental rights recognised in the Charter. So we really have this interesting situation: the AI Act is, by and large, a replication of EU product legislation, but it regulates products, AI systems, that can raise fundamental rights concerns, unlike what is the case for a dishwasher, for instance.

Notwithstanding this, the use of the New Legislative Framework to create the Act was a deliberate choice. There was always this awareness: we are using a mechanism of EU law that we have used in the past to regulate products that do not raise fundamental rights concerns, to now regulate artificial intelligence, which of course does raise these concerns.
There are many reasons why this deliberate choice was made, and one that is particularly important to bear in mind is simply the division of competences between the EU and the Member States, which made this one of the options available to the European Commission when it crafted the draft AI Act. Because of this situation, a legislative instrument based on product law regulating AI, which does raise fundamental rights concerns, you had a really interesting discourse, mainly from the European Commission, from the time the draft AI Act was published. On the one hand you had this instrument based on product law; on the other hand, most of the discourse the European Commission used to describe this new legislative instrument focused heavily on fundamental rights. If you had only read the press releases and news conferences of the European Commission, the focus on fundamental rights was so strong that you would have thought there were really strong rights-protecting instruments in this text, unlike what was actually the case when you looked at the draft AI Act.

This also led to critique: a number of commentators argued that there was insufficient protection of fundamental rights under the AI Act. For instance, in a paper published by Nathalie Smuha and her co-authors, the critique was that there were essentially no new mechanisms to protect fundamental rights in the draft AI Act, and other scholars criticised that fundamental rights were but an afterthought in a draft that was, after all, mainly modelled on the New Legislative Framework. In the course of the legislative procedure, this led the European Parliament to advocate for the incorporation of provisions specifically related to fundamental rights, such as the fundamental rights impact assessment which, as we will see, has been incorporated in the final text; NGOs, but also academics, released open letters advocating for the inclusion of this instrument.

So what does the final text look like? We still have this strong focus on product safety regulation, so you could say that the final text is very much a cake of product safety law with a few sprinkles of fundamental rights on top. And these sprinkles of fundamental rights do not really resolve the conceptual tension that stems from the fact that a product-law-like legal instrument is being used to regulate something that does raise fundamental rights concerns. I will briefly go over a few results of this conceptual mismatch: first, the discussion as to what a risk to fundamental rights even is, as the notion is phrased in the AI Act; then standards as a mechanism to protect fundamental rights under the AI Act; and finally a few words about the fundamental rights impact assessment.
If you look at Article 1(1) of the AI Act, which outlines the purpose of the regulation, it tells us that one of the roles of this regulation is to ensure a high level of protection of health, safety and fundamental rights as enshrined in the Charter. Then, throughout the text of the Act, you often find this notion of risks to fundamental rights, and this has led to debate as to what a risk to a fundamental right actually is. The AI Act itself defines the notion of risk in Article 3(2): a risk means the combination of the probability of an occurrence of harm and the severity of that harm. However, scholars such as Mireille Hildebrandt have underlined that in EU fundamental rights law there is no need to prove harm, or the likelihood of harm. In EU fundamental rights law, rights are safeguarded as inherent rights, implying that any infringement upon these rights constitutes a violation in and of itself, without needing to ask whether harm was or could have been caused, and irrespective of whether harm materialises. So going forward we will see a debate as to how to even make sense of this: we can look at the existing body of product law to think about what a risk to health and safety is, but the notion of a risk to fundamental rights is one that is yet to be defined.
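To make the mismatch concrete, here is a minimal, purely illustrative sketch of the product-safety logic behind the Article 3(2) definition. The ordinal scales and the multiplication are invented for illustration; the Act prescribes no quantification, and Hildebrandt's point is precisely that fundamental rights law does not trade off probability against severity in this way.

```python
# Toy quantification of the AI Act Article 3(2) notion of risk as "the
# combination of the probability of an occurrence of harm and the severity
# of that harm". The scales below are assumptions borrowed from classic
# product-safety risk matrices, not anything the Act itself prescribes.

SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4}
PROBABILITY = {"rare": 1, "possible": 2, "likely": 3, "almost_certain": 4}

def risk_score(probability: str, severity: str) -> int:
    """Combine probability and severity on a 1-16 product-safety-style scale."""
    return PROBABILITY[probability] * SEVERITY[severity]

if __name__ == "__main__":
    # An unlikely-but-serious harm scores the same as a likely-but-minor one.
    # That trade-off makes sense for dishwashers, but not for rights whose
    # infringement counts as a violation regardless of materialised harm.
    print(risk_score("possible", "serious"))  # 6
    print(risk_score("likely", "minor"))      # 6
```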
Another implication of the odd status of fundamental rights within the broader AI Act is that standards will have a role to play in how the fundamental rights related provisions of the AI Act come to be interpreted. Standards are a key instrument under the high-risk AI systems regime of the Act: they essentially flesh out the broad guidelines of the regulation in relation to high-risk AI systems. That means standards will also define fundamental rights related aspects, such as the one in Article 10(2)(f), which requires an examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights, or lead to discrimination prohibited under EU law. There has been a lot of critique of the standardisation process, because standards are typically drafted by technical experts, and because of the political dynamics involved, and all of these critiques are of course even more powerful when it comes to fundamental rights. So what we will likely see is that standard setters, which have themselves publicly said that they are not particularly keen on having to deal with fundamental rights, come up with very broad processes, which in turn are likely to create a presumption of conformity; and then there will be questions as to what this actually means for the substantive protection of fundamental rights.

Finally, I want to say a few things about the fundamental rights impact assessment. This is a mechanism we find in the final text; it was not part of the draft AI Act. It made its way into the trilogues due to the European Parliament's proposal, but also due to an open letter initiated by Alessandro Mantelero and Gianclaudio Malgieri, who received a prize for it yesterday. We now find this instrument in Article 27 of the AI Act, which requires a fundamental rights impact assessment from deployers of Annex III high-risk AI systems that are bodies governed by public law or private entities providing public services, as well as from deployers of systems designed to assess creditworthiness or establish a credit score, and of systems used in the context of life and health insurance. Article 27 tells us the elements you have to take into account when carrying out a fundamental rights impact assessment: things such as a description of the deployer's processes, a description of the period of time of use, and so on and so forth. But as you can see, these are pretty general guidelines, so we really have to wait for the AI Office to provide a questionnaire (the fifth paragraph of Article 27 mandates the AI Office to create one) to see what this really requires in practice. Some people have speculated that the questionnaire could be rather vague, in which case the fundamental rights impact assessment would risk becoming little more than a box-ticking exercise; of course, it could also look different. Essentially, we will have to wait for the AI Office to act on this to know more about what fundamental rights impact assessments under the AI Act really are.
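As a rough illustration of how a deployer might start structuring those elements while waiting for the AI Office questionnaire, here is a minimal sketch. The field names are a paraphrase of Article 27(1), not official terminology, and the filled-in example is entirely hypothetical.

```python
from dataclasses import dataclass

# A sketch of the elements Article 27(1) AI Act asks a fundamental rights
# impact assessment to cover. What the assessment must contain in practice
# will depend on the questionnaire the AI Office is mandated to provide.

@dataclass
class FundamentalRightsImpactAssessment:
    deployer_processes: str               # processes in which the system is used
    period_and_frequency_of_use: str      # how long and how often it is used
    affected_persons: list[str]           # categories of persons and groups affected
    specific_risks_of_harm: list[str]     # risks of harm to those categories
    human_oversight_measures: str         # how human oversight is implemented
    mitigation_if_risks_materialise: str  # governance and complaint arrangements

# Hypothetical example for a credit-scoring deployer (invented content):
fria = FundamentalRightsImpactAssessment(
    deployer_processes="credit scoring within a retail loan approval workflow",
    period_and_frequency_of_use="continuously, for every loan application",
    affected_persons=["loan applicants", "co-signatories"],
    specific_risks_of_harm=["discriminatory refusal of credit", "exclusion errors"],
    human_oversight_measures="credit officers review and can override each score",
    mitigation_if_risks_materialise="internal complaint channel, escalation to DPO",
)
```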
I will stop here; I am very much looking forward to the discussion. Thank you. [Applause]

Thanks a lot, Michèle. It is interesting to see how the discussion is no longer just about understanding the risks and the connections between fundamental rights and AI; we are also trying to operationalise them, to understand what it means for regulation to protect fundamental rights, with all the discussion about the risk-based approach, enforcement and the fundamental rights impact assessment. These are instruments that still need to be unpacked, but they are ways to move the discussion forward; maybe not the perfect ones, but at least the European way of doing it. Without further ado, let me introduce Simona Demkova, who is going to talk about another important part of this discussion on AI and fundamental rights: procedural safeguards. Simona is an assistant professor in European law at Leiden University. It is important to focus on substance, but now we are also looking a little at procedure, because this proceduralisation has become an important part of the European strategy to address the protection of rights in the digital age. Simona, the floor is yours.

Thank you, and a very good morning also from my side. It is a pleasure to be here, and as Giovanni said, I will talk about the procedural side of things, especially when it comes to the AI Act, but also about how it relates more broadly to EU law and the general principles we already have under EU law; not least because there is perhaps less we can do to protect the substance of decisions that are determined by the use of artificial intelligence tools.

This work is part of a project I am leading at Leiden University, funded by a Leiden starting grant. The project is broadly focused on the EU's human-centred digitalisation and on what that term actually means: together with a colleague of mine, we are trying to uncover ways to induce a compliance culture in the EU, with a certain EU legal mindset, when it comes to the uses of AI and the protection of fundamental rights.

Let me spend a little time on the term "human-centred", with which I have particular issues, because it originates in an idea that has nothing to do with rights protection: it is a computer science notion about ensuring effective human control over the uses of artificial intelligence tools. Under EU digital and technology policy, however, we are infusing that notion with the rights and values that exist under EU law; we are giving the human-centred regulatory approach constitutional aspirations, and it is our role as researchers, lawyers and practitioners to contribute to making that approach feasible in practice. So what I want to focus on today are the procedural safeguards that exist under the AI Act, with respect both to individual decision-making where AI tools are used and to the procedural set-up for remedies, because of course we cannot talk about rights protection without effective access to remedies.

Very briefly, what is the EU's vision for human-centred AI? We can look at the AI Act preamble: recital 6 states that it is vital for AI and its regulatory framework to be developed in accordance with Union values as enshrined in Article 2 of the Treaty on European Union, and with fundamental rights and freedoms, and that, as a prerequisite, AI should be a human-centric technology serving as a tool for people, with the ultimate aim of increasing human well-being; a rather large aspiration for the future. One of the ways this human-centric vision was to be implemented in practice is the guidance offered by the High-Level Expert Group on Artificial Intelligence and its Ethics Guidelines for Trustworthy AI, which include seven non-binding ethical principles that you are hopefully well aware of. These non-binding principles are now more or less translated into legally binding obligations within specific provisions of the AI Act: on transparency, on human oversight, on data management and data protection, and so on. One last point on this vision: the purpose of the AI Act, as stated in Article 1, is first and foremost to improve the functioning of the internal market and to promote the uptake of AI within it, while at the same time ensuring a high level of protection of health, safety and fundamental rights. This means we are dealing with an internal market regulation, one that we know very well has a product safety nature, but that is also exceptional in including very specific rights and provisions concerning the implementation of fundamental rights, which is rather unprecedented for product safety rules in the European Union. This brought me to the question I want to explore: how to apply the procedural safeguards that arise under the AI Act to ensure human-centred fundamental rights protection. A rather large challenge.
Today I will just give you some first thoughts on the topic; hopefully, once I write it up in a paper, you can read more about it. It is a topic we have started exploring with colleagues, also from Leiden University, looking at the role of general principles of EU law in filling the gaps that arise in the secondary rules, that is, in the AI Act, because we always need to think of these rules as applicable in conjunction with existing primary EU law. One such effort was the symposium we organised for the DigiCon blog last year on safeguarding the right to good administration in the age of AI, and there will be another panel at the ICON·S conference where we will discuss bridging European public law views on a topic that is, more or less, private regulation of technology.

Drawing on insights from regulatory theory together with European legal scholarship, I will focus on two types of safeguards: those in individual decision-making and those in remedial procedures. I would also like to set this within the broader theme of constitutionalisation in European digital policy, within which the focus on procedure is becoming ever more evident. As I said, the legal basis is Article 114 of the Treaty on the Functioning of the European Union, which is not unprecedented: it is the legal basis that has been used to expand the reach of European Union intervention in many spheres beyond the regulation of technology. Scholars have long been looking at what this means for the development of European integration more broadly; EU citizenship was described, already in 2014, as a fifth fundamental freedom, moving the internal market beyond the protection of the four freedoms of movement of goods, services, people and capital towards a constitutionalisation of market objectives. More recently, the studies collected in a recently edited volume on European integration and the harmonisation of rules have identified that the Article 2 values, the rule of law, democracy and human rights, together with the protection of Charter rights, are now entering the objectives of internal market rules. This is important to realise when we look at what is happening in digital regulation as well.

There are many reasons why we then also see greater importance placed on procedural rather than substantive protection. One is that we have less expertise in certain scientific and technological areas of law, so we need to delegate certain powers to more specialised institutions and agencies to prepare the technical rules to be applied so as to protect fundamental rights. This is what Marco Almada discusses in his upcoming PhD thesis, looking at technology neutrality as a way of delegating the content of rules to specific technical bodies, standardisation being an example, whereby we give up some control over the substance of fundamental rights protection, and where procedure becomes important because it is largely through effective protection of the procedure of delegation, and of the rule-making procedure, that we can still control what happens to the substance.
So I would say that one emerging way of ensuring fundamental rights protection in the age of AI is indeed through procedural safeguards, whether we like it or not. This is not to say that I give up on protecting the substance, or that we have no way of controlling the fairness of decisions, which of course concerns substantive issues. But looking at the construction of the AI Act as legislation, we can see that many of the safeguards and rules in its specific provisions focus on ensuring effective procedural steps within a decision-making procedure, so as to demonstrate compliance rather than the substantive correctness of an outcome. And this development is not limited to the AI Act. The Court of Justice has been developing its jurisprudence through a deeper procedural review in questions concerning fundamental rights: my colleague Giulia Gentile, for instance, identifies a double dynamic between the application of primary law guarantees, the general principles of EU law and the Charter rights, in conjunction with GDPR rules, or now with the DSA rules on remedies and due process rights. We can identify a trend in which the Court of Justice has taken an active, even activist, role in developing high-intensity procedural review in those questions where technology poses significant risks to fundamental rights. Only then did we see the legislator stepping in, in what has been described as a second wave of intervention, coming only after this first phase of judicial activism. And the way it has been done has been called "regulatory brutality" by Papakonstantinou and de Hert, very tellingly: we have embarked on the large-scale adoption of a great number of legislative instruments that do not talk to each other. So stepping out of the sectoral rules, and looking at what EU law in its primary form already has to offer and how it can coordinate the interpretation of these rules, is a potential way forward.

Without talking for too long, let me make a few key observations on the procedural safeguards I find with respect to AI-driven individual decision-making. First, as I already said, all the AI Act obligations are secondary to other existing EU law; this includes both primary EU law and secondary rules such as the GDPR, which operate as lex specialis in relation to AI uses that process personal data. A second observation is that the procedural safeguards apply based on the extent to which the AI tool impacts decision-making. This is obvious from the definition of high-risk AI in Article 6, whose paragraph 3 makes an exception to the determination of high-risk systems in Annex III: to the extent that a system does not materially influence the outcome of a decision, it is not considered high-risk. There are instances in which, if a system only improves certain content of a decision, it would not be considered high-risk; but we know the line is very thin when deciding whether our reliance on a specific tool is determinative for a decision. The only guidance we have so far in the jurisprudence is the SCHUFA Holding case of the Court of Justice, where there was, however, a rather clear connection to Article 22 GDPR, since credit scoring is one of the practices recognised as a form of profiling under the GDPR. So I do not think it provides us with enough innovative interpretation to be applied to other systems, where the link between the use of the AI system and the decision may be weaker but still substantively affect the decision in the end.
Another point: the protection of fundamental rights is not absolute. It is always subject to limits and to a proportionality assessment, and the AI Act itself comes with many exceptions to its rules and to the application of its safeguards. Most of these exceptions, however, apply to what are in my opinion very high-risk uses of AI systems, such as in the context of law enforcement, migration and asylum procedures, where we know people are particularly vulnerable. So the application of these exceptions needs to be subject to the proportionality assessment under Article 52 of the Charter, rather than to a mere balancing of necessity and interests under the secondary rules.

A last observation on the proceduralisation trend under the AI Act: these rules go a step further in harmonising national procedural rules. At European Union level, certain competences are conferred on the European legislator to regulate certain areas; but consider the procedural safeguards applied to the law enforcement context, such as the obligation to obtain judicial authorisation for the use of real-time biometric identification systems in publicly accessible spaces. This is a clear procedural obligation that would not exist under other rules of EU law, such as directives harmonising the criminal laws of the Member States. So through this secondary technology regulation there may be a side effect in EU law towards greater harmonisation of Member States' rules, even in areas where perhaps no rule exists at national level, because hardly any Member State has AI legislation in place.

How much time do I have? Can I have two more minutes? Okay. Then I want to say a couple of words about what would make a good decision-making procedure, and compare that with the rules we have under the AI Act. In addition to what I said about the definition of a high-risk system in Article 6, most of the obligations under the AI Act actually aim at enhancing the procedural ability to demonstrate control over decision-making. Take the transparency obligations: you need to provide information on the operation of a specific system, including the instructions for use, that is, what information should be included and how the system is to be used, but always with a view to enabling human oversight, so as to protect against the tendency to over-rely on automated outputs, and to ensure that the person who receives an output from an AI tool has the ability and the competence to decide not to rely on it. One specific procedural obligation that is interesting to note is that certain outputs may be relied upon only after they have been separately verified by at least two natural persons, two human agents, two decision-makers, which of course is difficult in practice, with exceptions for the law enforcement sector where timely decisions are necessary. And there is, for instance, an obligation to inform people that they are subject to the use of an AI system, which again is a procedural rather than a substantive obligation: it allows us to control the procedure rather than the decision as such.
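A minimal sketch of that two-person verification gate, with invented names and structure; the AI Act states the rule for certain biometric identification systems but does not prescribe any particular implementation, and a real deployment would also authenticate, log and time-stamp each confirmation.

```python
# Toy illustration of the "at least two natural persons" verification rule:
# an AI-generated match may only be acted upon once two distinct human
# reviewers have independently confirmed it.

def may_act_on_match(ai_match: bool, confirming_reviewers: set[str]) -> bool:
    """Allow action only if the output is confirmed by >= 2 distinct humans."""
    return ai_match and len(confirming_reviewers) >= 2

assert not may_act_on_match(True, {"reviewer_a"})             # one reviewer: blocked
assert may_act_on_match(True, {"reviewer_a", "reviewer_b"})   # two reviewers: allowed
```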
In EU law, instead, we have provisions flowing from the right to good administration under Article 41 of the Charter, together with the right to effective judicial protection under Article 47, which of course apply only to public authorities, whereas the obligations under the AI Act also concern the developers and the deployers of AI systems, which may or may not be public authorities. Some convergence of these obligations under the notion of the duty of care, which the Court of Justice has developed out of the right to good administration as an individual right in the European Union, has the potential to extend the protection of fundamental rights also to horizontal relationships between individuals and private actors. What the duty of care as a review criterion could offer here is a bridge between the requirements under the secondary rules and the requirements of primary EU law, by focusing on two types of obligations. First, factual obligations: requiring that decisions rely on complete, relevant and factually correct information, with the decision-maker able to weigh the interests and rights of the individual who will be affected; this is where the human decision-maker comes in, being able to balance and reason about the impact a certain tool may have on fundamental rights. Second, qualitative obligations, including requirements on the reliability of information, concerning both the source of the data relied upon for decision-making and the procedure that led to the adoption of the decision. Of course, the application of the duty of care requirements is context-dependent. In general decision-making, in the context of rule-making, authorities enjoy larger discretion to decide on the substance, for instance on the extent of the reasons to be given for certain decisions; but in single-case, individual decision-making, the court usually relies on the duty of care to carry out a stricter intensity of review, controlling also for the substantive quality of the decision, if that makes sense.
Just one last point on how to actually achieve fundamental rights protection through procedures under the AI Act. We know that the AI Act now includes a section on remedies, Section 4; this draws on an article I authored together with Giovanni, which you can find via the QR code. I just wanted to say here that the right to an explanation and what we call the internal complaint mechanism will become key to bridging the essential procedural obligations under EU law with the procedural safeguards under the AI Act, because they allow us to demand the same standard of duty of care from both private and public actors using and deploying AI as we would, in other contexts, demand only from public authorities. There are, of course, question marks when it comes to remedies before independent supervisory authorities: all eyes are on the data protection authorities, but little is known yet about the coordination of competences at national level, also because the right to lodge a complaint with a market surveillance authority, as included in the AI Act, does not go as far as providing a remedy before an independent data protection authority. And then there are judicial remedies: none are provided for in the AI Act itself, but recital 170 states that remedies already existing under EU and national law continue to apply, so individuals should rely on those. They exist under Article 19 TEU and Article 47 of the Charter: the Member States are responsible for the effective enforcement of EU law, including effective protection and the delivery of effective remedies to individuals in all spheres where EU law applies, and the AI Act, being a regulation, now applies almost everywhere.

I will leave my concluding remarks here. Some questions remain open as to how to ensure this deep procedural review in practice. How do we ensure coherent interpretation across different Member States, given that different procedural rules will still apply and different authorities will intervene? We can look for inspiration in the national legal orders, but this is the common problem of cross-border interpretation of rules. And should we look into further harmonisation of procedural rules? There have been attempts at EU level, unsuccessfully, because it is such a sensitive issue for the Member States to give up their national procedural autonomy; the AI Act actually has the potential to go further and harmonise certain rules, perhaps as a side effect. On access to remedies I have already said enough, so I will not repeat it. Thank you for your attention, and I leave the floor back to you.

Thank you so much. As you can see, this presentation alone could have been two, three or four presentations; there were so many things in it that could be developed further, but we will leave that for later. I will immediately pass the floor to Andreea Serban, who is on the other side of the table. She is a policy analyst for global privacy at the Future of Privacy Forum, and she is going to speak about some lessons learned from the GDPR, another super important topic as we move to AI, considering the connection between the two. Andreea, please, the floor is yours.
Good morning everyone, and thank you so much for inviting me to this panel. It is very interesting to bring all of these perspectives together, as I feel we are now in a conundrum where we have to bring in many voices and take a multifaceted approach to everything we discuss when it comes to AI, given that AI is being developed as we speak, all the time; so it is important to keep up and to try to find a middle way.

My intervention today is not based on a PowerPoint presentation. I simply want to summarise a couple of important things we have learned from the lessons of the GDPR and give you some food for thought, pose some questions that we could perhaps answer together and start a conversation about. And I am really happy that the occasion has come to bring the conversation back to privacy, as privacy is a very important aspect of AI in general.

We know that technology is evolving faster than ever, faster than law can keep up with, and we know that because we now have a new AI Act that is still being debated: is it sufficient, should it be improved, what will happen next, and so on. But we have it now; we already have approval from the Council. So as we embark on this journey, we have to look at this comprehensive legal framework that is meant to deal with all of these challenges, and we need to take into account all the efforts to align definitions coming from the regional and international levels and to find a middle way.

Why does the AI Act matter? First of all, because we have no other current framework at European Union level, so we need to build from somewhere, and the AI Act is going to be a very interesting starting point. Of course, the hard work is only now beginning, because we have to put in place all the bricks that will create the legal framework as we want to imagine it. But we have had similar discussions before, discussions that went on for five years, and it feels a bit like Groundhog Day, at least that is the feeling I am getting, because we find ourselves in a very similar position to where we were only a few years ago, when we adopted the GDPR.

So what we have to do at this critical point is recall some of the lessons learned from the enforcement of the GDPR, step back, and look at the ecosystem of data protection laws and data-related legislation from a fundamental rights perspective. What we know is that there is a very clear interplay between the goals of the GDPR, which has focused strongly on fundamental rights protection, and the development of AI systems. From our experience with the GDPR and the protection of personal data, of the individual and of privacy, we believe there are a couple of important synergies to tap into and to leverage in implementing the AI Act, so as to make the overall implementation of AI a success. With this, I would like to make a few points that can of course be developed further.
First, I would like to draw our attention to data protection impact assessments, because the GDPR has solidified the impact assessment as a tool for regulating technology, and here there is a very interesting interplay between the GDPR and the AI Act: the potential overlap between DPIAs and fundamental rights impact assessments. The fundamental rights impact assessment provision in the AI Act makes specific reference to pre-existing DPIAs that need to be taken into account. The fact is that most organisations, in both the public and private sectors, have ample experience with DPIAs in identifying and mitigating potential risks to fundamental rights arising from the processing of personal data, and this experience and know-how can now be leveraged in the context of artificial intelligence.

A second point is that we have a vast pool of experience, which is now becoming a very good asset, among DPOs and privacy officers. Beyond the substantive legal reasons, there is also the efficiency of resources and effective implementation: what the GDPR did over the past five years was to create an infrastructure of human resources within companies and other entities in the public and private sectors, people who already know how personal data is collected, for what purposes, how it is used, how it is shared, and how it impacts the rights of individuals and even communities. This infrastructure and this experience can and should be brought into the discussion when we talk about implementing the AI Act.

A third point is upholding the principle of data protection by design and by default. This principle has perhaps been somewhat underexplored within the GDPR, but it could still be key to ensuring that AI systems are developed and deployed lawfully while respecting fundamental rights; we are talking about starting from the early stage of preparing the data sets for training and, following the AI Act, accompanying AI systems throughout their life cycle.
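A minimal sketch of what data protection by design could look like at that dataset-preparation stage, assuming a hypothetical keyed-pseudonymisation scheme and invented field names; this is an illustration, not a compliance recipe, since keyed hashing is pseudonymisation rather than anonymisation and the key must be kept separately.

```python
import hashlib
import hmac

# Direct identifiers are replaced with keyed pseudonyms, and fields the
# training task does not need are dropped (data minimisation). The key is
# assumed to be managed out of band, separately from the dataset.
SECRET_KEY = b"rotate-me-and-store-me-separately"  # hypothetical key

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

def minimise(record: dict, needed_fields: set[str]) -> dict:
    """Keep only the fields the training purpose actually requires."""
    out = {k: v for k, v in record.items() if k in needed_fields}
    out["subject_id"] = pseudonymise(record["email"])
    return out

record = {"email": "jane@example.com", "age": 41, "postcode": "1012", "notes": "..."}
print(minimise(record, {"age", "postcode"}))  # email and free-text notes are gone
```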
Finally, on privacy-enhancing technologies: PETs are attracting renewed interest from regulatory authorities in the context of AI development, particularly from a privacy and data protection perspective, and this can be seen not only in Europe but globally, for example in the emergence of regulatory sandboxes, with examples from countries such as Singapore and the UK.

All in all, and I do not want to go much further as we would like to have time for conversation, it feels like a never-ending story: with every single product, everyone wants to be first, everyone wants to reach the market first, and somewhere in this process a couple of things get overlooked. So enforcement is key, and we need clear boundaries so that products that infringe fundamental rights do not reach the market. Enforcement is going to be a very interesting point for the future, as was also mentioned in the previous presentation, since it remains to be seen in practice what enforcement will look like when implementing the AI Act.

But there are a few things I would like everyone to take away from this presentation, three words: fairness, accountability and transparency. These words should guide us when discussing the implementation of the AI Act, and we should look to the GDPR as a source of inspiration moving forward, because there are so many similarities on the policy-making side, on the implementation side, and in how we relate to the entire process. It will be very interesting to see the developments, and I am really happy to be here to discuss how it will happen. Thank you so much.

Thanks, Andreea. Indeed, the interrelation between the GDPR and the AI Act is quite clear; the problem is how the two will work together, especially when it comes to combining the risk-based assessments included in both pieces of legislation, which are quite different even if we are still speaking of a risk-based approach in both. Without further ado, I move to Marco Bassini, an assistant professor in artificial intelligence and fundamental rights at Tilburg University. He will talk a little about the ENCRYPT project, which is supporting this panel, and move towards another set of issues related to security and encryption that are also relevant to what we are discussing today and to the current European policy-making debate. Marco, please, the floor is yours.

Thanks, Giovanni. I will speak from the table, so we speed up the process. Thank you everyone for joining this panel. As announced, I will spend a few words following up on our conversation about the ENCRYPT project, and then I will say a few words about what I feel is a key point that the project itself illustrates: the role of technology as a tool for regulation. We are used to thinking about technology as a target of regulation, but sometimes we can also see potential in the use of technology for governing what happens in the digital sphere.

What is ENCRYPT? First of all, it is a Horizon-funded European project gathering different kinds of expertise, technical of course, but also legal, and it comprises 14 partners from eight European countries. The goal of the project is basically to build a scalable and practical framework for the use of privacy-preserving technologies. This is a key point in the conversation about the potential of technology in supporting compliance, particularly when it comes to fundamental rights. The assumption, and the observation of the state of the art from which the project starts, is that privacy-preserving technologies have huge potential but are still in a way underdeveloped: they are very promising at small scale, but in order to be applied to real-world cases they need to be further implemented and combined. That is why the project wants to build this framework, an actual technical solution, so that different technologies can be combined and made available.
The aim, as I said, is to make privacy-preserving technology solutions applicable to real-world use cases. Why? Because, more and more, and this is definitely also connected to AI, we need to process data on a federated basis: many actors are involved, with multi-stage, multi-actor processing of data in the real world. This of course makes the threats to the fundamental right to privacy very material, since data are in a way supposed to circulate, and there is a risk inherent in the fact that data are processed by many actors. The idea behind the project is to make sure that technology can actually support and foster privacy while facilitating the use of data by a variety of actors. This is the most important challenge we are facing in the EU right now: to have a data-driven economy while at the same time preserving privacy, and that is the rationale behind the project.

The project has three use cases in mind. By that I mean that, in order to develop a solution ensuring that privacy-preserving technology can be properly implemented, we need real cases to see what may happen if we use this combination and this framework. Different contexts serve as examples: the financial context, where financial or banking institutions need to obtain information about individuals; the medical context, where practitioners may need to consult and access data from different institutions and centres; and the prevention of cyber attacks and cyber threats, where we can expect a collection of data that makes it possible to determine when attacks are likely to occur and then to implement the necessary responses so that threats can be detected. These are the use cases on which the project focuses in order to understand whether the solution can be developed.

This is very important with respect to the possible solutions the project develops, because there are currently constraints on the use of privacy-preserving technologies, and the challenge behind the project is to overcome them, so that these technologies can be further developed and implemented in real-world cases. One of the constraints is GDPR compliance, because only in certain circumstances can we rely on fake or synthetic data; sometimes we need real data, and that is not something we can avoid.
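A minimal sketch of the federated pattern described here, in which several parties compute locally and share only model updates, never the raw records; the parties and data are invented, and real deployments of the kind ENCRYPT targets would layer secure aggregation or encryption on top of this basic scheme.

```python
import numpy as np

# Toy federated averaging for a linear model: each party takes a gradient
# step on its own private data, and only the resulting weights leave the site.
rng = np.random.default_rng(0)

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """One local gradient step; raw X and y never leave the party."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Three hypothetical parties (e.g., banks or hospitals) with private datasets.
parties = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
weights = np.zeros(3)

for _ in range(20):
    updates = [local_update(weights, X, y) for X, y in parties]  # stays local
    weights = np.mean(updates, axis=0)  # coordinator sees only the updates

print(weights)
```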
So our role in the project, in particular as legal advisors, was basically to make sure that proper measures, such as data protection impact assessments, are carried out, because the project is quite paradoxical: we are talking about privacy-preserving technology, doing something for the good, but at the same time we need to process data. This is a challenge we also see with AI systems, where we sometimes need to process data in order to implement better, more accurate solutions. Not yet the fundamental rights impact assessment, but that is definitely a scheme we will have to take into account every time we see the development of new technology and this kind of solution. So I think this project, in its way, shows the potential of technology for facilitating compliance while at the same time offering the advantages of the data-driven economy and the exploitation of big data at large scale.

At the same time, I feel this is an interesting example of some of the aspects we have seen together: the potential of technology also as a form of regulation, as a possible tool for regulation. In this respect, and trying to connect my speech with the three previous ones, I think we definitely need to think about the potential of technology, and a point that I believe will trigger further academic conversation, but also a more practical one, is the problem of implementing fundamental rights in a way that is consistent with the state of the art of technology, sometimes by relying on technology itself. One example, in my view, is machine unlearning, something whose actual impact is still underestimated. We rely, of course, on the processing of data at large scale; but what happens if the data change? What happens if content is no longer available? What happens if we actually need to change the "mindset" of algorithms and AI systems when individuals wish their data to be removed, or when copyright holders ask for certain content to be removed because they believe they hold an exclusive right? What happens then with respect to AI systems, and in particular generative AI systems, whose models were trained earlier, on large data sets that included those pieces of information? This is a matter of alignment: making sure that systems remain compatible and consistent with the state of the art of the information available in the public sphere. The AI Act touches on this point with the data quality requirements of Article 10, which set certain specific requirements; but what if the information changes later, not at the training stage but through subsequent developments? I think machine unlearning is quite an interesting example of how technology can come into play here, in order, for instance, to enforce or implement the right to erasure or other fundamental rights protected in the Charter or in other constitutions. Of course, it is a technique that is still subject to debate: whether it is practical or not; whether we can make sure that inferences and correlations no longer rely on data that were previously processed for training; whether it is possible to reopen an AI model, so to speak, and retrain it in a way consistent with the information available in the public sphere following the exercise of certain fundamental rights. So I think this is another interesting perspective that we will need to explore in more detail and more deeply in the future.
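One debated technique in this space is sharded retraining (the "SISA" idea of Bourtoule et al.): keep one sub-model per data shard, so that honouring an erasure request only requires retraining the shard that held the record. A minimal sketch with toy data; it makes no claim that inferences already drawn from the erased record are thereby removed, which is exactly the open question raised above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Fixed data shards, one sub-model each; predictions use a majority vote.
shards = np.array_split(np.arange(len(X)), 3)
models = [LogisticRegression().fit(X[idx], y[idx]) for idx in shards]

def erase(record_index: int) -> None:
    """Remove one record and retrain only its shard, not the whole ensemble."""
    for s, idx in enumerate(shards):
        if record_index in idx:
            kept = idx[idx != record_index]
            shards[s] = kept
            models[s] = LogisticRegression().fit(X[kept], y[kept])

erase(42)  # record 42 no longer influences any retained sub-model
votes = np.mean([m.predict(X[:5]) for m in models], axis=0)
print((votes > 0.5).astype(int))  # majority vote of the shard models
```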
This reveals, however, that technology can sometimes also be supportive of certain rights, and we then need to determine which technical conditions actually satisfy a fundamental right, meet the protection of that right. It calls back to my mind what happened in Google Spain, where we basically got from the Court the message that removal from the search results generated by search engines amounted to a right to de-listing, a right now codified in the GDPR, but one that was still looking for a technical correspondent: we needed to understand which technical conditions actually met that right in the specific technical context. I think what we will see in the AI domain will be similar, and we will have a lot of conversations on this, not only from an academic perspective but also bringing together industry and policymakers, because I still see a disconnection. So thank you for your attention.

Okay, so now it's time for questions; happy to open it. Whoever wants to be the icebreaker, please, it's your time. Come on, let's open it, and we may have five more minutes, because we started five minutes late. Yes, please. If you want to come up here, that's fine too. Perfect. Is the mike there? Okay.

I was working in the medical sector, and as far as I know there is still no regulation on the use of and access to training data, which I think is among the most important rules. For example, we build an AI tool, and for access to the data, the training data, there is still no regulation on who can access the data and how researchers can use it. So are there any binding implementing rules, other than the AI Act, so that at least we have some clear guidelines on how to use and access training data? Thank you.

Okay, should we collect some questions? The first one is on training data. Is there any other question? Otherwise I will just ask the panel whether they want to address this one. Yes, please.

Hi, my name is Peter Gaspar, and I have a question about the fundamental rights impact assessment. Obviously it makes sense to perform this assessment and investigate whether we are introducing any risks, but I think the focus should also be on introducing incremental risks. AI can potentially create risks, but risks already exist: decisions are made by humans, humans can have good days and bad days, they come with biases, and these biases are very difficult, if possible at all, to change. So shouldn't we look basically at incremental risks? Maybe certain risks are created by the system, but maybe systems can also eliminate human biases and human problems, which are very difficult to fix.

Okay, should we collect another one? Yes, please.

I was wondering: these tools have to serve as tools for people and for the protection and safety of rights. But what if you want to check whether people are law-abiding, and I don't mean in the law-enforcement area but in other areas of the law, not driving through a red light, for instance, just ordinary laws? What if the AI serves me as a person, because it has to serve as a tool for people, but it doesn't serve as a tool for you, because you are being watched and your behaviour is being monitored? How do you balance those rights?

Okay, so I would move to the panel now
to answer these three questions. Access to training data: that one is for all of you, whoever wants to go first. There is one on the fundamental rights impact assessment, of course also open to all of you, but I will ask Michèle to address that. And then we have the last one, which was directed to Simona, but you are always free to start. So, access to training data: let me know who wants to go first, or should I pick? No, it's fine? Please.

Yeah, I can say a few words about that. I think it really depends on what the data is and who is holding it, but the answer is essentially shaped by the interplay of the GDPR, to the extent that personal data are involved, and of course the Data Act, which kicks in to the extent that the data you want to use as training data were generated in a scenario that falls within its scope of application. So it is really a contextual analysis, but we do have instruments in EU law that apply. Marco, would you like to add something?

Yeah, I don't know if I got the question entirely, but there might also be some provisions in domestic jurisdictions concerning the processing of this kind of data for research purposes. That is generally something left to Member States to regulate in more detail, so it may also depend on the choices taken by the respective jurisdictions, which may in turn have an impact on access.

Of course, also because there is leeway for Member States to do something at the national level; let's not forget that, it's a super important point when it comes to enforcement. And on the fundamental rights impact assessment, Michèle, can you say something about this point on incremental risks?

Yes, so if I understood your question correctly, it was really about human bias and whether AI might be a tool to mitigate human bias. This is certainly something people have been thinking about for a long time: that you could also use AI in areas where we know human bias exists and might have detrimental effects. I think by and large the concern now is that, because these systems rely on training data that may itself enshrine human biases, they then replicate those biases; that is the underlying assumption. But if you account for that and can address it, then we can definitely think of use cases where AI systems might help mitigate human biases. It all depends on how you use them, I think, and on the governance mechanisms around them.

And to Simona the honour of addressing the last question, please. Yes, I would just say that the two are not mutually exclusive. Having tools that are used for monitoring citizens' behaviour is fine and lawful as long as they do not encroach on or infringe the fundamental rights of those subject to the uses of AI. So the balancing would happen like that: a system can be a tool that serves people in general, that serves society, but it can also be a tool that affects the rights of the people subject to those uses, and this is where we then have to ensure that protection can happen.
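Michèle's caveat, that systems replicate biases enshrined in training data unless you "account for that", has a standard technical counterpart: preprocessing the data so that the protected attribute and the historical outcome become statistically independent. Below is a minimal sketch in the spirit of Kamiran and Calders' reweighing scheme, on synthetic data; all names and numbers are assumptions of this illustration, not anything presented on the panel:

```python
# A minimal sketch of reweighing: up-weight under-represented
# (group, outcome) combinations so a downstream model no longer sees
# group membership as predictive of the historical decision.
import numpy as np

rng = np.random.default_rng(2)
group = rng.integers(0, 2, 1000)  # protected attribute (0/1)
# Historical decisions skewed against group 1:
label = (rng.random(1000) < np.where(group == 1, 0.2, 0.5)).astype(int)

weights = np.empty(1000)
for g in (0, 1):
    for y in (0, 1):
        mask = (group == g) & (label == y)
        # weight = P(G=g) * P(Y=y) / P(G=g, Y=y)
        weights[mask] = (group == g).mean() * (label == y).mean() / mask.mean()

# After reweighing, the weighted positive rate is equal across groups:
for g in (0, 1):
    m = group == g
    print(g, np.average(label[m], weights=weights[m]))
```

A model trained with these sample weights no longer sees a dataset in which group membership predicts the outcome; whether that is the right fairness criterion for a given use case is, of course, exactly the kind of question a fundamental rights impact assessment would have to ask.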
And I would say it's 10:00, so maybe we can enjoy a break before our next session. Thank you so much for joining us, and let's thank our speakers again. Thank you so much.
