Understanding emerging AI and data privacy regulations


In this Help Net Security interview, Sophie Stalla-Bourdillon, Senior Privacy Counsel & Legal Engineer at Immuta, discusses the AI Act, the Data Act, and the Health Data Space Regulation. Learn how these regulations interact, their implications for both public and private sectors, and their role in shaping future AI and data privacy practices globally.

The discussion also touches on the Biden administration’s policies on AI and the anticipated American Privacy Rights Act, providing a view of data governance’s current and future landscape.


Could you provide an overview of the AI Act, the Data Act, and the Health Data Space Regulation? How do they interact with each other?

Amid a recent uptick in laws and regulations being developed within the European Union (EU), three EU-based data regulations are especially important to pay attention to, as they will influence data practices across a range of critical sectors: the Artificial Intelligence (AI) Act, the Data Act, and the Health Data Space Regulation.

  • The AI Act is the EU’s first comprehensive AI regulation and applies to both public and private entities. The main goal is to promote the uptake of human-centric and trustworthy AI. It categorizes AI systems by levels of risk, banning those that create an unacceptable amount of risk to the public and imposing specific requirements for high-risk AI systems. Transparency requirements are imposed on certain AI systems to enable the detection and disclosure of artificially generated or manipulated outputs, and obligations are imposed upon providers putting general-purpose AI models and systems on the market.
  • The Data Act aims to increase the level of data sharing by establishing rules that allow users of IoT devices — be they businesses or consumers — to access and use the data those devices generate, while maintaining a high level of personal data protection. Holders of readily available usage data are thus under an obligation to share it with users and, under certain conditions, third parties.
  • The Health Data Space Regulation has two main pillars. One is dedicated to the primary use of electronic health data, enhancing individuals’ access to their electronic health data regardless of the Member State in which they are located. The other is dedicated to the secondary use of electronic health data, facilitating data sharing for research, innovation, and public policy purposes while maintaining compliance with EU data protection rules.

These three regulations are intended to be complementary. The second and the third aim to facilitate access to data for research and innovation purposes, including for building AI, while the first one aims to set a horizontal framework for governing AI systems that are placed on the market, put into service, and used.

What is important to note is that these regulations are not intended to affect the application of GDPR, although they might create exceptions to some of its rules. For instance, the AI Act acknowledges that providers of high-risk AI may exceptionally process special categories of personal data for bias detection purposes when certain conditions are met. Interestingly, the AI Act also adds to GDPR with the introduction of a right to explanation.

The AI Act is noted for its comprehensive approach to trustworthy AI. Can you explain the key provisions of the AI Act and how it categorizes AI systems based on risk?

The AI Act is the first comprehensive AI regulation adopted by EU lawmakers. It imposes obligations on both providers and deployers of AI systems, and specifically targets practices and systems that pose some risk to the health and safety or the fundamental rights of individuals, with the purpose of ensuring “a high level of protection of health, safety, fundamental rights (…), including democracy, the rule of law and environmental protection, against the harmful effects of AI systems.”

The AI Act adopts a slightly different approach from GDPR in that it tries to identify red flags and confine the leeway of decision-makers who are rushing to integrate AI models and systems into their data processing pipelines. It sorts AI practices and systems into four risk categories.

The first category encompasses AI practices that create an unacceptable risk to the public, including those that pose a threat to citizens’ rights and democracy: for example, AI systems used to assess or predict the risk of a person committing a criminal offense, systems that create or expand facial recognition databases, and the use of ‘real-time’ remote biometric identification systems in public spaces for law enforcement, with some exceptions.

The second category encompasses high-risk systems for which providers are subject to a whole set of requirements including risk management, data governance, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. Deployers are also subject to a set of obligations, including an obligation to monitor the operation of the high-risk AI system based on the instructions for use given by the provider. High-risk AI systems are those used in critical sectors such as justice, education, and employment.

The third category encompasses certain AI systems that are subject to transparency requirements to make sure the public is in a position to recognize artificially generated or manipulated outputs. The fourth category encompasses other systems which are largely unregulated. Importantly, general-purpose AI models and systems are also in scope.
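
To make the tiering more tangible, here is a minimal, hypothetical Python sketch of how an organization might triage a proposed AI use case against the four categories. The tier labels and trigger flags are illustrative assumptions, not language from the Act, and a real assessment would require legal review of the Act’s prohibited-practice list and annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk system"
    TRANSPARENCY = "limited risk, transparency obligations"
    MINIMAL = "largely unregulated"

# Illustrative trigger flags only; the Act's actual criteria are far more granular.
def triage_use_case(predicts_criminal_offending: bool,
                    expands_facial_recognition_db: bool,
                    used_in_critical_sector: bool,
                    generates_synthetic_content: bool) -> RiskTier:
    if predicts_criminal_offending or expands_facial_recognition_db:
        return RiskTier.UNACCEPTABLE   # first category: banned outright
    if used_in_critical_sector:
        return RiskTier.HIGH           # second category: full provider/deployer duties
    if generates_synthetic_content:
        return RiskTier.TRANSPARENCY   # third category: disclosure duties
    return RiskTier.MINIMAL            # fourth category: largely unregulated

print(triage_use_case(False, False, True, False))  # RiskTier.HIGH
```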

Could you discuss the significance of the Biden administration’s recent policies on AI use by the federal government? What do these policies entail?

The Biden administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence defines the federal government’s approach to AI. Essentially, executive departments and agencies are required to adhere to eight principles:

1. Deploying safe and secure AI, which will involve developing effective labeling and content provenance mechanisms, to help individuals identify content generated using AI.

2. Promoting responsible innovation, competition, and collaboration, through the funding of AI-related education, training, development, research, and capacity building.

3. Supporting a diverse workforce and facilitating access to opportunities that AI creates.

4. Advancing equity and civil rights and preventing discrimination and bias.

5. Enforcing existing consumer protection laws and enacting safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI as well as promoting responsible uses of AI.

6. Mitigating privacy and confidentiality risks related to data usage, which may involve leveraging privacy-enhancing technologies.

7. Dedicating resources to enlarge the pool of public service-oriented AI professionals.

8. Engaging with international partners to develop a framework for managing AI’s risks and to promote common approaches to AI-related challenges.

These principles are far-reaching and will impact many sectors. Although the Executive Order aims to frame the government’s use of AI only, it will inevitably impact private entities, starting with providers selling AI services to the government. Standards that will be developed to back up the approach, e.g., standards to support risk assessment processes, will have an impact beyond the public sector.

The upcoming American Privacy Rights Act is the US’s answer to GDPR. Can you outline the key features of APRA? How might it influence global data practices?

There is a lot to unpack in this bill. While it has triggered a lot of different reactions and the text is meant to evolve during the legislative process, it is a comprehensive framework with some very interesting provisions.

The scope of APRA is narrower than GDPR’s, although it is interesting to see that APRA would apply to non-profit organizations as well as businesses. Unlike GDPR, APRA includes an exemption for small businesses, although small businesses would be regulated as service providers. Public entities are not in scope, although they are subject to GDPR in the EU. The list of entities that would be impacted by the bill is still relatively far-reaching.

Some of APRA’s key features include:

  • Publicly available information and inferences made from publicly available data in certain circumstances are excluded from the definition of covered data, which is not the case under GDPR. This is not surprising as publicly available information is usually excluded from the scope of US state privacy laws.
  • The list of sensitive data covered is longer in APRA than in GDPR, but this does not necessarily mean that protection will be stronger or even equivalent under APRA.
  • APRA includes a private right of action, which is framed differently from the list of remedies granted to data subjects under GDPR, but which is often absent from US state privacy laws. Yet, private enforcement has clear benefits and should be facilitated.
  • APRA adopts a data minimization approach, which requires covered entities to identify a permitted purpose for the processing of covered data whenever it goes beyond what is necessary to supply or maintain a specific product or service requested by the individual. GDPR is based upon a similar logic but does not contain a closed list of legitimate business interests or purposes. Some have argued that APRA’s approach is less flexible than GDPR’s, at least in its current draft, but it remains a very interesting attempt to draw a clearer line between processing activities that individuals should reasonably expect and processing activities that go beyond reasonable expectations (a minimal sketch of this kind of purpose check follows this list).
  • Additional requirements, such as transparency and impact assessments, are imposed upon large data holders and data brokers, and tracking information generated by high-impact social media companies is considered sensitive.
  • Just like GDPR brought organizational changes with the requirement to employ a Data Protection Officer when certain conditions are met, APRA would require covered entities and service providers to appoint a privacy officer or a data security officer.
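
As a rough illustration of the data minimization logic mentioned above, the following Python sketch checks a processing request against a hypothetical set of permitted purposes. The purpose names are placeholders and do not reproduce the bill’s enumerated list.

```python
# Hypothetical permitted purposes; APRA's actual closed list is defined in the bill.
PERMITTED_PURPOSES = {
    "provide_requested_service",
    "security_incident_response",
    "legal_compliance",
}

def may_process(purpose: str, necessary_for_requested_service: bool) -> bool:
    """Allow processing that is necessary for the product or service the
    individual asked for, or that maps onto a specifically permitted purpose."""
    return necessary_for_requested_service or purpose in PERMITTED_PURPOSES

assert may_process("provide_requested_service", necessary_for_requested_service=True)
assert not may_process("targeted_advertising", necessary_for_requested_service=False)
```
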
Considering these developments, what could be the global impacts of the EU and US AI and privacy laws? How might they affect international businesses?

Compliance with these new rules will require resources and new skill sets but, given the pace at which technology and practices evolve, this is inevitable. The international order is also becoming increasingly multipolar, which makes efforts to harmonize rules at the international level harder to pursue.

International businesses will have to adapt by finding solutions that can accommodate claims based upon the protection of local fundamental values, such as the protection of fundamental rights in Europe, or legitimate data sovereignty claims, which relate to concerns about local strategic autonomy. Interestingly, the US is shifting its international digital trade policy to create space for rethinking domestic competition and AI policy.

An emerging industry trend, often described as the data mesh paradigm, is developing good practices for building and maintaining federated data architectures in highly regulated verticals, such as healthcare and finance. Further testing and prototyping are needed to develop complementary data governance requirements and compliance-by-design approaches, but international businesses should be looking in this direction to get more creative and proactive.
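
As a rough illustration of what compliance-by-design can look like in such federated architectures, here is a hypothetical policy-as-code sketch in Python that masks direct identifiers unless the stated purpose is entitled to identified data. The column names, purposes, and rule itself are assumptions made purely for illustration.

```python
# Hypothetical policy-as-code rule for a federated data product: mask direct
# identifiers unless the requesting purpose is entitled to identified data.
SENSITIVE_COLUMNS = {"patient_name", "ssn"}
PURPOSES_WITH_IDENTIFIED_ACCESS = {"treatment"}

def apply_policy(row: dict, purpose: str) -> dict:
    """Return a copy of the row with sensitive fields masked for purposes,
    such as research analytics, that are not entitled to identified data."""
    if purpose in PURPOSES_WITH_IDENTIFIED_ACCESS:
        return dict(row)
    return {k: ("***" if k in SENSITIVE_COLUMNS else v) for k, v in row.items()}

record = {"patient_name": "Jane Doe", "ssn": "123-45-6789", "blood_pressure": "120/80"}
print(apply_policy(record, "research"))   # identifiers masked
print(apply_policy(record, "treatment"))  # identifiers returned as-is
```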

Based on the current trajectory of AI and data legislation, what future trends might we expect in regulating AI and data privacy?

This is not necessarily straightforward to answer. The EU has been criticized for putting the cart before the horse and coming up with a very complex set of regulations. In the wake of Brexit, the UK, for example, is trying a softer approach with its ‘pro-innovation approach to AI regulation.’ Other jurisdictions are taking a rather ambivalent stance. Despite its initial hands-off approach, India now seems to be asking tech firms to seek approval prior to the release of unreliable or under-trial AI tools. It’s difficult to predict how the US position will evolve after the election.

We’ll need to watch how the market reacts and how the new EU rules are enforced. However, the risk-based approach embedded within the AI Act, i.e., the explicit prohibition of certain practices and the classification of high-risk AI systems, is already having an impact in practice. This is because many data-driven organizations are now rushing to adopt AI in one form or another, and they look at these red flags to inform the selection of the first AI use cases to which resources are allocated.

What we also see is that the debate triggered by the adoption of these new rules has, in some pockets, made organizations more cautious: in several verticals, organizations dealing with regulated and/or sensitive data are now trying to add new restrictions to their contracts with service providers, in particular restrictions related to the use of AI systems and AI training. Cross-border data transfer restrictions are also becoming more frequent, even when organizations are not formally subject to requirements of this type.


