EU’s “AI Act” Requires New Transparency Obligations for AI Providers

By exempting government surveillance from its new controls, the EU has created a “foreground AI” vs “background AI” dichotomy — and claimed the latter for itself.

The Act's text (Article 52) should be reviewed carefully, but in simplified terms, providers of AI systems MUST:

1. PROVIDE NOTICE OF INTERACTING WITH AI. If designed to interact directly with people, an AI system must inform the user that they are interacting with an AI system (unless it's so obvious that it's unnecessary).

  • Exception: Law Enforcement.

1a. MARK SYNTHETIC OUTPUT. AI-generated synthetic audio, video, or text must be marked in a machine-readable format and detectable as artificially generated or manipulated.

  • Exceptions: Law Enforcement; Assistive Editing.
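The Act requires that synthetic output be marked in a machine-readable format, but it does not prescribe a schema. As a purely hypothetical illustration of what such a marking might look like in practice, here is a minimal Python sketch that attaches a provenance flag to AI-generated text; the field names (`provenance`, `ai_generated`) and the model identifier are invented for this example, not drawn from the Regulation.

```python
import json


def mark_synthetic(text: str) -> dict:
    """Wrap AI-generated text with a machine-readable provenance marker.

    The schema below is hypothetical -- Article 52 requires a detectable,
    machine-readable marking but leaves the format to implementers.
    """
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,           # the detectable flag
            "generator": "example-model",   # hypothetical model identifier
        },
    }


# Serialize to JSON so downstream tools can detect the marking automatically.
record = mark_synthetic("This paragraph was written by an AI system.")
print(json.dumps(record, indent=2))
```

A real-world implementation would more likely embed provenance in the content itself (e.g., via metadata or watermarking standards) rather than a JSON sidecar, but the principle is the same: the marking must survive in a form machines can inspect.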

2. PROVIDE NOTICE OF COLLECTION OF PERSONAL DATA. Emotion recognition and biometric categorisation systems must inform the user and process data in accordance with GDPR.

  • Exception: Law Enforcement.

3. DISCLOSE DEEP FAKES. If an AI system generates or manipulates image, audio, or video content constituting a deep fake, it must disclose that the content is AI-generated. If an AI system generates or manipulates text on a matter of public interest, it must disclose that the text is AI-generated.

  • Exceptions: Law Enforcement; Interference with Art; Human Has Editorial Responsibility.

Further, the AI Office is empowered to draft implementing acts, and the Commission is empowered to adopt them.

The Commission professes: “The purpose of this Regulation is to improve the functioning of the internal market and promoting the uptake of human centric and trustworthy artificial intelligence, while ensuring a high level of protection of health, safety, fundamental rights…”

New technologies, like so-called “AI,” are oft-accompanied by new challenges. But has the free market been given ample time to meet these challenges? Has AI demonstrated a compelling threat to “health and safety” — the way social media platforms have, for example? Or is this a hasty over-reaction whose result is more power for the already powerful? As used above, “Law Enforcement” means AI systems used to “detect, prevent, investigate and prosecute criminal offenses.” AI for me but not for thee? What do you think?

Here’s a useful link: The AI Act Explorer
