A Guide to High-Risk AI Systems Under the EU AI Act
Providers of AI systems that the EU deems ‘high-risk’ face a new compliance regime requiring:
- For the technology itself -- risk management, data quality, transparency, human oversight, and accuracy; and
- For the business -- registration, quality management, monitoring, record-keeping, and incident reporting.
The first step in assessing what the new rules mean for your business is understanding the “High-Risk” designation (set out in Article 6 of the Act).
The text is nuanced and should be reviewed carefully, but in broad strokes, an AI system is “High-Risk” when it touches product safety or fundamental rights.
AI is categorically considered to be “High-Risk” when the system:
(1) is itself (or is a safety component of) one of the following regulated product types:
Machinery
Toys
Recreational craft and personal watercraft
Elevators
Protective equipment
Radio equipment
Pressure equipment
Cableway installations
Gas appliances
Medical devices
Civil aviation
Two- and three-wheel vehicles and quadricycles
Agriculture and forestry equipment
Marine equipment
Rail system interoperability equipment
Motor vehicles and trailers; or
(2) is one of the ‘High-Risk’ AI systems listed in Annex III:
Biometrics (when used to categorize people’s immutable characteristics or emotions)
Critical infrastructure (e.g., digital infrastructure, roadway traffic, and supply of energy)
Education and vocational training (e.g., systems used for access to and performance in the educational system)
Employment (e.g., systems used for recruiting, placement, and employee monitoring)
Essential private or public services (such as healthcare, obtaining credit, emergency response systems, and personal insurance risk assessment)
Law enforcement
Migration, asylum, and border control
Administration of justice and democratic processes
Notable exceptions to the list of “High-Risk” systems include:
Pharmaceutical development and deployment, and
Financial market participation.