The European Union's Artificial Intelligence Act (AI Act) came into force on 1 August 2024. The regulation is designed to promote the safe and ethical development and use of artificial intelligence, while encouraging innovation and safeguarding fundamental rights. The Act categorises AI systems by risk level: systems deemed to pose an unacceptable risk will be banned in the EU from early 2025, six months after entry into force. Estimates suggest that between 5% and 50% of AI systems will be classified as high-risk, depending on the source. These high-risk AI systems must adhere to various compliance requirements and undergo external audits. Failure to comply with the regulation will incur substantial penalties, similar to those imposed under the GDPR. Although certain aspects of the regulation's implementation remain undefined, companies and investors should familiarise themselves with the rules quickly to avoid unplanned costs and delays in their operational timelines.
Amid ongoing debates about how AI systems should be regulated across different global regions, the EU has emerged as one of the first jurisdictions to establish a comprehensive framework for these technologies. Proponents of the regulation laud its focus on promoting ethical development and banning certain types of systems, such as social scoring. However, the regulation has also faced criticism for potentially over-regulating and imposing additional compliance costs. For instance, Vinod Khosla, co-founder of Sun Microsystems and a notable investor, remarked that “Europeans have regulated themselves out of leading in any technology area.”
While the long-term social and economic implications for the EU remain speculative, one certainty is that nearly every company within the EU involved in developing, using, or distributing AI will be impacted by the regulation. This article offers ventures and investors a concise summary of the AI Act's key points. It clarifies the scope of affected parties, how the regulation classifies AI systems by risk level, and the resulting obligations for each risk category.
How will you be affected?
Prior to delving into the different risk classes and their associated obligations, it is crucial to grasp who is affected by the regulation, and how. Contrary to what one might think, the regulation applies not solely to major AI system manufacturers but encompasses all operators within the AI value chain.
- Providers: an organisation that develops an AI system, or has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.
- Deployers: an organisation using an AI system under its authority as part of its professional or commercial activity.
- Importers: an organisation located or established in the EU that places on the EU market an AI system bearing the name or trademark of a natural or legal person established outside the EU.
- Distributors: an organisation in the supply chain, other than the provider or importer, that makes an AI system available on the EU market.

Some important notes here are:
- An organisation can be either a natural person or a legal entity using AI systems in a professional capacity.
- According to the definition of deployers, organisations that use AI systems in their products are also covered by the regulation.
- End-users of products that use AI are not affected by the regulation.
Risk classification of AI operators
The AI Act classifies AI according to its risk. The higher the risk, the stricter the rules. All entities listed above will have to classify their AI risk level.
Figure inspired by the European Parliament's Artificial Intelligence Act materials.
Most AI systems are likely to be categorised as minimal risk. These applications, such as games and spam filters, will operate without strict oversight. AI systems that interact with natural persons will be subject to limited transparency obligations to ensure users are aware that they are not interacting with a human counterpart.
Systems categorised as high risk include those covered by the EU Product Safety Regulation, such as machinery and equipment, electrical and electronic devices, toys, and medical devices, as well as the following use cases outlined in Annex III of the regulation:
- Recognition and classification based on biometric features
- Operation of critical infrastructure
- Education and vocational training
- Labour, personnel management (HRTech), and access to self-employment
- Basic provision of public or private services
- Law enforcement
- Migration, asylum, and border control
- Legal advice

The exact number of AI systems that will be classified as high-risk is currently uncertain. The EU Commission estimates that 5-15% of AI applications will fall into this category, while a survey by the German AI Association suggests that 33-50% will be classified as high-risk. These systems will need to adhere to a wide range of regulatory standards and undergo a conformity assessment carried out by an external auditor.
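To make this triage concrete, the sketch below shows how a venture might run a first-pass classification of its AI systems against the categories above. The use-case labels, the mapping, and the function are illustrative assumptions rather than an authoritative reading of the Act, and the result is no substitute for a proper legal assessment.

```python
# Illustrative first-pass triage of AI systems by risk level.
# The category labels and mappings below are simplified assumptions for
# demonstration only; they do not reproduce the exact wording of Annex III.
from enum import Enum


class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices, e.g. social scoring
    HIGH = "high"                   # Annex III use cases and regulated products
    LIMITED = "limited"             # transparency obligations only
    MINIMAL = "minimal"             # no specific obligations


PROHIBITED_USE_CASES = {"social_scoring", "subliminal_manipulation"}

HIGH_RISK_USE_CASES = {
    "biometric_identification",
    "critical_infrastructure",
    "education_and_training",
    "employment_and_hr",
    "essential_services",
    "law_enforcement",
    "migration_and_border_control",
    "legal_advice",
}


def classify(use_case: str, interacts_with_humans: bool = False) -> RiskLevel:
    """Very rough first-pass classification of an AI system by its use case."""
    if use_case in PROHIBITED_USE_CASES:
        return RiskLevel.UNACCEPTABLE
    if use_case in HIGH_RISK_USE_CASES:
        return RiskLevel.HIGH
    if interacts_with_humans:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL


if __name__ == "__main__":
    print(classify("employment_and_hr"))                             # RiskLevel.HIGH
    print(classify("spam_filter"))                                   # RiskLevel.MINIMAL
    print(classify("customer_chatbot", interacts_with_humans=True))  # RiskLevel.LIMITED
```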
What regulatory requirements have to be met?
Most of the regulation addresses providers and deployers of high-risk AI systems. Importers and distributors are primarily responsible for ensuring that the high-risk AI systems they import or distribute are compliant.
Providers of high-risk AI systems must register with the EU AI system database (not yet available), conduct a conformity assessment, and obtain certification evidence demonstrating compliance with EU regulations. To be compliant, providers must fulfil a range of obligations, including establishing a risk management system and providing technical documentation.
Figure inspired by Holistic AI.
An adjusted set of rules has been defined for deployers of high-risk AI systems, including requirements to control input data and monitor operations. Providers of general-purpose AI (GPAI) models face additional obligations, including monitoring, recording, and disclosing the actual or estimated energy consumption of the model.
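For GPAI providers that do not meter energy consumption directly, one common way to approximate it is from accelerator hours, average power draw, and data-centre overhead. The sketch below follows that approach; the formula and all parameter values are illustrative assumptions, not a method prescribed by the Act.

```python
# Illustrative estimate of the energy consumed to train a GPAI model.
# Formula and parameters are assumptions for demonstration purposes only.

def estimated_training_energy_kwh(
    accelerator_hours: float,  # total GPU/TPU hours used for training
    avg_power_kw: float,       # average power draw per accelerator, in kW
    pue: float = 1.2,          # assumed data-centre power usage effectiveness
) -> float:
    """Rough estimate of training energy in kWh: hours x power x PUE."""
    return accelerator_hours * avg_power_kw * pue


if __name__ == "__main__":
    # Hypothetical run: 10,000 GPU hours at an average draw of 0.4 kW.
    print(f"{estimated_training_energy_kwh(10_000, 0.4):,.0f} kWh")  # 4,800 kWh
```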
Cost of non-compliance
The AI Act establishes substantial penalties for businesses that fail to comply, mirroring the GDPR's approach to enforcement.
Businesses found to be in breach of the rules on prohibited AI systems may face penalties of up to EUR 35 million or 7% of their global annual turnover, whichever is higher. Failure to comply with the obligations placed on operators of AI systems in any other category may lead to fines of up to EUR 15 million or 3% of worldwide annual turnover. Providing incorrect, incomplete, or misleading information to notified bodies or national competent authorities in response to a request may result in administrative fines of up to EUR 7.5 million or 1% of total worldwide annual turnover.
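To see how these caps interact with company size, the sketch below computes the upper bound of each tier for a hypothetical turnover. It assumes that the higher of the fixed cap and the turnover-based amount applies, and that for SMEs the lower of the two applies instead; both readings of the penalty provisions should be verified against the final text of the Act.

```python
# Illustrative upper bound of AI Act fines for a hypothetical company.
# Caps and percentages are the figures cited above; the higher/lower rule
# is an assumption about how the penalty provisions apply.

def max_fine_eur(fixed_cap_eur: float, turnover_share: float,
                 annual_turnover_eur: float, sme: bool = False) -> float:
    """Upper bound of an administrative fine for a given annual turnover."""
    turnover_based = turnover_share * annual_turnover_eur
    if sme:
        return min(fixed_cap_eur, turnover_based)
    return max(fixed_cap_eur, turnover_based)


if __name__ == "__main__":
    turnover = 800_000_000  # hypothetical EUR 800m global annual turnover
    print(f"Prohibited AI practices:    up to EUR {max_fine_eur(35_000_000, 0.07, turnover):,.0f}")
    print(f"Other operator obligations: up to EUR {max_fine_eur(15_000_000, 0.03, turnover):,.0f}")
    print(f"Misleading information:     up to EUR {max_fine_eur(7_500_000, 0.01, turnover):,.0f}")
```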
Timeline
The EU Council approved the AI Act in May 2024. Most AI systems have at least two years to comply. However, AI systems deemed to pose unacceptable risks will be banned from early 2025, six months after the Act entered into force.
Impact on startups
A 2021 study conducted by the EU estimated that the annual compliance costs for each AI model classified as high-risk will be approximately €52,000. This total comprises €29,000 for internal compliance requirements, such as additional documentation and human oversight, along with €23,000 for external auditing costs pertaining to mandatory conformity assessments. However, the actual costs are likely to be higher, as the assumed hourly rate of €32 is notably low for data engineers and data scientists operating within the EU.
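A quick sensitivity check makes the point about the hourly rate concrete. The sketch below rescales the internal portion of the estimate to higher rates; the assumption that the internal cost is entirely labour, and the alternative rates themselves, are our own illustrative choices, with only the €29,000, €23,000, and €32-per-hour figures taken from the study cited above.

```python
# Back-of-the-envelope sensitivity check on the EU's 2021 compliance cost estimate.

STUDY_INTERNAL_COST_EUR = 29_000  # internal compliance work per high-risk model
STUDY_EXTERNAL_COST_EUR = 23_000  # external conformity assessment per model
STUDY_HOURLY_RATE_EUR = 32        # hourly rate assumed in the EU study


def adjusted_compliance_cost(hourly_rate_eur: float) -> float:
    """Rescale the internal cost to a different hourly rate, assuming
    (as a simplification) that the internal cost is entirely labour."""
    implied_hours = STUDY_INTERNAL_COST_EUR / STUDY_HOURLY_RATE_EUR  # ~906 hours
    return implied_hours * hourly_rate_eur + STUDY_EXTERNAL_COST_EUR


if __name__ == "__main__":
    for rate in (32, 60, 90):  # EUR/h: study assumption vs. more typical EU rates
        print(f"EUR {rate}/h -> ~EUR {adjusted_compliance_cost(rate):,.0f} per model and year")
    # EUR 32/h reproduces the ~EUR 52,000 total; EUR 90/h roughly doubles it.
```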
On the other hand, regulated companies can gain a competitive advantage by marketing their ethical AI practices, attracting clients prioritising responsible AI use and data protection. Certification can further instil trust in clients, demonstrating the company's commitment to security and sustainability.
Consequences for investors
The EU AI Act establishes a regulatory framework that offers long-term predictability for AI-focused investments in the EU market. It is designed to create a secure environment for AI development, giving SMEs a clear legal basis for long-term planning and business stability. This is particularly important as similar regulations are being developed in various other regions, but their final implications remain uncertain.
For existing portfolio companies, we recommend two key actions:
- Evaluate current AI usage and determine risk classifications as per the EU AI Act.
- Incorporate anticipated regulatory and compliance costs into business plans.

For upcoming investments:
- Include EU AI Act compliance in legal and tech due diligence, similar to current GDPR practices.
- Evaluate ventures' existing processes and best practices to gauge the complexity of achieving compliance.
- Focus on MLOps, cybersecurity, and ethical standards to determine whether minor adjustments suffice or major restructuring is necessary.
Conclusion
The EU AI Act's provisions prohibiting AI systems classified as unacceptable-risk are scheduled to take effect in early 2025. In parallel, member states will need to embed the regulation in their national enforcement structures, for example by designating competent authorities and laying down rules on penalties, a process anticipated to occur during 2025.
The two-year transition period may appear substantial, but AI companies and investors should initiate preparations promptly to mitigate potential operational disruptions when the regulations become enforceable. Entities falling under the scope of the regulation should begin optimising their processes, including implementing MLOps best practices, enhancing documentation protocols, and strengthening cybersecurity measures.
Investors are advised to engage proactively with their portfolio companies to forecast and assess future regulatory compliance costs. For prospective investments, selecting a Product and Technology Due Diligence provider capable of evaluating the potential implications of AI regulations is paramount. TechMiners is closely monitoring these developments to ensure comprehensive risk identification in our projects and to provide forward-looking strategic recommendations to our clients.