By Dalvin Chien, Partner, Ashleigh Fieldus, Senior Associate, and Cheryl Zhang, Associate
On 13 March 2024, the European Parliament approved the EU Artificial Intelligence Act (EU AI Act). This marks a historic milestone: the EU AI Act is the world’s first comprehensive artificial intelligence (AI) legislation.
The EU AI Act is awaiting formal endorsement by the EU Council. Once endorsed, it will become law and take effect in stages.
As the first major legislation of its kind, the EU AI Act will serve as an example for governments and organisations worldwide. We will undoubtedly start to see other jurisdictions follow suit.
SNAPSHOT:
- The EU AI Act has been approved and will become law – it is the world’s first AI legislation of its kind.
- The EU AI Act aims to protect the safety and rights of individuals, uphold democracy and the rule of law, and promote environmental sustainability. The Act adopts a risk-based approach, with stricter rules applied to AI applications that pose higher risks to people. It has global reach, with the potential to affect Australian companies with connections to the EU market.
- Australia will likely follow suit and regulate AI – however, it remains to be seen whether we will follow the path laid by the EU AI Act or turn to other jurisdictions for guidance.
- Organisations should stay tuned and get ready. Further AI regulation and guidance in Australia is coming – and proactivity will be key to ensuring organisations are prepared when it arrives!
This article provides a brief overview of the EU AI Act, its implications for Australia, and what it means for you.
What is the EU AI Act?
The EU AI Act defines AI systems as machine-based systems that (a) operate with varying levels of independence and (b) can adapt over time, using input data to generate outputs such as predictions, content, recommendations, or decisions that can affect real or virtual environments.
Objectives
The aim of the EU AI Act is simple: to protect the safety and fundamental rights of individuals, uphold democracy and the rule of law, and promote environmental sustainability, particularly against the potential risks posed by high-risk AI applications.
Activities it regulates
At its core, the EU AI Act adopts a risk-based approach, regulating AI according to its potential societal harm. The more significant the risk associated with an AI application, the more stringent the regulations imposed.
The EU AI Act breaks down AI applications into three key categories:
1. Prohibited AI. This covers applications including social credit scoring systems, emotion recognition tools in workplaces and schools, and AI that exploits vulnerabilities such as age or disability. It also covers behaviour manipulation and untargeted scraping of facial images for recognition purposes. Additionally, specific predictive policing applications and real-time biometric identification in public spaces fall under this category.
The EU AI Act prohibits the use of such AI applications and systems as they are deemed to present an unacceptable risk to human rights.
2. High Risk AI. These are AI systems that pose significant risks to human health, safety, or fundamental rights. They are divided into two sub-categories:
- AI systems used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices, and lifts.
- AI systems falling into specific areas such as critical infrastructure, education, employment, essential services (e.g., healthcare and banking), law enforcement, migration and border management, and democratic processes (e.g., influencing elections).
The EU AI Act requires organisations using high-risk AI systems to assess and mitigate risks, maintain usage records, ensure transparency and accuracy, and include human oversight. European citizens will also have the right to lodge complaints and receive explanations for decisions made by high-risk AI systems that affect their rights.
3. General Purpose AI. These are AI models capable of performing a wide range of distinct tasks. They are sometimes referred to as “foundation models”; the model underpinning ChatGPT is a well-known example.
The EU AI Act imposes transparency requirements on general-purpose AI systems, including compliance with EU copyright law and publication of detailed summaries of training content. More powerful General Purpose AI models that pose systemic risks must undergo additional measures such as model evaluations, risk assessments, and incident reporting. Deepfake content must also be clearly labelled.
In addition to regulating these three categories of AI systems, the EU AI Act introduces measures to support innovation and SMEs, including national regulatory sandboxes and real-world testing regimes to help SMEs and startups develop and train innovative AI before placing it on the market.
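For readers who think in code, the Act’s tiered structure can be summarised as a simple classification. Below is a minimal, purely illustrative Python sketch of the three categories described above; the use-case labels are our own shorthand, and classifying a real system requires legal analysis of its specific context of use.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's three key categories, heavily simplified."""
    PROHIBITED = "unacceptable risk: banned outright"
    HIGH_RISK = "significant risk: strict obligations apply"
    GENERAL_PURPOSE = "transparency and copyright obligations apply"

# Hypothetical triage of example use cases drawn from the categories above.
# This mapping is illustrative only, not a legal determination.
example_triage = {
    "social credit scoring": RiskTier.PROHIBITED,
    "emotion recognition in the workplace": RiskTier.PROHIBITED,
    "CV screening for recruitment": RiskTier.HIGH_RISK,
    "creditworthiness assessment in banking": RiskTier.HIGH_RISK,
    "foundation model powering a chatbot": RiskTier.GENERAL_PURPOSE,
}

for use_case, tier in example_triage.items():
    print(f"{use_case} -> {tier.name}: {tier.value}")
```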
Who it applies to
The EU AI Act applies to companies operating in the EU across the AI value chain, whether they develop, supply, import, distribute, or deploy AI systems.
Consequences for not complying
The EU AI Act sets out penalties for non-compliance. Member States are tasked with defining these penalties, which must be effective, proportionate, and dissuasive. Breaches can incur substantial fines, with caps set at the lower of the applicable percentage or fixed amount for SMEs and startups.
The penalties are as follows:
- Penalties for Operators. Non-compliance with the prohibition of AI practices could result in administrative fines of up to €35,000,000 or 7% of the offender’s total worldwide annual turnover for the preceding financial year, whichever is higher.
- Penalties for AI Systems Non-Compliance. Failure to comply with various obligations related to operators, notified bodies, providers, authorised representatives, importers, distributors, and deployers may lead to administrative fines of up to €15,000,000 or 3% of the offender’s total worldwide annual turnover for the preceding financial year, whichever is higher.
- Penalties for Providing Incorrect Information. Supplying incorrect, incomplete, or misleading information to notified bodies and national competent authorities could result in administrative fines of up to €7,500,000 or 1% of the offender’s total worldwide annual turnover for the preceding financial year, whichever is higher.
The decision to impose fines, and their amount, depends on various factors such as the severity of the breach, cooperation with authorities, and financial gains from the infringement. Additionally, there are reporting requirements and procedural safeguards to ensure fair application of penalties. The European Commission may also impose fines on providers of General Purpose AI models for intentional or negligent infringements, with fines not to exceed 3% of the provider’s total worldwide turnover or €15,000,000, whichever is higher.
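Because each ceiling is the higher of a fixed cap and a percentage of worldwide turnover, the maximum exposure scales with company size. The following is a minimal sketch of that arithmetic using a hypothetical turnover figure; actual fines depend on the factors described above, and SME caps work differently.

```python
def max_fine_eur(cap_eur: int, turnover_pct: float, turnover_eur: int) -> float:
    """Ceiling of an administrative fine under the EU AI Act: the fixed
    cap or the stated percentage of total worldwide annual turnover,
    whichever is higher. (For SMEs and startups the lower figure applies.)"""
    return max(cap_eur, turnover_pct * turnover_eur)

# Hypothetical company with EUR 2 billion worldwide annual turnover:
turnover = 2_000_000_000
print(max_fine_eur(35_000_000, 0.07, turnover))  # 140000000.0 - the 7% limb applies
print(max_fine_eur(15_000_000, 0.03, turnover))  # 60000000.0
print(max_fine_eur(7_500_000, 0.01, turnover))   # 20000000.0
```

For this hypothetical company, the turnover limb exceeds the fixed cap in every tier, so the percentage sets the ceiling.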
Does the EU AI Act extend to Australia?
The EU AI Act has far-reaching extraterritorial application. It will apply to Australian companies in a number of situations.
For example:
- Australian companies placing AI systems on the EU market. Regardless of physical presence, Australian companies providing AI products or services to the EU market must adhere to the EU AI Act. For example, an Australian technology company selling AI-driven software solutions to businesses in the EU for purposes such as customer service, data analysis, or automation would need to comply with the EU AI Act.
- Australian companies providing AI outputs used in the EU. If an Australian company develops AI models or systems whose outputs are used by businesses or individuals within the EU, it will be caught by the EU AI Act.
- Importers and distributors of Australian AI systems into the EU. Australian companies involved in importing or distributing AI systems into the EU would be subject to the regulations of the EU AI Act, regardless of their physical location.
- Australian product manufacturers incorporating AI systems. If Australian product manufacturers integrate AI systems into their products and sell them in the EU under their own brand, they will be subject to the EU AI Act. For example, if an Australian automotive company develops self-driving cars that incorporate AI technology and sells them in the EU market, it will need to comply with the EU AI Act.
Australian companies engaging in AI-related activities that have connections with the EU market or EU citizens should carefully consider their obligations under the EU AI Act to ensure compliance.
Will Australia adopt provisions similar to the EU AI Act?
The EU AI Act has the potential to establish global norms for AI usage, akin to the GDPR’s impact on privacy regulations, due to the EU’s significant market size, stringent regulations and global economic influence.
Despite this, countries like the UK, US, and Canada are prioritising growth and innovation in their AI strategies and are cautious of alienating their AI industries.
Canada, for example, has adopted a softer regulatory stance through its proposed Artificial Intelligence and Data Act (AIDA). The AIDA emphasises fostering innovation and responsible AI development while maintaining some flexibility in compliance measures and offering milder enforcement mechanisms than the EU AI Act.
As at the date of this article, Australia does not have specific legislation regulating AI. Instead, AI is governed by a range of existing Australian laws, such as privacy legislation, intellectual property laws, and consumer laws. There are also standards and frameworks that must be adhered to (e.g., NSW Government agencies are required to take into account the NSW AI Assurance Framework for all projects that contain an AI component or use AI-driven tools).
Whether Australia will follow the path of the EU AI Act or the AIDA is not certain.
Earlier this year, the Australian Government, via the Department of Industry, Science and Resources, expressed its commitment to implementing mandatory safeguards for high-risk AI use cases. The government’s primary goal is to regulate AI development, deployment, and usage, particularly in high-risk contexts, while allowing lower-risk AI to thrive. The regulatory focus encompasses testing and audit, transparency, and accountability, with interim measures such as developing an AI safety standard and a voluntary requirement for companies to watermark AI-generated content.
The government also established an AI Expert Group within the Department of Industry, Science and Resources in February 2024, further underscoring its focus on ensuring the safety of AI systems. The group will advise the Department on immediate work on transparency, testing, and accountability, including options for AI guardrails in high-risk settings, to help ensure AI systems are safe.
Overall, while Australia does not yet have specific AI legislation in place, it is actively working towards implementing regulatory measures to ensure the safe and responsible development, deployment, and use of AI systems.
Given the dynamic nature of this field, staying informed is key. So, watch this space!
What does this mean for you?
Generative AI is not going anywhere. In addition to ChatGPT, Copilot, and Gemini, NVIDIA recently announced its Blackwell platform, which introduces six transformative technologies and is set to revolutionise sectors ranging from data processing to generative AI. The Blackwell GPUs underpinning the platform are “superchips” reported to be four times as fast as the previous generation of chips. NVIDIA is also continuing work on Project GR00T, a foundation model designed for humanoid robots.
The passing of the EU AI Act, together with the speed and massive scale at which generative AI is being adopted and embraced by the public, will accelerate the take-up of regulation in Australia or, at the very least, an opt-in framework for AI projects.
In preparation for this, we suggest that organisations do the following:
- Conduct comprehensive audits to understand the AI tools currently in use within your organisation (a simple register format is sketched after this list).
- Map out and document the information flows associated with the AI tools, including input sources, processing algorithms, output destinations, and any subsequent data handling processes.
- Identify potential privacy and security implications at each stage of the information flow.
- Stay informed about the EU AI Act and its implications for Australian companies operating in the EU market, specifically whether the EU AI Act applies.
- Keep abreast of developments in Australian AI regulations to remain compliant.
- Start thinking about a compliance framework. That is, organisations should evaluate whether to adopt the most stringent form of compliance (currently the EU AI Act requirements) or tailor compliance efforts based on local jurisdictional regulations.
- Assess the impact of compliance strategies on business operations, resources, and market expansion plans.
- Develop clear policies and procedures outlining compliance requirements for AI usage, data handling, and privacy protection. Also, don’t forget that AI must be used ethically.
- Conduct thorough contract reviews with AI vendors, ensuring contractual provisions align with compliance objectives and legal obligations.
- Train employees on compliance protocols and procedures to foster a culture of adherence to regulatory standards.
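To make the first two steps concrete, the sketch below shows one possible shape for an AI system register. It is illustrative only: the field names, the example entry, and the risk flags are our own assumptions, not terms prescribed by the EU AI Act or any Australian framework.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One illustrative entry in an organisation's AI system register."""
    name: str
    vendor: str
    purpose: str
    input_sources: list[str]        # where the data comes from
    output_destinations: list[str]  # where results flow to
    personal_data_involved: bool    # flags privacy implications
    eu_market_exposure: bool        # could the EU AI Act apply?
    risk_notes: str = ""

# Hypothetical entry for a customer-service chatbot:
register = [
    AISystemRecord(
        name="CustomerSupportBot",
        vendor="ExampleVendor Pty Ltd",
        purpose="Automated first-line customer service",
        input_sources=["live chat transcripts", "CRM records"],
        output_destinations=["customer chat window", "ticketing system"],
        personal_data_involved=True,
        eu_market_exposure=False,
        risk_notes="Review against transparency and privacy obligations.",
    ),
]

for record in register:
    flag = "assess EU AI Act exposure" if record.eu_market_exposure else "monitor local regulation"
    print(f"{record.name}: {flag}")
```

Even a lightweight register like this gives an organisation a starting point for mapping information flows and assessing whether the EU AI Act could apply.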