The AI Act establishes a regulatory framework for artificial intelligence systems, setting standards based on the nature of their application and the level of risk they pose.
Prohibited Practices
At its strictest, the Act bans AI practices deemed an unacceptable threat to fundamental rights, particularly privacy. These include:
- Social scoring of individuals by public authorities
- Real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions
- Manipulative or subliminal techniques that materially distort a person's behavior
- Exploitation of vulnerabilities related to age, disability or social or economic circumstance
Transparency
The Act also emphasizes transparency in the development and deployment of AI technologies by:
- Requiring that people be informed when they are interacting with an AI system, such as a chatbot
- Requiring that AI-generated or AI-manipulated content, including deepfakes, be clearly labeled
- Obligating providers of high-risk systems to maintain technical documentation and event logs
- Requiring providers of general-purpose AI models to publish summaries of the data used to train them
Together, these requirements aim to build trust in AI, ensure fairness, support regulatory compliance and enable the identification and mitigation of potential biases.
Risk Classification of AI Systems
The AI Act adopts a tiered, risk-based framework to regulate AI, categorizing systems into four distinct classes based on their potential impact:
- Unacceptable risk: systems that threaten fundamental rights, which are banned outright
- High risk: systems used in sensitive areas such as employment, credit, education and critical infrastructure, which face strict requirements for risk management, data quality, human oversight and conformity assessment
- Limited risk: systems subject primarily to transparency obligations, such as disclosing that content is AI-generated
- Minimal risk: all remaining AI systems, which are largely unregulated
Penalties and Global Reach
The AI Act imposes strict financial penalties for serious violations, with fines reaching up to €35 million or 7% of a company’s total global annual revenue, whichever is higher. Its provisions apply not only to EU-based organizations but also to companies outside the EU that offer AI services or products within its borders. Given that AI technologies have been in use for over half a century, the regulation may affect existing systems already deployed across various industries.
Triggering Events
AI technologies introduce a range of potential exposures that can trigger insurance claims, including:
- Algorithmic bias producing discriminatory outcomes
- Privacy violations and data breaches
- Intellectual property infringement in training data or generated content
- Erroneous, fabricated or misleading AI outputs relied upon by customers
- Failures of AI-enabled products or safety systems
These risks can lead to financial loss, legal liability, property damage or even bodily injury—implicating a broad spectrum of insurance coverage. Relevant policies may include Cyber Liability, Directors & Officers (D&O), Errors & Omissions (E&O), Media Liability, Employment Practices Liability (EPL), Products Liability and General Liability.
To address these emerging exposures, AI-specific insurance products have been developed. Brown & Brown brokers can assist organizations in identifying and securing coverage tailored to their unique AI risk profile.
AI governance is rapidly shifting across all jurisdictions. While the Biden administration introduced federal compliance rules scheduled to take effect on May 15, 2025, those regulations were rescinded by the current administration. However, an Executive Order outlining guiding principles for AI development remains in force. The Trump administration has emphasized AI competitiveness as a strategic priority to maintain U.S. leadership in technology, making comprehensive federal legislation unlikely in the near term.
In the absence of federal mandates, individual states have begun to act. Colorado has passed legislation modeled after the EU’s AI Act, and other states are expected to follow suit.
Internationally, regulatory momentum is building. UK authorities are pursuing sector-specific AI rules, while the European Union is advancing a unified legal framework that applies across all industries, regulated or not. The EU is also reforming liability standards for AI systems and AI-enhanced products, aiming to simplify the process for victims to seek compensation.
Globally, experts have identified over 70 jurisdictions with draft AI legislation under review. As the pace of AI innovation accelerates, regulatory frameworks will continue to expand, shaping how organizations develop and deploy AI technologies. Risk professionals must remain vigilant, ensuring that their risk transfer strategies and management programs evolve in step with this dynamic regulatory environment.