After understanding the essentials of AI, the next step for a risk manager is to apply that knowledge to the company's efforts to employ AI in its business, and to identify and assess the resulting risks. Some surveys report that roughly 75% of knowledge workers are already using AI at work, often with tools their employer has not authorized, so it is clear that many companies may not have a strong grasp of the AI exposures they face.1
Surveys also indicate that many companies are not providing clear guidance on AI usage, and some employees see AI as a helpful tool when they face a large volume of work and are pressed for time. In partnership with other company stakeholders, risk managers can create a process to better analyze and quantify AI risk.
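As one illustration of what such a process could start from, the sketch below shows a minimal register of AI use cases with a crude qualitative score used to prioritize which uses get vetted first. This is a hypothetical example, not a Brown & Brown methodology; the fields, levels and weights are assumptions that would need to be agreed with company stakeholders.

```python
# Hypothetical sketch: a lightweight register of AI use cases with a simple
# additive risk score. Field names and weights are illustrative only.
from dataclasses import dataclass

LEVELS = {"low": 1, "medium": 2, "high": 3}

@dataclass
class AIUseCase:
    name: str
    business_unit: str
    data_sensitivity: str    # "low" | "medium" | "high"
    human_in_the_loop: bool  # is a person reviewing the AI output?
    vendor_hosted: bool      # does data leave the company's environment?

    def risk_score(self) -> int:
        """Crude additive score; a higher number means review more closely."""
        score = LEVELS[self.data_sensitivity]
        score += 0 if self.human_in_the_loop else 2
        score += 1 if self.vendor_hosted else 0
        return score

# Example entries captured through a vetting intake form or workshop.
register = [
    AIUseCase("Marketing copy drafts", "Marketing", "low", True, True),
    AIUseCase("Claims triage assistant", "Operations", "high", False, True),
]

# Rank use cases so the riskiest ones are vetted first.
for uc in sorted(register, key=lambda u: u.risk_score(), reverse=True):
    print(f"{uc.name:<28} score={uc.risk_score()}")
```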
Risks will differ from company to company depending upon the type and size of the business, the technology being used, the data involved, outsourcing arrangements and the depth of AI implementation. Each type of AI is better suited to certain use cases, so risks will also differ with the use. For example, generative AI is best focused on content generation and conversational user interfaces (UI) where humans are kept in the loop. Conversely, AI math skills are still at a formative stage and usually are not mature enough to be relied upon.
To uncover how AI is being used and to understand what exposures the company may be subject to, business groups should review proposed uses and keep an inventory of them. Risk managers should organize, or take part in creating, simple and lightweight vetting processes that capture the following:
Entities should seek to incorporate the vetting process into existing processes where possible. These might include:
Specific workshops can be created to bring the business units into the AI risk identification process. Ideally, the agenda for these workshops would cover some of the following matters.
AI use can create new risks that impact different lines of insurance. To date, most insurers have not made exclusionary changes to their policies to address AI developments. This is likely because lawsuits over AI operations have been few and largely focused on copyright in training models, which is not a risk faced by most companies.
Risk managers looking to ensure their insurance programs are optimized for the new world of AI risk should take steps to assess and identify its use within the entity. At Brown & Brown, we believe that AI-related risks will evolve rapidly, and that impacts will vary with the implementing entity, the applications created, the controls developed and the technology being used. The Brown & Brown team can provide guidance in measuring the impact of technology, cyber and AI on organizations and in analyzing insurance programs to align coverage to a company's risk appetite. Our cyber risk models can encompass AI risk scenarios to evaluate individual exposures. Please contact your Brown & Brown representative to further understand our capabilities.
1 2024 Work Trend Report, Microsoft and LinkedIn, May 2024. 75% of people are already using AI at work; 46% of them started within the last 6 months; 78% of them are using their own tools (80% in small and medium-sized companies); over 50% of them are reluctant to admit using it for important tasks.
2 For more information on steps that can be implemented, see the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0) and the NIST AI RMF Playbook.