October 25, 2023

Updates on Regulation of Artificial Intelligence in Insurance

I.          Introduction

Insurance regulators continue to actively develop regulations and guidance on the use of artificial intelligence (“AI”) in insurance, and in this alert we highlight three recent developments in this area.  On September 28, 2023, the Colorado Division of Insurance (the “Division”) released a draft AI testing regulation for life insurance underwriting to complement the AI governance regulation it adopted in September 2023.  The National Association of Insurance Commissioners (the “NAIC”) also released, on October 13, 2023, an updated draft AI model bulletin for all insurers licensed in a state that issues the model bulletin.  In addition, it has been reported that the New York Department of Financial Services (“DFS”) intends to release a new Circular Letter providing updated guidance for insurers licensed in New York on the use of AI in underwriting and pricing.

II.          Colorado

On September 28, 2023, the Division exposed the Draft Proposed Algorithm and Predictive Model Quantitative Testing Regulation (the “Draft Regulation”).  The Draft Regulation establishes a first-of-its-kind requirement for life insurers licensed in Colorado to perform quantitative testing of external consumer data and information sources (“ECDIS”), algorithms and predictive models to ensure that their use does not result in unfairly discriminatory outcomes.  The Draft Regulation closely follows the Division’s recent adoption of Regulation 10-1-1, Governance and Risk Management Framework Requirements for Life Insurers’ Use of External Consumer Data and Information Sources and Algorithms, a governance and risk management framework regulation for life insurers licensed in Colorado (the “Governance Regulation”), as we reported here.  The Draft Regulation is intended to complement the Governance Regulation and establish standards for what constitutes unfair discrimination in life insurers’ use of AI in insurance practices.

The Draft Regulation directs insurers that use ECDIS, or algorithms and predictive models that use ECDIS (together, “use of ECDIS”), in underwriting decisions to perform quantitative testing to evaluate whether the decision to offer coverage is unfairly discriminatory based on the race or ethnicity of proposed insureds.  Insurers must use Bayesian Improved First Name Surname Geocoding (“BIFSG”)[1] and the applicant’s name and geolocation information to estimate the race or ethnicity of all proposed insureds who have applied for coverage on or after the date the insurer began its use of ECDIS in the underwriting process, including use of ECDIS through a third party on behalf of an insurer.  For the purposes of BIFSG, the racial and ethnic categories to be used are: Hispanic, Black, Asian Pacific Islander (“API”) and White.
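
For illustration, the following Python sketch shows how a BIFSG-style estimate combines surname, first-name and geography evidence using Bayes’ rule.  The probability tables below are hypothetical placeholders, and the function is a simplified sketch rather than the RAND methodology itself, which draws on Census surname and first-name tables and geocoded demographic data.

    # Simplified sketch of a BIFSG-style posterior.  All probability
    # tables are hypothetical; actual BIFSG inputs come from Census
    # surname/first-name tables and geocoded demographic data.
    RACES = ["Hispanic", "Black", "API", "White"]

    def bifsg_posterior(p_race_given_surname, p_firstname_given_race, p_geo_given_race):
        # Multiply the surname-based prior by the first-name and
        # geography likelihoods for each race, then normalize so the
        # posterior probabilities sum to 1 (Bayes' rule).
        unnormalized = {
            r: p_race_given_surname[r]
               * p_firstname_given_race[r]
               * p_geo_given_race[r]
            for r in RACES
        }
        total = sum(unnormalized.values())
        return {r: v / total for r, v in unnormalized.items()}

    # Hypothetical evidence for a single proposed insured:
    posterior = bifsg_posterior(
        {"Hispanic": 0.70, "Black": 0.05, "API": 0.05, "White": 0.20},
        {"Hispanic": 0.40, "Black": 0.20, "API": 0.10, "White": 0.30},
        {"Hispanic": 0.30, "Black": 0.20, "API": 0.10, "White": 0.40},
    )
    print(posterior)  # Hispanic is the most likely category (~0.76) with these inputs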

The Draft Regulation requires testing of (i) application approval decisions to determine whether Hispanic, Black and API proposed insureds are disapproved at a statistically significant rate relative to White applicants and (ii) premium rates to determine if there is a statistically significant difference in the premium rates for policies issued to Hispanic, Black and API insureds relative to White insureds.  If this testing indicates a statistical difference of 5% or more, insurers must conduct additional testing to identify specific variables that contributed to the differences.  If the variable that is identified as contributing to the difference is sourced from the use of ECDIS, then that use of ECDIS is considered unfairly discriminatory and must be remediated before any further use.
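
As a rough illustration of the approval-rate test, the Python sketch below compares the approval rate for a BIFSG-estimated group against White applicants and flags the result when the gap reaches the 5% threshold.  The Draft Regulation does not specify the statistical procedure at this level of detail, so the two-proportion z-test and the treatment of the threshold as a relative difference are assumptions made for illustration only.

    from math import sqrt

    def approval_rate_gap(approved_group, total_group, approved_white, total_white):
        # Approval rates for the tested group and for White applicants.
        p_g = approved_group / total_group
        p_w = approved_white / total_white
        # Pooled two-proportion z-test for statistical significance
        # (an assumption; the Draft Regulation does not name a test).
        p_pool = (approved_group + approved_white) / (total_group + total_white)
        se = sqrt(p_pool * (1 - p_pool) * (1 / total_group + 1 / total_white))
        z = (p_g - p_w) / se
        # Shortfall in the group's approval rate relative to White applicants.
        relative_gap = (p_w - p_g) / p_w
        return relative_gap, z

    # Hypothetical counts for one underwriting program:
    gap, z = approval_rate_gap(820, 1000, 900, 1000)
    if gap >= 0.05 and abs(z) > 1.96:  # 5% threshold; 95% confidence level assumed
        print(f"Flagged: gap={gap:.1%}, z={z:.2f}; identify contributing variables")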

Insurers must provide a report summarizing testing results by April 1, 2024, and annually thereafter, with each report including data through December 31 of the previous year.

It should be noted that the Draft Regulation takes a “disparate impact” approach to unfair discrimination, focusing on discriminatory impact rather than discriminatory intent.  Unlike the disparate impact approach taken by federal courts interpreting civil rights laws, however, the Draft Regulation does not include consideration of whether there is a causal connection between the use of ECDIS and the discriminatory impact or whether there is a legitimate business reason for the use of ECDIS and no less discriminatory alternative.[2]  In doing so, Colorado appears to be adopting a strict liability standard for any insurance process that has a discriminatory impact.

The Division will engage in a stakeholder process to refine and implement the Draft Regulation.  The first stakeholder meeting was held on October 19, 2023, at which stakeholders provided oral comments on the Draft Regulation.  Many stakeholders encouraged the Division to consider (i) whether the 5% level of difference in application approvals and premium rates is the correct variance for finding unfair discrimination; (ii) whether the premium rates test for unfair discrimination will be workable given the multitude of variables that are included in determining a premium rate; and (iii) whether the Division should create a regulatory safe harbor from liability for insurers making good faith efforts to comply with the Draft Regulation.  There will be additional stakeholder meetings in the coming months, and the Division will accept written comments on the Draft Regulation until October 26, 2023.

III.          National Association of Insurance Commissioners

The NAIC’s Innovation, Cybersecurity, and Technology (H) Committee exposed a second draft of its Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (the “Model Bulletin”) for public comment on October 13, 2023.  As we have previously reported, the Model Bulletin is intended to provide guidance to insurers that use AI systems[3] and are licensed in a state that issues the Model Bulletin on how to comply with relevant unfair trade practices and discrimination laws.  The Model Bulletin also advises those insurers of the type of information and documentation that a state department of insurance may request during an investigation or examination of an insurer.  Once the Model Bulletin is final, state regulators will determine on a state-by-state basis whether to issue it.  Even in states that do issue the Model Bulletin, it will serve only as non-binding guidance to insurers licensed in the state, setting forth the state regulators’ “expectations” as to how those insurers should govern their use of AI systems.

While the language of the Model Bulletin contains many changes from the first draft exposed in July (as shown in this redline), only a handful of the changes are notable.  One such change loosens the requirements for contracting with third-party providers of AI-related systems, encouraging the inclusion of certain prescribed conditions regarding third-party cooperation with regulatory inquiries and investigations only “[w]here appropriate and available.”  A summary of the guidance set forth in the updated draft of the Model Bulletin is provided below.

a.          Regulatory Expectations

All insurers licensed in the state are instructed to develop, implement and maintain a written program to ensure that the use of AI systems that make or support decisions related to regulated insurance practices does not violate states’ unfair trade practices laws by resulting in unfair discrimination (an “AIS Program”).  Insurers are encouraged to develop and use systems to test for bias and unfair discrimination in the use of AI systems.  The AIS Program should be reflective of, and commensurate with, the insurer’s assessment of the degree and nature of the risk posed to consumers by the use of AI systems, considering: (i) the nature of the decisions being made, informed or supported by AI systems; (ii) the type and degree of potential harm to consumers resulting from the use of AI systems; (iii) the extent to which humans are involved in the final decision-making process; (iv) the transparency and explainability of outcomes to impacted consumers; and (v) the extent and scope of the insurer’s use or reliance on data, predictive models and AI systems from third parties.

b.          General Guidelines

The AIS Program should vest responsibility for the development, implementation, monitoring and oversight of the AIS Program and for setting the insurer’s strategy for AI systems with senior management accountable to the board or an appropriate committee of the board.  Specifically, the AIS Program should address governance, risk management controls and internal audit functions for the use of AI systems across the insurance life cycle and across the AI system’s life cycle.  The AIS Program should include processes and procedures for providing notice and appropriate information to impacted consumers and should address all AI systems that are in use, whether developed by the insurer or by a third-party vendor.

c.          Governance Framework

The AIS Program should include a governance framework for the oversight of AI systems used by the insurer, which should address the policies and procedures to be followed at each stage of an AI system’s life cycle and the requirements for documenting compliance with those standards.

The governance framework should also include an internal accountability structure, such as a centralized committee composed of representatives from appropriate units within the insurer; indicate the scope of responsibility and authority, chains of command and decisional hierarchies; specify reporting protocols and requirements; and provide for ongoing training of personnel.

The governance framework should specifically address processes and procedures for developing, using, updating and monitoring predictive models.  It should include a description of methods used to detect and address errors, performance issues, outliers or unfair discrimination resulting from the use of the predictive model.

d.          Risk Management and Internal Controls

The AIS Program should document the insurer’s risk identification, mitigation and management framework and internal controls for AI systems, and should address: (i) oversight and approval processes for the development, adoption or acquisition of AI systems; (ii) data accountability procedures, including data currency, lineage, quality, integrity, bias analysis and minimization, and suitability; (iii) management and oversight of predictive models, including documented inventories, descriptions, development and use, with assessments to ensure their continued accuracy; (iv) validation and testing to assess AI system outputs, including the suitability of the data used to develop, train, validate and audit the model; and (v) protection of non-public information, particularly consumer information.

e.          Third Party AI Systems and Data

The AIS Program should address an insurer’s process for acquiring, using or relying on third-party data and AI systems.  This may include the establishment of standards, policies, procedures and protocols relating to (i) due diligence by the insurer to assess the third party and its data or AI systems; (ii) where appropriate and available, the inclusion of terms in contracts with third parties providing audit rights and requiring third parties to cooperate with regard to regulatory inquiries and investigations; and (iii) the performance of audits to confirm the third party’s compliance with contractual and regulatory requirements.

Importantly, while the first version of the Model Bulletin did not contain any qualifying language regarding the inclusion of terms in contracts with third parties governing audit rights and cooperation with regulatory inquiries and investigations, the updated draft of the Model Bulletin provides that these terms should be included only “where appropriate and available.”

f.          Regulatory Oversight and Examination Considerations

The Model Bulletin provides examples of the information and documentation relating to an insurer’s AI-related systems and AIS Program that an insurer may be asked to provide in the context of an investigation or market conduct action.  Specifically, an insurer may be asked for information and documentation evidencing or relating to (i) the adoption of the AIS Program; (ii) the scope of the AIS Program, including any AI systems not addressed by the AIS Program; (iii) how the AIS Program is tailored to, and proportionate with, the insurer’s use of and reliance on AI systems; (iv) the policies, procedures, guidance, training materials and other information relating to the adoption, implementation, maintenance, monitoring and oversight of the insurer’s AIS Program, including protection of non-public information, particularly consumer information; (v) the insurer’s pre-acquisition diligence, monitoring, oversight and auditing of AI systems developed by a third party; and (vi) the insurer’s monitoring and audit activities respecting compliance.

IV.          New York

It has been reported that the DFS is developing a new Circular Letter intended to communicate best practices for insurers licensed in New York when using AI and to clarify some of the outstanding questions following the issuance of Circular Letter 1 (2019), Use of External Consumer Data & Information Sources in Underwriting for Life Insurance.  The new Circular Letter will reportedly be applicable to all insurers licensed in New York and will cover areas such as governance, risk management, internal controls and third-party vendor management.  DFS confirmed that it was following relevant NAIC workstreams and that it was also aware of Colorado’s activity, but it did not say whether its guidance would be consistent with any of those efforts.[4]

V.          Conclusion

The Willkie insurance team continues to monitor these efforts to adopt legislation, regulation and guidance on the use of artificial intelligence and big data in the business of insurance and stands ready to advise on the development of risk management, governance and testing structures compliant with these initiatives.  Please contact any of the attorneys listed on this client alert if you would like to discuss further.



[1]       BIFSG is a methodology developed by the RAND Corporation to help estimate racial and ethnic disparities within datasets, using surnames and geocoded addresses.  BIFSG estimates are strongly predictive of self-reported race and ethnicity for Hispanic, Black, Asian Pacific Islander and White persons.

[2]       See Texas Department of Housing and Community Affairs v. The Inclusive Communities Project, Inc., 576 U.S. 519 (2015).

[3]       “AI Systems” is defined in the Model Bulletin as “a machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, content (such as text, images, videos or sounds) or other output influencing decisions made in real or virtual environments.  AI Systems are designed to operate with varying levels of autonomy.”  In this alert, “AI systems” is not used as a defined term and instead refers generally to external data sources, algorithms, predictive models and other components of AI-related systems.

[4]       Senior officials from DFS announced the forthcoming Circular Letter at the annual meeting of the Life Insurance Council of New York.