
Review of Algorithms and Consumer Data | Carlton Fields

With the increasing use of algorithms and external consumer data, several national and international bodies have recently issued draft work products or proposed regulations, as follows:

  • The NAIC Accelerated Underwriting Working Group (AU WG) – which on November 11, 2021 released a draft of its educational report for regulators to facilitate “understand[ing] the current state of the [insurance] industry and its use of accelerated underwriting.”
  • The NAIC’s Special (EX) Committee on Race and Insurance (Special Committee) – whose 2021/2022 charges include examining “the impact of traditional life insurance underwriting on traditionally underserved populations, considering the relationship between mortality risk and disparate impact.”
  • Colorado Division of Insurance (CO DOI) – which is developing regulations to implement the new Section 10-3-1104.9 prohibition on the use of external consumer data and information sources (external data), as well as predictive algorithms and models using external data (technology), in a way that unfairly discriminates based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression (protected status); the statute took effect on September 7, 2021.
  • The White House Office of Science and Technology Policy (White House OSTP) – which is assessing the “actual and potential harms of a particular biometric technology” as part of its October 8, 2021 request for information.
  • The European Parliament and the Council of the European Union (EU Parliament) – which proposed “to establish harmonized rules on artificial intelligence” (EU AI Regulation) on April 21, 2021; the proposal recognizes “the right to dignity and non-discrimination and the values of equality and justice.”
  • The Cyberspace Administration of China (China Cyber Admin) – which on August 27, 2021 published a 30-point proposal for “Algorithm Recommendation Management Regulations.”

The work of these bodies covers the following themes: (i) prohibiting unfair discrimination; (ii) promoting fairness and transparency; and (iii) requiring governance programs.

Unfair discrimination

The various bodies address the potential for unfair discrimination in the use of algorithms and external consumer data, as follows:

What may constitute unfair discrimination

  • Colorado Section 10-3-1104.9 imposes a three-pronged test for determining whether unfair discrimination exists (an illustrative quantitative screen follows the three prongs):

1. The use of external data or technology has a correlation to protected status;

2. The correlation results in a disproportionately negative outcome for that protected status; and

3. The negative outcome exceeds the reasonable correlation to the underlying insurance practice, including losses or underwriting costs.
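
The statute does not prescribe a quantitative methodology for applying these prongs. As a hedged illustration only, the sketch below screens for prong 2 using an adverse impact ratio, a convention borrowed from U.S. employment-discrimination analysis (the “four-fifths rule”); the 0.8 threshold, field names, and group labels are assumptions, not anything Section 10-3-1104.9 requires.

```python
# Illustrative screen for prong 2: compare favorable-outcome rates between a
# protected group and a reference group. The four-fifths (0.8) threshold is a
# convention from U.S. employment-discrimination practice, not the statute.

def adverse_impact_ratio(outcomes, groups, favorable, protected, reference):
    """Protected group's favorable-outcome rate divided by the reference
    group's favorable-outcome rate."""
    def rate(group):
        group_outcomes = [o for o, g in zip(outcomes, groups) if g == group]
        if not group_outcomes:
            raise ValueError(f"no records for group {group!r}")
        return sum(o == favorable for o in group_outcomes) / len(group_outcomes)
    return rate(protected) / rate(reference)

# Hypothetical underwriting decisions with group labels.
decisions = ["approve", "decline", "approve", "approve", "decline", "decline"]
groups    = ["A",       "A",       "A",       "B",       "B",       "B"]

ratio = adverse_impact_ratio(decisions, groups, favorable="approve",
                             protected="B", reference="A")
if ratio < 0.8:  # conventional screen, not a statutory threshold
    print(f"ratio {ratio:.2f}: disproportion flagged; prong 3 asks whether an "
          f"actuarial justification outweighs it")
else:
    print(f"ratio {ratio:.2f}: no disproportion flagged by this screen")
```

A ratio below the screen would not itself establish unfair discrimination under the statute; prong 3 still asks whether the negative outcome exceeds what the underlying insurance practice (losses or underwriting costs) reasonably explains.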

The Colorado commissioner is required to adopt rules implementing Section 10-3-1104.9 and to hold stakeholder meetings, which are expected to begin in January 2022. In what may provide further guidance on unfair discrimination, the required rules must (i) provide insurers a reasonable period of time to remedy any unfairly discriminatory impact of any technology they employ and (ii) permit the use of external data and technology that has been found not to be unfairly discriminatory.

  • The AU WG’s draft educational report (i) warns that because “accelerated underwriting relies on predictive models or machine learning algorithms, it may lead to unexpected or unfairly discriminatory outcomes even if the input data is not overtly discriminatory” and (ii) raises concerns about the use of a consumer’s behavioral data, including “gym membership, occupation, marital status, family size, grocery shopping habits, wearable technology and credit attributes,” because “[a]lthough medical data has a scientific link to mortality, behavioral data can lead to questionable conclusions because correlation can be confused with causation.”
  • The EU AI Regulation specifically notes that AI systems “used to assess the credit rating or creditworthiness of natural persons should be classified as high-risk AI systems” because they “may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts.”

The EU AI Regulation also includes “specific requirements aimed at minimizing the risk of algorithmic discrimination, in particular with regard to the design and quality of datasets used for the development of AI systems, supplemented through requirements for testing, risk management, documentation and human oversight throughout the lifecycle of AI systems.”
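
The regulation’s emphasis on the design and quality of datasets lends itself to concrete pre-development checks. The following is a minimal sketch of the kind of dataset audit a developer might run, assuming hypothetical column names and thresholds:

```python
# Hedged sketch of a pre-development dataset audit: flag missing values and
# thinly represented demographic groups. Column names and the minimum-share
# threshold are illustrative assumptions, not EU AI Regulation requirements.
from collections import Counter

def audit_dataset(rows, group_key, min_share=0.05):
    """Return a list of findings; an empty list means no issues flagged."""
    findings = []
    n = len(rows)
    # Missingness: count fields that are None or empty, per column.
    missing = Counter(k for row in rows for k, v in row.items()
                      if v is None or v == "")
    for column, count in missing.items():
        findings.append(f"{column}: {count}/{n} records missing")
    # Representation: flag groups below the assumed minimum share.
    shares = Counter(row.get(group_key) for row in rows)
    for group, count in shares.items():
        if count / n < min_share:
            findings.append(f"group {group!r}: only {count}/{n} records")
    return findings

rows = [{"age": 44, "group": "A"}, {"age": None, "group": "A"},
        {"age": 37, "group": "B"}]
for finding in audit_dataset(rows, group_key="group", min_share=0.4):
    print(finding)
```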

Additional study

In a proposed white paper, Workstream 4 of the Special Committee will address unfair discrimination, disparate treatment, proxy discrimination, and disparate impact in insurance underwriting.

The White House OSTP seeks information to assess “the actual and potential harms of a particular biometric technology,” including “harms due to disparities in system effectiveness for different demographic groups.”

Fairness and transparency

The AU WG, the EU AI Regulation, and the China Cyber Admin seek to ensure that the use of algorithms and consumer data is fair and transparent.

Additional guidance

  • The AU WG’s draft educational report suggests the following actions: (i) ensure that data inputs are transparent, accurate, and reliable, and that the data itself is free from unfair bias; (ii) ensure that external data sources, algorithms, or predictive models are based on sound actuarial principles, with a valid explanation or rationale for any claimed correlation or causal connection; (iii) be able to provide the reason(s) for an adverse underwriting decision to the consumer, along with all information on which the insurer based its adverse underwriting decision (a record-keeping sketch follows this list); and (iv) be able to produce information on demand as part of regular rate and policy form reviews or market conduct examinations.
  • The EU AI Regulation notes that “[h]igh-risk AI systems should … be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate.”
  • The China Cyber Admin seeks to require that “[c]ompanies must disclose the basics of any algorithm recommendation service, explaining the purpose and mechanics of the recommendations in a ‘visible’ way.”
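
AU WG item (iii) above implies that an insurer must be able to reproduce, for any adverse decision, both the consumer-facing reasons and the data relied on. The following is a minimal sketch of such a record; all structures and field names are hypothetical:

```python
# Minimal sketch: store human-readable reason codes and data sources with each
# adverse underwriting decision so they can be disclosed to the consumer and
# produced on demand in an exam. All names here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class UnderwritingDecision:
    applicant_id: str
    outcome: str                                  # e.g., "approve" or "decline"
    reasons: list = field(default_factory=list)   # consumer-facing reasons
    data_sources: list = field(default_factory=list)  # data relied upon
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

decision = UnderwritingDecision(
    applicant_id="APP-1042",
    outcome="decline",
    reasons=["Reported build outside the guideline range for the applied-for class"],
    data_sources=["application Part II", "third-party prescription history"],
)

# The same record supports consumer disclosure and regulator requests alike.
print(f"{decision.outcome} for {decision.applicant_id} at {decision.decided_at}")
for reason in decision.reasons:
    print(" -", reason)
```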

Governance program

The various agencies believe that those who use algorithms and consumer data should design and implement governance programs to properly monitor and evaluate that use.

  • The AU WG’s draft educational report recommends that a governance program (i) ensure that the predictive models or machine learning algorithms used in accelerated underwriting have an intended outcome and that the outcome is achieved; (ii) ensure that the predictive models or machine learning algorithms achieve an outcome that is not unfairly discriminatory; and (iii) include a mechanism to correct errors when found (a monitoring sketch follows this list).
  • Colorado Section 10-3-1104.9 requires insurers to (i) establish and maintain a reasonably designed risk management framework to determine, to the extent practicable, whether the insurer’s use of external data and technology unfairly discriminates based on protected status; (ii) assess the risk management framework; and (iii) obtain certifications from management regarding the implementation of the risk management framework. At the NAIC’s fall national meeting, Commissioner Conway explained that Colorado intentionally places the burden of monitoring and testing on insurers because the state does not have the resources or expertise to do so.
  • The EU AI Regulation requires “appropriate human oversight measures” and specifies that such measures should “ensure that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.”
  • The China Cyber Admin’s proposal would require providers to “regularly evaluate and test their algorithms and data to avoid patterns that will induce obsessive user behavior, overspending, or other behavior that violates public order and morality.”
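
The AU WG’s three governance elements lend themselves to a recurring model review. The sketch below, which is illustrative only, checks observed metrics against a documented intended outcome, screens group outcome ratios, and routes findings to a stand-in remediation hook; every metric name and threshold is an assumption.

```python
# Hedged sketch of a periodic model-governance review reflecting the AU WG's
# elements: (i) intended outcome achieved, (ii) outcomes not unfairly
# discriminatory, (iii) a mechanism to correct errors. All metric names and
# thresholds are illustrative assumptions.

def review_model(metrics, intended, fairness_floor=0.8):
    """Return a list of findings; an empty list means the review passed."""
    findings = []
    # (i) Compare observed performance against the documented intended outcome.
    for name, target in intended.items():
        observed = metrics.get(name)
        if observed is None or observed < target:
            findings.append(f"intended outcome not met: {name} "
                            f"(observed {observed}, target {target})")
    # (ii) Screen group outcome ratios (e.g., adverse impact ratios).
    for group, ratio in metrics.get("group_ratios", {}).items():
        if ratio < fairness_floor:
            findings.append(f"possible unfair discrimination: group {group} "
                            f"ratio {ratio:.2f} below floor {fairness_floor}")
    return findings

def open_remediation(finding):
    # (iii) Stand-in for the insurer's actual error-correction workflow.
    print("remediation opened:", finding)

metrics = {"straight_through_rate": 0.62,
           "group_ratios": {"B": 0.91, "C": 0.74}}
for finding in review_model(metrics, intended={"straight_through_rate": 0.60}):
    open_remediation(finding)
```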

Insurers should consider the consumer data and algorithms used in all areas of the business, including marketing, product design, underwriting, administrative services, and claims and fraud units, as well as the measures in place to address unfair discrimination, fairness, and transparency. This also involves considering what governance is in place and whether it needs to be improved.

