California’s Draft Regulations Addressing AI in Employment Thrust Employers into Regulatory Wild West
California’s Fair Employment and Housing Council (“FEHC”) is the body that promulgates regulations to implement and interpret the Fair Employment and Housing Act (“FEHA”) – California’s keystone statute governing unlawful employment practices. For over a year, the FEHC has been exploring additions to its regulations aimed at addressing concerns surrounding the use of artificial intelligence (“AI”) in employment decision-making.
The FEHC’s concerns are certainly well-intentioned, as algorithmic bias is a genuine problem. A program that screens resumes and ascribes less weight to applicants with gaps in their employment histories may, for example, unintentionally discriminate against those who were pregnant, served in the military, or have protected medical conditions.
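To make the concern concrete, consider a minimal, purely hypothetical sketch of a resume-screening scorer. The scoring rule, weights, and field names below are invented for illustration and are not drawn from any actual product:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    years_experience: float
    employment_gap_months: int  # total months not employed

def screening_score(a: Applicant) -> float:
    """Hypothetical resume score: rewards experience, penalizes gaps.

    The gap penalty is facially neutral, but it systematically lowers
    scores for applicants whose gaps stem from pregnancy, military
    service, or protected medical conditions.
    """
    score = 10.0 * a.years_experience
    score -= 2.0 * a.employment_gap_months  # the problematic rule
    return score

# Two equally experienced applicants; the second took 12 months of
# pregnancy-related leave and is screened out by the gap penalty.
print(screening_score(Applicant(5, 0)))   # 50.0
print(screening_score(Applicant(5, 12)))  # 26.0
```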
The FEHC is also not the first entity to tackle automated decision-making in employment. The Equal Employment Opportunity Commission (“EEOC”), for instance, has released technical assistance on how algorithmic decision-making can violate the Americans with Disabilities Act (“ADA”). And New York City passed a bill that will prohibit employers within the city from using “automated employment decision tools” (“AEDTs”) to screen candidates or employees unless the tool was subject to a defined “bias audit,” and the result of the audit is made publicly available on the employer’s website. (N.Y.C. Admin. Code § 20-870 et seq.) The New York City bill also contains disclosure requirements in favor of impacted individuals. This makes sense, because otherwise most applicants or employees would never know an AEDT played a role in an employment decision. Disclosure requirements also put the applicant or employee on notice that they may request a reasonable accommodation from the employer.
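For a sense of what such a bias audit entails, here is a simplified sketch of the selection-rate and impact-ratio arithmetic typically involved. The group labels, counts, and the 0.8 threshold (the EEOC’s familiar “four-fifths” rule of thumb) are illustrative assumptions, not requirements of the New York City law:

```python
# Simplified sketch of a disparate-impact check: compare each group's
# selection rate against the most-selected group's rate. All figures
# below are invented for illustration.

outcomes = {
    # group: (candidates screened, candidates advanced by the tool)
    "group_a": (200, 120),
    "group_b": (180, 72),
}

rates = {g: advanced / screened for g, (screened, advanced) in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```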
During its last meeting on March 25, 2022, the FEHC released draft regulations that, according to their drafters, clarify how existing standards already encompass the use of AI in employment decision-making. The FEHC attempts to accomplish this feat, in part, through a new term: “Automated-Decision Systems” (“ADS”), which it sprinkles throughout its existing regulatory framework. The draft regulations would prohibit ADS that screen out or “tend to screen out” (whatever that means) an applicant or employee on the basis of a protected characteristic unless the ADS is shown to be job-related and consistent with business necessity.
The term ADS is far from a model of clarity, however: “A computational process, including one derived from machine-learning, statistics, or other data processing or artificial intelligence, that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants.” There appears to be nothing “automated” about this definition, and almost any type of calculation “facilitates human decision making.” Thus, the proposed definition arguably encompasses everything from a numerically based employment evaluation to self-learning AI – and the burden will be on employers (1) to figure out what does and does not constitute an ADS, and (2) to determine whether that ADS complies with California’s employment statutes and regulations.
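To illustrate the over-breadth problem: the deliberately mundane calculation below, with weights and criteria invented for illustration, is a “computational process” derived from “statistics” that “facilitates human decision making,” and so is arguably an ADS under a literal reading of the draft definition:

```python
# A mundane weighted-average performance score. Weights and criteria
# are hypothetical; nothing here is "automated" in any ordinary sense.

WEIGHTS = {"quality": 0.5, "timeliness": 0.3, "teamwork": 0.2}

def evaluation_score(ratings: dict[str, float]) -> float:
    """Weighted average of manager ratings on a 1-5 scale."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

# A manager consults the number when deciding on a promotion --
# i.e., the calculation "facilitates human decision making."
print(evaluation_score({"quality": 4.0, "timeliness": 3.5, "teamwork": 5.0}))
```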
The proposed definition for ADS bears some similarities to New York City’s definition for AEDT. But New York’s definition expressly limits AEDT to computational processes that are “derived from machine learning, statistical modeling, data analytics, or artificial intelligence” and used to “substantially assist or replace discretionary decision making.” The definition for AEDT also expressly excludes tools that do not “automate, support, substantially assist or replace discretionary decision-making processes and that does not materially impact natural persons, including, but not limited to […] spreadsheet, database, data set, or other compilation of data.” It is unclear why the FEHC rejected a more focused definition, and although the FEHC heard from a number of stakeholders regarding the possible risks associated with automated decision-making, it is equally unclear on whom it is relying for technical guidance.
California’s draft regulations would also hold employers potentially liable for a third party’s ADS by incorporating those who assist in recruitment, hiring, performance evaluations, or other employment-related assessments into the definition of “agent.” This puts the burden on employers to ensure their vendors’ ADS do not violate state law. As noted above, the FEHC’s draft regulations provide no technical guidance for how employers should go about evaluating ADS. This may reflect the FEHC’s own struggles with an admittedly challenging topic and suggests a more tailored approach is in order. The draft regulations also contain no disclosure requirements, which may be beyond the FEHC’s authority to mandate. Rather, they expand recordkeeping obligations to incorporate ADS. This is likely a signal that California intends to obtain ADS from employers as part of routine investigation efforts in order to better understand the role and prevalence of ADS in employment decisions.
The draft regulations would also expose third parties to aider-and-abettor liability for their own ADS. Existing regulations make it unlawful to assist any person or individual in doing any act “known” to constitute “unlawful employment discrimination.” Clear enough. The draft revisions would add that, “Unlawful assistance under this paragraph includes, but is not limited to, the advertisement, sale, provision, or use of a selection tool, including but not limited to an automated-decision system […] for an unlawful purpose […].” In short, the FEHC seems to be taking a defective-products approach to aider-and-abettor liability in the context of ADS, targeting the entire chain of distribution. It is not clear that the FEHC has this authority generally, or whether the expanded definition even requires a harm before liability attaches.
In short, if the goal was to send California’s well-intentioned employers into a confused frenzy, expending millions of dollars trying to unravel every piece of ADS that may facilitate decision-making in the employment context and speculating about their own compliance (or their vendors’), while doing little to promote opportunities for dialogue or transparency between employers and employees, then the draft regulations are surely on the right track. This also raises the question: if existing regulations already capture issues associated with AI and automated decision-making, why does the FEHC need new regulations at all? In most situations, it likely does not. Rather, because the role of automated decision-making is little-known (and less understood), the FEHC is likely focused on bringing it to the forefront. But that goal comes with a clear consequence: thrusting employers into a high-stakes regulatory Wild West.