With the federal approach to AI regulation still in flux, states remain the leaders in establishing statutory guardrails on the use and development of AI. California and Texas are debating important AI legislation this year.
California continues as AI pioneer: On the heels of an active 2024 session where the California legislature passed 17 laws related to AI, legislators introduced a new bill, AB 1018, which would create significant legal obligations for employers who use AI-based hiring and workforce management tools.
Expansive notice, use, and anti-discrimination employer mandates: In addition to requiring employers to disclose AI-driven decisions, allowing individuals to opt out, and providing for appeals of adverse outcomes, the bill would expand covered uses of AI, impose audit and record retention obligations, and create steep penalties.
The bill covers all situations in which AI is used to assist or replace human decision-making for “consequential decisions,” including hiring, termination, wages, benefits, scheduling, promotions, performance evaluation, access to training, and workplace safety.
Employers must keep AI-related documentation for 10 years and must be prepared to submit unredacted performance evaluations to the state AG on request.
The bill authorizes penalties of up to $25,000 per violation.
HRPA’s stand on similar policy efforts: HRPA commented on the California Civil Rights Department’s proposed rule that covered similar AI discrimination concerns.
In our letter, we urged state policymakers to first consider whether existing law adequately addresses their concerns before imposing new mandates on employers. We also criticized the proposal’s broad definitions and recordkeeping requirements.
The rules have yet to be finalized or to take effect.
Texas wrangles its own AI legislation: The Texas Responsible AI Governance Act (TRAIGA, HB 1709) echoes the regulatory approach of the EU AI Act, an unexpected choice for a Republican-led state seeking to lead in technology innovation.
Familiar approach: TRAIGA would apply a risk-based approach to AI governance, similar to the EU AI Act and the Colorado law passed last year.
High-risk uses of AI include those involved in decisions related to employment, finance, education, health-care services, housing, and insurance.
Employers would have a general duty to use “reasonable care” to avoid algorithmic discrimination, which includes human oversight, disclosure of AI use, use and impact assessments, reporting, and prompt suspension of noncompliant systems.
The bill would also ban uses of AI that pose “unacceptable risk,” namely biometric categorization, emotional manipulation, and social scoring.
The new patchwork: While workplace regulation in general continues to crystallize at the state level, artificial intelligence is the newest – and potentially most expansive – layer of the patchwork. Employers can expect more state legislative proposals across the country directed at AI, ranging from smaller efforts targeting specific use cases to comprehensive frameworks similar to Colorado’s landmark law passed last year.
