Artificial Intelligence continues to transform the world and the workplace. Now, more than ever, regulators seek to balance the benefits of new AI technology with its risks. As previously reported, federal, state, and foreign regulation and scrutiny of AI is on the rise. In the absence of a uniform federal law governing AI in the United States, states such as Illinois, Colorado, and California are ramping up AI regulation.
Illinois Human Rights Act Amendment (effective Jan. 1, 2026)
On Aug. 9, 2024, Illinois Governor J.B. Pritzker signed House Bill 3773 (HB 3773) into law. HB 3773 amends the Illinois Human Rights Act (IHRA) and seeks to protect employees from certain conduct related to AI.
The IHRA generally protects individuals from discrimination, harassment, sexual harassment and retaliation in connection with employment, among other aspects of life. HB 3773 amends the IHRA in several respects.
- AI Discrimination: It is a civil rights violation for any employer to use AI that has the effect of subjecting employees to discrimination based on race, color, religion, national origin, ancestry, age, sex, marital status, order of protection status, disability, military status, sexual orientation, pregnancy or unfavorable discharge from military service.
- Zip Codes: It is a violation of the IHRA for an employer to use zip codes as a proxy for the above protected classes with respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment.
- Notice: Employers must provide notice to an employee that the employer is using AI for the purposes described above. HB 3773 does not specify the circumstances or conditions that require notice, the time for providing notice, or the means for providing notice. The Illinois Department of Human Rights is expected to adopt rules related to these requirements.
AI is defined by HB 3773 as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
AI also includes “generative artificial intelligence,” defined as “an automated computing system that, when prompted with human prompts, descriptions, or queries, can produce outputs that simulate human-produced content, including … :
- textual outputs, such as short answers, essays, poetry, or longer compositions or answers;
- image outputs, such as fine art, photographs, conceptual art, diagrams, and other images;
- multimedia outputs, such as audio or video in the form of compositions, songs, or short-form or long-form audio or video; and
- other content that would be otherwise produced by human means.”
Like most recent AI regulations and proposals, the new Illinois law defines AI broadly and somewhat amorphously. Employers should therefore audit their own use of AI and assess which technologies may be covered.
These amendments to the IHRA go into effect Jan. 1, 2026.
Colorado Artificial Intelligence Act (effective Feb. 1, 2026)
On May 17, 2024, Governor Jared Polis signed Senate Bill 24-205 (SB 205), the Colorado Artificial Intelligence Act, into law.
Colorado’s law is comprehensive and will likely serve as a blueprint for other states. SB 205 regulates the general use and development of AI by imposing obligations on “developers” (a person or entity doing business in Colorado that “develops or intentionally and substantially modifies an artificial intelligence system”) and “deployers” (“a person doing business in [Colorado] that deploys a high-risk artificial intelligence system”).
Among other requirements in the act, deployers (including many employers) must exercise reasonable care to protect against known or foreseeable risks of algorithmic discrimination when using a “high-risk system.” A high-risk system is any AI system that, when deployed, makes, or is a substantial factor in making, a “consequential decision.” A consequential decision is a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: education, employment, a financial or lending service, an essential government service, healthcare services, housing, insurance or a legal service. An example of a high-risk system would be an AI system relied upon to decide whom to interview or hire for a position.
The act includes a rebuttable presumption that a deployer used reasonable care if the deployer complied with certain requirements of the act, such as:
- implementing a risk management policy and program for the high-risk system;
- completing an impact assessment of the high-risk system;
- annually reviewing the deployment of each high-risk system to ensure that the high-risk system is not causing algorithmic discrimination;
- notifying a consumer (in the employment context, an employee) of specified items if the high-risk system makes, or will be a substantial factor in making, a consequential decision concerning the consumer;
- providing the consumer with an opportunity to correct any incorrect personal data that a high-risk system processed in making a consequential decision;
- providing the consumer with an opportunity to appeal, via human review if technically feasible, an adverse consequential decision concerning the consumer arising from the deployment of a high-risk system;
- making a publicly available statement summarizing the types of high-risk systems that the deployer currently deploys, how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each of these high-risk systems, and the nature, source, and extent of the information collected and used by the deployer; and
- disclosing to the attorney general, within 90 days after discovery, any algorithmic discrimination that the high-risk system has caused.
Importantly, SB 205 contains some exceptions for companies with fewer than 50 full-time employees. For instance, a company with fewer than 50 full-time employees that does not use its own data to train the high-risk AI system is not required to implement a risk management policy and program, complete an impact assessment for the high-risk system, or notify the consumer of specified items if the high-risk system makes, or will be a substantial factor in making, a consequential decision concerning the consumer.
The act goes into effect on Feb. 1, 2026.
California Training Data Transparency (effective Jan. 1, 2026)
The California legislature recently passed a flurry of bills that aim to regulate AI. Several were signed into law, covering a wide range of topics like transparency of training data (AB 2013), AI education and literacy (AB 2876), generative AI in healthcare (AB 3030), data privacy (AB 1008) and actors’ rights (AB 2602 and AB 1836). However, Governor Newsom also vetoed one major bill, SB 1047, stating that it “focus[ed] only on the most expensive and large-scale models” and, by ignoring smaller models, “could give the public a false sense of security about controlling this fast-moving technology.”
Turning to one new law potentially impacting employers, Governor Newsom signed AB 2013, titled “Generative Artificial Intelligence: Training Data Transparency.” Starting Jan. 1, 2026, developers of generative AI systems or services must disclose specified documentation regarding the data used to train such systems or services. The disclosure requirements are ongoing and may be triggered each time a substantial modification is made available for Californians to use. A “developer” is a person, partnership, state or local governmental agency, or corporation that designs, codes, produces, or substantially modifies an AI system or service for use by members of the public. “Substantially modifies” or “substantial modification” is defined, somewhat vaguely, as a new version, new release, or other update to a generative artificial intelligence system or service that materially changes its functionality or performance, including the results of retraining or fine-tuning.
The developer’s disclosures, which must be posted on the developer’s website, include the following:
- The sources or owners of the datasets
- The number of data points included in the datasets, which may be in general ranges, and with estimated figures for dynamic datasets
- A description of the types of data points within the datasets
- Whether the datasets include any data protected by copyright, trademark, or patent, or whether the datasets are entirely in the public domain
- Whether the datasets were purchased or licensed by the developer
- Whether the datasets include personal information
- Whether the datasets include consumer information
- Whether there was any cleaning, processing or other modification to the datasets by the developer
- The time period during which the data in the datasets were collected, including a notice if the data collection is ongoing
- The dates the datasets were first used during the development of the AI system or service
- Whether the generative AI system or service used or continuously uses synthetic data generation in its development
Notably, AB 2013 has some exceptions. The transparency requirement does not apply to data used to train a generative AI system or service whose sole purpose is to help ensure security and integrity or to operate aircraft in the national airspace. Nor does it apply to systems developed for national security, military, or defense purposes that are made available only to a federal entity.
As passed, AB 2013 does not include any penalties for non-compliance.
Practical Tips and Takeaways
With the rise of AI regulation, employers should evaluate when and how new and proposed laws will impact their business. Employers should consider the following practical tips.
- Employers subject to new laws should audit whether and how their employees are using AI tools — with or without company approval.
- Employers using third-party AI systems should ensure that their vendors are complying with new laws, as applicable, and assess whether the employer could still be considered a “developer.”
- Employers should assess their practices, policies and trainings to ensure they are compliant with new laws and closely monitor any amendments or official guidance regarding the laws.
- Employers who are not subject to these new laws should expect similar laws from their own states in the future and may consider these laws as guidance on how to proactively govern the use of AI with their employees.
- Employers should recognize that compliance with state laws related to AI, and the absence of a uniform federal law, does not absolve a company of liability under existing federal laws and standards enforced by federal agencies such as the EEOC, FTC or SEC.
- Employers should consider training employees on the risks and benefits of AI tools, including an overview of how AI is used in the company; the potential for bias or discrimination; and the importance of protecting private, confidential or trade secret information.
- Employers should assess and scrutinize any contracts or agreements with AI developers and understand how the AI systems are being utilized at the company, with a particular focus on how AI is used to make employment decisions.
For further guidance about how artificial intelligence presents both risks and opportunities for employers, please contact the authors of this article.