Automated Decision-Making Tool (ADMT)
An Automated Decision-Making Tool (ADMT) is any software, algorithm, or system that uses computation, whether or not it involves machine learning, to make decisions or substantially facilitate decisions affecting individuals. Regulations issued by the California Privacy Protection Agency (CPPA) define ADMTs broadly to cover profiling, evaluation, and decisions about access to goods, services, employment, education, credit, housing, and insurance.
Businesses deploying ADMTs may be required to conduct risk assessments, provide consumer opt-out rights, and disclose the existence and logic of automated systems. See also: High-Risk AI System, Automated Employment Decision Tool (AEDT).
Automated Employment Decision Tool (AEDT)
An Automated Employment Decision Tool (AEDT) is a computational system used by employers to screen, rank, evaluate, or otherwise assist in hiring, promotion, assignment, or termination decisions. New York City Local Law 144, one of the first laws of its kind in the United States, defines AEDTs and requires employers using them to conduct annual independent bias audits, publish a summary of the audit results, and notify candidates that an AEDT will be used.
AEDT regulations focus on preventing algorithmic discrimination in hiring; beyond New York City's landmark ordinance, similar requirements are increasingly being adopted at the state level. See also: Algorithmic Discrimination, Bias Audit.
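The core arithmetic of a Local Law 144 bias audit is simple enough to sketch: compute each category's selection rate, then divide by the highest category's rate to get the impact ratio. The category names and candidate counts below are hypothetical, not real audit data.

```python
# A minimal sketch of Local Law 144's selection-rate and impact-ratio
# arithmetic. Categories and counts are hypothetical, for illustration only.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each category to (candidates advanced) / (candidates assessed)."""
    return {cat: sel / total for cat, (sel, total) in outcomes.items()}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Divide each category's selection rate by the highest category's rate.

    Ratios well below 1.0 for a category signal possible adverse impact.
    """
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical screening outcomes: (candidates advanced, candidates assessed).
outcomes = {"group_a": (48, 120), "group_b": (30, 110), "group_c": (22, 95)}
rates = selection_rates(outcomes)
for cat, ratio in impact_ratios(rates).items():
    print(f"{cat}: selection rate {rates[cat]:.2f}, impact ratio {ratio:.2f}")
```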
AI Governance Framework [Policy]
An AI Governance Framework is a structured set of policies, procedures, controls, and accountability mechanisms that an organization establishes to manage the development, deployment, and monitoring of AI systems. Governance frameworks typically address risk classification, responsible use principles, data management, incident response, human oversight, and stakeholder accountability.
Several state AI laws require deployers and developers of high-risk AI systems to maintain documented governance frameworks as a condition of compliance. The NIST AI Risk Management Framework (AI RMF) is frequently referenced as a voluntary baseline. See also: Risk Management Program.
AI Incident Reporting [Obligation]
AI Incident Reporting refers to legal or regulatory obligations requiring organizations to notify authorities, affected individuals, or the public when an AI system causes or contributes to a significant harm, malfunction, or unexpected outcome. Some proposed state laws draw parallels to data breach notification requirements, mandating timely disclosure of AI-related incidents involving bias, privacy violations, or physical harm.
Incident reporting requirements are an emerging trend in AI legislation at both the state and federal level, though specific mandates vary widely. See also: High-Risk AI System, AI Governance Framework.
AI Regulatory Sandbox [Program]
An AI Regulatory Sandbox is a supervised testing environment created by a government agency that allows companies to develop and test AI systems under relaxed regulatory requirements for a defined period. Sandboxes are designed to foster innovation while allowing regulators to observe potential harms and develop appropriate oversight frameworks before broader deployment.
Several states have proposed or enacted sandbox programs for AI in insurance, financial services, and healthcare. Participation typically requires application approval, reporting obligations, and consumer safeguards. See also: AI Governance Framework.
AI Training Data [Technical]
AI Training Data refers to the datasets used to train machine learning models. The composition, provenance, and representativeness of training data directly affect a model's behavior, potential biases, and fitness for purpose. State laws addressing algorithmic discrimination often examine whether training data contains historical biases that may be perpetuated or amplified by AI systems.
Documentation requirements for high-risk AI systems increasingly include descriptions of training data sources, preprocessing steps, and known limitations. See also: Algorithmic Discrimination, Model Card.
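Documentation schemas vary; the sketch below shows one way such a training-data record might be structured in code. The field names and example values are illustrative, not drawn from any statute.

```python
# A minimal sketch of a training-data documentation record of the kind
# high-risk AI documentation requirements contemplate. The schema is
# hypothetical; no law mandates these exact fields.
from dataclasses import dataclass

@dataclass
class TrainingDataRecord:
    name: str
    sources: list[str]              # provenance: where the data came from
    collection_period: str          # time span the data covers
    preprocessing_steps: list[str]  # cleaning, filtering, labeling applied
    known_limitations: list[str]    # gaps, skews, historical biases
    contains_personal_info: bool

resume_corpus = TrainingDataRecord(
    name="hiring-screen-v2 training set",
    sources=["internal applicant-tracking records, 2015-2022"],
    collection_period="2015-2022",
    preprocessing_steps=["deduplication", "PII redaction", "label review"],
    known_limitations=[
        "past hiring skewed toward a single region",
        "labels reflect prior human screening decisions",
    ],
    contains_personal_info=False,
)
print(resume_corpus)
```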
AI Transparency [Principle]
AI Transparency refers to the degree to which an AI system's design, logic, data inputs, decision-making processes, and outputs are understandable and explainable to relevant stakeholders including users, regulators, and affected individuals. Transparency requirements range from simple consumer disclosures ("this decision was made by an AI") to detailed technical documentation of model architecture and training methodology.
Many state AI laws incorporate transparency as a foundational obligation, requiring deployers to provide meaningful information about how AI systems affect consequential decisions. See also: Right to Explanation, Consumer Disclosure (AI).
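As a simple illustration of the disclosure end of that range, the sketch below assembles a plain-language consumer notice. The wording, fields, and contact address are hypothetical; each law specifies its own required content.

```python
# A minimal sketch of a consumer-facing AI disclosure notice. The wording
# and fields are hypothetical, not taken from any statute.

def disclosure_notice(decision: str, factors: list[str], contact: str) -> str:
    """Assemble a plain-language notice for an AI-assisted decision."""
    factor_list = "; ".join(factors)
    return (
        f"This {decision} decision was made with the help of an automated "
        f"system. Key factors considered: {factor_list}. "
        f"To ask questions or request human review, contact {contact}."
    )

print(disclosure_notice(
    decision="credit limit",
    factors=["payment history", "income-to-debt ratio"],
    contact="privacy@example.com",
))
```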
Algorithmic Accountability [Policy]
Algorithmic Accountability is the principle that organizations deploying AI and algorithmic systems should be responsible and answerable for the outcomes those systems produce. Accountability frameworks require companies to identify who is responsible for AI decisions, maintain records that permit auditing and review, and provide remedies to individuals harmed by automated systems.
The term is used broadly in legislative debate and appears in the names of proposed federal and state bills, such as the Algorithmic Accountability Act. See also: Algorithmic Impact Assessment, AI Governance Framework.
Algorithmic Discrimination [Legal Concept]
Algorithmic Discrimination occurs when an AI or automated system produces outcomes that unlawfully disadvantage individuals based on protected characteristics such as race, gender, age, disability, national origin, religion, or sexual orientation—even if the algorithm does not explicitly consider those characteristics. Proxy variables in training data can cause disparate impact that mirrors intentional discrimination.
Prohibitions on algorithmic discrimination are a central feature of many state AI laws, including the Colorado AI Act (SB 24-205), Illinois House Bill 3773 (amending the Illinois Human Rights Act to cover AI in employment), and New York City Local Law 144 (hiring). See also: Bias Audit, Automated Employment Decision Tool.
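The proxy problem can be made concrete with a small check: how reliably does a facially neutral feature, such as a ZIP code, predict a protected characteristic? The data and the crude matching statistic below are hypothetical; real audits use more rigorous measures of association.

```python
# A minimal sketch of a proxy-variable check: how well a facially neutral
# feature predicts a protected class. Data and threshold are hypothetical.
from collections import Counter

def proxy_strength(feature: list[str], protected: list[str]) -> float:
    """Fraction of records where the feature value's majority protected
    class matches the record's protected class (1.0 = perfect proxy)."""
    majority = {}
    for f in set(feature):
        classes = [p for fv, p in zip(feature, protected) if fv == f]
        majority[f] = Counter(classes).most_common(1)[0][0]
    hits = sum(majority[f] == p for f, p in zip(feature, protected))
    return hits / len(feature)

zip_codes = ["10001", "10001", "10002", "10002", "10003", "10003"]
groups    = ["A",     "A",     "B",     "B",     "A",     "B"]
print(f"proxy strength: {proxy_strength(zip_codes, groups):.2f}")
# Values near 1.0 suggest the feature may serve as a proxy and warrant scrutiny.
```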
Algorithmic Impact Assessment (AIA)
An Algorithmic Impact Assessment (AIA) is a structured evaluation process used before and during the deployment of an AI system to identify, analyze, and mitigate potential harms to individuals and communities. AIAs typically examine the system's intended and foreseeable uses, the data it relies upon, potential discriminatory outcomes, privacy risks, and the adequacy of proposed safeguards.
State laws such as the Colorado AI Act (SB 24-205), along with proposed bills in numerous other states, require AIAs for high-risk AI systems as a precondition to deployment. See also: High-Risk AI System, Risk Management Program.
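In practice an AIA is often tracked as a structured document with required sections. The sketch below models that as a simple completeness check; the section names are illustrative rather than drawn from any particular statute.

```python
# A minimal sketch of an AIA as a structured record, mirroring the elements
# listed above. Section names are hypothetical; Colorado SB 24-205 and
# similar laws specify their own required contents.

REQUIRED_SECTIONS = [
    "intended_uses", "foreseeable_misuses", "data_relied_upon",
    "discrimination_risks", "privacy_risks", "safeguards",
]

def missing_sections(assessment: dict[str, str]) -> list[str]:
    """Return required sections that are absent or left blank."""
    return [s for s in REQUIRED_SECTIONS if not assessment.get(s, "").strip()]

aia = {
    "intended_uses": "rank loan applications for manual review",
    "foreseeable_misuses": "fully automated denials without human review",
    "data_relied_upon": "application history, credit bureau features",
    "discrimination_risks": "",  # not yet assessed
    "privacy_risks": "re-identification from granular geography",
    "safeguards": "human review of all adverse outcomes",
}
print("incomplete:", missing_sections(aia))  # -> ['discrimination_risks']
```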
Opt-Out Right (AI) [Consumer Right]
The Opt-Out Right in the context of AI law refers to a consumer's legal right to refuse or withdraw consent to the use of automated decision-making tools in decisions that affect them. California's CPPA regulations require businesses to offer consumers a mechanism to opt out of ADMTs used for profiling and other significant automated decisions, subject to limited exceptions.
Opt-out rights vary in scope: some laws allow businesses to override consumer opt-outs when necessary to fulfill a contract or comply with legal obligations. See also: Automated Decision-Making Tool, Right to Explanation.
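That override structure reduces to a small decision rule, sketched below. The exception names are hypothetical; each law enumerates its own permitted overrides.

```python
# A minimal sketch of opt-out handling with the override exceptions
# described above. The exception names are illustrative, not statutory.

OVERRIDE_REASONS = {"fulfill_contract", "legal_obligation"}

def may_use_admt(consumer_opted_out: bool, override_reason: str | None) -> bool:
    """An opt-out blocks ADMT use unless a recognized override applies."""
    if not consumer_opted_out:
        return True
    return override_reason in OVERRIDE_REASONS

print(may_use_admt(consumer_opted_out=True, override_reason=None))          # False
print(may_use_admt(consumer_opted_out=True,
                   override_reason="legal_obligation"))                     # True
```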