What Is Ethical AI? A Guide for Businesses Navigating AI and Privacy
As artificial intelligence becomes more embedded in business operations, the question of how to develop and deploy AI responsibly has moved from abstract principle to regulatory requirement. Ethical AI refers to the principles, policies, and practices that ensure AI technologies are transparent, accountable, privacy-respecting, and aligned with human values. In 2026, ethical AI is no longer optional. It is being codified into enforceable law across multiple jurisdictions.
Why Ethical AI Matters
AI systems can analyze vast amounts of personal data, make decisions that affect people's access to employment, credit, housing, healthcare, and education, and evolve in ways that are difficult to audit after deployment. These capabilities create concrete risks:
- Privacy violations when personal data is used to train models without adequate controls, consent, or data subject rights.
- Bias and discrimination when AI systems produce outcomes that disproportionately affect people based on protected characteristics, often reflecting patterns in training data that developers did not identify or correct.
- Lack of transparency when organizations cannot explain how an AI system reached a particular decision, making it impossible for affected individuals to challenge outcomes or for regulators to audit compliance.
- Unauthorized use of AI tools by employees (sometimes called "shadow AI"), where staff adopt third-party generative AI products without IT or legal review, creating unmanaged data exposure and compliance gaps.
These are not hypothetical concerns. Regulators in the EU, California, and other jurisdictions have responded with specific legal requirements addressing each of these risks.
Core Principles of Ethical AI
Ethical AI is grounded in several foundational principles that now appear, in varying forms, across regulatory frameworks worldwide:
- Transparency: users and affected individuals should understand how AI systems work and how decisions are made. The EU AI Act codifies this through transparency obligations for AI systems that interact with people, and California's ADMT regulations require businesses to provide plain-language explanations of automated decision-making logic.
- Fairness and non-discrimination: AI must avoid bias and ensure equitable treatment. The EU AI Act classifies certain biometric and social scoring systems as prohibited, and multiple U.S. state privacy laws include protections against profiling that produces discriminatory effects.
- Privacy and data protection: AI systems must handle personal data in compliance with privacy laws including the GDPR, CCPA/CPRA, and the growing body of U.S. state privacy legislation. This includes honoring data subject rights (access, deletion, correction) that extend to data used in AI training and inference.
- Accountability: organizations must maintain oversight of AI systems and take responsibility for their impact. This includes conducting risk assessments, maintaining documentation, and designating individuals with authority over AI deployment decisions.
- Human oversight: for high-stakes decisions affecting individuals' lives, humans must remain involved in the decision-making process. Both the EU AI Act (for high-risk systems) and California's ADMT regulations (for significant decisions) require mechanisms for human review.
Privacy and AI: Where the Regulatory Pressure Is Concentrated
AI and privacy regulation are converging. Generative AI models raise particular concerns when trained on personal or proprietary data without appropriate controls, and regulators have been explicit about the obligations that apply.
In September 2025, CalPrivacy (formerly the CPPA) finalized regulations under the CCPA/CPRA that directly address automated decision-making technology (ADMT). Beginning January 1, 2027, businesses that use ADMT to make "significant decisions" about California consumers (decisions affecting financial services, housing, education, employment, or healthcare) must:
- Provide pre-use notices explaining how ADMT will be used in the decision.
- Allow consumers to opt out of ADMT-based significant decisions, with limited exceptions.
- Respond to consumer access requests with meaningful information about the logic of the ADMT, key parameters, and how outputs are used.
- Provide consumers the ability to appeal ADMT decisions, with review by a qualified human decision-maker.
The same regulatory package requires risk assessments before any processing that presents "significant risk" to consumer privacy, including processing personal information to train ADMT, using ADMT for significant decisions, and profiling based on presence at sensitive locations. Risk assessment requirements took effect January 1, 2026; ADMT compliance obligations begin January 1, 2027.
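To make these obligations concrete, here is a minimal sketch, in Python, of how a compliance team might gate an ADMT pipeline so that notice, opt-out, and human appeal are enforced around the model call. Everything here is a hypothetical illustration (the `Consumer` and `AdmtDecision` names, the 0.5 score threshold), not a regulatory requirement or any vendor's API:

```python
from dataclasses import dataclass

# Hypothetical sketch of gating an ADMT-based "significant decision" under the
# CalPrivacy rules. All names and thresholds are illustrative, not a real API.

@dataclass
class Consumer:
    consumer_id: str
    notice_acknowledged: bool = False   # pre-use notice delivered before processing
    opted_out_of_admt: bool = False     # recorded via the opt-out mechanism

@dataclass
class AdmtDecision:
    consumer_id: str
    outcome: str            # e.g., "approved" / "denied"
    logic_summary: str      # plain-language explanation for access requests
    decided_by: str = "admt"

def run_significant_decision(consumer: Consumer, model_score: float) -> AdmtDecision:
    """Run an ADMT decision only after notice and opt-out checks pass."""
    if not consumer.notice_acknowledged:
        raise RuntimeError("Pre-use notice must be provided before ADMT processing.")
    if consumer.opted_out_of_admt:
        # Opt-out honored: the decision is routed to a human process, not the model.
        return AdmtDecision(consumer.consumer_id, "routed_to_human_process",
                            "Consumer opted out of ADMT.", decided_by="human_queue")
    outcome = "approved" if model_score >= 0.5 else "denied"  # illustrative threshold
    return AdmtDecision(consumer.consumer_id, outcome,
                        "Score-based eligibility model; key parameters disclosed "
                        "in response to access requests.")

def appeal(decision: AdmtDecision, reviewer_id: str, new_outcome: str) -> AdmtDecision:
    """Appeals are resolved by a qualified human reviewer, never re-run through ADMT."""
    return AdmtDecision(decision.consumer_id, new_outcome,
                        decision.logic_summary, decided_by=reviewer_id)
```

The structural point is that the rights mechanisms sit in front of and around the automated decision; they are not retrofitted after the model has run.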
Under the GDPR, Articles 13, 14, and 15 require organizations to inform data subjects about the existence of automated decision-making, including profiling, and to provide meaningful information about the logic involved and the significance and envisaged consequences. Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal effects or similarly significant effects, with exceptions for contractual necessity, legal authorization, or explicit consent. The Court of Justice of the EU has interpreted these provisions to require genuine explanations of automated decisions, not merely a statement that an algorithm was used (Case C-203/22, Dun & Bradstreet Austria, February 27, 2025).
The EU AI Act: A Risk-Based Regulatory Framework
The EU AI Act (Regulation (EU) 2024/1689), which entered into force on August 1, 2024, is the world's first comprehensive AI-specific legislation. It establishes a risk-based classification system with obligations that scale with the potential harm of an AI system (a sketch of how these tiers might be encoded internally follows the list):
- Prohibited AI practices (effective February 2, 2025): systems that pose unacceptable risks, including social scoring, real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions), exploitation of the vulnerabilities of specific groups, and subliminal techniques that materially distort behavior. Penalties for deploying prohibited systems can reach €35 million or 7% of global annual turnover.
- General-purpose AI obligations (effective August 2, 2025): providers of general-purpose AI models must comply with transparency requirements, maintain technical documentation, and adhere to copyright law. Models with systemic risk face additional obligations including adversarial testing, incident reporting, and cybersecurity measures.
- High-risk AI system obligations (effective August 2, 2026): the bulk of the EU AI Act's compliance framework applies to AI systems used in areas including employment and worker management, education, creditworthiness assessment, law enforcement, migration, and administration of justice. Providers of high-risk systems must implement risk management systems, data governance measures, technical documentation, human oversight mechanisms, and post-market monitoring. Certain deployers, including public bodies and providers of essential private services, must conduct fundamental rights impact assessments.
- Transparency obligations (effective August 2, 2026): AI systems that interact with people must disclose that they are AI-driven. AI-generated content (including deepfakes) must be labeled.
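As a rough illustration only (classification under the Act is a legal analysis, not a lookup table), an internal triage tool might encode the tiers so that proposed use cases receive a provisional classification before legal review. The enum values and use-case mapping below are assumptions made for this sketch:

```python
from enum import Enum

class AiActTier(Enum):
    PROHIBITED = "prohibited"      # Art. 5 practices: banned outright
    HIGH_RISK = "high_risk"        # Annex III areas: full compliance framework
    TRANSPARENCY = "transparency"  # Art. 50: disclosure and labeling duties
    MINIMAL = "minimal"            # no specific obligations under the Act

# Illustrative mapping of internal use cases to a provisional tier. A real
# classification requires legal analysis of the Act and its annexes.
USE_CASE_TIERS = {
    "social_scoring": AiActTier.PROHIBITED,
    "cv_screening": AiActTier.HIGH_RISK,        # employment (Annex III)
    "credit_scoring": AiActTier.HIGH_RISK,      # creditworthiness (Annex III)
    "customer_chatbot": AiActTier.TRANSPARENCY, # must disclose AI interaction
    "spam_filter": AiActTier.MINIMAL,
}

def triage(use_case: str) -> AiActTier:
    """Provisional tier for a proposed use case; unknowns default to high risk."""
    return USE_CASE_TIERS.get(use_case, AiActTier.HIGH_RISK)
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: it forces escalation to legal review rather than silent under-classification.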
The EU AI Act applies to any organization that places an AI system on the EU market or whose AI system's output is used in the EU, regardless of where the provider is established. This means U.S. companies serving EU customers or deploying AI systems that affect EU residents must comply.
U.S. AI Governance: A Fragmented Landscape
At the federal level, the Biden administration's October 2023 Executive Order on AI (EO 14110) was rescinded by President Trump on January 20, 2025, his first day in office. The Trump administration issued its own executive order on January 23, 2025 ("Removing Barriers to American Leadership in Artificial Intelligence"), which emphasizes deregulation, innovation, and U.S. global competitiveness in AI. In December 2025, a further executive order ("Ensuring a National Policy Framework for Artificial Intelligence") directed the Department of Justice to challenge state AI laws deemed inconsistent with federal policy and instructed the Commerce Department to evaluate state AI legislation for potential preemption.
This federal-state tension means the AI governance landscape in the U.S. is actively contested. State-level regulation continues to advance: California's ADMT regulations are finalized and taking effect, multiple states are considering comprehensive AI governance legislation, and the Consortium of Privacy Regulators (a multi-state enforcement alliance) has signaled coordinated attention to AI-related privacy compliance. Meanwhile, the federal government is signaling intent to preempt what it considers burdensome state AI laws.
For businesses, the practical effect is that compliance planning must account for both the EU AI Act's risk-based framework and the evolving U.S. state-by-state requirements, while monitoring whether federal preemption efforts narrow or eliminate state obligations.
The NIST AI Risk Management Framework (AI RMF), released in January 2023, remains a widely used voluntary standard for AI governance in the U.S. Several state privacy laws reference NIST frameworks as safe harbor benchmarks, and the AI RMF's structure (Govern, Map, Measure, Manage) provides a practical foundation for organizations building internal AI governance programs.
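One way to operationalize the four functions is as a per-system checklist in which each function owns concrete evidence artifacts. The items below are a hypothetical sketch of that idea, not NIST-issued tooling or guidance:

```python
# Hypothetical sketch: the AI RMF's four functions mapped to evidence artifacts
# tracked per AI system. Items are illustrative, not NIST-issued guidance.
AI_RMF_CHECKLIST = {
    "govern": ["named accountable owner", "acceptable-use policy reviewed"],
    "map": ["intended use and context documented",
            "personal and sensitive data flows identified"],
    "measure": ["bias and performance metrics defined",
                "pre-deployment evaluation results recorded"],
    "manage": ["risk treatment decisions logged",
               "post-deployment monitoring and incident path in place"],
}

def open_items(completed: set[str]) -> list[str]:
    """Checklist items not yet evidenced for a given AI system."""
    return [item for items in AI_RMF_CHECKLIST.values()
            for item in items if item not in completed]
```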
AI Governance in Practice
For organizations deploying AI at scale, ethical AI requires operational governance, not just principles. This includes:
- Maintaining an inventory of all AI systems in use across the organization, including third-party tools adopted by employees without centralized approval (a minimal inventory sketch appears after this list).
- Classifying data used in AI systems to understand what personal information, sensitive data, or proprietary content is flowing into training sets, prompts, and inference pipelines.
- Conducting risk assessments and, where required, data protection impact assessments (DPIAs) before deploying AI systems that process personal data or make consequential decisions.
- Implementing consumer-facing rights mechanisms for automated decision-making, including opt-out, access, and appeal rights where required by law.
- Establishing policies for acceptable AI use that address employee adoption of generative AI tools, data handling requirements, and escalation paths for high-risk applications.
- Monitoring for unauthorized AI tool adoption and ensuring that new AI integrations are reviewed for privacy, security, and regulatory compliance before deployment.
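Here is the minimal inventory sketch referenced above, assuming a simple internal register and a feed of observed tools (for example, from SaaS or network logs). All field names are illustrative assumptions, not DataGrail's schema or any standard:

```python
from dataclasses import dataclass

# Hypothetical AI system register; field names are illustrative, not any
# vendor's schema or a regulatory requirement.
@dataclass
class AiSystemRecord:
    name: str
    vendor: str
    owner: str                  # accountable team or individual
    data_categories: list[str]  # e.g., ["personal", "sensitive", "proprietary"]
    approved: bool              # passed privacy/security/legal review
    dpia_completed: bool        # done before high-risk processing, where required

INVENTORY = {
    "support-summarizer": AiSystemRecord(
        "support-summarizer", "Acme LLM Co", "support-eng",
        ["personal"], approved=True, dpia_completed=True),
}

def flag_shadow_ai(observed_tools: set[str]) -> set[str]:
    """Tools seen in SaaS or network logs but absent from the approved inventory."""
    approved = {name for name, rec in INVENTORY.items() if rec.approved}
    return observed_tools - approved
```

Calling flag_shadow_ai({"support-summarizer", "unvetted-chatbot"}) returns {"unvetted-chatbot"}, surfacing the unreviewed tool for escalation.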
DataGrail's AI governance solution helps organizations gain visibility into AI usage across their operations, detect unauthorized tools, classify data exposure risks, and enforce consistent governance policies. As regulatory requirements expand, platforms that connect AI governance to existing privacy programs, including DSR fulfillment, data mapping, and consent management, are becoming essential infrastructure.
Resources
EU AI Act – European Commission
EU AI Act Implementation Timeline
CalPrivacy – ADMT, Risk Assessment, and Cybersecurity Audit Regulations
GDPR Article 22 – Automated Individual Decision-Making
NIST AI Risk Management Framework