
Glossary of AI Governance, Privacy & Data Risk Terms

Understand the terms that power modern privacy, risk, and AI programs—and how they connect across your governance ecosystem.


A


AI Accountability

AI accountability refers to the frameworks, processes, and controls that ensure organizations developing or deploying artificial intelligence systems are responsible for their outcomes, impacts, and compliance with legal and ethical standards.

AI Conformity Assessment

An AI conformity assessment is a regulatory process that evaluates whether an AI system meets legal, technical, and ethical requirements under applicable frameworks.

AI DPIA

An AI DPIA is a data protection impact assessment designed to evaluate risks, compliance, and safeguards when deploying artificial intelligence systems.

AI Explainability

AI explainability is the ability to make artificial intelligence system decisions understandable, transparent, and interpretable for stakeholders, regulators, and end users. 

AI Fairness

AI fairness refers to the principle of developing and deploying artificial intelligence systems that make unbiased, equitable decisions across individuals and demographic groups. 

AI Impact Assessment (AIIA)

An AI impact assessment (AIIA) evaluates the potential risks, benefits, and compliance implications of artificial intelligence systems before and during deployment. 

AI Lifecycle Management

AI lifecycle management is the process of governing, monitoring, and optimizing artificial intelligence systems from development to decommissioning to ensure transparency, compliance, and ethical use. 

AI Model Drift

AI model drift occurs when an artificial intelligence model’s performance declines over time because of changes in data, environment, or user behavior.
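
As a minimal sketch, one common drift check compares a feature's (or model score's) training distribution against recent production data with a two-sample Kolmogorov-Smirnov test; the sample values and the 0.05 threshold below are assumptions, not a prescribed method.

    # Minimal drift check: compare a feature's training distribution against
    # recent production data with a two-sample Kolmogorov-Smirnov test.
    # The sample values and the 0.05 threshold are illustrative assumptions.
    from scipy.stats import ks_2samp

    training_values = [0.12, 0.35, 0.40, 0.41, 0.55, 0.60, 0.72, 0.80, 0.85, 0.90]
    production_values = [0.55, 0.61, 0.70, 0.75, 0.81, 0.88, 0.90, 0.93, 0.95, 0.97]

    statistic, p_value = ks_2samp(training_values, production_values)
    if p_value < 0.05:  # distributions differ significantly -> possible drift
        print(f"Potential drift detected (KS statistic={statistic:.2f}, p={p_value:.3f})")
    else:
        print("No significant distribution shift detected")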

AI Model Inventory

An AI model inventory tracks all artificial intelligence models within an organization to ensure governance, compliance, and transparency.
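
For illustration, the in-memory registry below shows the kind of record an inventory might hold; the field names and example values are assumptions, not a standard schema.

    # Illustrative in-memory model inventory; fields are assumptions, not a standard schema.
    from dataclasses import dataclass, field

    @dataclass
    class ModelRecord:
        model_id: str
        owner: str
        purpose: str
        risk_tier: str                 # e.g. "high", "limited", "minimal"
        training_data_sources: list = field(default_factory=list)
        last_reviewed: str = ""        # ISO date of the last governance review

    inventory = {}

    def register_model(record: ModelRecord) -> None:
        """Add or update an entry so the model stays visible to governance reviews."""
        inventory[record.model_id] = record

    register_model(ModelRecord(
        model_id="credit-scoring-v3",
        owner="risk-analytics",
        purpose="consumer credit decisioning",
        risk_tier="high",
        training_data_sources=["loan_history", "bureau_data"],
        last_reviewed="2024-11-01",
    ))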

AI Risk Management

AI risk management is the process of identifying, assessing, and mitigating risks throughout the lifecycle of artificial intelligence systems to ensure safe and compliant use. 

AI Safety

AI safety refers to the practices and safeguards that ensure artificial intelligence systems operate reliably, ethically, and without causing harm to individuals or society. 

Algorithmic Bias

Algorithmic bias occurs when an artificial intelligence or automated system produces unfair, inaccurate, or discriminatory outcomes due to skewed data or flawed model design. 

Australian Privacy Act

The Australian Privacy Act is a national law that regulates how organizations handle personal information, ensuring transparency, accountability, and individual privacy rights across Australia. 

B


Bias Detection

Bias detection is the process of identifying and measuring unfair patterns or discrimination within artificial intelligence models, datasets, or automated decision-making systems.
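
One widely used check is the demographic parity difference, the gap in favorable-outcome rates between groups; the minimal sketch below uses hypothetical data and a 0.1 screening threshold that is an assumption, not a rule.

    # Demographic parity difference: gap in favorable-prediction rates between
    # two groups. Data and the 0.1 threshold are illustrative assumptions.
    def positive_rate(predictions, groups, group_value):
        selected = [p for p, g in zip(predictions, groups) if g == group_value]
        return sum(selected) / len(selected)

    predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = favorable outcome
    groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

    gap = abs(positive_rate(predictions, groups, "a") -
              positive_rate(predictions, groups, "b"))
    print(f"Demographic parity difference: {gap:.2f}")
    if gap > 0.1:   # common but context-dependent screening threshold
        print("Disparity above threshold; investigate further")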

C


CCPA (California Consumer Privacy Act)

The California Consumer Privacy Act (CCPA) grants California residents rights over their personal data, requiring transparency and control from organizations processing that data.

CPRA (California Privacy Rights Act)

The California Privacy Rights Act (CPRA) expands and strengthens the California Consumer Privacy Act (CCPA), enhancing consumer rights, defining sensitive data, and establishing California’s dedicated privacy regulator.

Cookie Consent

Cookie consent is the process of obtaining user permission to store or access cookies and similar tracking technologies on their devices in compliance with privacy laws.

Customer Data Rights

Customer data rights refer to the legal entitlements individuals have to access, correct, delete, or control how organizations collect and use their personal data. 

D


Data Classification

Data classification is the process of organizing data into categories based on sensitivity, confidentiality, and regulatory requirements to improve security and compliance. 
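
A minimal rule-based sketch of the idea is shown below, mapping field names to sensitivity tiers; the tiers and keyword rules are assumptions, and real programs typically combine rules, pattern matching, and human review.

    # Toy rule-based classifier mapping field names to sensitivity tiers.
    # Tiers and keyword rules are illustrative assumptions.
    SENSITIVITY_RULES = {
        "restricted":   ["ssn", "passport", "health", "biometric"],
        "confidential": ["email", "phone", "address", "dob"],
        "internal":     ["employee_id", "department"],
    }

    def classify_field(field_name: str) -> str:
        name = field_name.lower()
        for tier, keywords in SENSITIVITY_RULES.items():
            if any(keyword in name for keyword in keywords):
                return tier
        return "public"   # default when no rule matches

    for field_name in ["customer_email", "ssn", "page_views"]:
        print(field_name, "->", classify_field(field_name))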

Data Ethics

Data ethics refers to the principles and practices that ensure data is collected, managed, and used responsibly, fairly, and transparently. 

Data Lineage

Data lineage is the process of tracking the origin, movement, and transformation of data across systems to ensure transparency, accuracy, and compliance. 
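
Lineage is often modeled as a "derived from" graph; the minimal sketch below traces a dataset's upstream sources, with dataset names that are purely hypothetical.

    # Lineage as a simple "derived from" graph; dataset names are hypothetical.
    LINEAGE = {
        "marketing_dashboard": ["customer_segments"],
        "customer_segments":   ["crm_contacts", "web_events"],
        "crm_contacts":        [],
        "web_events":          [],
    }

    def upstream_sources(dataset, graph):
        """Return every dataset the given one ultimately depends on."""
        sources = set()
        for parent in graph.get(dataset, []):
            sources.add(parent)
            sources |= upstream_sources(parent, graph)
        return sources

    print(upstream_sources("marketing_dashboard", LINEAGE))
    # {'customer_segments', 'crm_contacts', 'web_events'}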

Data Mapping

Data mapping is the process of identifying and connecting data flows between systems to understand how personal information is collected, shared, and stored. 

Data Minimization

Data minimization is the principle of collecting and processing only the personal data necessary for a specific, lawful, and clearly defined purpose. 
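
In practice this often means filtering what is collected down to an allow-list of fields tied to a stated purpose, as in the minimal sketch below; the purposes and field names are assumptions.

    # Keep only the fields an allow-list ties to a stated purpose.
    # Purposes and field names are illustrative assumptions.
    ALLOWED_FIELDS = {
        "order_fulfilment": {"name", "shipping_address", "order_id"},
        "newsletter":       {"email"},
    }

    def minimize(record: dict, purpose: str) -> dict:
        allowed = ALLOWED_FIELDS.get(purpose, set())
        return {k: v for k, v in record.items() if k in allowed}

    submitted = {"name": "Ada", "email": "ada@example.com",
                 "shipping_address": "1 Main St", "order_id": "A-42",
                 "date_of_birth": "1990-01-01"}
    print(minimize(submitted, "order_fulfilment"))
    # {'name': 'Ada', 'shipping_address': '1 Main St', 'order_id': 'A-42'}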

Data Privacy

Data privacy refers to the responsible collection, use, and management of personal data to protect individuals’ rights and comply with legal and ethical standards. 

Data Retention Policy

A data retention policy defines how long an organization stores different categories of data and outlines when and how that data should be deleted or anonymized. 
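
A retention schedule can be applied programmatically; the minimal sketch below flags records past their retention period, with categories and periods that are assumptions.

    # Flag records whose retention period has elapsed.
    # Categories and retention periods are illustrative assumptions.
    from datetime import date, timedelta

    RETENTION_DAYS = {"support_tickets": 365, "marketing_leads": 180}

    records = [
        {"id": 1, "category": "support_tickets", "created": date(2023, 1, 10)},
        {"id": 2, "category": "marketing_leads", "created": date.today() - timedelta(days=30)},
    ]

    def expired(record, today=None):
        today = today or date.today()
        limit = timedelta(days=RETENTION_DAYS[record["category"]])
        return today - record["created"] > limit

    for record in records:
        if expired(record):
            print(f"Record {record['id']} is past retention; delete or anonymize")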

Digital Personal Data Protection Act (DPDPA)

The Digital Personal Data Protection Act (DPDPA) is India’s comprehensive data protection law governing how organizations collect, process, and protect personal data while ensuring individuals’ privacy rights. 

DORA (Digital Operational Resilience Act)

The Digital Operational Resilience Act (DORA) is an EU regulation ensuring financial entities can withstand, respond to, and recover from ICT-related disruptions and cyber threats.

DORA Regulation (Digital Operational Resilience Act)

The DORA Regulation establishes a unified EU framework to strengthen the digital operational resilience of financial entities against information and communication technology (ICT) disruptions. 

DPIA (Data Protection Impact Assessment)

A Data Protection Impact Assessment (DPIA) identifies, evaluates, and mitigates privacy risks associated with personal data processing to ensure compliance with global data protection laws.

DSAR (Data Subject Access Request)

A Data Subject Access Request (DSAR) allows individuals to request access to personal data an organization holds about them, as required under privacy laws. 
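
For illustration, the fulfilment step often amounts to gathering one subject's records from several internal stores into a single response package, as in this sketch; the store names, keys, and contents are hypothetical.

    # Gather one subject's data from several internal stores into one package.
    # Store names, lookup keys, and contents are hypothetical.
    DATA_STORES = {
        "crm":     {"user-123": {"name": "Ada", "email": "ada@example.com"}},
        "billing": {"user-123": {"last_invoice": "2024-10-01", "plan": "pro"}},
        "support": {},   # no records for this subject
    }

    def fulfil_dsar(subject_id: str) -> dict:
        package = {}
        for store_name, store in DATA_STORES.items():
            if subject_id in store:
                package[store_name] = store[subject_id]
        return package

    print(fulfil_dsar("user-123"))
    # combines the crm and billing records; "support" holds nothing for this subject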

E


Enterprise Risk Management (ERM)

Enterprise Risk Management (ERM) is a structured approach for identifying, assessing, and managing organizational risks across financial, operational, strategic, and compliance functions.

EU AI Act

The EU Artificial Intelligence Act (EU AI Act) is the European Union’s comprehensive regulatory framework designed to ensure that artificial intelligence systems are developed and used in a safe, transparent, and trustworthy way.

G


General Data Protection Regulation (GDPR)

The General Data Protection Regulation (GDPR) is the European Union’s comprehensive privacy law that governs how organizations collect, use, and protect personal data of individuals within the EU. 

GRC

Governance, Risk, and Compliance (GRC) is an integrated framework for aligning business objectives, managing organizational risks, and ensuring adherence to legal, ethical, and regulatory obligations. 

I


Incident Response Plan

An incident response plan is a documented strategy outlining the processes, roles, and procedures an organization follows to detect, contain, and recover from cybersecurity or data incidents. 

India Digital Personal Data Protection Act (DPDPA)

The Digital Personal Data Protection Act (DPDPA) is India’s comprehensive data protection law that governs how organizations collect, process, and protect personal data of individuals within India. 

M


Model Risk Management (MRM)

Model risk management (MRM) is the process of identifying, monitoring, and mitigating risks that arise from the design, implementation, and use of models in decision-making. 

N


NIS2 Directive

The NIS2 Directive is the European Union’s cybersecurity law that strengthens security and incident reporting requirements for essential and important entities across critical sectors. 

P


Privacy-Enhancing Technologies (PETs)

Privacy-enhancing technologies (PETs) are tools and methods that protect personal data by reducing exposure, enabling secure computation, and supporting regulatory compliance. 
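
One concrete example is differential privacy: the minimal sketch below adds calibrated Laplace noise to an aggregate count so that any single individual's contribution is harder to infer; the epsilon value and data are assumptions.

    # Differential-privacy-style noisy count: Laplace noise calibrated to the
    # query's sensitivity (1 for a count) and a privacy budget epsilon.
    # The epsilon value and data are illustrative assumptions.
    import numpy as np

    ages = [34, 29, 41, 52, 47, 38, 45, 60, 33, 28]

    def noisy_count(values, predicate, epsilon=1.0):
        true_count = sum(1 for v in values if predicate(v))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    print(noisy_count(ages, lambda age: age > 40))   # true count is 5, plus noise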

R


Responsible AI

Responsible AI refers to the development and deployment of artificial intelligence systems that are ethical, transparent, accountable, and aligned with legal, social, and human values. 

Risk Register

A risk register is a centralized document or system used to identify, assess, and track risks that could impact an organization’s objectives, operations, or compliance posture. 
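
As a minimal sketch, a register entry often carries a simple likelihood × impact score used to rank risks; the fields and 1-5 scales below are assumptions rather than a standard.

    # Risk register entry with a simple likelihood x impact score.
    # Fields and 1-5 scales are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Risk:
        risk_id: str
        description: str
        likelihood: int   # 1 (rare) to 5 (almost certain)
        impact: int       # 1 (negligible) to 5 (severe)
        owner: str
        mitigation: str

        @property
        def score(self) -> int:
            return self.likelihood * self.impact

    register = [
        Risk("R-001", "Vendor data breach", 3, 5, "CISO", "Contractual controls, monitoring"),
        Risk("R-002", "Model drift in scoring system", 4, 3, "Head of ML", "Quarterly revalidation"),
    ]

    for risk in sorted(register, key=lambda r: r.score, reverse=True):
        print(risk.risk_id, risk.score, risk.description)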

T


Tech Risk Management

Tech risk management is the process of identifying, assessing, and mitigating risks associated with an organization’s technology infrastructure, systems, and digital operations to ensure security, compliance, and resilience. 

V


Vendor Risk Assessment

A vendor risk assessment is the process of evaluating third-party vendors to identify, measure, and manage potential risks that could affect an organization’s security, compliance, or business continuity.
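
One common way such assessments are summarized is a weighted questionnaire score mapped to a risk tier, as in the minimal sketch below; the domains, weights, and tier cut-offs are assumptions.

    # Weighted control-maturity score from questionnaire responses (0-5 per
    # domain, higher is stronger), mapped to a risk tier. Domains, weights,
    # and tier cut-offs are illustrative assumptions.
    WEIGHTS = {"security": 0.4, "privacy": 0.3, "business_continuity": 0.3}

    def maturity_score(responses: dict) -> float:
        return sum(WEIGHTS[domain] * responses[domain] for domain in WEIGHTS)

    def risk_tier(score: float) -> str:
        if score >= 4.0:
            return "low"
        if score >= 2.5:
            return "medium"
        return "high"

    responses = {"security": 4, "privacy": 3, "business_continuity": 5}
    score = maturity_score(responses)
    print(f"score={score:.1f}, risk tier={risk_tier(score)}")   # score=4.0, risk tier=low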