As the financial sector's reliance on Know Your Customer (KYC) systems grows, so does the risk of breaches and insider misuse. What was once seen as a trust upgrade has become one of the industry's most fragile assumptions: in 2025, 40% of incidents were attributed to insiders and vendors who now sit squarely inside the system.
The problem is twofold. First, KYC workflows require highly sensitive materials to move across cloud providers, verification vendors, and manual review teams, widening the blast radius. Second, many KYC stacks are architected in ways that make leaks not just possible but likely. The point was starkly illustrated by last year's breach of the "Tea" app, which exposed passports and personal information after a database was left publicly accessible.
The scale of vulnerability is now well-documented: over 12,000 confirmed breaches last year exposed hundreds of millions of records. Supply-chain breaches were particularly damaging, losing nearly one million records per incident on average. Identity data is uniquely permanent – a passport number cannot be rotated like a password – so when KYC databases are copied or accessed through compromised vendors, users may live with the consequences indefinitely.
Weak identity checks are a systemic risk, and recent law-enforcement actions have underscored how fragile verification becomes when it is treated as a box-ticking exercise. Lithuanian authorities' dismantling of SIM-farm networks revealed how lax KYC controls and SMS-based verification were exploited to weaponize legitimate telecom infrastructure.
A.I.-assisted compliance adds another layer of complexity: many KYC providers rely on centralized, cloud-hosted A.I. models to review documents and flag anomalies. In default configurations, sensitive inputs are transmitted beyond the institution's direct control, raising concerns about insider misuse and vendor compromise.
However, there is a way forward: confidential A.I. challenges the assumption that verification requires visibility by starting from a different premise – sensitive data should remain protected even from those who operate the system. Confidential computing enables this by executing code inside hardware-isolated environments known as trusted execution environments (TEEs). Data remains encrypted not only at rest and in transit but also during processing.
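To make that premise concrete, here is a minimal sketch, in Python, of what "encrypted everywhere except inside the enclave" looks like in practice. It assumes the third-party `cryptography` package, and `KycEnclave` is an illustrative stand-in for code running inside a real TEE, not any vendor's SDK: the host, cloud provider, and vendors only ever handle ciphertext, and only a verdict leaves the boundary.

```python
# Minimal sketch: a KYC document stays encrypted at rest, in transit, and
# on the host; plaintext exists only inside the (simulated) enclave.
# Assumes the third-party `cryptography` package. KycEnclave is an
# illustrative stand-in for TEE code, not a real vendor SDK.

from cryptography.fernet import Fernet


class KycEnclave:
    """Simulated TEE boundary: the decryption key never leaves this class."""

    def __init__(self, document_key: bytes) -> None:
        # In a real TEE the key is unsealed or negotiated inside the
        # enclave after attestation, never handed over by the host.
        self._fernet = Fernet(document_key)

    def review(self, ciphertext: bytes) -> dict:
        document = self._fernet.decrypt(ciphertext)  # plaintext, enclave-only
        # Placeholder for the actual OCR / biometric / risk models.
        verdict = b"PASSPORT" in document
        return {"verified": verdict}  # only the verdict crosses the boundary


# The client encrypts before upload. In this sketch it shares the key with
# the enclave directly; a real system would use an attested key exchange.
key = Fernet.generate_key()
enclave = KycEnclave(key)
ciphertext = Fernet(key).encrypt(b"PASSPORT NO 123456 NAME JANE EXAMPLE")

print(enclave.review(ciphertext))  # {'verified': True}
```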
Research has demonstrated that technologies such as Intel SGX and AMD SEV-SNP can provide verifiable isolation at the processor level, with remote attestation allowing outside parties to confirm exactly what code is running inside the protected environment. Applied to KYC, confidential A.I. allows identity checks, biometric matching, and risk analysis to occur without exposing raw documents or personal data to reviewers, vendors, or cloud operators. Verification can be proven cryptographically without copying sensitive files into shared databases.
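The sketch below, again assuming the `cryptography` package, reduces those two mechanisms to their essentials: a verifier accepts an enclave only if its reported code measurement matches a vetted value (the core of remote attestation, with the vendor-signed quote chain omitted), and the enclave issues a signed receipt binding a document hash to a verdict, so the check can be audited later without the document itself. All measurement values and function names are hypothetical.

```python
# Sketch of attestation-gated trust plus a signed verification receipt.
# Assumes the `cryptography` package; measurements and receipt structure
# are simplified placeholders, not the actual SGX/SEV-SNP wire formats.

import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. Remote attestation, reduced to its core: trust the enclave only if the
#    hash of the code it runs matches a value vetted in advance. (Real quotes
#    are also signed by the CPU vendor's key hierarchy, omitted here.)
EXPECTED_MEASUREMENT = hashlib.sha256(b"kyc-enclave-build-1.4.2").hexdigest()

def verify_attestation(reported_measurement: str) -> bool:
    return reported_measurement == EXPECTED_MEASUREMENT

print(verify_attestation(EXPECTED_MEASUREMENT))  # True only for the vetted build

# 2. A verification receipt: the enclave binds a verdict to the document's
#    hash and signs it, so the check is provable without storing the file.
enclave_signing_key = Ed25519PrivateKey.generate()  # generated inside the TEE

def issue_receipt(document: bytes, verified: bool) -> tuple[bytes, bytes]:
    receipt = json.dumps({
        "doc_sha256": hashlib.sha256(document).hexdigest(),
        "verified": verified,
    }, sort_keys=True).encode()
    return receipt, enclave_signing_key.sign(receipt)

# Anyone holding the enclave's public key can audit the result later,
# without the raw document ever landing in a shared database.
receipt, signature = issue_receipt(b"sample passport scan bytes", True)
try:
    enclave_signing_key.public_key().verify(signature, receipt)
    print("receipt valid:", json.loads(receipt))
except InvalidSignature:
    print("receipt tampered")
```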
Reducing insider visibility is not an abstract security upgrade – it changes who bears risk and reassures users that submitting identity documents does not require blind trust in unseen employees or subcontractors. Institutions shrink their liability footprint by minimizing plaintext access to regulated data, while regulators gain stronger assurances that compliance systems align with data-minimization principles rather than contradict them.
A shift in KYC thinking is overdue. Given current breach patterns, the industry cannot continue to normalize insider risk. Confidential A.I. does not eliminate every threat, but it challenges a long-standing assumption and offers a way forward – one that prioritizes data protection and user trust over outdated notions of verification and compliance.