top of page

Their AI, Their Terms, Your Data


An Executive Briefing for Business Leaders on the realities of data protection using AI.

By Kathryn Giudes, Managing Director ORCA Opti.


The Problem You Didn't Know You Had

Your organisation is almost certainly using AI today. The question is do you understand the implication of the AI provider terms?


Over one billion people worldwide now interact with consumer AI platforms monthly. The overwhelming majority operate under terms that permit their conversations to be used for model training, reviewed by third-party contractors, and retained for years.  This is regardless of whether the user clicked "delete."  If your employees are using personal subscriptions, reimbursed Pro plans, or free tiers for any work-related task, your corporate data is likely already inside that pipeline.


Paying more does not buy privacy. ChatGPT Plus, Claude Pro, Claude Max and Google AI Pro all operate under the same legal terms as their free equivalents for training and data retention. The premium subscription buys speed and features, not a privacy posture.


What the Terms Actually Permit

Here is what the consumer and paid-individual tiers of the four dominant AI providers allow, by default, currently:

Provider

Consumer Training Default

Retention Reality

Human Review

ChatGPT 

(600M+ users)

ON

conversations train models

Indefinite hold under US court order New York Times v. Microsoft Corp et al; "deleted" data is not deleted

Yes,  third-party contractors

Claude 

(50–100M users)

ON

changed Sept 2025 via opt-out tick

5 years for opted-in users

Yes

Gemini 

(150M+ users)

ON

3 years for reviewed conversations, even after deletion

Yes,  Google advises users not to enter confidential information

Microsoft Copilot 

(200M+ reach)

Varies by surface

Yes on both the consumer tier, and the core models they rely on

 

The implication for your organisation: Every employee who uses a personal AI account for work, drafting emails, summarising contracts, analysing financials, preparing board papers, has potentially placed that material into a multi-year training corpus accessible to the provider, its contractors, and in some cases, the courts of a foreign jurisdiction.

 

Enterprise Tiers Help, But Don't Fully Solve the Problem

Enterprise agreements (ChatGPT Enterprise, Claude Enterprise, Gemini for Workspace, Microsoft 365 Copilot via Entra ID) do contractually exclude your data from training. That's a meaningful improvement. But three residual risks remain:


  1. Jurisdictional exposure. Your data still transits US infrastructure under US legal jurisdiction. Contractual exclusion from training does not override a lawful US government data request.  The CLOUD Act effectively gives the US government the ability to compel disclosure of any data that has been saved or transited through US cloud infrastructure.  This is being exercised in an ongoing case between New York Times v. Microsoft Corp et al, where et al includes Open AI and ChatGPT content.

If you are using China based services, AI or not, all data that crosses onto Chinese infrastructure is the property of the Chinese government.  As of June 2020, this includes Hong Kong. DeepSeek is the most popular AI from China, with over 1.6 billion downloads of the app in Australia and ~97 million active monthly users worldwide. 

US government and Chinese government have access to the data across the top 6 AI models globally:

·       ChatGPT, ~600 million active monthly users

·       Meta AI, ~500 million bundled monthly users

·       Google Gemini, ~150 million active monthly users

·       DeepSeek, ~97 million active monthly users

·       Claude (Anthropic), ~75 million active monthly users (50 – 100m reported on different sources

·       Microsoft Copilot, ~200 million passive reach (leveraging ChatGPT, Gemini and Claude models across their solutions)

Australia, Singapore and Japan all have similar laws regarding sovereign data access.  The focus is on protecting consumer, business and government data from exposure as defined by the Privacy Act and Medical Information regulations.  These protections apply beyond the Australian border.


  1. Shadow AI. The moment an employee signs in with a personal account, and the interface looks nearly identical, all enterprise protections vanish. Anthropic's September 2025 legal consent change made this acute: every employee who clicked "Accept" without unticking a pre-set toggle consented to corporate data entering a five-year training corpus.


  1. Architectural vulnerability. Enterprise contracts change who can train on your data. They do not change whether the AI can be manipulated into exfiltrating it. Every major provider has disclosed,  and in some cases declined to fix,  prompt injection vulnerabilities, classifying them as inherent limitations of how large language models work.


OpenAI's CISO has publicly described prompt injection broadly as, "a frontier challenge being researched and mitigated rather than solved."


Anthropic's Claude Code system card explicitly states that the Security Review GitHub Action is, "not hardened against prompt injection by design."

 

A Fundamentally Different Approach: ORCA Opti

ORCA Opti was built to resolve these risks by architecture and active guardrails, not by contract negotiation.


Your Data Stays in Your Security Boundary and Sovereignty  

All data, prompts, responses, documents, and outputs, remain within your organisation's tenancy. It does not leave that boundary to train underlying models. This is not a toggle in a settings menu. It is how the platform is built.

Files get shared via AI the same way they always have.  The recipient needs to have access to the file before they can open it.  All major AI platforms that interact with files outside the platform enable prompt injection vulnerabilities, by design.

Data sent to China and the USA are inherently available to those governments if they transit or reside in those countries.  ORCA stays in Australia and is stored with strong encryption.  This data is not accessible to The CLOUD Act.


The Power of Leading AI, Without the Privacy Trade-Off

ORCA Opti leverages the capabilities of the world's most advanced models, Claude, Gemini, ChatGPT and others, as the intelligence layer. Your organisation gets frontier AI performance. What you don't get is your data flowing into those providers' training pipelines, being reviewed by their contractors, leakage through their operating design, or being retained under their commercial terms.


All Logging Stays in Your Tenancy

The only logging that occurs is within your organisation's own environment, available to you for quality control, audit, and compliance purposes. There are no third-party human reviewers. No provider-side retention windows. No court orders from foreign jurisdictions holding your "deleted" conversations indefinitely.


Sovereignty by Design

For organisations operating under the Privacy Act, Australian Privacy Principles, SOCI, ISM or IRAP frameworks, ORCA Opti's sovereign hosting provides a clean answer to the data residency question, not a negotiated one that depends on the goodwill of a US-headquartered provider's legal team.

 

The Decision Framework for Executives

Question

Consumer/Pro AI

US Enterprise AI

ORCA Opti

Is my data used to train models?

Yes, by default

No (contractual)

No (architectural)

Can third-party reviewers read my conversations?

Yes

Typically no

No

Does "delete" mean delete?

No

Negotiable

Your retention policy applies

Where does my data reside?

US

US (with contractual controls)

Your tenancy, sovereign jurisdiction

Who controls the audit log?

Provider

Shared

You,  exclusively

What happens if terms change?

You accept or leave

Renegotiation

Architecture doesn't change with a provider's business model

 

What This Means for Your Next Purchase Decision

If you are currently:

  • Reimbursing individual AI subscriptions,  you are funding a compliance liability, not a productivity tool. Your data is operating under consumer terms designed in the provider's interest.

  • Evaluating enterprise AI agreements,  you are buying contractual protection that improves your position materially, but leaves jurisdictional and architectural risks on the table.

  • Responsible for regulated data,  client records, patient information, financial data, defence-adjacent material, critical infrastructure documentation,  you need a platform where the privacy guarantee is structural, not contractual.


ORCA Opti gives your organisation access to the full capability of frontier AI models while keeping every byte of data inside your security boundary. The logging is yours. The audit trail is yours. The retention policy is yours. The underlying models do what they do best,  reason, generate, analyse,  without ever ingesting your data into their training pipelines.

 

The Bottom Line

The AI industry's trajectory is clear: defaults are moving against users, retention periods are lengthening, and providers are openly acknowledging that certain security vulnerabilities are features, not bugs. At over a billion consumer relationships and counting, the scale of sensitive data flowing into these systems is without precedent.

Your organisation doesn't need to choose between AI capability and data sovereignty. With ORCA Opti, you get both,  by design, not by negotiation.

 

For a confidential discussion about how ORCA Opti can support your organisation's AI deployment within your existing security and compliance framework, contact the ORCA Opti team.

 
 

Interested in Becoming an Investor in
ORCA Opti?

Subscribe to ORCA Opti

Stay up to date with compliance and cyber news

ORCA Opti Square no tagline on light.png

Brisbane Head Office

1 Ella St Newstead QLD 4006

Australia

Sydney Office

Suite 409, 15 Lime Street,

Sydney NSW 2000

Australia

hello@orcaopti.ai

© 2025 ORCA Opti Software Ltd. ACN 687 583 099

All Rights Reserved. 

  • LinkedIn
bottom of page