Foundations | Module 8 of 8 | 25 min | Intermediate

Responsible Use: Ethics, Data, and the Business of AI

Your AI inputs go somewhere. Your outputs may not be yours. This module covers data privacy, copyright, workplace policies, AI economics, and building a personal responsible use framework.


What you'll learn

1. Explain what happens to your data when you use free vs paid AI tools
2. Describe the current state of copyright law for AI-generated content
3. Evaluate whether your organisation's AI policy covers the risks that matter
4. Build a personal framework for deciding what to delegate, what to keep, and how to stay sharp

You open your company’s new AI acceptable use policy. It’s twelve pages long. You skim it, sign it, and go back to pasting client briefs into ChatGPT. Same as yesterday. Two months later, a colleague’s AI-generated report goes out with fabricated statistics attributed to the company. The policy covered this. You signed it. You just didn’t read it. And honestly, even if you had, you wouldn’t have known which parts mattered and which were corporate boilerplate. This module is the version of that policy that actually explains why each part exists and what it means for your daily work.

Over the past seven modules, you’ve built a working understanding of what AI tools are, how they work, and how to use them well. You know how to prompt effectively (Module 5), when to trust the output (Module 7), and how tools like RAG and agents extend what’s possible (Module 6). This final Foundations module tackles the decisions that sit underneath all of that: what happens to your data, who owns what you create, and how to use these tools without quietly making yourself worse at your job.

Data privacy: what happens to your inputs

In 2023, three Samsung engineers pasted confidential data into ChatGPT within a 20-day period. One entered proprietary source code to check for bugs. Another pasted internal meeting notes to generate a summary. A third uploaded semiconductor chip testing data. Samsung discovered all three incidents, banned generative AI tools company-wide, and started building an in-house alternative.

The Samsung incident is the one that made headlines. But it wasn’t unusual. Cyberhaven’s research found that 3.1% of knowledge workers have pasted confidential company data into AI tools. That might sound small until you calculate what 3.1% means across an organisation of 10,000 people. We wrote about the maths behind that number in The 3% Problem.
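For a quick sense of scale, here's that calculation as a few lines of Python. A minimal sketch: the headcounts are illustrative assumptions, and only the 3.1% rate comes from the Cyberhaven research above.

```python
# A minimal sketch of the 3.1% maths. Headcounts are illustrative;
# only the 3.1% rate comes from the Cyberhaven figure cited above.
LEAK_RATE = 0.031  # share of knowledge workers who have pasted confidential data

for headcount in (500, 10_000, 50_000):
    exposed = round(headcount * LEAK_RATE)
    print(f"{headcount:>6} employees -> roughly {exposed:,} have pasted confidential data")
```

At 10,000 employees, that's around 310 people, each a potential Samsung-style incident.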

The thing most people miss about AI data privacy: it’s not about whether the tool is “secure.” It’s about which tier you’re paying for. The same product can have completely different data policies depending on your subscription.

| Tool | Free/Personal tier | Paid/Enterprise tier |
| --- | --- | --- |
| ChatGPT | Inputs may train the model (opt-out available) | Team/Enterprise: inputs not used for training |
| Claude | Inputs may be used (opt-out in settings) | Pro/API: inputs not used for training |
| Gemini | Conversations may be reviewed by humans | Workspace/Vertex: excluded from training |
| Microsoft Copilot | Varies by product | M365 Business: data stays within tenant boundary |

The pattern is consistent: free tiers almost always use your inputs for training. Paid tiers generally don’t. The distinction matters because training means your data gets baked into the model permanently. It can’t be deleted after the fact. It becomes part of what the model “knows.”

Key Term: Data Privacy (in AI) — What happens to data you provide to AI systems. Key questions: Is your input used to train the model? Is it stored, and for how long? Who can access it? Policies vary widely between consumer and enterprise tiers. See the Glossary for details.

Three questions to ask about any AI tool before you put work data into it: (1) Are my inputs used for model training? (2) How long is my data retained? (3) Can humans at the provider read my conversations? If you can’t find clear answers in the tool’s privacy policy within two minutes, that tells you something.

Tip: Treat the free tier of any AI tool the way you’d treat a public Wi-Fi network. Don’t put anything through it that you wouldn’t want a stranger to read. If your organisation pays for an enterprise tier, use that instead. The data handling is different in ways that matter.

Copyright and IP: the unresolved landscape

You spend an hour prompting an AI tool to draft a market analysis for a client. It’s good. You polish a few sentences and send it. Your competitor publishes a very similar analysis the following week. Can you stop them? Under current law, probably not.

The US Copyright Office published a report in January 2025 laying out four principles practitioners need to understand:

  1. Works created entirely by AI are not copyrightable. No human author, no copyright.
  2. Human contributions within AI-assisted works can be copyrighted, but only the human-authored portions.
  3. Anyone registering a work that contains AI-generated content must disclose the AI involvement.
  4. Prompts alone generally don’t make the output copyrightable. Typing instructions isn’t the same as creative authorship.

The courts have backed this up. In Thaler v Perlmutter, the D.C. Circuit Court of Appeals affirmed in March 2025 that AI-generated works without human authorship can't receive copyright protection. The ruling left little room for ambiguity: human authorship remains a requirement.

What this means in practice: the more you personally shape, edit, restructure, and add to AI output, the stronger your copyright claim. The less you touch it, the weaker. There’s a spectrum from “entirely AI-generated” (not protectable) to “AI-assisted with substantial human creative input” (protectable). Where your work falls on that spectrum matters.

Misconception: “I prompted it, so I own it.” Reality: Prompts alone don’t create copyrightable work under current US law. The Copyright Office has made clear that giving instructions to an AI tool isn’t the same as authorship. Your copyright attaches to what you personally create, edit, and contribute, not to what the AI generated from your prompt.

The bigger question, whether AI companies can legally train on copyrighted content, remains unresolved. Over 70 copyright lawsuits have been filed against AI companies. The NYT v OpenAI case is the highest-profile. No court has decided the core fair use question yet. The first ruling is expected in mid-2026.

Workplace AI policies and why they exist

McKinsey’s 2025 Global Survey found that 88% of organisations now use AI, and 72% use generative AI specifically. But governance hasn’t kept pace. Deloitte’s 2026 survey of 3,235 senior leaders found that only 21% of companies have a mature governance framework for AI tools. 73% cited data privacy and security as their top concern, but most haven’t translated that concern into actual policy.

The gap between adoption and governance is where problems live. We wrote about the scale of this in Your CEO Is Using Personal ChatGPT Too: The $8.1B Shadow AI Economy. The numbers are worth paying attention to: 68% of employees use unauthorised AI tools at work (Gartner, 2025), up from 41% in 2023. Nearly half use personal accounts, completely bypassing whatever enterprise controls exist.

AI policies exist to manage real risks, not to block productivity. When you understand what the policy is actually protecting against, compliance becomes intuitive rather than adversarial. The risks worth managing:

Data exposure. Confidential information entering models that train on inputs, or being accessible to provider employees. This is the Samsung risk.

IP leakage. Proprietary strategies, unreleased product details, or client information becoming part of a model’s training data, potentially surfacing in other users’ outputs.

Hallucination liability. AI-generated content containing fabricated claims attributed to the company. Module 7 covered why this happens. The policy question is: who’s responsible when it does?

Compliance violations. Regulated industries (healthcare, finance, legal) have specific rules about where data can go and how decisions must be documented. AI tools can violate these rules silently.

If your organisation has an AI policy, evaluate it against these four categories. If it covers all four with specific guidance (not vague warnings), it’s probably adequate. If it’s mostly “be careful with AI,” that’s security theatre.
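If it helps to make that evaluation systematic, the four categories can be turned into a simple scorecard. A minimal sketch, assuming you've already read the policy and judged each category yourself; the example input is hypothetical:

```python
# A minimal sketch: score an AI policy against the four risk categories
# above. "Covered" should mean specific guidance, not vague warnings.
RISK_CATEGORIES = [
    "data exposure",           # confidential inputs reaching training data or provider staff
    "IP leakage",              # proprietary material surfacing in other users' outputs
    "hallucination liability", # fabricated claims attributed to the company
    "compliance violations",   # regulated data leaving approved systems
]

def score_policy(covered: set[str]) -> str:
    missing = [c for c in RISK_CATEGORIES if c not in covered]
    if not missing:
        return "Probably adequate: all four categories have specific guidance."
    return "Gaps: " + ", ".join(missing)

# Hypothetical example: a policy that only addresses data exposure.
print(score_policy({"data exposure"}))
```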

Try This: Check the data policy for the AI tool you use most at work. Find answers to three questions: (1) Are your inputs used for model training? (2) Where is your data stored? (3) Can you delete your data? If you can’t find clear answers in under 2 minutes, that itself is useful information.
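If you want to record what you find somewhere structured, here's one possible shape for a personal audit log. A minimal sketch; the tool name and field values are placeholders, not findings about any real product:

```python
from dataclasses import dataclass

# A minimal sketch of a personal data-policy audit record, covering the
# three questions in the exercise above. None means "couldn't find a
# clear answer within two minutes", which is itself a finding.
@dataclass
class ToolAudit:
    tool: str
    trains_on_inputs: bool | None
    storage_location: str | None
    deletion_possible: bool | None

    def verdict(self) -> str:
        if None in (self.trains_on_inputs, self.storage_location, self.deletion_possible):
            return f"{self.tool}: policy unclear. That itself is useful information."
        return f"{self.tool}: documented. Check it against your employer's policy."

# Hypothetical entry, not a finding about any real product.
print(ToolAudit("ExampleChat", trains_on_inputs=None,
                storage_location=None, deletion_possible=True).verdict())
```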

The economics of AI: what you’re actually paying for

The AI tool you use at work probably costs your company more to run than it charges. That’s not generosity. It’s a market share play.

Consider the numbers. OpenAI brought in roughly $13 billion in revenue in 2025, but its internal documents project $14 billion in losses for 2026, with cumulative losses reaching $44 billion through 2029 (PC Gamer, citing internal documents). Anthropic grew from $1 billion annualised revenue in late 2024 to $19 billion by early 2026, but expects to break even only by 2028.

The reason: AI infrastructure is expensive in a way that traditional software is not. Training a single frontier model costs between $30 million and $191 million in compute alone (Stanford HAI, Epoch AI). Running inference (every time you send a prompt and get a response) costs money per token, and those costs don’t shrink the way software hosting costs do at scale. AI companies run gross margins of 50-60%, compared to 80-90% for traditional SaaS.

This matters for you because current pricing reflects market-share strategy, not sustainable economics. When you choose an AI tool for your team, ask: is this company burning cash to acquire users, or is its pricing model sustainable? Tools priced far below their actual cost will either raise prices, reduce quality, or shut down. Understanding the economics helps you pick tools that will still exist in two years.

Key Term: Token Pricing — How most AI services charge: per token processed. Both input tokens (your prompt) and output tokens (the response) cost money. Prices vary widely by model. See the Glossary for details.
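To see how per-token pricing adds up in practice, here's a minimal sketch of the arithmetic. The rates are made-up placeholders, not any vendor's actual prices; real rates vary by model and change often:

```python
# A minimal sketch of token-pricing arithmetic. Rates below are
# hypothetical placeholders in USD per million tokens, not real prices.
INPUT_PRICE_PER_M = 3.00    # hypothetical cost per million input tokens
OUTPUT_PRICE_PER_M = 15.00  # hypothetical cost per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars of one prompt/response pair."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# A 2,000-token prompt with an 800-token reply, 50 times a day, 30 days:
per_request = request_cost(2_000, 800)
print(f"per request: ${per_request:.4f}, per month: ${per_request * 50 * 30:.2f}")
```

Output tokens typically cost several times more than input tokens, which is why verbose responses are the expensive part.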


[Figure: AI economics comparison showing training costs, gross margins, and pricing sustainability across major providers]

Your personal AI framework

A 2025 study published at CHI (the premier human-computer interaction conference) surveyed 319 knowledge workers and found something uncomfortable: people reported that generative AI made their tasks “cognitively easier,” but they also described feeling less capable when working without it. A Lancet study found direct evidence of this in medicine. Endoscopists who routinely used AI-assisted detection saw their unassisted detection rates drop from 28.4% to 22.4% when the AI was removed. Skills they’d spent years building got worse with regular AI dependence.

The World Economic Forum estimates that 39% of existing worker skill sets will be transformed or become outdated between 2025 and 2030. The ACM calls this the “deskilling paradox”: the more you delegate to AI, the less capable you become at the delegated task. But the freed capacity can be redirected to higher-value work, if you’re intentional about it. We explored the tension between AI-driven speed and genuine capability in The Productivity Lie.

Responsible AI use isn’t a corporate checkbox. It’s a personal decision framework you apply to every interaction. Three questions to ask before you delegate:

1. What am I giving up by delegating this? If the task builds a skill you need (analysis, writing, critical thinking), doing it yourself, at least some of the time, keeps that skill sharp. If it’s mechanical work that doesn’t build capability (reformatting, data entry, scheduling), delegate without guilt.

2. What’s the worst case if this output is wrong? Module 7’s verification triage applies here too. High-stakes output needs human oversight regardless of how good the AI is. Low-stakes output can move faster.

3. Am I getting better at my job, or just faster? Speed without growth is a warning sign. If you've been using AI for six months and you're faster but not more capable, something needs to change. The people I've seen do well with AI aren't the ones who delegate the most. They delegate strategically, keeping the work that builds their expertise and automating the work that doesn't. A sketch of this three-question triage in code follows below.
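A minimal sketch, with the first two questions reduced to booleans; question 3 is a periodic check rather than a per-task input, so it isn't modelled here. The labels and examples are assumptions about how you might classify your own tasks:

```python
# A minimal sketch of the delegation triage. Inputs are self-assessments;
# the returned labels are suggestions, not rules.
def delegation_triage(builds_skill: bool, high_stakes: bool) -> str:
    if builds_skill:
        # Question 1: tasks that build skills you need stay with you,
        # at least some of the time.
        return "keep (or alternate): this maintains a skill you need"
    if high_stakes:
        # Question 2: high-stakes output needs human verification regardless.
        return "delegate, but verify every claim before it ships"
    return "delegate freely: mechanical work that doesn't build capability"

# Hypothetical examples:
print(delegation_triage(builds_skill=True, high_stakes=False))   # market analysis draft
print(delegation_triage(builds_skill=False, high_stakes=True))   # client-facing report
print(delegation_triage(builds_skill=False, high_stakes=False))  # reformatting notes
```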

Misconception: “AI will replace you.” Reality: AI changes what skills are valuable. It doesn’t make you worthless. The endoscopists in the Lancet study didn’t become unnecessary. They became dependent. The ones who maintained their skills alongside AI had both speed and judgement. That’s the goal.

Apply This Monday

Audit the AI tools you used this week. For each one: find the data policy, determine whether your inputs are used for training, and note whether you’ve been putting anything into them that you wouldn’t put in an email to a stranger. Write down what you find. Then pick one task you’ve been fully delegating to AI and do it manually this week, just once, to check whether you can still do it at the standard you’d expect. You now have the start of both a personal data audit and a skills maintenance plan.

Key Takeaways

1. Your data goes where your tier allows it. Free AI tools almost always train on your inputs. Enterprise tiers generally don't. The distinction matters more than any other privacy setting.
2. AI output isn't automatically yours. Under current US law, only human-authored contributions are copyrightable. The more AI generates without your creative input, the less legal protection you have.
3. AI policies exist to manage real risks. Data exposure, IP leakage, hallucination liability, and compliance violations are the four categories that matter. Policies that don't address these specifically are mostly security theatre.
4. AI economics are unsustainable at current prices. Most AI tools are priced below cost. Understanding the business model helps you pick tools that will survive and predict when prices will rise.
5. Delegation without intention leads to deskilling. The professionals who thrive with AI are the ones who delegate strategically: automating the mechanical work, keeping the work that builds expertise.
