The Productivity Lie: Why AI Made Me Slower Before It Made Me Faster (and the 3 techniques that finally fixed it)

Workflows · 9 min read · December 14, 2025


A founder’s honest take on why AI can slow experienced developers down (METR found a 19% slowdown), why it feels faster, and the three techniques—prompt engineering, context engineering, and workflow engineering—that actually improved my output.

Rosh Jayawardena
Data & AI Executive

I had one of those deeply unglamorous founder moments a few months back.

Not a “we shipped a feature” moment. Not a “customer loved it” moment. A moment where I realised I’d spent the better part of a morning doing something embarrassingly simple:

Explaining the same project… again.

To three different AI tools.

Same repo. Same architecture. Same requirements. Same constraints. And somehow I was back at square one, retyping the context like I was onboarding a new contractor who’d never seen the codebase before.

By lunchtime I’d made almost no progress, but I’d generated a lot of words.

And that’s the trap: AI is brilliant at producing words. But words are not output. The output is the merged PR, the shipped feature, the resolved incident, the thing your customer can actually use.

That day was the first time I caught myself thinking:

“Hang on… why do I feel productive, but nothing has moved?”

Then I read a study that put a number on the exact same experience.

Experienced developers — working on real issues, in codebases they already knew — were measured as being slower with AI. Not slightly slower. Meaningfully slower. And here’s the kicker: they walked away believing the opposite. They felt faster.

That gap between perception and reality is what I now call the productivity lie.

And if you’ve ever finished a day with AI feeling “busy” but suspiciously light on tangible progress, you’ve felt it too.


Why AI makes smart people slower (at first)

The hype narrative is simple: “AI writes code faster, therefore you ship faster.”

But experienced work isn’t “write code”.

It’s:

  • navigating messy history and hidden assumptions

  • deciding what not to change

  • aligning to constraints nobody wrote down

  • making a change that doesn’t break three systems downstream

AI can help with pieces of that. But it also introduces a new kind of tax.

1) The context tax

Most AI tools don’t know your world.

They know a thousand worlds like it.

So they ask for context. You give it. Then you switch tabs, start a new thread, or come back tomorrow… and you pay the tax again.

It’s not just annoying. It’s momentum loss. Every time you re-explain your system, you’re not progressing — you’re resetting.

2) The verification tax

Even when the output looks correct, you still have to review it. Often line-by-line.

Because the failure modes aren’t always loud. They’re subtle. Slightly wrong function signatures. Invented APIs. A “reasonable” assumption that violates your system’s rules.

So yes, you type less. But you often spend more time thinking slowly — validating, cross-checking, cleaning up.

3) The dopamine tax

This one’s uncomfortable.

AI feels productive.

It responds instantly. It generates a lot. It gives you the sensation of forward motion.

And that sensation is easy to confuse with progress.

You can finish the day feeling like you did a lot because you were constantly interacting — prompting, refining, iterating — while the actual deliverables quietly stayed the same.


The turning point: I stopped “using AI” and started designing my workflow around it

The fix wasn’t “better prompts” in the usual sense. It wasn’t model-hopping. It wasn’t learning obscure magic words.

It was realising that AI productivity isn’t something you get by default.

It’s something you engineer.

Once I treated AI like a new team member who needed structure — rather than a genie — everything changed.

Here’s the three-step approach that finally made AI a net win for me:

  1. Prompt engineering (reduce ambiguity)

  2. Context engineering (make your project reality persistent)

  3. Workflow engineering (tight loops, guardrails, and measurement)

This is the system I wish someone had handed me at the start.


Step 1: Prompt engineering that reduces rework

Most prompts fail for one boring reason:

They describe the task, but they don’t include the decision constraints.

So the model fills the gaps with assumptions. You correct it. It apologises. You clarify. It tries again.

And suddenly you’ve done twelve rounds of conversational project management.

Instead, I use a simple pattern that forces clarity from the start:

The Job / Constraints / Output pattern

1) The job
What you want done, in one sentence.

2) The constraints
The rules the answer must obey. Your stack. Your patterns. The boundaries. What not to touch.

3) The output format
Exactly how you want the response structured.

Here’s what it looks like when I’m coding:

Job: Implement X behaviour in Y module.
Constraints: Don’t change public interfaces, keep backwards compatibility, follow existing logging/error conventions, add tests, no new dependencies.
Output: A short plan, then a diff-style patch, then test cases, then a quick “how to validate” checklist.
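If you use this pattern a lot, it's worth templating it so you never skip a part when you're in a hurry. Here's a minimal sketch in Python; the build_prompt helper and its argument names are my own invention, purely to show the shape:

```python
# Hypothetical helper: assemble a Job / Constraints / Output prompt
# so none of the three parts gets skipped under time pressure.
def build_prompt(job: str, constraints: list[str], output_format: list[str]) -> str:
    """Build a prompt using the Job / Constraints / Output pattern."""
    lines = [f"Job: {job}", "", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines += ["", "Output:"]
    lines += [f"- {o}" for o in output_format]
    return "\n".join(lines)


prompt = build_prompt(
    job="Implement X behaviour in Y module.",
    constraints=[
        "Don't change public interfaces",
        "Keep backwards compatibility",
        "Follow existing logging/error conventions",
        "Add tests",
        "No new dependencies",
    ],
    output_format=[
        "A short plan",
        "A diff-style patch",
        "Test cases",
        "A quick 'how to validate' checklist",
    ],
)
```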

Two small upgrades make this even more effective:

Force the model to commit

If I’m using AI for decisions (not just boilerplate), I add:

“Make a recommendation. Then list the top two risks and how you’d mitigate them.”

It stops the polite fence-sitting and surfaces trade-offs early.

Ask it to expose assumptions upfront

Add:

“Before you write anything, list the assumptions you’re making about the system.”

Half the time, the assumptions reveal the real problem: the model is working in a different reality to yours. Better to catch it immediately than after it’s produced 200 lines.
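Both upgrades are just extra lines appended to the same prompt. Continuing the hypothetical build_prompt sketch from earlier:

```python
# Append the two upgrades: surface assumptions first, then force a commitment.
prompt += (
    "\n\nBefore you write anything, list the assumptions you're making about the system."
    "\nMake a recommendation. Then list the top two risks and how you'd mitigate them."
)
```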


Step 2: Context engineering so you stop re-explaining yourself (and why Claude Skills are brilliant for this)

Prompting helps. But it breaks the moment the model doesn’t have stable, reusable context.

That’s the trap: every new chat becomes a fresh onboarding session.

So I started treating context like an asset. Something I build once, reuse everywhere, and refine over time.

The Context Pack I keep for every serious project

1) Project Reality (one page)

  • what the system does

  • architecture shape (high level)

  • core entities and flows

  • non-negotiables (security, performance, compliance)

  • what “done” actually means in this repo

2) Decision Log
A running list of architectural decisions and why they were made.
This prevents AI from constantly proposing “clean refactors” that violate your real constraints.

3) Glossary
Internal terms, acronyms, and domain language. Anything you’d have to explain to a new joiner.

4) Do-Not-Touch list
Fragile modules, regulated areas, legacy boundaries, politically sensitive components. The list that saves you from accidentally “improving” the wrong thing.

This pack reduces the context tax dramatically because it stops you from constantly retyping your worldview.
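To stop the pack living in my head (or scattered across old chat threads), I keep the four parts as files and render them into whatever tool I'm using. A minimal sketch, assuming the pack lives as markdown files in a context/ folder; the folder name and render_context_pack helper are just my convention:

```python
from pathlib import Path

# Hypothetical layout: context/reality.md, context/decisions.md,
# context/glossary.md, context/do-not-touch.md
PACK_FILES = ["reality.md", "decisions.md", "glossary.md", "do-not-touch.md"]


def render_context_pack(root: str = "context") -> str:
    """Concatenate the Context Pack into one block you can paste or attach."""
    sections = []
    for name in PACK_FILES:
        path = Path(root) / name
        if path.exists():
            sections.append(f"## {name}\n\n{path.read_text().strip()}")
    return "\n\n".join(sections)
```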

Where Claude Skills fits

This is where Claude is genuinely excellent: Skills.

If you’re coding in Claude, Skills let you package up project-specific instructions, conventions, and reusable resources so the system behaves consistently across sessions. Instead of pasting the same context pack into every chat, you can treat it like a durable, repo-adjacent capability.

Practically, I treat a “Project Skill” like an operating manual:

  • coding conventions (naming, patterns, error handling, logging)

  • architecture guardrails (“never call X directly”, “writes go through Y”)

  • testing expectations (what to unit test vs integration test, fixture style)

  • PR hygiene (commit messages, release notes, rollout checklist)

  • optional helper scripts (repo summariser, test runner wrappers)

So instead of “here’s my project again…” it becomes:

“Use the Project Skill. Implement X. Don’t touch Y. Output a patch + tests.”
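The exact mechanics depend on your setup, but as a rough sketch, a Project Skill is essentially a folder with a SKILL.md describing when and how it applies, plus whatever supporting files you bundle alongside it. The file names below are illustrative, not a prescribed layout:

```
project-skill/
├── SKILL.md          # name, description, and the operating manual above
├── conventions.md    # naming, patterns, error handling, logging
├── guardrails.md     # "never call X directly", "writes go through Y"
└── scripts/
    └── summarise_repo.py   # optional helper script
```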

The big benefit isn’t just better answers.

It’s continuity.

And continuity is where real productivity lives.


Step 3: Workflow engineering (tight loops, guardrails, and measurement)

This is the one most people skip.

They focus on prompts and models, but the real productivity gains come from designing a repeatable execution loop that:

  • prevents rabbit holes

  • forces fast verification

  • measures reality (not vibes)

This is the antidote to the productivity lie. You stop feeling faster and start being faster.

The loop I use for almost everything

Plan → Patch → Prove → Integrate

1) Plan (2–5 minutes)
Ask for a short plan with explicit assumptions and risks.
If the plan doesn’t match your mental model, don’t proceed. Fix the plan first.

2) Patch (deliverable-first)
Never ask for “an answer.” Ask for an artefact:

  • a diff-style patch

  • a function you can drop in

  • a SQL query + expected output

  • a PR description

  • a runbook section

If it can’t be applied, it’s not progress.

3) Prove (verification is non-negotiable)
AI output isn’t “done” until it passes at least one of these:

  • tests

  • a reproducible command (“run this to validate”)

  • a clear acceptance checklist

This is the step that turns AI from “typing assistant” into “delivery assistant.”

4) Integrate (human judgement stays at the edges)
You still make the calls. AI accelerates the middle.
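To keep myself honest about the loop, I sometimes track each task as a record of the artefacts the stages must leave behind. This is purely my own convention, sketched in Python:

```python
from dataclasses import dataclass


@dataclass
class Task:
    """One unit of AI-assisted work, tracked by artefacts, not by chat volume."""
    name: str
    plan: str = ""   # agreed plan, with assumptions and risks
    patch: str = ""  # applied diff or other artefact reference
    proof: str = ""  # test run, validation command, or checklist result

    def is_done(self) -> bool:
        # A task only counts as progress when every stage left an artefact behind.
        return all([self.plan, self.patch, self.proof])
```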

Three guardrails that saved me weeks

Guardrail A: The 15-minute context rule
If I’m still explaining after 15 minutes, I’m paying context tax.
I stop, update the Context Pack / Skill, and restart from a stable base.

Guardrail B: “Show me the blast radius first”
Before any big change, I require:

  • a list of files that will be touched

  • why each file needs to change

  • what could break

This catches the “confident but wrong surface area” problem early.

Guardrail C: “Tests or it didn’t happen”
If the change is code, it must include tests or a validation script.
No exceptions. This single rule kills a huge amount of silent rework.
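This one is easy to automate roughly. Here's a sketch that fails a change when the diff touches code but no tests; the file patterns and base branch are assumptions you'd adapt to your repo:

```python
import subprocess
import sys


def changed_files(base: str = "main") -> list[str]:
    """List files changed relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line.strip()]


if __name__ == "__main__":
    files = changed_files()
    code = [f for f in files if f.endswith(".py") and "test" not in f]
    tests = [f for f in files if "test" in f]
    if code and not tests:
        sys.exit("Code changed but no tests touched: tests or it didn't happen.")
```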

The measurement habit that closes the perception gap

If you want to avoid the productivity lie, measure outcomes.

Not obsessively. Just enough to calibrate.

For the next five tasks you do with AI, track two numbers:

  • time to first working version

  • time to merged/shipped

You’ll learn quickly where AI helps and where it quietly taxes you.

And then you can design around reality.
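The tracking doesn't need tooling. Here's the whole habit sketched as a script that appends one CSV row per task; the file name and columns are just my convention:

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("ai_task_log.csv")


def log_task(task: str, hours_to_first_working: float, hours_to_shipped: float) -> None:
    """Append one row with the two numbers worth tracking per AI-assisted task."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "task", "hours_to_first_working", "hours_to_shipped"])
        writer.writerow([date.today().isoformat(), task,
                         hours_to_first_working, hours_to_shipped])


# Example: log_task("Add rate limiting to API", 1.5, 6.0)
```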


The honest takeaway: AI productivity is real, but it’s not automatic

AI can make you faster.

But before it does, it often makes you slower — especially if you’re experienced, working in systems with real constraints, and you treat AI like a shortcut.

What changed things for me wasn’t using AI more.

It was building a system around it:

  • prompts that reduce ambiguity

  • context that persists

  • workflows that force verification and protect momentum

That’s what turned AI from “busywork generator” into something genuinely useful.

So if you’ve been feeling the slowdown, good.

You’re not imagining it.

The next move is simple:

Stop trying to use AI to go faster.

Start designing how you work so AI can’t waste your time.

#ROI #Generative AI #LLMs #Agents #Case Study
Rosh Jayawardena
Data & AI Executive

I lead data & AI for New Zealand's largest insurer. Before that, 10+ years building enterprise software. I write about AI for people who need to finish things, not just play with tools.

