
Gaslighting Your AI Into Better Results: What the Research Actually Shows
A Reddit post about telling Claude you work at a hospital went viral. Turns out there's actual research explaining why this works across all LLMs.
Data & AI Executive
I lead data & AI for New Zealand's largest insurer. Before that, 10+ years building enterprise software. I write about AI for people who need to finish things, not just play with tools.
Why I built Onsomble AI
The tools are good. NotebookLM is great for research. ChatGPT is great for refining ideas. Google Docs is great for writing. The problem is moving between them. Every switch means re-uploading files, re-explaining context, and losing your train of thought.
And as your documents grow, you hit another wall. The AI you started with doesn't know what you've written since. You end up copying your own work back into a chat window just to get feedback on the thing you're building.
I built Onsomble because I wanted one workspace where research and creation happen together. The AI already knows your document because it helped you write it. You go from scattered sources to polished output without starting over every time you move to the next step.

Microsoft just told thousands of engineers to install Claude Code and compare it to Copilot. When you're running internal benchmarks against a competitor, you're not confident you're winning.

How you split your documents determines whether RAG finds what you need or returns noise. Here's the complete breakdown with code.

Long context windows are getting massive—but that doesn't mean RAG is dead. Here's when each approach actually works, with real numbers.

Everyone obsesses over prompts. The pros optimize their documents. Here's what actually moves the needle.

The consulting industry's biggest shift isn't happening at McKinsey or BCG. It's happening in home offices and co-working spaces, where independent consultants are using AI to punch above their weight.

Enterprise AI has a 5% success rate. Consumer tools hit 40%. No wonder employees are going rogue.

AI models invent facts because they're guessing, not looking things up. There's a fix — and it's the difference between an AI with amnesia and one with a library card.

RAG isn't magic — it's a four-step system. Here's how documents become answers, explained without code.

Most RAG tutorials skip the hard parts. This one doesn't — here's how to actually ship a working system.

Most RAG tutorials stop at "it works." This one shows you how to make it work well.

RAG and fine-tuning solve different problems. Here's how to decide which one your project actually needs.

Your knowledge is scattered across a dozen tools, and none of them talk to each other. The AI tools are supposed to help, but they forget everything the moment you close the tab.

We are seeing a shift from AI that chats to AI that acts. This week, we look at 5 open-source projects redefining how we build, from autonomous coding agents to infinite video generation.

A founder's honest take on why AI can slow experienced developers down (METR found a 19% slowdown), why it still feels faster, and the three techniques that actually improved their output: prompt engineering, context engineering, and workflow engineering.