A few weeks back, we wrote that many AI writing problems are not really prompt problems. They are setup problems.
That raises the next question: what does better setup look like in practice?
A lot of AI advice treats the prompt as the place where everything has to be loaded in — the audience, the purpose, the tone, the facts and the structure. But that is too much weight for one instruction to carry. It also means you end up re-explaining the same things every time you want to do a new piece of work.
Persistent workspaces (‘Projects’, ‘Gems’ or ‘Notebooks’, depending on which AI you’re using) let you attach files and save instructions for reuse. AI memory can learn your preferences over time. And integrations can access your email, documents and CRM data. All this can help — but it can also add noise.
The more useful distinction is between the prompt and the working environment around it. If the source material is thin, scattered or out of date, the prompt is already doing rescue work. When the model cannot easily distinguish the right material from the noise, you’ll end up compensating by writing prompts that are longer and more complicated.
We learned this the hard way when we built agents to help with scripts for Fear & Greed. Early results were terrible and we tried to improve things by adding more rules to the prompt. Do not pick stale stories. Do not lead with commodities. The federal opposition has a new leader.
The instruction set went from one page to five. But the results did not improve. Rules started colliding with each other, and the AI was trying to obey everything at once.
The real fix was not a better prompt. It was a better working environment. We built separate assets — a curated story pipeline database, a style guide, a cheat sheet, examples of scripts that had actually gone to air.
And we built another agent to validate the draft script against what we actually ran with each day and suggest improvements.
Once we did that, the prompt could be three words — ‘write tomorrow’s script’.
In fact, a Notion custom agent now does this automatically every day.
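The shape of that validation step can be sketched in miniature. All names here are hypothetical, and a toy string check stands in for the real agent, which compares each draft against what actually ran and suggests improvements:

```python
def validate_draft(draft: str, rules: list[tuple[str, str]]) -> list[str]:
    """Check a draft against hard rules and return suggested fixes.

    rules: (rule description, banned opening phrase) pairs — a toy
    stand-in for a second agent reviewing the first agent's output.
    """
    issues = []
    for rule, banned_lead in rules:
        if draft.lower().startswith(banned_lead.lower()):
            issues.append("Violates: " + rule)
    return issues

rules = [
    ("Do not lead with commodities", "Iron ore"),
    ("Do not lead with commodities", "Oil prices"),
]
issues = validate_draft("Iron ore prices jumped overnight...", rules)
# flags the commodities lead instead of baking yet another rule into the prompt
```

The design point is that the rule lives in a checker that runs after the draft, not in an ever-growing instruction set the model has to juggle while writing.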
That is why many prompting frustrations are not really about prompting. They are source problems, access problems, briefing problems or decision problems. The model is often being asked to write before the people around it have done the work of setting the job up properly.
So what does that setup work involve?
We think it comes down to a few broad things:
Gather the relevant source material and remove the model’s access to anything irrelevant.
Define the audience.
Define the job the piece needs to do, including what good looks like.
Set some non-negotiables.
Include a strong example if one exists.
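One way to make that checklist concrete is to capture the setup once, as structured data, and assemble it into the model’s context on each run so the prompt itself stays short. A sketch, not a prescription — the `Brief` class and its fields are our own hypothetical naming:

```python
from dataclasses import dataclass

@dataclass
class Brief:
    """A reusable setup: everything the model needs besides the task itself."""
    sources: list[str]          # curated source material, nothing irrelevant
    audience: str               # who the piece is for
    job: str                    # what the piece needs to do, and what good looks like
    non_negotiables: list[str]  # hard rules every draft must respect
    example: str = ""           # a strong example, if one exists

    def to_context(self, task: str) -> str:
        """Assemble the full context the model sees; the task stays short."""
        parts = [
            "AUDIENCE: " + self.audience,
            "JOB: " + self.job,
            "NON-NEGOTIABLES:\n" + "\n".join("- " + r for r in self.non_negotiables),
            "SOURCES:\n" + "\n\n".join(self.sources),
        ]
        if self.example:
            parts.append("EXAMPLE OF GOOD:\n" + self.example)
        parts.append("TASK: " + task)
        return "\n\n".join(parts)

# With the setup captured once, the prompt really can be three words.
brief = Brief(
    sources=["<today's curated story pipeline export>"],
    audience="Busy listeners who want the business news fast",
    job="A tight morning script: clear, conversational, no jargon",
    non_negotiables=["Do not pick stale stories", "Do not lead with commodities"],
)
context = brief.to_context("write tomorrow's script")
```

The point is the shape, not the code: the brief lives outside any single prompt and gets reused every time the same kind of task comes up.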
Yes, that work can take longer at the start. But if the same kinds of tasks keep coming up, it is much more efficient than asking the model to guess its way through every brief from scratch, or having an editor fix things on the fly in each piece.
The aim is a reusable system.
To be fair, this is where enterprise reality can sometimes get in the way. Approved tools do not always have clean access to the right internal material. So people end up copying and pasting half the context into chat windows, re-explaining things the organisation already knows, and improvising workarounds. A lot of what gets called prompting is really compensation for weak context.
But generally, before rewriting the prompt, check what the model actually has to work with. If the facts, examples and boundaries are not gathered in one usable place, that is usually a problem to fix first.