Originally written: Sep 2024
A fee earner clicks “Summarise” on a 40-page contract. Twenty seconds later, the AI returns a clean four-paragraph summary. Then nothing happens.
That’s the bit that bothers me.
We built summarisation into Proclaim. The document comes back neatly condensed, and the user is left holding it – no obvious next step, no onwards action, no signal to the rest of the system that anything has happened. The summary just sits there. A dead end.

The tool works. Fee earners are impressed by the speed and accuracy. But “impressed” is not the same as “useful”. A summary that doesn’t lead anywhere is a parlour trick. If it’s going to earn its place in someone’s working day, it needs to do something on its own. So the question became: what does “doing something” actually look like?
What comes after is the product
Most AI features stop at the output – the summary, the draft, the answer. That’s where the effort goes, and almost none of it goes into what happens next.
That moment after is where features either become part of someone’s day or get quietly abandoned. A clever output with no next step gets used twice and forgotten, while a workmanlike output that flows into the next bit of work gets used every day.
Watching fee earners use early summarisation, the pattern was consistent. They read the summary, nod, then jump back to their inbox or case file. The AI hasn’t moved them anywhere, so they move themselves. But their job isn’t a single interaction; it’s a chain of decisions. If summarisation is going to land properly, it needs to understand what it’s just read and suggest what to do about it.
The five questions
Instead of trying to classify documents, I focused on follow-through. Five questions run against every summary the moment it’s generated. If they all come back “no”, nothing happens. If any come back “yes”, the summary moves into an action layer.
The questions aren’t clever; they’re practical. Does it contain important guidance or best practice? Does it affect client outcomes or strategy? Does it involve money? Is it legally complex, or worth a second pair of eyes? Is it urgent?
Each “yes” contributes to a pool of possible actions – things like reviewing it, assigning it, sharing it, or putting it somewhere it won’t be missed. The system scores those actions and surfaces the top three as inline buttons, leaving the rest out of the way.
The user never sees the matrix. They just see three actions that happen to be the right ones. That’s the design move: not exposing every option, not asking the user to configure anything, just deciding based on what the document actually is.
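The mechanism above can be sketched in a few lines. Everything here is illustrative – the signal names, the candidate actions, and the equal weighting are assumptions for the sketch, not the production implementation:

```python
# Minimal sketch of the action layer: five yes/no signals, each "yes"
# contributing candidate actions, scored and trimmed to the top three.
# Signal names, action names, and weights are illustrative assumptions.
from collections import Counter

# In practice the yes/no answers would come from a classifier or model call.
SIGNAL_ACTIONS = {
    "guidance_or_best_practice": ["share", "file_as_precedent"],
    "affects_outcomes_or_strategy": ["review", "assign"],
    "involves_money": ["review", "remind"],
    "legally_complex": ["assign", "second_opinion"],
    "urgent": ["remind", "calendar"],
}

def next_actions(signals: dict[str, bool], top_n: int = 3) -> list[str]:
    """Return the top-N suggested actions; [] if every answer is no."""
    scores = Counter()
    for signal, is_yes in signals.items():
        if is_yes:
            for action in SIGNAL_ACTIONS.get(signal, []):
                scores[action] += 1
    # If nothing scored, nothing happens - no buttons are surfaced.
    return [action for action, _ in scores.most_common(top_n)]

# A cost schedule with a court deadline: money + urgency say "yes".
print(next_actions({
    "guidance_or_best_practice": False,
    "affects_outcomes_or_strategy": False,
    "involves_money": True,
    "legally_complex": False,
    "urgent": True,
}))
```

The user-facing part is only the return value – the top three actions as inline buttons; the scoring itself stays invisible.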
Making it real
To see if this held up, I ran it against real summaries. A cost schedule with a court deadline prioritised reminders and calendar actions. A precedent document surfaced for review and sharing. In each case, it matched what an experienced fee earner would do next.
The point isn’t that it’s smart; it’s that it’s consistent. It applies the same judgement every time, without relying on the user to remember what to do next.
What changed
Once the matrix sits behind it, summarisation stops being a destination and becomes a transition. On the surface, the change is small – three buttons under a summary. Underneath, there’s a decision layer determining whether that summary should trigger anything at all, and what that should be.
The summary now pushes itself into the Evo Feed as an actionable item, with the next steps already prioritised. The user doesn’t have to think, “What do I do with this?” The system has already answered it.
This shouldn’t be unique to summarisation
This is the part that matters more than the feature itself. Summarisation isn’t special. Every AI feature has the same problem: it produces something, then stops. The user has to bridge the gap between output and action.
If you don’t design that second half, you end up with a product full of clever outputs that don’t go anywhere. Impressive in isolation, but disconnected from the actual work.
The matrix is one way of solving that – not the only way – but the idea behind it should be everywhere. Every AI output should pass through a layer that asks whether it’s worth acting on, and if it is, what happens next. Sometimes the right answer is nothing. Most of the time, it isn’t.
What’s actually valuable
The interesting work here wasn’t the prompt or the model. It was deciding what comes after.
A good summary is impressive. A summary that turns into the right next step – quietly, every time – becomes part of the job. That’s the difference between an AI feature and a working system.
