Manage the Loop, Not the Artifact
What changed when AI made everyone in the room a designer.
The designers in my one-on-ones eventually ask some variant of the same question: “What is my job now?”
It’s a fair question. This week alone: one of our executives built an ROI dashboard concept on his own. Two product partners brought a dashboard prototype of their own into another meeting, and my design lead wasn’t sure what to do with it next.
If you’re a designer watching this happen, the reflex is to feel cut out. The thing you used to be the bottleneck for now arrives in the meeting in someone else’s hands, and the world keeps turning.
I think this is the best news design has gotten in twenty years. But you have to squint a little.
This is Balsamiq, again
Twenty years ago there was a tool called Balsamiq. You sketched UIs in a marker-on-napkin style, and the people who loved Balsamiq mostly weren’t designers. When somebody walked in with one, my posture was always the same: this is visual thinking. The mockup wasn’t a spec. It was an encapsulation of someone’s thinking — a way to get an idea out of their head and onto a surface where the rest of us could look at it.
That was the move. That’s still the move. The artifact has gotten radically more sophisticated — today’s Balsamiq is a vibe-coded prototype, with real components, real data, and click-through behavior — but the move underneath hasn’t changed. Whether it’s Balsamiq twenty years ago or a vibe-coded prototype right now, I think of it the same way: an encapsulation of someone’s thinking.
What’s changed is that prototyping has become an emergent practice of concept development across the PDLC. Some prototypes start from scratch; others are seeded by design — a story set or a steel thread we elaborated with the team, turned into a prompt. Either way, our job is the same: enable that thinking, pull it into our loop, and synthesize it into something shippable.
That’s not a workaround. That’s the work.
What the prototype can’t do
Here’s what’s already true about a vibe-coded prototype that wasn’t true about a Balsamiq: the person who made it has learned more about the problem than they would have by only writing a PRD. To make a good prototype, the maker had to saturate themselves in (or at least make contact with) the user’s world — the workflows, the metrics, the personas, the visible-at-a-glance moments. They show up to the meeting with a fluency they wouldn’t have had from a brief.
That’s a gift. It’s a real gift. But it’s not a finished design.
Every time, I take screenshots of the prototype and drop them into a FigJam. I need to see the flow through the screens, side by side, with room to draw lines between them and write questions in the margins. The canvas is where the prototype stops being a self-contained artifact and becomes one piece of a larger design problem.
On the canvas, I run the prototype against Marci’s day in the life in my mind. Marci is one of our property-management personas. It’s Tuesday morning, she just walked into the office. Where did she come from to get here? What triggered this view? What is she trying to do — figure out the next move, or step back to see her operation as a whole? How does this view relate to the IA we’ve already shipped? How do the agents she’s engaging with relate to each other behind the scenes? What happens when she clicks the thing? A hundred times out of a hundred, the artifact didn’t ask any of these.
And the irreducible cluster: is this even the right thing to build — for the customer, given what’s technically possible right now, and at a release scope we can actually ship?
These are three different questions, and they all matter. We can rarely take a vibe-coded concept and ship it in a single move; it takes product scoping and a release sequence. The prototype gives us a destination; it doesn’t draw the path.
Anyone in the org can build something a product manager tells them to build. Figma Make can do the same thing. What Figma Make can’t do is push back. It can’t tell me that the thing I think I want to build is the wrong thing. It doesn’t know our IA, our agent infrastructure, or the existing code. And it can’t tell me how to break what it just rendered into something we can ship in two sprints. That takes human conversation in the loop.
All three of those — the should-we, the can-we, and the how-do-we-sequence — are design questions. The prototype is a starting point, not an answer.
So the loop continues. We screenshot to the canvas, we annotate, we ask the questions, and eventually we evolve the prototype into something we can scope, sequence, and ship. Seed → extend → diagnose → evolve. That loop is the design work.
The mental-model trick
When I look across multiple vibe-coded prototypes for the same problem space — and right now I’m looking across at least three different ones for one of our platform views — what I’m actually seeing is two or three perspectives on the same problem, each carrying different assumptions about what mode Marci is in.
One prototype shows Marci a task list. It assumes she’s trying to figure out what to do next. That’s a perfectly valid perspective.
Another prototype shows Marci a process-health overview. It assumes she’s stepping back to see how her operation is doing — maybe she’s heading into a one-on-one with her boss Lori. Also perfectly valid.
My value as the designer isn’t deciding which prototype wins. The trap is to read these as competing answers when they’re actually competing assumptions — about which mode Marci is in when she lands on this surface. She’s in different modes at different times. The design job is to recognize that and weave together a UX that serves her depending on context and trigger. We’ll likely need to build for several of these modes, not pick one — and after enough iteration, the views will look meaningfully different from the instigating prototypes. The relationships between them, the transitions, will have done as much design work as the screens themselves.
Anyone can prototype. But not anyone can name the mode the prototype is implicitly assuming, and not anyone can put two prototypes next to each other and say, here is the assumption the team actually has to decide on.
The altitude shift
So what is the job now?
Here’s the version I keep landing on, out loud in my one-on-ones this week: We’ve created more designers — meaning more people producing draft artifacts. The job is to manage that workflow and turn the drafts into something shippable.
I mean that almost literally. The posture I take with anyone bringing in a prototype is design-director-to-designer — not because they’re junior, but because the artifact in their hands is one draft in a longer loop, and the loop is the job.
Practically, the moves are these:
Take it onto the canvas. Treat the prototype like visual thinking — a Balsamiq, a piece of free design work. Then put it on a FigJam (or your equivalent) where you can draw lines between the screens and write questions in the margins. The canvas is where the prototype stops being self-contained and becomes part of a larger design problem.
Seed the prompt with stories, not buttons. “Help Marci take an action when X happens” produces a more useful prototype than “make a button labeled Action.” Ceding that to “just make me a screen” is where the front of the loop breaks.
Ask the questions the artifact didn’t. Marci’s day in the life. Triggers. IA fit. Agent relationships. Technical possibility. Release sequencing. And especially: what mode of Marci does this prototype assume? When two prototypes disagree about layout, the disagreement is rarely about layout — it’s about which mode they think she’s in. Name it. We’ll likely need to build for several of them.
Own the spec the moment a flow emerges. The spec hasn’t gone away; if the prototype is the encapsulation of the thought, the spec is still the encapsulation of the behavior. The single screen is cheap now. The flow isn’t. Write down what the software has to do at every transition the prototype glossed — which of your systems or modules will handle which piece, and what the conversation between Marci and the system is supposed to be. The spec is where the contest gets resolved.
Iterate toward something shippable, with customers in the loop. Take the prototype back to customers with the right questions — not “do you like this” but how does this fit, where doesn’t it fit, what ambiguities in your business process did we miss, how do humans and AI work together here. The ship-ready version will look meaningfully different from the instigating prototype.
What shifted, what got better
What changed about the making isn’t whether you do it. What’s new in SaaS + AI is the chance to see more vectors — to generate richer ideas about what software can be. We’re all exploring the same old functions — view a task, drill into a detail — no longer from a SaaS page alone, but with a co-pilot sidebar, an in-context chat, a group chat with AI and humans, or an impossibly individualized generative experience. New next to the old. That’s a permission slip we didn’t have before. The shift isn’t really about more alternatives. It’s about more bespoke, more aware, more intelligent software that meets people where they are by introducing new ways of getting work done — work that design has always wanted to be doing.
And we’re not just imagining it. An AI-assisted SDLC brings the expectation that we build all of it, at warp speed and with unyielding quality. The speed of development has accelerated alongside the complexity of what we ship — non-deterministic flows, multi-surface orchestration, conversation as interface. The customer doesn’t see any of that. They just expect the transition to the future to make sense.
We’re witnessing a dramatic step change in the capabilities of the people around us. The right response isn’t to defend the artifact we used to make. The right response is to ask: how should we be adapting to those new capabilities, in the service of the humans who will have to use this stuff that we make?
I’m a come-along guy on this. Let’s all do it. You want to make mockups? Make them. Bring them to the meeting. Tell me what you were thinking.