Building Fast, Building Slow: In Defense of Depth

A case for intentional tool relationships and preserving creative agency

Expanding & Unfinished Projects 

Expanding

A couple of months ago I began writing a speculative piece that turned into a case study on reimagining music discovery. I started with a simple idea: flip the DSP model on its head and weight it toward underrepresented talent.

But then I had to understand why the current system works the way it does. So I read Mood Machine by Liz Pelly. Then I thought, why not build it? 

Each expansion was depth creep. I was unconsciously resisting the “ship it fast” mentality because something felt incomplete, and I felt guilty for it at every stage.

Unfinished

I send love to all the unfinished projects out there, as someone who witnessed the creative process behind many of them while researching her thesis. I’ve experienced the unique privilege of peering into the digital graveyards and creative archives of friends and strangers alike.

Many abandoned pieces represent repeated choices in favor of depth over distribution. Sometimes, drafts are archaeological layers of someone trying to create something enduring while working within rhythms that prioritize frequency over depth. Idea fermentation is real, but the timing of cues to return is ambiguous.

I’ve seen projects that clearly laid the knowledge groundwork for later curiosities. From this research, I learned about the connections that show up between people’s workspaces across different software and physical media, and I began considering what it would take to bridge them into a unified process that detects strong signals of interest on its own. I’ve considered the ethical implications of steering thought, or of offering dials to turn.

My stake in this topic: understanding the creative process sets the scope for the features I look for. This is my learning playground.

During my v1 design process, I was thinking (and hoping) that model architecture would become more visible and pliable to the user over time, and I was curious how integrated AI builds interpretability into its inherent opacity. The way I went about designing my solution reflects that curiosity.

Later, I came across Dario Amodei’s “The Urgency of Interpretability,” which affirmed my wish to see the DNA of creativity and represent it within an interpretable model and its relationship to the user — a model that I didn’t have the software engineering background to materialize (quickly).

Context and Creation

While thinking about how to preserve creative context in my tool design, I came across The Work of Art in the Age of Mechanical Reproduction by Walter Benjamin, who introduced the concept of “aura.”

“The uniqueness of a work of art is inseparable from its being imbedded in the fabric of tradition. This tradition itself is thoroughly alive and extremely changeable. An ancient statue of Venus, for example, stood in a different traditional context with the Greeks, who made it an object of veneration, than with the clerics of the Middle Ages, who viewed it as an ominous idol. Both of them, however, were equally confronted with its uniqueness, that is, its aura.”

Aura — the unreproducible essence tied to specific time, place, and human intent. That music discovery model I brainstormed was a reaction to seeing music stripped of context, origin, human curation — misrepresenting the underlying fabric of tradition. 

How do you modularize a system so that the AI handles pattern recognition while preserving the human context that gives work its meaning?

To me, this means I have to design the boundaries like a careful archaeologist. The AI fetches patterns — frequency of access, temporal clustering, semantic similarities — but doesn’t decide what those patterns mean. The human is the meaning-maker.

I imagine “context containers” — spaces where the human intent, the emotional weight, the cultural significance lives untouched. The AI can notice that I return to certain ideas repeatedly, but it can’t interpret whether that’s because I’m obsessed, procrastinating, or genuinely onto something important. It can flag potential connections between my notes on Walter Benjamin and my music discovery project, but it can’t decide whether that connection is profound or coincidental.

The modularization protects the irreplaceable human judgment about what matters and why.
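
To make that boundary concrete, here’s a minimal sketch of what a context container might look like, assuming nothing about any real tool (every name in it, PatternObservation, ContextContainer, the field names, is invented for illustration). The point is the write paths: the AI gets an append-only channel for what it notices, and the interpretation is only ever written by the human.

```python
# Hypothetical sketch only: these classes aren't from any real library or
# from the tool described above; they just illustrate the boundary.

from dataclasses import dataclass, field
from datetime import datetime


@dataclass(frozen=True)
class PatternObservation:
    """Something the AI is allowed to notice: a signal, never a verdict."""
    kind: str          # e.g. "frequent_access", "temporal_cluster", "semantic_similarity"
    evidence: str      # what was observed, stated plainly
    observed_at: datetime


@dataclass
class ContextContainer:
    """A piece of work plus the human context that gives it meaning."""
    title: str
    human_intent: str              # written only by the human
    emotional_weight: str          # untouched by the AI
    meaning: str | None = None     # the human's interpretation, if any
    observations: list[PatternObservation] = field(default_factory=list)

    def add_observation(self, obs: PatternObservation) -> None:
        # The AI's only write path: append what it noticed.
        self.observations.append(obs)

    def interpret(self, meaning: str) -> None:
        # Deciding what the pattern means stays a human call.
        self.meaning = meaning


if __name__ == "__main__":
    note = ContextContainer(
        title="Benjamin x music discovery",
        human_intent="Test whether 'aura' survives algorithmic curation",
        emotional_weight="High: the thread I keep returning to",
    )
    note.add_observation(PatternObservation(
        kind="frequent_access",
        evidence="Opened seven times in the last two weeks",
        observed_at=datetime.now(),
    ))
    # The AI flags the pattern; whether that's obsession, procrastination,
    # or a real lead is left for the human to write via interpret().
```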

Vibe Coding and Human Agency

The New Creative Process

I resisted AI-assisted coding for some time, because there was something deeply unsettling about creating something I didn’t understand in full. I had to reckon with some fundamental shifts in human-machine collaboration to get comfortable with an unfamiliar learning process. 

I imagined my tool as an architecture of different components that talked to each other, and wondered how I could brainstorm and map that architecture without writing a line of code.

I first wanted to understand top-down architectural principles to clearly communicate intent and specifics of execution. 

I happened across the Parnas papers. First, I read “On the Criteria to be Used in Decomposing Systems into Modules” (1972) and learned some of the founding principles of modularization and information hiding. These are philosophies about how complex systems should be designed with human understanding in mind. When you apply these principles before touching AI tools, you’re better able to maintain the “aura” of software creation: the architectural intention that is more difficult to reproduce.
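
As a toy illustration of information hiding (the DraftArchive name and its file layout are invented here, not part of any real project), the design decision most likely to change, how drafts are stored, sits behind a small interface so nothing else in a system ever depends on it:

```python
# Toy example of Parnas-style information hiding; nothing here refers to a
# real tool. Callers can save and list drafts without knowing how storage works.

import json
from pathlib import Path


class DraftArchive:
    """Exposes what callers need; hides the decision of how drafts are kept."""

    def __init__(self, root: Path) -> None:
        self._root = root                        # hidden: where drafts live
        self._index_file = root / "index.json"   # hidden: how they're indexed
        self._root.mkdir(parents=True, exist_ok=True)
        if not self._index_file.exists():
            self._index_file.write_text(json.dumps([]))

    def save(self, title: str, body: str) -> None:
        """Store a draft; the on-disk layout is an internal decision."""
        index = json.loads(self._index_file.read_text())
        path = self._root / f"{len(index):04d}.txt"
        path.write_text(body)
        index.append({"title": title, "file": path.name})
        self._index_file.write_text(json.dumps(index, indent=2))

    def titles(self) -> list[str]:
        """The only thing callers learn: which drafts exist."""
        return [entry["title"] for entry in json.loads(self._index_file.read_text())]
```

If the storage later moves to a database or a cloud API, save() and titles() keep their signatures and nothing downstream has to change; the intention behind the module survives the rewrite.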

Reading up on the history of software design was an antidote to my concerns about when and where tools will choose to preserve human agency.

I had to answer a practical question: “How am I going to build this fast, and how will I build it slow?” Claude Code became part of that answer, catering well to descriptive longer-form prompting, which neatly put it here in my creative process:

  • Idea

  • Reading up 

  • Imagining its scope — paper only (maybe a paper PRD too, to force specificity)

  • Writing and editing (or other synthesis) at any points in this process

  • Drafting architecture [establish agency]

  • Drafting architecture with Claude Code → overseeing + editing implementation approaches, debugging [relinquish some agency]

  • Reclaim agency 

This is my approach right now. It lends itself to rapid creative experimentation and to useful unfinished drafts that feed the two active projects that aren’t at risk of being abandoned.

But there are some comically insecure vibe-coded apps out there, and I’m painfully aware that vibe coding fundamentally conflicts with a foundational principle of cybersecurity: you can’t secure what you don’t understand.

Computer security analyst and risk management specialist Dan Geer said in an interview that “…the wellspring of risk is dependence,” going on to quote Nassim Taleb’s description of complexity as the enemy of security. These words stay with me and remind me to be keenly aware of my own dependencies, and to treat software with the respect it deserves. 

Prototyping and launching are different ballgames. The “reclaim agency” step above can’t be neglected.

Agency Audit

This process is teaching me to evaluate tools differently. When I’m choosing software, I ask: 

  • At what points does this tool want me to cede control? And can I take it back when I need to?

  • What happens when the tool’s assumptions about my workflow don’t match reality?

  • When something breaks or behaves unexpectedly, do I have enough information to understand why? 

  • Does the interface reveal its logic, or does it hide behind ‘smart’ automation that I’m supposed to trust blindly?

I started paying attention to the moments when tools ask you to trust them completely — handoff points where you either surrender agency or fight to keep it. Notion wants you to trust its database structure. Figma wants you to trust its collaboration model. Claude wants you to trust its code generation. 

The good tools let you peek under the hood when things go wrong, or better yet, they design the handoffs to be transparent from the start.

Tool-finding as Founder-finding 

The subheading to Ken Thompson’s 1984 Turing Award lecture “Reflections on Trusting Trust” reads:

To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software.

My tool-finding strategy has shifted into alignment with founder-finding. Why not discover good tools the same way I discover good friends? When I see a founder who talks openly about their design trade-offs, who builds in escape hatches rather than lock-in, who preserves user agency even when it’s less profitable — that’s someone whose software I want to use.

And this philosophy drives how I’m building my own tool: as a transparent instrument that amplifies what’s already irreplaceable about how you think.

….

Thanks for reading! Consider leaving a comment — what resonated, what didn’t, what I’m missing here, etc. I plan on continuing to write about my process and learning in public as a designer, with a focus on the tools that shape our thinking.

And if this gets any views from people who can recommend something new for my Readwise PDF inbox, I’d appreciate it :)

My Readwise PDF Inbox