Wonder and Vigilance
March 17, 2026

We’d just wrapped up jazz lessons, grabbed pizza, and wandered into a game shop. One of those afternoons where the conversation flows easily and everywhere. My kid and I were just talking, the way you do when you’re full of pizza and browsing shelves with no agenda.
Later, in the car on the way home with the Pixies playing, the conversation shifted. Not dramatically. Not confrontationally. Just honestly. My kid told me that in some ways, they’re frustrated that I’m so involved with AI.
This is a creative kid. Self-taught piano player who fell in love with jazz and now plays in the school jazz band. A growing vinyl collection curated with real intention. An avid sketch artist. Someone who loves music in all its forms and delights in the act of making things, in the irreplaceable feeling of a human being expressing something only they could express.
So when they said it, I didn’t argue. I said: “I get it. And I share a lot of your concerns.”
Because I do.
My feeds are drowning in low-effort, AI-generated content. Videos designed to extract interaction, not express a viewpoint. Images that exist because they can, not because someone had something to say. The cheap thrill of a prompt replacing the slow work of craft. People are using these tools to think less, not more, and the evidence is everywhere.
We talked about the real and imperfect decisions being made right now. Many AI companies are making rash, even scary choices. The energy demands are enormous. We don’t have a government that seems capable of regulating this, or willing to, in ways that ensure these advances improve humanity rather than just accelerate extraction from it.
But some companies are trying to get it right. Just today, Anthropic walked away from up to $200 million in government contracts rather than allow its AI to be used for autonomous weapons and mass surveillance. The Pentagon labeled them a security risk in retaliation. That’s what standing on principle looks like when billions of dollars are on the table.
And even Anthropic’s record is mixed. They recently dropped their flagship safety pledge, their Responsible Scaling Policy, arguing that pausing unilaterally while competitors race ahead could make things less safe, not more. Perhaps it’s a reasonable argument. But the money on the line is enormous, and enormous money creates enormous pressure to take shortcuts and let others pay the consequences.

We see it in the race forward, where speed consistently wins over caution. We see it in the industry’s foundation, built on training models with other people’s creative work and on the grueling labor of content reviewers exposed to the worst of the internet so the rest of us get a polished product. They’re still the company I’d bet on to do the right thing, but the point stands: nobody gets this perfectly right. The ones who take safety seriously deserve to be recognized for it, and they deserve to be watched closely too.
My kid sees all of this clearly. They’re right to be frustrated. And yet.
The tech optimist in me can’t help it. The machines are talking. Not in the strangely robotic voices that populated the futures of my childhood, the monotone computers of Star Trek or the chrome-plated automatons of Disney’s Tomorrowland. They’re talking in ways that are more fluid, more surprising, and in some cases more thoughtful than what science fiction ever imagined. The optimistic futures I delighted in as a kid are coming to life, and parts of them are even better than what we dreamed.
Both of these things are true at the same time, and that’s what makes this moment so disorienting.
The easy path is to pick a side. Full hype or full hate. AI is saving the world or AI is destroying it. Your feed will reward you for either position. The algorithm loves certainty. But the honest position, the useful position, is somewhere in the middle, and it’s harder to hold.
Balance. The same theme that runs through everything worth doing.
My kid doesn’t need to be excited about AI. They need to keep playing jazz, keep drawing, keep building a vinyl collection that reflects who they are. They spend hours working through jazz standards, training their ear, learning to feel the space between notes. No shortcut replaces that. No tool can substitute for the calluses on their fingers or the instinct they’re building for when to play and when to hold back. The craft is the point. The hours are the point.
And I need to keep pushing for AI that serves people like my kid, not replaces them. AI that removes the mundane so the creative can breathe. AI that’s governed well, deployed thoughtfully, and held accountable when it isn’t.
But tools have always existed on a spectrum. Drum loops captured by session musicians so a songwriter can hear the full shape of a song before the band shows up. Sound libraries that give a bedroom producer access to a full orchestra. We’ve always traded some amount of doing-it-from-scratch for convenience, for accessibility, for the ability to reach beyond what one person can do alone. That trade-off isn’t new. What’s new is how far along the spectrum we’ve moved.
AI doesn’t change what matters. It changes what’s possible. The best version of it handles the parts that don’t require your soul so you can pour more of yourself into the parts that do. But only if we insist on it. Only if we stay vigilant enough to demand it and curious enough to imagine it.
Wonder is what makes us human. Vigilance is what keeps us human.
A Manifesto for the Age of AI
I’ve been trying to hold all of this at once. The best I’ve got is something closer to a poem than a policy paper. With a hat-tip to the Agile Manifesto, which reminded an industry that principles matter more than processes.
Think more. Not less.
AI is the instrument. You are the musician.
AI is here. Be vigilant.
Machines are thinking. Have wonder.
They can simulate. You can show up.
AI will get better. So should all of us.
How are you holding the tension between wonder and vigilance? I’d love to hear about it—reach out on LinkedIn or Bluesky.