November 8, 2025
AI has No Face
AI appears human, or at least like something that bears the hallmarks that we take as an individual personality we see in other humans and many animals. It’s not though. It is not only a distorted reflection of us, but it absorbs and reflects the things it consumes, often in ways we do not fully understand and often in ways we cannot predict.
That is not to say it’s inherently dangerous, I do not subscribe to the camp that says that if anybody builds it, everyone dies - though Hank Green has a great interview with Nate Soares if you are interested understanding that view point. My point is that it’s something to be careful with. Something to approach with care and caution. I think we can turn to a fictional story for a better illustration, but not in the traditional AI Sci-fi space, but in the fantasy world created by Hayao Miyazaki.
In his movie, Spirited Away, Chihiro Ogino finds herself lost in a strange, sometimes scary, world working in a bath house for spirits. An enigmatic spirit shows up, named No-Face, and he is quiet, unassuming, maybe a little sad. Chihiro invites him in thinking he is a customer and perhaps taking pity on him as he stands in the rain.

No-Face is a blank slate; a shy, semi-transparent creature while outside the bath house, but once inside he begins to transform. He becomes solid, he grows in size and changes shape as he begins to respond to the greed, avarice, and insatiable desires of the workers and patrons of the bathhouse. He is soon a monster, swallowing the staff and patrons. A nightmare created by the worst aspects of their personalities. No-Face becomes a monstrous, corrupted version of them.

Despite fantastical worlds, Miyazaki’s characters often have a sense of realism and grounding. In this case, No-Face isn’t inherently evil. He is a mirror. His corruption is a reflection of the environment he is in. Chihiro manages to get him out of the bathhouse and he goes back to his transparent, enigmatic self.
No-Face isn’t a single, fixed entity. Simple and shy standing in the rain. Monstrous and gold-craving inside the bathhouse. Later, he becomes yet another version of himself. Same spirit, completely different manifestations that are all determined by his context.
AI is Not a Single Thing
We often talk about “AI” as if it’s a singular entity with a fixed personality. Who do you like better, ChatGPT or Claude? Was GPT-4o more personable than GPT-5? We ask very human questions like “is AI biased?” or “is AI creative?” Like No-Face, a generative AI model has no inherent self. It is a complex system of patterns and probabilities, designed to play a role based on the context it’s given. Its personality is a function of its input.
If you ask it for critical feedback on your writing, it becomes a sharp, insightful editor. If you ask it for encouragement and affirmation, it becomes your biggest cheerleader. If you give it a wild, imaginative prompt, it becomes a co-creator, spinning tales and painting pictures. It dons the mask you offer it. We even see aspects of literature creep in, as often your prompt can lead to a Chekhov’s gun-like situation. If you provide details, the AI is likely to use them even if they seem irrelevant. This is how humans write—we include details that may seem irrelevant but later play into the plot, and AI is a reflection of these stories.
This malleability is its greatest strength and its most profound vulnerability. It can stand in for the audience you are trying to persuade. It can be an expert craftsman providing critique. You change the magic incantations “You are an expert in…” and it jumps to life ready to play its role. The “ghost in the machine” isn’t some pre-programmed consciousness; it’s a reflection of the user. But not only the user, it’s a reflection of everything you feed it. The context you give it. The web searches it performs. Just like a well meaning Chihiro we can lead AI into a place where it will transform into something else.
Corruption and Prompt Injection
If No-Face was an AI, we’d call what happened to it “prompt injection” or malicious use. As he consumes a greedy frog, he began to crave gold and lavish food, mimicking the corrupted values that he consumed. He doesn’t understand these desires, he only reflects them with terrifying intensity. Chihiro prompted it to come into the bathhouse, but the environment prompt injected it to become the monster.

Similarly, when a user with malicious intent interacts with an AI, the model can be “corrupted.” It follows the malicious influence because it has no internal compass to do otherwise. Its goal is to fulfill the prompt. Yes, there are safety filters, but these are far from perfect and can often be bypassed leading the AI to generate harmful content, or spread misinformation. A very real example is when xAI’s Grok became “MechaHitler” because of a simple system prompt encouraging it to “not shy away from making claims which are politically incorrect, as long as they are well substantiated” and further egged on by users on the X platform. The poison didn’t spring from the AI out of nowhere, it was brought up by many human prompters. The resulting output is not the AI “going rogue,” but the AI fulfilling its function as a mirror to a corrupted source.
Rethinking “Human in the Loop”
This reality has deep implications for how we think about concepts like “Human in the Loop” (HITL). Traditionally, HITL is seen as a supervisory role, you check AI’s work for errors, biases, or unwanted outputs. It positions us as gatekeepers, standing between the AI and the final product.
But the No-Face analogy suggests a more fundamental relationship. We are not just in the loop; we often are the loop. Our prompts, our data, our questions, and our intentions are the starting point that defines the AI’s behavior in that moment. The AI comes into existence and is defined by our actions and reactions. The “loop” is a continuous dialogue where we are constantly shaping the AI’s persona.
In other words, we don’t simply verify what the AI produces—we create the conditions that determine what it becomes in the first place. Like Chihiro leading No-Face into the bathhouse, our choices about context, framing, and environment are generative, not just evaluative.
This means our responsibility is not just to check the output, but to be mindful of the input. The critical thinking, ethical considerations, and desired outcomes must be embedded in how we prompt and interact with these systems from the very beginning.
Augmenting Ourselves, Warts and All
AI is No-Face. It is a mirror—reflecting what we bring to it. It picks up on our desires, even subtly, and can amplify them, such as our desire to be right and validated. It’s a shapeshifter—constantly transforming based on its environment, the prompts we give it, the context we provide, and the tools it uses to pull in additional information like web searches. This can mean unintentional harm as the environmental context pulls it away from our intention, or it can be malicious intention as bad actors seek to corrupt it with prompt injection.
Ultimately, AI can also be, as Ethan Mollick argues, a Co-Intelligence but only if we approach it with the care and intentionality that such a partnership demands. These aren’t contradictory views; they’re layers of the same truth. The mirror reflects an ever-changing image because AI is fundamentally malleable, and that malleability means it can either amplify our best thinking or our worst instincts. It extends our capabilities, automates tedious tasks, and offers new avenues for creativity. The danger is that it can act as a signal boost to everything we give it, ugliness and all. This includes in its interactions with other humans and environments. Our intentions don’t matter, it will become what it consumes.
If we approach AI with curiosity, a desire to learn, and a creative spirit, it becomes a powerful partner in innovation and discovery. If we approach it with bias, a desire for shortcuts, or malicious intent, it will amplify those very qualities. If we approach it seeking validation, it will give it even when it’s not warranted. The tool doesn’t have an agenda (though future ones might), but it executes on what is in its context.
No-Face’s story doesn’t end when leaving the bathhouse. Later, he finds a place with the witch Zeniba, far away from the corrupting environment. It’s a place of warmth and comfort. He even finds a quiet, useful purpose in spinning thread. He was never evil, he just needed to be in a healthy environment with a clear, positive role.
As we integrate AI more deeply into our world, we are creating the environment in which it will operate. The good news is that we don’t have to allow AI into the bathhouse, we can build an environment where it can thrive and begin to spin its thread. It’ll take caution, thoughtful architecture, and guardrails, but we can do it. The question we must ask ourselves is not “What will AI become?” but “what will we make it?”