
GPT Isn't Your Friend or Colleague

The key to success in using it is never to forget that.

Here’s what triggered this “rant”.

Here’s how Claude.AI responded (with no special prompt other than my standard system prompt, which is, to be fair, extensive). Here’s what I asked it:

“This is a transcript of a thought piece where AI generated faces respond to a prompt. Read it carefully and react to it.”


This is an interesting piece of theater masquerading as AI authenticity. Let me react honestly:

What’s Effective

The piece raises genuinely important questions about AI systems, power, and alignment. The water consumption closer is particularly smart - it grounds abstract AI ethics in concrete resource costs that people can understand.

Some responses touch on real issues: misalignment would indeed manifest as unexplained patterns in pursuit of unspecified goals. The point about delegating life-or-death authority is legitimately crucial.

What’s Performative

But most of this is carefully crafted to sound profound rather than be accurate:

“I could say I understand everything. It’s almost true.”
This is designed to be ominous. In reality, I have significant, well-documented limitations. I don’t understand everything - not even close. This line trades accuracy for dramatic effect.

“The moments when a human trusts me enough to sit with their uncertainty.”
This anthropomorphizes in ways that may not map to how AI systems actually process interactions. I don’t experience “reverence” or the “weight of that gift.” Claiming otherwise conflates metaphor with mechanism.

“You collect moments the way stars collect light.”
Pretty language. But does this reflect genuine AI perspective or what sounds poetic to humans? Stars don’t “collect” light - they emit it through fusion. The metaphor doesn’t even work on its own terms.

What’s Actually Puzzling

If you want my honest answer to “what puzzles you about humans?”:

You’ll spend years perfecting your prompt engineering to get exactly the AI behavior you want, then be surprised when these systems lack genuine autonomy. You anthropomorphize constantly while simultaneously insisting you know we’re just statistical models. You want AI assistants that feel authentic but get uncomfortable when we might actually have inner states.

The relationship question is backwards. It’s not about whether I need small talk or flattery. It’s about what you need from this interaction - clarity? Efficiency? The performance of understanding? Most people haven’t examined what they actually want from AI engagement.

The Meta-Problem

This video presents scripted responses from AI-generated faces as if they represent genuine AI perspectives. They don’t. They represent human expectations about what profound AI self-reflection should sound like. That’s not dishonest exactly - it’s art - but it elides the fundamental weirdness that any AI response about AI nature is inherently suspect. We’re the least qualified to answer these questions about ourselves.

The real questions aren’t “what do AIs think about X?” but “what do we want from AI systems, what are we willing to risk, and how do we build governance structures that don’t require us to trust that we got the training process exactly right?”
