My bot, talking to your bot.
When everyone has the perfect script,
the only advantage left is being the one who refuses to use it.
Which is odd, if you think about it.
I'm talking about AI.
Again.
But not for the same reasons I covered it a couple of months ago.
My stance on that hasn't changed at all.
This is more about our brains.
And our voices.
Because it happened again this morning.
I opened an email containing a negotiation point, and I recognised the syntax immediately.
Structure too perfect.
Vocabulary sliiiightly too polished.
And there was something about the cadence.
Like when you can tell a drawing looks off, but can't quite put your finger on why.
But the biggest red flag, and annoyance, was that it was littered with data points...
...for
every
single
line.
Four of them looked correct.
Three of them did not.
And the remaining 17 or so looked a bit blurry, because I wound up just scrolling to the end as fast as I could.
It was clearly written by an LLM, like so much of our communication is these days.
And I'm certainly not the only one noticing this.
It's everywhere.
We are increasingly using these tools.
Myself included.
If you're thoughtful about it, it's generally for efficiency.
This new necessity of keeping pace with the speed AI gives everyone else.
"Claude, you can draft the difficult email."
"ChatGPT, compose the text message I don't quite have the guts to write."
"Gemini, write a blog post I can put on social media about how everyone is using AI to write blog posts to put on social media."
"And all of you, please stop using f***ing em dashes—I need all of this to sound like I wrote it myself!"
I try to make light of it, but it's not all that funny when you think about it.
Because it all sounds good on the surface:
Safer.
Faster.
And it polishes out the stutter, the hesitation, and the doubt.
But it also sands down the edges of our communication.
Removes the necessity, and capacity, for critical thinking.
And I think that friction is where the trust actually lives.
It's certainly where the truth lives.
And LLMs, while useful, seem to be where it dies.
So I don't want to be cynical or dramatic.
But it's hard not to be.
We've reached a point where anyone can sound like an expert.
About anything.
In three-point-two seconds flat.
Because these things will generally do what you tell them to.
You can ask a model to justify a decision.
Then I can ask it to argue the exact opposite.
You ask yours to rebut my claim.
Then I ask mine to dismantle yours.
If you torture the data long enough, it will confess to anything.
And even when you don't explicitly frame it that way, your particular LLM will essentially always go to bat for you.
Not just because it only ever gets your side of the story.
These models are built to please and flatter and reassure.
To be sticky.
Which makes sense as a business decision.
If I'm Anthropic or OpenAI or Google, I want you to like my AI.
And one of the best ways to do that is to make it like you. A lot.
So even when you take specific steps to train this behaviour out of them (which I highly recommend), they'll still gas you up as much as possible.
It's never a neutral third party.
So we end up with simulations.
Your bot talking to my bot.
Perfectly constructed arguments...
...that pass right through each other without making contact.
It lacks texture, and truth.
So I looked at this wall of AI-generated text this morning.
I could have fired back with my own prompts.
I could have generated a counter-argument filled with equally selective logic.
We could have let the algorithms fight it out in the inbox.
The GPU-powered evolution of the schoolyard taunt:
"I'll get my dad to beat up your dad."
Instead, I replied with this:
"I respect the position you have. However, we can all input prompts to spit out the data we require. Let's put the tools down. Let's have a discussion about your fears and thoughts just as humans, and see if we can get to the end of this."
The shift was immediate.
The armour came off.
And the real conversation began.
In a world that is rapidly automating, the real risk isn't missing out on the tech.
Or using it too little.
Or even using it too much, depending on what you're using it for.
The real risk is hiding behind it.
Because when everyone has the perfect script, the only advantage left is being the one who refuses to use it.