I just received an obviously AI-written message in a conversation thread with a potential partner (not the first time). Big turn off. I won’t be working with him.
But it got me thinking…
No AI should be used to replace oneself in digital dialogue.
ChatGPT is really, really good.
And its ability to write contextual, coherent, fluid copy is honestly mind-blowing.
Even so… it’s still not human.
As of today, most people can tell whether something was written by a machine or a real person.
What’s worse, this guy copied and pasted straight from the chat window and forgot to delete the (1/2) and (2/2) markers.
Real human writing has an intangible element AI can’t mimic just yet: real communication is a natural extension of normal thought and discourse.
And AI can’t do that yet.
If something sounds even a little bit off, people will know.
And now everyone inherently suspects that ANYTHING they read online might have been written by ChatGPT. Lol.
Still, we’re seeing more and more people generating responses to job applications, emails, or even direct messages using AI.
Bad move. It’s good for a lot of stuff. Not that. Not yet.
My take on the guy I mentioned: “If you default to automating 1-on-1 conversation, why would I want to collaborate with you? If you send me badly executed AI in our very first exchange, what else will you take shortcuts on in our work?”
There’s a nuance to knowing what to automate and what not to.
We have to be careful not to sacrifice quality by turning to AI for the things that really matter.
Conversations with others should fall into that bucket. At least for now.
Just my 2¢.
🎧 Just broke it down on the podcast: https://lnkd.in/gBpbCXdD