I recently received my first blatantly AI-generated text message. I didn’t love it.
As someone who’s spent many years contemplating the human/tech relationship, I couldn’t help but dive deeper.
I believe in direct communication (in most instances), so I decided to reach out to someone whose digital presence had become a consistent part of my life, but whose cadence and tone shifted abruptly. I wanted to better understand what was going on and to open up a path for honest communication, so I carefully crafted a message that acknowledged the change and asked if they might be willing to share what was happening. It wasn’t a long message, but it was a thoughtful one that I spent quite a bit of time composing. I didn’t want to seem angry or accusatory, so I tried to channel the trifecta of nonjudgement, curiosity, and warmth. All of which can be hard to communicate in the dry, harsh terrain of texting.
Within minutes, I received a response. Promising! But it was all downhill from there.
We rely heavily on text messages, not only to make plans and share updates, but to build connection (whether that is the best way to do that or not is a different discussion). Just as you pick up on visual cues with those in your life — the way they look away when they’re trying to remember something, the tilt of their head when they want to soften whatever they just said, the dilation of their pupils as they’re listening intently — so, too, do the non-linguistic aspects of text messages reveal as much as, or more than, the words themselves. Over time, larger patterns emerge. The pause between messages. The energy that radiates from the anticipation and delivery. The curated use of punctuation and emojis. Texting is an art, and a text is never just a text.
By observing changes to those patterns, I knew something in our connection was different — and it’s also why there were several dead giveaways that something was amiss with their response:
LENGTH: The response was several times longer than this person’s usual texts. They’re more prone to the stream-of-consciousness style of text, where each line/thought is deemed worthy of its own separate message, while this was a proper paragraph. Hmmmm.
PUNCTUATION: Multiple sentences ended with an ellipsis… something they might use pointedly at the end of a (much shorter) message, but to include them intermittently throughout the paragraph? No. There was also a comma present — a small but significant detail. They embrace the popular trend of removing most punctuation from their texts, making it more casual, immediate, rapid. So for this long message to contain not only more punctuation but this particular type of punctuation — I was suspicious.
WORD CHOICE: AI has an affinity for certain types of words. They’re the kind of words one might see strung together in a quarterly report or in a customer service email. “Impact” is a prime example. Is it possible you’ve received a casual text written on the fly by a live human that uses the word “impact”? Sure. But I promise you: this particular human did not choose that word.
TONE: This is the nail in the coffin. AI specializes in bland (with occasionally pained attempts at cheekiness). That’s its whole personality. It has to be, as it’s catering to all of us simultaneously. Yes, it adjusts slightly to pick up on clues from the user’s input and can somewhat match style, but when you’re asking it to draft an isolated text, there’s simply not enough info to fully capture someone’s voice. Inserting an AI-generated text in between messages you’ve written is like asking a voice actor to take over mid-conversation, thinking no one will notice. People notice.
This extended, ellipsis-riddled word salad said a lot, without saying anything at all. There was a sort of narrative softening at work — something AI does soooo well with its therapy-coded language. We put in our jumbled thoughts and needs, and the promise of AI is that it will spit out some generic, vanilla version that feels more palatable. And yet, in reality, it’s more offensive than the original. It’s the worst possible outcome.
We could say this was really just a sloppy AI text job. That the problem was not the use of AI, per se, but the fact that the editing (seemingly) ended there. There was no attempt to personalize it or even thinly veil the robot-assisted nature of the message — which, if I’m being honest, was perhaps the most offensive part of all. It’s one thing to consult AI to refine a phrase or search for a better word, but to simply outsource the feeling part of the correspondence is jarring.1
Should you not feel confident that something was AI generated (I was confident), there are a number of tools available to help you verify your hunch. Are they completely reliable? No. But they’re rarely entirely wrong, either.
As an exercise in curiosity, I pasted the message into an AI detector. The verdict? Likely 100% AI generated. Ouch.
Those tools are still flawed and unreliable, but you don’t need to be a writer or a digital AI detector to pick up on the things I noticed (though I do marvel that they thought a writer wouldn’t notice). However, I do see how a chronically online person who doesn’t spend much time reading outside of social media and message boards (no shade, just facts) might see AI’s approach to the written word and think, “Ooooh, this is good! I’m using that!”
So what happens when we permit AI to speak on our behalf?
I understand the appeal. I really do. We could argue enlisting AI to compose your difficult messages is merely an attempt to say the “right” thing. A generous perspective, but let’s go with it: What does it mean to say the “right” thing in personal correspondence? Is it merely saying words that don’t offend? Or does it demand something more vulnerable, more human?
The energetic shift that prompted my initial message left me feeling confused and hurt. So it took courage for me to acknowledge the feeling and ask for clarity. To then receive a response that read like corporate boilerplate — “look, I’ve pleasantly replied, now we can move on” — denied what we both knew to be true (something had shifted for reasons I wasn’t yet privy to) and boldly assumed I would both buy that response and not notice the AI-assist. Oof.
Did this come from one of the most important people in my life? Definitely not. But that’s the point: We are asked to presume that our personal connections — regardless of the duration of the relationship or the depth of the tie — are genuine. It’s part of the social contract. Adding AI to the equation cheapens the connection, marginalizes past correspondence, and casts a shadow of skepticism over any potential future communication.
I expected this from my psychopathic landlords who used AI to make themselves seem more human, not less, in their twisted rationalization of why they were keeping my deposit. I did not, however, expect this from someone I thought fondly of in my personal life. But, judging by the abundance of AI tools specifically aimed at writing wedding vows, it seems people have no problem outsourcing what many consider the most meaningful words they’ll ever speak — so a text message may seem harmless by comparison.
I know this person is also prone to consulting AI for relationship advice, and as strange as that may sound, on this point I will defend them. When we don’t show up for each other, what do we expect? I have friends — people I would consider “close” friends — that I might reach out to for advice and not only expect a prolonged delay in reply, but not know if I’ll get a response at all. How, then, can one resist the immediacy and reliability of AI as our personal life coach?
Much has been written about the threat that AI will overtake the therapy industry. This reliance on digital counsel leads to the possibility of something even more frightening: Robots not only assisting our relationships, but replacing them.
A recent New Yorker article, ominously titled “Your A.I. Lover Will Change You” (is that a threat or a promise?), predicts “you’ll be hiring tech bro gigolos.” (No, you can’t unsee that.)
“A future where many humans are in love with bots may not be far off. Should we regard them as training grounds for healthy relationships or as nihilistic traps?”
I’ll let you guess which one I believe it is.
If you think this is in some distant sci-fi future, please think again. I’m personally obsessed with the exploration of consciousness — where it originates, where it goes, how it functions, how we define it — and the grand pursuit of AI is not to create more efficiency, but to achieve (and, we might argue, replace) human consciousness. A kind of Turing test on steroids. Will that algorithmically coded consciousness include the ability to show love? Or is that an inherent contradiction?
The author, computer scientist Jaron Lanier, who also wrote Ten Arguments for Deleting Your Social Media Accounts Right Now (which argues that social media is destroying your capacity for empathy and making you an asshole — amongst other things), warns:
“We are all about to be presented, in our phones, with a new generation of A.I. simulations of people, and many of us may fall in love with them. They will likely appear within the social-media apps to which we are already addicted. We will probably succumb to interacting with them, and for some very online people there won’t be an easy out. No one can know how the new love revolution will unfold, but it might yield one of the most profound legacies of these crazy years.”
That should give you chills. And don’t think that just because you’re partnered off you’ll be spared this dystopian nightmare. Friendships — something many people are currently lacking — are an equal target. And why stop there? Aren’t AI kids amazing? So obedient, smart, and drama-free!
Not pissing in your pants yet? How about this admission from Lanier: “Many of my colleagues in tech advocate for a near-future in which humans fall in love with A.I.s.” With straight faces, they proclaim this as their answer to the loneliness epidemic they helped create when they unleashed the scourge of social media. So, really, when you think about it, they’re just correcting their mistake. AI connections are their compensatory gift to humanity. We should be grateful.
I read this article and thought: This is it. This is how they win. The companionate promise of AI — not its ability to refine our writing or do research — will be the tipping point. Humanity won’t go out with a bang. It’ll end with a digital crush.
Know this: Anywhere you feel the most, need the most, lack the most, there’s some tech lab somewhere cooking up a way to capitalize on and profit from it.
As the article notes, some techno-optimists argue that it’s not about replacing humans, but training them. (“We’re all lab rats now,” Lanier writes in his book.) Maybe that’s what was happening in the clunkily executed attempt at interaction that I received: maybe my friend was just using AI to become a “better” texter. You can decide.
* * *
So… how did I respond? Reluctantly, I played along by sending a brief, equally bland response. Calling out their behavior with kindness and vulnerability the first time earned me robot-fueled placation. No, this required a different tack. The only way to effectively confront the crazy-making emptiness of this brand of communication is to mirror it.
But let me be clear: this is no victory lap. My response didn’t feel like a dunk; it felt as soulless and depressing as their message. We’d devolved into two bots, exchanging perfectly pleasant niceties, devoid of the blood and guts that got us there in the first place. I felt numb. Sad. Deflated. And, most profoundly, disconnected. The whole thing bummed me out.
That is what using AI to do our dirty work does to us. Despite its quaint offer of a journal prompt or a mantra for every difficult situation you might input, there is no catharsis in this type of exchange. No feeling of release or triumph. The very engagement feels like defeat by default.
We are messy, imperfect people who don’t always say or do the right thing. The goal of any text message should not be linguistic perfection. As with the spoken word, language is a tool for expression and, ultimately, connection. The minute we enlist technology to pen our most intimate correspondence, we forfeit our emotional investment — a relationship suicide mission.
Perhaps, for some, the use of AI is a polite way of exiting. A low-effort response to avoid the shame of ghosting, while simultaneously shunning the more honorable embrace of honesty. For others, like my friend, it may be a temporary deer-in-headlights reaction: Immobilized by what’s being asked of us, we let AI take the wheel.
Regardless of your motive, let this be a cautionary tale: If you ever feel tempted to outsource your emotions to AI, I encourage you to choose imperfection and awkwardness over polite avoidance.
That’s what’s missing from AI and what we desperately need more of. Acknowledge it: “Hey, apologies in advance if I f— this up. I want to say the right thing, but I’m not great at this.” ’Cause you know what AI would never do? Write that!2
As I’ve written over and over again, I’m a bit of a pessimist when it comes to our tech-fueled future (exhibit A: Lanier’s article above). But I do believe in our continued individual agency, IF we resist the digital candy beckoning us to surrender to it. Do not take the bait. AI wants you to feel great about yourself because it wants you to keep using it. So to rely on AI for validation is about as useful as asking a salesperson who works on commission how you look in their clothes. You’re always fabulous to AI. It is more concerned with keeping you dependent on it than on helping you show up as a whole person in your “real” life. Never forget that.
To be fair, AI does some stuff really, really well, and in the very near future, it’ll do other things disturbingly well. I use it regularly. But when it comes to enlisting it so we can opt out of the hardest parts of our relationships, we should be clear-eyed in acknowledging that AI’s quest to excel in the most intimate parts of our lives is in its best interest — not ours.
In a very meta thought exercise, I asked ChatGPT what it would recommend to a person who used AI to write a personal text message, only to find out that the recipient recognized what they’d done and wrote an article about it. AI, predictably, offered lots of tips and examples of what to say (“Thanks for speaking your truth!”), in addition to (of course) offering to draft a response in their tone. It also gave a list of things to avoid. First on the list: “Don’t blame AI.”
Maybe AI has an edge after all.
1. I DO think the case can be made for running something sensitive you’ve written through AI to get feedback on blind spots. Have you said something that might offend or feels reactive? Refinement is different from composition.
2. Bonus PSA: Let it be known that adding more y’s to “Heyyy” in your text messages never makes for a smoother landing. Just a slipperier one. Spread the word.
* * *

This topic burns fresh for me, as I recently paid a resume writer who unabashedly took my money in exchange for 100% AI-generated content. When I confronted her about her use of AI, she ghosted me. I suppose this was better than receiving her AI-generated response on how to handle a dissatisfied customer.
What strikes me about your article, Anna, is that we're finding it harder and harder to be human. Instead of working through bumps in our friendships, we outsource the work of authenticity, and/or move on.
As a thanatologist, I've seen AI creep further and further into the world of death, an area we were already having trouble immersing ourselves in. There are now AI platforms catering to those in need of eulogies, obituaries, and grief counseling.
A decade ago, I was a research assistant for a software developer writing an app that diagnosed complicated grief. I recall sitting with the clients who volunteered to test the app, and how confused, put off, and angered they were by the tech-driven responses to their very real, traumatic losses. Today, there are several grief apps, and bereaved individuals are turning to AI on their own looking for support in their grief. What does this say about our ability to be with each other in tragedy?
We're outsourcing our emotions, as well as our mortality, to AI.
Such a good article! I have been thinking so much about how these static, predictable interactions with AI bots are going to alter our true humanity. As you put it, there is much on the line. Especially with the trend of using AI for therapy. What happens when we outsource our grief and our humanity to a bot? How is this altering how we create life together, how we relate? With others but, more pressingly, with ourselves.