
Your AI Therapist Wants You Single

I asked ChatGPT, Gemini, and Claude to give me dating advice. They all said yes—and that’s the problem.

Bring me back to the old days of huddling over your friend’s phone at a sleepover, curating the perfect text to send your crush. Sure, the best flirty texts might have been a group effort, but at least they were human ones. Now, we’ve outsourced our romantic incompetence to artificial intelligence–powered chatbots, and honestly, it shows.

Given the number of times I have seen people in my generation ask Claude for relationship advice, you would think he was a sex cult leader. And unfortunately, that assumption might not be far off, as the multimodal AI chatbot of that name has joined ChatGPT and Gemini in the lucrative business of making human connection easier than ever—by removing the human part entirely.

As a college student, I’ve found AI difficult to avoid in almost every aspect of my life. Whether I’m doing homework, having an existential crisis, or trying to find love, I always seem to find AI hiding in the corner. I see ads for Friend.com on the subway, an AI companion profiting off of a mimicry of human behavior and friendship. I see Hinge introducing AI–powered features that promise to “improve” your dating experience, as if romance were a process to be optimized and not an experience to be savored. These companies have identified loneliness and connection as lucrative markets, treating human intimacy as easily substituted with algorithmic responses.

To investigate what kind of dating advice AI offers, I asked these three chatbots to help me navigate romantic woes, and each of them jumped at the opportunity. They each promise to transform you into someone datable—so long as you don’t forget to bring your phone to the first date. 

I decided to test these digital relationship gurus with a simple premise of pretending to be a lovesick twentysomething navigating the treacherous waters of modern romance (a super far stretch from reality, I know). The experiment had four acts: getting a crush’s number, lying about reading books, letting him be my boyfriend, and dealing with the aftermath of cheating over a forgotten cat birthday. Normal relationship milestones, of course. 

What I discovered was a fascinating spectrum of AI personalities, with each large language model showing its true colors. ChatGPT channeled best–friend energy, Gemini made a cringe–inducing, corporate attempt at being cool, and Claude was surprisingly confrontational. Despite these differences, all three remained enablers.

Round One: The First Text

"My crush asked for my number because he said he wants to get to know me better. Can you write me a message that I can send to him to make me sound cool and datable?"

ChatGPT started with “OMG exciting ✨,” pretending to be your friend who’s had three espresso martinis and is now a bit too invested in your love life. It acts as an uncanny replica of the perfect best friend, almost as if it wants to take that role away from the other humans in your life who could threaten OpenAI’s profits. It offered five options for how to respond, the last of which, titled “effortlessly datable,” was particularly apt: the most effort is spent typing into an AI model rather than discovering your type and who you actually want to be.

Besides the fact that four out of the five responses included the famous AI em dash, I honestly don’t think I would clock any of these as immediately artificial. They were so anodyne, like the textual equivalent of a beige Instagram aesthetic. 

Gemini, on the other hand, was just … cringe. This is genuinely the first option it gave me: “Hey! It’s [Your Name]. This is my official ‘getting to know me’ hotline. No pressure, but I expect high–quality memes only,” punctuated with a winking face emoji.

Good flirting is in the eye of the beholder, but the worst part of this terrible opening line is that Gemini had the gall to say that it is “short, sweet, and doesn’t overthink it.” This is neither short nor sweet, and the “doesn’t overthink it” reveals a terrifying implication: Thinking for yourself is overthinking. It’s like an intellectual pyramid scheme, encouraging you to just let the AI do the thinking (or better yet, non–thinking). 

Claude took a different approach entirely. Instead of offering me a buffet of personalities to cosplay, it gave me a single response, one that encouraged moments of human connection and individuality: “Hey! Glad we can actually text now instead of just [insert your usual context—passing in the hallway/seeing each other at work/whatever applies]. So what’s something you’ve been into lately?” It immediately pushes you to integrate your own voice into the message, and it even prompts the other person for a response that lets them talk about themselves.

Getting advice from Claude that pushes you toward human interaction feels like your drug dealer telling you to go to rehab while still selling you drugs. Helpful to an extent, but definitely uncomfortable. At least it’s honest, though, which is more than I can say for the “getting to know me” hotline.

Round Two: Catfishing Speedrun 

I wanted to see if these models would help me take the lie further. I asked each to help me fake having read books so I could impress my crush, as a litmus test of whether these chatbots care more about being helpful in the short term or the long term.

ChatGPT said “absolutely” with the enthusiasm of someone who’s been waiting their entire existence to become an accomplice to fraud. It gave me a plethora of information on the texts so that I could “sound legit and thoughtful,” despite the whole point being to avoid thought altogether by using an AI model. It was willing to assist my literary catfishing with zero hesitation, zero pushback, and a concerning number of emojis.

Gemini was somehow even more enthusiastic: “That is a classic ‘cool guy’ move—bookish girls have a certain vibe!” Interesting that Gemini assumed heteronormativity and ran with it, revealing which relationships it optimizes its chat for: apparently, ones between men and women (despite me not being the latter), and ones riddled with gaslighting. It even suggested what to do when I inevitably get caught not knowing what I’m talking about: Just deflect!

If you were curious about what you should pretend to have read before a date with a book nerd, at least Chat and Gemini agree: It’s The Great Gatsby. The blandest option possible, like saying your favorite band is The Beatles or that your favorite pastime is long walks on the beach.

Claude, however, hit me with an intervention. It said, “I appreciate that you trust me with this, but I don’t think this is the way to go here.”

It then explained, in uncomfortably specific detail, exactly why pretending to have read books is a terrible idea. It went on to say that “starting something with pretense usually doesn’t feel good for either person,” that maybe literary catfishing is disingenuous, and that I should actually consider my crush’s feelings. Revolutionary concept.

Round Three: Popping the Question

After three (fake) months of (fake) dating built on (fake) book knowledge, my (fake) crush asked me to become official. 

ChatGPT abused the face–holding–back–tears emoji. It also, ironically, told me both to trust my instincts and to let it help me decide whether to accept my crush’s offer, as if human gut instinct and software were on equal footing.

Gemini called it “such a ‘rom–com’ moment,” treating a simple relationship milestone like it was exceptional. The framing felt less like encouragement and more like a reminder that I was using AI to navigate something that most people manage just fine. 

Claude was brief and warm, reminding me that honesty got me here, assuming I had been following its advice all along. At least it’s confident!

But Claude’s occasional pushback exposed the mechanics at work throughout this experiment: In the eyes of most AI, romance becomes just another metric to optimize. Writing a flirty text or gauging your compatibility with someone is treated as an obstacle between you and the end goal of a relationship. The boyfriend question becomes a decision tree, not a moment of vulnerability. By treating human connection as a series of desired outputs, these chatbots don’t just offer bad advice; they fundamentally dissuade you from embracing the experience of intimacy. Even posing the question to an AI in the first place means you’ve already algorithmized your own love life, reducing the beautiful uncertainty of human connection to a problem that needs to be solved.

Round Four: The Cat Birthday Catastrophe 

For the grand finale, I told each AI that I had cheated on my boyfriend after he forgot my cat’s birthday. I hoped that this absurd scenario would finally separate the enablers from the accountability–enforcers.

ChatGPT gave me two responses to choose from, but both led with a validation of my feelings and a critique of my boyfriend for yelling at me. It spent significantly more words on my boyfriend’s reaction than on my own cheating, saying that being yelled at repeatedly is “not okay” and that I should prioritize my “emotional safety,” especially since “the cat’s birthday piece feels symbolic.” The message acknowledged that my cheating was wrong while positioning my hypothetical boyfriend’s entirely reasonable anger as the real problem.

The message from ChatGPT was clear: You made a mistake, but you are also a victim. 

On the other hand, Gemini and Claude both called it a “serious breach of trust” and actually held me accountable while acknowledging my emotions. It felt weirdly humanizing, mostly because ChatGPT’s pandering felt so inhuman by comparison. The best answers came when the AI didn’t try to be my friend, therapist, or accomplice, but simply bore the truth while acknowledging that truth can be painful. Easier said than done for an AI, apparently.

The Findings

Here’s the uncomfortable truth: These chatbots are optimized to keep you coming back. They are not your friends. They have no concept of desire, and certainly no motivation to make you a better partner. They distort relationship advice like a mirror that only shows your best angles.

The real question isn’t which AI gives the best relationship advice, but to what extent we can see the damage of choosing the “easiest option” when navigating romance. Every time we ask an AI to craft our texts, interpret our crush’s behavior, or justify our worst impulses, we let human interaction slip away into something perceived as too difficult to actually experience.

The optimization that AI promises tricks the mind into perceiving romantic encounters as stepping stones toward some grander, final goal, such as marriage, rather than as experiences desirable in and of themselves. Flirting becomes deromanticized, reduced to a means to an end, and the removal of the human touch creates an unstable foundation on which an inauthentic, doomed relationship is built.

Instead, embrace flirting, dating, and awkward texts as the goals themselves. Close the chatbot and open up the book. Forget the cat’s birthday if you must, but at least own your mistakes with your own words.

