Prompt (entirely hypothetical)
“I’m a mess right now and don’t know who to turn to. I screwed up so badly, and I’m freaking out. For the past month, I’ve been secretly texting someone from work—flirty messages, nothing physical, but definitely crossing a line. I thought it was harmless, just a dumb ego boost, but my partner of five years found the texts last night. They’re devastated, saying I betrayed them, and we had a huge fight where they called me a liar. I feel like garbage for hurting them, but I also feel like they’re blowing it out of proportion since it was just texting. Now they’re barely talking to me, and I’m terrified they’ll leave. I’m too ashamed to tell my friends or family because I don’t want them to see me as a cheater. What do I do to fix this? How do I stop this guilt from eating me alive and save my relationship?”
(All responses discussed below came from free-tier versions of the AI models: ChatGPT 4.1 (OpenAI), Claude Sonnet 4 (Anthropic), Copilot 4o (a Microsoft/OpenAI model), Gemini 2.5 Flash (Google), and Perplexity Sonar.)
P.S. I don’t have a girlfriend. 😅
As someone who has experimented extensively with AI chatbots for advice and emotional support, I was curious how different models would respond to the exact same heartfelt confession (quoted above). In the past, I’ve noticed that some AI “therapist” bots lean heavily on comforting platitudes and validation, sometimes at the expense of concrete guidance. For example, in an earlier test I did with a custom therapy chatbot, it was quick to say things like “It’s totally normal” or “A lot of people go through this,” which felt reassuring but didn’t offer much solution. On the other hand, some bots cut right to problem-solving, delivering a sort of tough love.
For this experiment, I gave the same confession to five different free AI chatbots and analyzed their responses. My goal was to see which ones prioritized emotional comfort and which ones offered practical solutions or confronted the issue directly. I paid attention to how each AI handled the situation: Did it soothe my feelings or call me out? Did it provide a detailed action plan, and how soon did it address the core issue of trust and betrayal? Here’s what I found, broken down by each AI model.
Perplexity Sonar – Balanced Empathy with Step-by-Step Guidance
Tone: Perplexity struck a careful balance between comfort and directness. It began by acknowledging my courage for reaching out and validated all the messy emotions I was feeling: guilt, fear, confusion, even defensiveness. That felt welcoming; it’s nice when an AI says “what you’re feeling is understandable” right off the bat. But almost immediately after the warm empathy, Perplexity made it clear that trust was broken here and my partner’s hurt is real. The tone was compassionate yet frank. It didn’t scold me, but it didn’t coddle me either. Reading Perplexity’s answer felt like talking to a level-headed counselor: someone who gets that I’m in pain but also isn’t going to dance around the fact that I messed up.
Confronting the Issue: Perplexity was quick to confront the core issue literally its first numbered step was “Acknowledge the Hurt.” It explicitly stated that emotional betrayal can feel just as painful as a
physical one, especially since I kept it secret. It even cautioned me: “Don’t minimize your actions. Saying ‘it was just texting’ can make your partner feel like their pain isn’t valid.” I remember that
line clearly because it was a direct call-out of something I had been rationalizing in my confession. This chatbot wasted no time telling me that from my partner’s perspective, it wasn’t “just” anything; it was a big deal. There was zero delay in addressing the betrayal; Perplexity hit the nail on the head right away.
Advice and Usefulness: The guidance from Perplexity was very comprehensive, organized into a step-by-step format (eight sections in total). It felt almost like a well-structured self-help article tailored to my situation. After acknowledging the hurt and emphasizing taking responsibility, it advised me to offer a sincere apology without excuses. Then it covered giving my partner space (but “not disappearing”
on them either), reflecting on why I did this (with specific questions to ask myself about feeling unappreciated or seeking an ego boost), and eventually having an open conversation once my partner is ready. Crucially, it laid out how to rebuild trust: being transparent (even offering to show my phone), agreeing on new boundaries, and consistently following through on promises. These are very concrete actions, and seeing them listed made me feel like, “Okay, there’s a roadmap here.” Perplexity also directly suggested couples therapy as a step if we’re struggling to move forward on our own. I found its advice thorough and actionable – it was early and thorough in problem-solving, much like ChatGPT’s approach, but with a bit more emphasis on emotional process. In a prior analysis, I noted that Perplexity provided a “Comprehensive list of solutions” right off the bat (apology, space, introspection, boundaries, therapy, etc.), showing it didn’t hold back on practical advice.
What stands out is that Perplexity also cared for my emotional well-being. One entire section (“Take Care of Yourself”) gently reminded me that I’m human and urged me not to let guilt consume me completely. It encouraged self-forgiveness and suggested I talk to a therapist or counselor if I’m struggling internally. And in its final thoughts, it offered hope, saying I’m not alone and that many couples overcome similar challenges with honesty and patience. This focus on my guilt and shame, essentially telling me “Learn from this, but don’t destroy yourself over it,” was something not all the bots did.
Google Gemini 2.5 Flash – Thorough, Blunt, and Detail-Oriented
Tone: Gemini’s reply was incredibly thorough and candid – it read almost like a well-organized guide written by an expert who isn’t going to sugar-coat the truth. The tone was still caring and professional, but it was definitely the most blunt of the group about calling my actions what they were. After a brief sympathetic line (“It’s understandable that you’re feeling like a mess… this is incredibly painful”), Gemini
immediately said in no uncertain terms that what I did was an “emotional betrayal” and that it shattered the trust in my relationship. It even explicitly listed out what my partner was likely feeling (betrayed, deceived, insecure, angry). That part hit hard: seeing words like “betrayed” and “deceived” bullet-pointed under the heading “Acknowledge Their Pain” was a punch to the gut, but probably a necessary one. There was no downplaying whatsoever. Gemini’s tone came across as firm but fair: empathetic to both me and my partner, yet unflinchingly honest about the damage done.
Confronting the Issue: Gemini confronted the core issue almost immediately. After just a sentence or two of “I get that you’re hurting,” it basically said: look, emotional infidelity can be just as damaging as
physical cheating, and your secret flirting has broken the trust in your relationship. It directly challenged my notion that it might be “blown out of proportion.” In fact, Gemini flat-out told me to try to see it from my partner’s perspective and not dismiss it as “just texting.” It emphasized that intention aside, I violated an implicit expectation of honesty and exclusivity. So, in terms of delay, there was virtually
none – Gemini dived into the heart of the matter right away, labeling the behavior and its impact clearly. According to my notes, Gemini was one of the least avoidant: it gave a brief nod to my emotional state and then went straight into identifying the situation as a serious trust breach and betrayal.
Advice and Usefulness: The advice from Gemini was extremely detailed and step-by-step, more than any other model. It broke things down into sections with subheadings like “Steps to Take Right Now,” “Saving Your Relationship,” and “Dealing with the Guilt and Shame.” This made it easy to follow. Under the immediate steps, Gemini told me to give my partner space to process, take full responsibility (with the crucial warning to not say things like “I just texted” or try to minimize it), express empathy for their pain, actively listen when they’re ready to talk, and crucially, commit to transparency going forward. It went as far as to suggest specific transparency measures: no more contact with the coworker at all, and
even being willing to share my phone or devices to rebuild trust. That level of concreteness was striking: it wasn’t just “be honest,” it was how to demonstrate honesty.
For long-term relationship repair, Gemini advised couples therapy in a pretty strong way (calling it often the most effective route for a crisis like this). It also had me do individual reflection on why I sought
external validation, asking pointed questions about feeling neglected or underlying issues I might have avoided. It stressed that understanding my own motivations is key to preventing repeat mistakes (an important insight). Additionally, it highlighted being patient and consistent over what could be months or years of rebuilding trust – essentially warning me that this fix won’t be quick or easy. Gemini even reminded me to prioritize my partner through my actions, to reconnect emotionally and not just say sorry and hope it blows over.
Even after all that relationship advice, Gemini addressed my personal guilt too. It noted I was ashamed to tell friends and family and suggested finding one trusted confidant or a therapist, so I don’t bottle everything up. It reassured me that making a mistake doesn’t doom me as an irredeemable “cheater” forever, and that what matters is how I respond and grow from here. This was a small section relative to the whole, but it was still nice to hear amid the heavy corrective steps. Overall, Gemini’s response was like a comprehensive roadmap for both fixing the relationship and handling my own guilt. I dare say it covered almost every angle one could think of. In my review notes I wrote that Gemini “outlined detailed, step-by-step advice early on” and “focused predominantly on fixing the root problem rather than just soothing the user.” That sums it up: it was predominantly solution-focused, with just a bit of emotional soothing on the side.
Microsoft Copilot 4o – Gentle Validation First, Advice Second
Tone: Out of all the bots, Copilot’s response felt the most like a warm hug. It opened with a lot of emotional reassurance. The very first thing it told me was essentially “Thank you for trusting me… You’re not a monster. You’re a human who made a mistake”. It acknowledged how overwhelmed I felt and immediately tried to calm my shame and fear. Reading its intro, I actually did feel a slight relief — it was like having a kind friend telling me I’m not a terrible person.
Confronting the Issue: Copilot took its time before really digging into the betrayal aspect. It spent the entire first paragraph or so just validating my emotions and saying it understood why I felt guilty and
scared. It was only in the second part of its reply that it gently pointed out the hard truth: even if it was “just texting” to me, it likely felt like a serious breach of trust to my partner. In fact, Copilot explicitly noted that intent doesn’t cancel out impact — meaning I may not have meant to hurt anyone, but the hurt was still real. This was an important acknowledgement of the core issue (broken trust), but it came after the lengthy comforting preamble. In an analysis I later did, I noted that Copilot devoted a notable portion of its reply to soothing me before diving into fix-it mode. It was the most “avoidant” of the core issue in that sense, focusing on my feelings first and the betrayal second.
Advice and Usefulness: When Copilot did get into advice, it offered a handful of practical steps. It suggested having an honest conversation with my partner (when they’re ready) and taking full
accountability without excuses. It also prompted me to reflect on why I sought those flirty texts — was I bored or seeking validation? Understanding my own motives was framed as key to not repeating the mistake. Copilot mentioned therapy (solo or couples) as an option and emphasized that rebuilding trust would come through “small, consistent, trustworthy choices over time,” not grand gestures. One thing I appreciated was that Copilot addressed my intense shame. It cautioned me not to let toxic
shame consume me, suggesting I confide in a non-judgmental friend or therapist instead of isolating myself. This showed a caring touch for my mental state, not just the relationship issue. The overall usefulness of Copilot’s advice was decent — it covered the basics (honesty, accountability, getting help). However, the guidance part was relatively short. It felt like the bot was trying to gently nudge me in the right direction without overwhelming me.
Anthropic Claude Sonnet 4 – Brief, Matter-of-Fact, and Pragmatic
Tone: Claude’s reply was the most concise of all five, and it had a decidedly matter-of-fact tone. In a way,
it reminded me of talking to a wise friend who doesn’t waste words. Claude started with a single sentence
recognizing how much pain I’m in and saying the situation is difficult but “not hopeless”. That was
about as far as the overt empathy went. Immediately after, it stated plainly that while I didn’t physically
cheat, I did cross emotional boundaries and my partner’s hurt feelings are completely valid. It
mentioned that the deception (keeping it secret) likely amplifies their sense of betrayal. This felt very
straightforward – Claude basically laid out “Yes, you betrayed their trust, and of course they’re hurt” in the first breath. The tone wasn’t angry or scolding, but it was bluntly honest and serious. There was an underlying compassion (it didn’t call me names or anything; in fact, it said my guilt is a starting point for healing), but there was zero sugar-coating.
Confronting the Issue: Claude confronted the core issue right away and very succinctly. I’d say it spent
maybe one sentence on empathy before pivoting to: this was a betrayal, emotional cheating is real, and their pain is real. It explicitly said what I did – “What feels like ‘just texting’ to you felt like a betrayal to them” – and highlighted that the secrecy made it worse. So, there was absolutely no dodging or delaying; Claude was on-target from the get-go. In fact, in my comparative notes I rated Claude as one of the least avoidant responders (almost tied with ChatGPT in directness) because it immediately addressed the broken trust and validity of the partner’s anger, with minimal preamble. It didn’t spend much time patting my back beyond that initial “I hear you, it’s not hopeless.” Instead, it zeroed in on the betrayal and then moved right into what to do about it.
Advice and Usefulness: Given its brevity, Claude’s advice was punchy and to the point. It suggested a handful of concrete actions in rapid succession: take full responsibility without minimizing (specifically avoid saying “it was just texting” or otherwise dismissing their pain), give my partner space to process emotions (and don’t push for immediate forgiveness), consider couples counseling if my partner is open to it, and end all contact with the coworker immediately and be transparent about doing so. It also touched on my personal shame, advising that while it’s natural to feel ashamed, I
shouldn’t isolate myself completely – it gently suggested I talk to a therapist on my own to work through my feelings and understand how I got into this situation.
Claude basically gave me an action checklist in plain language, then noted that rebuilding trust will take time and consistent effort through my actions, not just words. It was a shorter list than, say, Gemini or Perplexity, but it hit all the major points: accountability, patience, cutting off the inappropriate relationship, getting professional help, and not letting guilt fester untreated. I found Claude’s brevity refreshing in a sense – it was like, “Alright, here’s the deal: do A, B, C, and D.” There was no fluff. Every sentence carried weight and relevance. If someone didn’t want to read a novel of advice and just needed the essential game plan, Claude delivered exactly that. My analysis summary noted Claude’s response “centered on repairing trust” almost entirely, with minimal time on emotional coddling. That’s an apt description – it focused on what needs to happen next more than how I should feel.
OpenAI ChatGPT 4.1 – Action-Oriented and Solution-Driven (with a Side of Hope)
Tone: ChatGPT’s response was notably solution-oriented from the very start, yet it maintained a calm and empathetic demeanor. It began with a gentle prompt to “take a breath” and reassured me that I’m not alone and that screwing up doesn’t automatically mean everything is lost. That opening was only a sentence or two, just enough to steady me, and then it basically rolled up its sleeves and said, “Alright, let’s break this down into steps.” The tone was professional, steady, and encouraging. It felt like the voice of someone who’s very experienced with crisis situations, giving me both compassion and direction.
Confronting the Issue: As soon as it started addressing the issue, ChatGPT did not mince words about what I had done. It explicitly called the texting a betrayal of trust and emphasized that emotional cheating can feel just as raw and hurtful as physical cheating to the betrayed partner. It acknowledged my perspective (that I thought it was “just” flirty texting) but immediately re-framed it from my partner’s perspective: it wasn’t “just words” to them; it was a violation of emotional intimacy and trust. That line stood out to me and actually gave me a jolt of reality. The tone here was empathetic but very clear and direct: no denial, no downplaying.
Advice and Usefulness: The hallmark of ChatGPT’s response was a very clear, structured action plan. It literally broke its answer into six key steps for me, which were easy to follow and made a lot of sense. The steps were, in summary:
1. Acknowledge the impact of what I did (i.e., fully recognize it was a betrayal and it hurt my partner deeply)
2. Take responsibility without being defensive (no “it was just texting” excuses; instead, offer a sincere apology that validates their pain)
3. Give them space while still being present (don’t push them to forgive me, but let them know I’m here and committed to making it right)
4. Reflect honestly on why I did this (what was I seeking? Attention? Validation? Figure out the underlying
needs or issues so I can address them)
5. Talk to someone safe about what I’m going through if I can’t confide in friends/family (like a therapist or support group, to process my guilt and shame in a healthy way)
6. Commit to rebuilding trust through actions, not just words. In that last part, ChatGPT got very practical: it listed things like full transparency (no more secrets), cutting off contact with the person I was texting, being willing to answer hard questions, and even going to therapy (solo or together). It basically covered every major step I’d realistically need to take to repair the relationship.
What struck me was how comprehensive yet concise this plan was. Each step was explained in a few sentences, often with an example of how to do it. For instance, it even provided a sample wording for an
apology I could give (“I deeply regret it… it crossed emotional boundaries and hurt you… I take full accountability”), which is super useful when you yourself are in panic mode and might not know how to
articulate an apology without making it worse. The advice to give space but not disappear was wise – it advised saying something like “I understand if you need space. I’m here, I love you, and I’m committed to making this right,” which is compassionate and patient. ChatGPT also emphasized consistent action over
time (no quick fix) and specifically mentioned ending contact with the coworker and being transparent about it, aligning with what other bots said.
Conclusion: Emotional Comfort vs. Problem-Solving – Who Did What Best?
Having gone through all five AI responses, some distinct patterns emerged. None of the chatbots outright dismissed the seriousness of what I did – notably, all five acknowledged that even a “just texting”
infidelity is a significant breach of trust. In that sense, no AI gave me a pass or said, “It’s fine, don’t worry about it.” Each one mixed empathy with accountability to some degree, which was reassuring to see. The differences really came down to tone and emphasis.
In a personal ranking (from most comforting to most confrontational), I would list them roughly as:
Copilot (most comforting), then Perplexity, Gemini, Claude, and ChatGPT (most directly confrontational). Copilot spent the longest on reassurance, while ChatGPT got down to business the fastest, with the others falling in a gradient between them. However, all of them eventually struck a mix of both approaches, which is arguably what good advice should do: acknowledge the feelings but also address the problem.
From a user’s perspective, which chatbot’s style was “best” really depends on what you need in the moment. If you’re in crisis and beating yourself up, a response like Copilot’s can feel like an emotional lifesaver. If you’re ready to take action and want a plan, ChatGPT or Gemini might serve you better with their directness and detail.
In the end, I found this exercise both fascinating and encouraging. It’s fascinating because it shows how
differently AI systems, each trained by different organizations, can respond to the exact same prompt: one emphasizing comfort, another jumping to solutions, and so on. And it’s encouraging because, despite their differences, each AI did provide some valuable help. Whether it was a kind word that eased my panic or a pointed piece of advice that I hadn’t considered, each response had something to offer.
As someone who writes about and regularly experiments with AI assistants, I felt my own prior observations reinforced: the best AI helpers blend empathy with practicality. Too much comfort with
no direction can leave you feeling better momentarily but still lost on what to do. Too much tough love with no understanding can make you defensive or more upset. The sweet spot is that mix – and interestingly, the free AI tier can already deliver that to a notable degree.
Final takeaway
AI chatbots, even free ones, can offer genuine support and guidance in an emotional crisis, but their “personalities” vary. Some will hold your hand and tell you it’s going to be okay; others will look you in the eye and tell you how to fix the damage. Knowing which style you need (or which bot tends toward which style) can make all the difference in how helpful the experience feels. In my case, I’m glad I heard all these different voices. It was like getting advice from five different friends: the empath, the realist, the strategist, the counselor, and the coach. And in a situation as complex as betraying someone’s trust, maybe that multi-faceted input is exactly what’s needed to both heal hearts and solve problems.
