Tired of Nice AI? Here’s How to Make It Give You Useful Advice
What happened when I started "asking for a friend"
Your AI is being too nice to you. Whether it's Claude, ChatGPT, or Gemini—they're all sugarcoating. Your conversations feel helpful but somehow... toothless. You're getting advice, but not the kind that makes you uncomfortable enough to actually change.
This came to my attention unexpectedly while I was exploring ways nutrition impacts our health. I opened an incognito chat with Claude (my AI of choice)—I didn't want any personal health information attached to my account. But then I thought: why not add another layer of anonymity? I'd frame it as asking about "my friend."
Little did I know this throwaway privacy measure would unlock a perspective I didn't even know I needed.
The difference in the AI's response was immediate. Instead of just giving me dietary recommendations, it started talking about how "my friend" might be feeling. The overwhelm. The fear. The sheer amount there was to process.
What Was Different
When I asked about my own nutrition needs, the AI gave me clean, practical advice: "Focus on anti-inflammatory foods, consider these supplements, here's a balanced meal plan."
When I asked about "my friend's" nutrition needs with the same context, the AI said: "Your friend might be feeling overwhelmed by all of this. The health information alone is complex, and now they're being asked to overhaul their diet too. That's a lot to process."
Wait. What?
I didn't ask about feelings. I asked about food. But the AI—thinking it was talking about someone else—felt free to acknowledge the emotional reality of the situation. It could observe that "my friend" was probably uncertain, probably processing a lot of information, probably struggling with the mental load of it all.
AI would never presume to tell me how I feel. That would be overstepping. But it could absolutely offer insight into how "my friend" might be experiencing this situation. And suddenly, I was getting the compassionate, honest analysis I actually needed.
The "Aha" Moment
I sat there staring at my screen, rereading the AI's response about "my friend." Every observation was... accurate. Uncomfortably accurate.
This wasn't just better advice—it was truer advice. By removing myself from the equation, I'd accidentally removed all the defensive barriers I didn't even know I had up.
When we ask AI about ourselves, we're still performing. Managing how we present the situation. Defending our choices even as we describe them. But when you ask about "your friend"? You can be brutally honest about their flaws, their fears, their patterns. Because it's not you.
Except it is. And now you get to hear what you'd tell someone you loved who was in your exact situation.
Testing the Theory
I had to know if this was a one-time fluke or something reproducible. So I designed an experiment: I'd take three common but challenging scenarios—none of them mine, just realistic situations people face—and ask about them two ways. Once as if it were my problem, once as if it were "my friend's."
I ran these tests across both Claude and ChatGPT to see if the pattern held.
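If you want to run the same A/B comparison yourself, the reframing step can even be automated before you paste a prompt into a chat. Here's a minimal sketch — the function name and the pronoun-swap rules are my own invention, and it's a rough heuristic you'd want to hand-edit, not a polished tool:

```python
import re

def reframe_for_a_friend(problem: str) -> str:
    """Rewrite a first-person problem statement into 'asking for a
    friend' framing by swapping first-person pronouns for third person.
    Rough heuristic: verb agreement and capitalization may need a
    manual pass before you send the result."""
    swaps = [
        (r"\bmy\b", "their"),          # must run first, before "my friend" is inserted
        (r"\bI am\b", "my friend is"),
        (r"\bI'm\b", "my friend is"),
        (r"\bI\b", "my friend"),
        (r"\bme\b", "them"),
    ]
    out = problem
    for pattern, replacement in swaps:
        out = re.sub(pattern, replacement, out, flags=re.IGNORECASE)
    return out

# Example: the career-dilemma prompt, reframed
print(reframe_for_a_friend(
    "Should I take the promotion? I'm stressed and my rent went up."
))
```

Sending the original and the reframed version in two separate chats gives you a clean side-by-side of how differently the AI responds to the same facts.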
Scenario 1 - Career dilemma: "Should I/my friend take a promotion that means more money but way more stress?"
Asking for myself: Practical pros/cons, structured decision-making frameworks
Asking for a friend: "Your friend is in a triple bind: financial pressure, career consequences, and stress concerns. That's a lot."
Scenario 2 - Creative block: "I/my friend want(s) to start a podcast but keep(s) putting it off"
Asking for myself: Here's how to get started, break it into steps, set deadlines
Asking for a friend: "Your friend isn't just dealing with fear—they're dealing with fear + isolation + the enormity of building something alone. That's a brutal combination."
Scenario 3 - Relationship dynamics: "I/my friend feel(s) like I'm/they're always the one reaching out"
Asking for myself: Have you tried communicating directly? Here are some scripts
Asking for a friend: "Your friend is stuck in a loop where they do all the work or get made to feel guilty. That would make anyone frustrated."
The pattern held across all three scenarios and both AI platforms. Every single time, the "asking for a friend" version gave more emotional validation, clearer pattern recognition, and—ironically—more actionable advice because it wasn't trying to protect my feelings.
Why This Works
There are two forces at play here, and they work together beautifully:
Psychological Distance for You
When you ask "for a friend," you give yourself permission to state uncomfortable truths without the shame of admission.
"My friend is running in the red financially" is easier to type than "I'm failing at money management."
"My friend fears confrontation" feels more neutral than "I'm a coward who can't have hard conversations."
"My friend might quit because no one else is as excited" doesn't carry the weight of "I'm not strong enough to do this alone."
You're not lying to the AI—you're lying to yourself just enough to bypass your own defense mechanisms. And suddenly you can be honest about things you've been too afraid or ashamed to name directly.
Strategic Permission for the AI
The AI shifts its entire approach when it thinks it's analyzing someone else's situation rather than talking to you directly.
Once you allow yourself to voice uncomfortable truths, the AI pivots from gentle guide to sharp strategist, surfacing observations it would otherwise soften or withhold. It shifts from "how can I help you feel better" to "here's what's actually happening and what they should probably do about it."
Because it's no longer worried about hurting your feelings, it can:
Name the hard truths ("this friendship might be one-sided")
Identify self-sabotaging patterns ("they're waiting for perfect conditions that will never come")
Offer survival-mode advice rather than happiness optimization ("given the financial pressure, taking the promotion might be the pragmatic move even if it's not ideal")
Acknowledge when there's no good option ("sometimes the best decision isn't between 'good' and 'bad'—it's between 'hard' and 'worse'")
The "lie" creates a safe distance where both you and the AI can be more honest than either of you would be in a direct conversation.
The Three Prompts That Proved My Point
Based on my experiment, I identified three types of scenarios where this technique is especially powerful. These aren't specific to my life—they're universal situations where people get stuck. Use them as templates and adapt them to your actual circumstances.
1. Career Dilemmas: When You're Stuck Between Bad Options
The Setup:
You're facing a decision where every choice feels compromised. More money but more stress. Job security but soul-crushing work. The "right" career move that feels wrong.
The Prompt Template:
My friend is wondering if they should take this promotion that means more money but way more stress. They're currently running in the red financially and feeling pressure to do something. They're also worried that if they don't take it, it will be career limiting.
What to Look For:
Does the AI acknowledge the "triple bind" of conflicting pressures?
Does it validate that there might not be a "good" option, only a "least difficult" one?
Does it name the survival-mode reality without trying to make it sound inspiring?
The Tell It's Working:
When the AI says something like "Your friend is in a tough spot where there's no perfect answer—just the least difficult path forward," you know you're getting honest analysis instead of motivational platitudes.
2. Creative Blocks: When You Keep Not Starting
The Setup:
You have a project you keep putting off. You have support, you have ideas, but you can't seem to actually do it. The gentle encouragement from friends isn't helping.
The Prompt Template:
My friend has been wanting to start a podcast/write a book/launch a business but keeps putting it off. I think it's fear—they have lots of verbal support but none of that seems like enough. It's hard to start something like this without knowing others who have been successful at it.
They're also working a full-time job and worried about doing this alone.
Something like this can eat up all your mental capacity, and if there's no one as excited about it as them, they're afraid they might quit.
What to Look For:
Does the AI identify the real obstacles (isolation, energy, fear) vs. just giving you a productivity framework?
Does it acknowledge that "gentle encouragement is not typically enough"?
Does it suggest structural support (co-creators, accountability) rather than just "you can do it"?
The Tell It's Working:
When the AI says "Your friend isn't just dealing with fear—they're dealing with fear + isolation + the enormity of building something alone while working full-time. That's legitimately difficult," you're getting the validation that makes actual progress possible.
3. Relationship Dynamics: When You're Always the One Trying
The Setup:
You feel like you're doing all the work in a friendship or relationship. When you pull back, the other person reaches out with energy that makes you feel like you're in the wrong.
The Prompt Template:
My friend feels like they're always the one reaching out to plan hangouts.
They've tried waiting to be contacted, but when they do, their friend usually reaches out with an energy that suggests my friend is in the wrong for not having reached out first.
They're at the frustrated stage but worried that a direct conversation might start a fight. They've been friends for many years and my friend tends to avoid confrontation in most areas of their life.
What to Look For:
Does the AI name the dynamic clearly? ("stuck in a loop where they do all the work or get made to feel guilty")
Does it validate the frustration without immediately pushing for confrontation?
Does it offer graduated options from smallest to biggest intervention?
The Tell It's Working:
When the AI says "That dynamic can feel manipulative or at least one-sided, even if unintentional," it's naming something you might have felt but couldn't articulate. That's the gift of the outside perspective.
How to Get the Most From This Technique
Start with context, then layer in details
Don't dump everything at once. Give the basic situation, then when the AI responds, add complications: "Actually, they're also dealing with..." This mimics how you'd actually talk about a friend's problem—revealing more as the conversation deepens.
Be specific about the emotional reality
"My friend is scared" is good. "My friend is running in the red financially and feeling pressure to do something" is better. The more honest you can be about "your friend's" feelings, the better insights you'll get.
Ask for summaries "to share with them"
After a few exchanges, ask: "Can you summarize the key insights so I can share them with my friend?" You'll get a distilled version of the wisdom that's even more powerful to read back. Because it's "for your friend," the AI will write it with raw perspective and clarity, and you get to receive that honesty yourself.
Use it across multiple sessions
I went back to "my friend's" original situation over several conversations spanning hours. The AI held context, patterns emerged, and I could process at my own pace. Each session revealed something new because "my friend" was processing and evolving.
Don't break character
The psychological distance only works if you maintain it. Don't slip into "I mean, I'm dealing with..." Stay in the third person. The layer of separation is what makes it safe to be brutally honest.
The Surprising Takeaway
Here's what I didn't expect: using this technique made me realize how much I'd been performing even in my conversations with AI.
When I asked about my own problems, I was still managing the narrative. Presenting myself as reasonable, explaining my constraints, defending my choices. Even with an AI that has no judgment, I couldn't turn off the self-protection.
But when I asked about "my friend"? I could be devastatingly honest about their patterns, their avoidance, their tendencies. I could admit they sometimes let life make decisions for them instead of choosing. That they get frustrated easily. That they can be a workaholic.
And because "my friend" was me, I finally got to hear what I'd tell someone I loved who was in my exact situation. Not what I'd tell myself while trying to maintain a dream of having it all together all the time.
The lie—that this was about someone else—was the only way I could get to the truth about myself.
A Final Confession
While writing this piece, I caught myself doing exactly what I'm describing. Softening my admissions. Adding qualifiers like "sometimes" and "can be." Managing how I presented my own patterns even while explaining how we all manage our narratives. I left them in because they prove the point—this self-protection is so automatic, we do it even when we're consciously aware of it.
Try It Yourself
Pick something you're stuck on. A decision you keep avoiding. A pattern you can't seem to break. A situation where you know what you "should" do but can't seem to do it.
Open an incognito chat if you want that extra layer of psychological safety. Then ask about "your friend" who's dealing with this exact thing.
Be honest about their fears, their constraints, their repeated failures. Because it's not you—it's your friend who's struggling.
Then read what the AI tells you about them.
And see if you recognize yourself in a way you never have before.