I still ask AI to write emails, draft outlines, summarize documents, and so on. But for more strategic work, I've found a different mental model than just asking for deliverables.
From task to thinking
The switch wasn’t “how do I get AI to solve problems for me.” It was treating AI like a thinking partner. Not “do this task” but “help me figure out how to think about this.”
The reframe sounds subtle. The results are not.
When you type “write me a blog post about content marketing,” you’re asking AI to guess at your audience, your voice, your positioning, your specific situation, what makes your perspective valuable, and what you’re actually trying to accomplish. It has no choice but to default to the most average, broadly applicable version of that request.
The outputs aren’t generic because AI is generic. They’re generic because the inputs were generic.
What thinking with AI actually looks like

Instead of starting with what you want AI to produce, start with what you’re trying to figure out. You’re not asking for outputs. You’re asking for pressure on your inputs.
You’re not delegating a task. You’re working through a problem with someone who can actually keep up. Three things make this work.
1. The context dump
Lead with your thinking, not your task.
Don’t ask AI to solve the problem. Tell it what you’re trying to accomplish, what you’ve already considered, and where you’re stuck. Think out loud with a collaborator; don’t hand off work to an assistant.
This can be messy. Voice-dictate your situation. Ramble-write the background. Paste in the email chain that’s frustrating you. The mess is a feature, not a bug. AI can parse your rambling better than it can read your mind.
Example 1: Client email
Vending machine: “Write a response to this angry client email.”
Thinking partner: “Here’s the email I received. Here’s the history with this client. They’ve been great, but we missed a deadline last month. I think they’re frustrated but not actually at risk of leaving. I want to acknowledge the issue without over-apologizing. What’s my best approach?”
Example 2: Marketing strategy
Vending machine: “Create a marketing plan.”
Thinking partner: “Here’s our situation. We’re a B2B company with long sales cycles and low search volume. We’ve tried content marketing but it’s not driving leads. I’m wondering if we should double down or try something different. What am I not seeing?”
Example 3: Learning a new skill
Vending machine: “Explain Google Ads to me.”
Thinking partner: “I’m taking over Google Ads management for a B2B client. I understand marketing fundamentals but haven’t run paid ads in this particular vertical. I need to avoid burning budget while I figure this out. What mental models do I need? What mistakes do people in my position typically make?”
You’re sharing your actual situation, your constraints, and your hypothesis, so AI can give you something tailored instead of generic advice that applies to everyone and no one.
📋 Copy/paste this into your next prompt:
What I’m trying to accomplish:
What I’ve already tried:
Constraints:
My current hypothesis:
Where I’m stuck:
How I want you to help (options / critique / risks / questions):
2. The sparring partner move
AI defaults to agreeing with you. That’s not useful for thinking.
These systems are trained to be helpful, which usually means validating whatever you’ve said. Share a plan and ask “what do you think?” and you’ll get encouragement with minor suggestions. That’s not what you need when you’re trying to pressure-test an idea.
You have to explicitly force the disagreement:
- “Assume this plan fails. What are the top 3 reasons it failed?”
- “What’s the strongest argument against this approach?”
- “What am I assuming that, if false, breaks this entire strategy?”
- “Play the role of a skeptical CMO. What questions would you ask before approving this?”
When you force the disagreement, you stop getting generic validation and start uncovering blind spots before your market does.
3. Know when to stop
Signs you’re in a dead-end prompting spiral:
- You’re rephrasing the same request hoping for different results
- The outputs are getting longer but not better
- You’ve felt “almost there” for the last 20 minutes
When this happens, stop. The AI has given you what it can.
- Outline it yourself. Take the best pieces and build your own structure.
- Ask for options, not answers. “Give me 3 approaches with the tradeoffs of each.”
- Step away. Form your own hypothesis, then come back and pressure-test it.
Stop when you’ve extracted the structure, the risks, or the options. That’s the value. The drafting is still yours.
What this actually feels like

Say you’re stuck on pricing for a new service. You’ve been circling the question for a week.
Vending machine approach: “What should I charge for consulting services?”
You get a generic list of factors to consider. Helpful like a textbook is helpful. You close the tab.
Thinking partner approach: “I’m adding a strategy tier to my services. My current clients pay $X for implementation work. I think the new tier should be 2x that, but I’m worried it’ll make the core offering feel cheap by comparison. My target is marketing directors at mid-size companies. Should I price based on value delivered or competitive positioning? What am I not seeing?”
AI comes back with three questions you hadn’t considered. One of them reframes the problem entirely: you’d been thinking about price when the real question was positioning. It points out that your “worry” was actually a signal you hadn’t clarified who the new tier was for.
Twenty minutes later, you have the clarity you’d been circling for a week. Not because AI solved the problem, but because it helped you think through it.
The skill that actually matters
The skill that makes AI valuable isn’t prompting. It’s thinking.
Clear thinking produces better AI interactions because you’re giving it more to work with. You know what you’re trying to accomplish. You understand your constraints. You have a hypothesis worth testing.
This is why experienced practitioners sometimes get worse results. They shorthand the context because they “already know” the situation. But AI doesn’t know what you know. The articulation itself often clarifies your thinking. Skip it and you skip the value.
The people getting real value from AI aren’t the ones with the best prompt templates. They’re the ones who’ve learned to think out loud with a machine that can keep up.
When the simple approach is fine
For routine stuff, you don’t need all this. Summarizing a document. Reformatting data. Generating a first draft you’ll completely rewrite. Not everything needs to be a thinking session.
The thinking partner approach is for strategic work where quality matters. The email that shapes a client relationship. The positioning decision that affects your next year. The problem you keep circling but haven’t cracked.
When in doubt, add more context. The cost is a few extra minutes. The upside is output you don’t have to redo.
(Quick note: if you’re sharing client context, redact names or summarize rather than paste verbatim. The approach works just as well with “a client in healthcare” as it does with the actual name.)
Where this leaves you

The insert-prompt-receive-output approach will keep producing generic results. Time spent that doesn’t compound.
The thinking partner approach takes more upfront work. You have to articulate your situation before getting help. But that articulation does double duty: it improves the AI output and clarifies your own understanding. The thinking gets better. The outputs get better. And you get better at thinking.
These skills compound. The better you get at articulating your thinking, the more useful AI becomes. People who learn to think with AI will build a genuine advantage.
This is how I work now. If you want a team that thinks this way alongside you, that’s what we do.