Don’t tell me what AI told you.

Why we need original thinkers

I've caught myself thinking this more often lately. Sometimes politely, sometimes not. It happens in meetings when a colleague rehashes what Claude told them, almost verbatim. In Slack threads filled with overly polished answers that somehow say nothing.

I love using AI. I use it all day long. But I'm increasingly worried about what we're trading away: our own judgment.

The insecurity no one talks about

We've given names to AI's workplace side effects: workslop, automation bias, lazy prompting. But there's a more human factor we don't talk about enough: a growing insecurity around our own knowledge and judgment. When AI seems to have access to everything, it's easy to assume it must know more than we do.

The default mode has flipped: produce first, think or assess later. And in that flip, something crucial gets lost. Not just quality, but ownership. The ability to defend a point of view. To say "here's what I think, and here's why."

So what do we do about it?

Three practices for rebuilding judgment

1. Think in public

Product-led companies have popularized the idea of building in public. I believe the same principle applies to thinking. One of the best antidotes to human-less output is thinking in public and encouraging structured, deep thinking.

Back in 2023, an article from Wes Kao titled "Rigorous thinking: No lazy thinking" stuck with me. In her words: “Rigorous thinking is asking critical questions about tactics, and having a systematic way of making decisions. [..] It’s an approach to problem solving that allows you to deconstruct ideas, gain clarity, and make decisions that are far more likely to be right.”

The contrast is clear:

Lazy thinking: “Hey boss, can we do this tactic?”

Rigorous thinking: “I recommend we do this because it is likely to work given these conditions. The risks are here. We can reduce them by testing it this way.”

We don’t need to turn every chat into a deep conversation, but moments like planning sessions, strategy reviews, or a simple one-on-one are great spaces for managers to encourage this rigorous thinking, with no chatbot involved.

2. Treat workslop as a performance issue

Most teams are vague about quality, and AI fills that gap with volume. That’s how workslop creeps in, and it shouldn’t be treated as a stylistic preference or a minor annoyance.

It’s a performance issue.

Leaders set the tone here. They need to be explicit about quality, about how AI should be used, and just as importantly, how it shouldn't be.

An example: An exec update that summarizes AI-generated themes without making a call isn't quality work. A good one states a recommendation, explains the rationale, acknowledges the risks, and names the decision required. The difference is ownership.

3. Close the loop

If we never revisit our decisions, it’s hard to improve how we make them. Judgment does not compound on its own.

At Stripe, decisions are often made with incomplete information, high stakes, and real customer impact. To build better judgment over time, teams use decision logs as a lightweight practice. When a decision is made, they write down what was decided, why it was decided, and what they expect to happen. Later, they revisit it to compare expectations with reality.

Kevin Yien, who leads product for merchant experiences at Stripe, describes three simple habits:

  1. Log decisions as they are made, even when data is incomplete

  2. Capture the rationale and assumptions, not just the outcome

  3. Review what happened to see where thinking was right or wrong

This creates a tight learning loop. Teams get better at operating under uncertainty instead of relying on intuition or hindsight.

What I'm really asking for

So when I find myself thinking, “Don’t tell me what AI told you,” what I’m really asking for is ownership. AI can help explore ideas, draft options, and stress-test assumptions, but judgment still has to come from a person.

What teams need now is more visible thinking. Fewer polished answers and more defended points of view. More willingness to share how a conclusion was reached and where the uncertainties are.

This is also how we will leverage the best of human expertise within our teams.
