Welcome back.

What’s for breakfast?

Peter Drucker didn’t say “culture eats strategy for breakfast,” but reality rarely gets in the way of a good quote.

But what does it mean?

I think what ‘not Drucker’ meant was that MBA tactics will always be subverted by the power of systems, and that systems disguise themselves as culture (“what things are like around here”).

So a more accurate restatement of the fictional quote would be, “Systems eat tactics for breakfast.”

Maybe that would have been a better subtitle for my book.

PS I was on Tim Ferriss’ podcast this week. You can see 500 of my video and podcast appearances here.

In the words of Harry Truman, “42% of all the quotes on the internet are misattributed.”

What sort of progress?

Nothing stays still. Relative to the rest of the world, even something that’s not moving is changing.

It’s tempting to talk about not making fast enough progress.

But it’s far more useful to ask in which direction we’re progressing.

Often, people will point to the velocity of the change they’re making without pausing to consider the direction of that change.

Strategy is the hard work we do before we do the rest of the hard work. Where to?

Organizing for urgent

There are many ways to prioritize our time and focus, but the easiest and most vivid way is to do the urgent things first.

If we wait until a house plant is sick before we take care of it, though, it’s too late.

Deadlines, loud requests and last-minute interventions are crude forcing functions. They’re inefficient and common.

It’s far more effective to organize for important instead.

We thrive when we do things when we have the most leverage, not when everyone else does. Waiting for trouble means that you’re going to spend your days dealing with trouble.

Analysis = Facts + Interpretation

If you fail to show us the facts, it’s difficult to accept your analysis.

While it’s tempting to simply share an interpretation of what’s happening, credibility and persuasion are based on showing your work.

Getting clear about brand value

Consulting firms rank brands on value. Marketers promise to increase it.

But brand value has little to do with whether a company is famous or even profitable.

The accurate measure of brand value is the premium that consumers will pay over the generic.

How much extra time, money or risk will they invest in a valuable brand compared to the very same offering from an unknown?

Luxury goods, by necessity, have high brand value, because the generic knock-offs sell for a tiny fraction of the price. (Heinz ketchup commands a much smaller price premium. You may have heard of them, but you don’t care that much.) Familiarity is not always a proxy for high value.

New products launched by high-value brands get off to a faster start because consumers who trust them feel like they’re taking a smaller risk.

Valuable brands often get applications from potential employees and partners of higher quality than an upstart might.

And yet…

ChatGPT, Perplexity and Claude all gained enormous traction at the expense of some of the most highly ranked brands in the world.

Systems change, and user experience and the network effect often defeat brands. Plan accordingly.

Checking all the boxes

The simplest way forward is to see which boxes your target market has and then check all of them.

Unfortunately #1: The audience doesn’t publish their actual list of boxes; they conceal many of them.

Unfortunately #2: They don’t all have the same boxes.

Unfortunately #3: If it were that straightforward, your competition would have done it all already.

Great work finds emotions, stories and possibility. Great work invents new boxes.

The Yellow Brick Road is mostly an illusion.

Expertise and credentials

In the ideal world, credentials would be awarded to all experts, and withdrawn from all charlatans.

But they don’t always line up as neatly as that.

An expert is someone who can keep a promise. Point to the results that demonstrate your skill and understanding and commitment and we’ll treat you as an expert.

Credentials, on the other hand, are awarded to folks who are good at being awarded credentials. The place you went to school and the number of followers you have online are credentials. If they help you create value, that’s great. But they’re not the same as expertise.

The weird arithmetic of coordinated action

Twenty handwritten letters received by someone in power are worth a hundred times as much as two letters.

And when that becomes a hundred different personal letters, increasing in volume, from different people, delivered to an organization every week for a year… it’s worth a million times as much as just twenty.

Honesty about better

“I don’t want to learn to be better” is something we rarely admit.

We don’t say:

I don’t want to learn statistics, even though it will dramatically improve my decision making.

I don’t want to learn a new programming language, even though it will get me a better job.

I don’t want to learn methods for creativity, strategy or marketing, even though they will help me get unstuck.

I don’t want to learn how AI will transform my work, even though it will make me more productive.

I don’t want to learn how to use the shortcuts on my apps, even though it will save me time.

I don’t want to learn basic selling skills, even though they will help me make a difference.

I don’t want to understand what happened decades ago, even though it will help me be a better citizen.

All of these things (and many more) are now easily learned, for free, online, with no peer pressure.

But we hesitate. We hesitate because:

  • Learning requires effort
  • Once we learn something, we might have to change our mind
  • Changing our mind shifts how we see the world, and that can be unsettling
  • Change feels risky

There are countless things I’d like to learn, but if I’m being honest, my problem is that I don’t care enough to do the work.

The most difficult part of adult learning is choosing to learn.

Trusting AI

For generations, humans have been entrusting their lives to computers. Air Traffic Control, statistical analysis of bridge resilience, bar codes for drug delivery, even the way stop lights are controlled. But computers aren’t the same as the LLMs that run on them.

Claude.ai is my favorite LLM, but even Claude makes errors. Should we wait until it’s perfect before we use it?

If a perfect and reliable world is the standard, we’d never leave the house.

There are two kinds of tasks where it’s clearly useful to trust the output of an AI:

  1. Recoverable: If the AI makes a mistake, you can backtrack without a lot of hassle or expense.
  2. Verifiable: You can inspect the work before you trust it.

Having an AI invest your entire retirement portfolio without oversight seems foolish to me. You won’t know it’s made an error until it’s too late.

On the other hand, taking a photo of the wine list in a restaurant and asking Claude to pick a good value and explain its reasoning meets both criteria for a useful task.
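
For the curious, here is roughly what that wine-list request looks like as code instead of a tap in the app. This is a minimal sketch using the anthropic Python SDK; the file name, model string and prompt are placeholders of mine, not anything from the post. The point is the last step: the answer arrives with its reasoning attached, which is what makes the task verifiable before you trust it.

import base64

import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in your environment

# A photo of the wine list, taken at the table (placeholder file name).
with open("wine_list.jpg", "rb") as f:
    image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder; use whichever model you actually have access to
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_b64,
                    },
                },
                {
                    "type": "text",
                    "text": "Which bottle on this list is the best value? Explain your reasoning.",
                },
            ],
        }
    ],
)

# Recoverable and verifiable: read the reasoning against the printed list before you order.
print(message.content[0].text)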

This is one reason why areas like medical diagnosis are so exciting. Confronted with a list of symptoms and given the opportunity for dialog, an AI can outperform a human doctor in some situations, and even when it doesn’t, the cost of an error can be minimized while a unique insight could be lifesaving.

Why wouldn’t you want your doctor using AI well?

Pause for a second and consider all the useful ways we can put this easily awarded trust to work. Every time we create a proposal, confront a decision or need to brainstorm, there’s an AI tool at hand, and perhaps we could get better at using and understanding it.

The challenge we’re already facing: Once we see a pattern of AI getting tasks right, we’re inclined to trust it more and more, verifying less often and moving on to tasks that don’t meet these standards.

AI mistakes can be more erratic than human ones (and AI systems are far less reliable than traditional computers), though, and we don’t know nearly enough to predict their patterns. Once all the human experts have left the building, we might regret our misplaced confidence.

The smart thing is to make these irrevocable choices about trust based on experience and insight, rather than by simply accepting the seemingly inevitable short-term economic rationale. And that means leaning into the experiments we can verify and recover from.

You’re either going to work for an AI or have an AI work for you. Which would you prefer?