Peter Drucker didn’t say “culture eats strategy for breakfast,” but reality rarely gets in the way of a good quote.
But what does it mean?
I think what ‘not Drucker’ meant was that MBA tactics will always be subverted by the power of systems, and that systems disguise themselves as culture (“what are things like around here?”).
So a more accurate restatement of the fictional quote would be, “Systems eat tactics for breakfast.”
Maybe that would have been a better subtitle for my book.
PS I was on Tim Ferriss’ podcast this week. You can see 500 of my video and podcast appearances here.
In the words of Harry Truman, “42% of all the quotes on the internet are misattributed.”
There are many ways to prioritize our time and focus, but the easiest and most vivid way is to do the urgent things first.
If we wait until a house plant is sick before we take care of it, though, it’s too late.
Deadlines, loud requests and last-minute interventions are crude forcing functions. They’re inefficient and common.
It’s far more effective to organize for the important instead.
We thrive when we do things when we have the most leverage, not when everyone else does. Waiting for trouble means that you’re going to spend your days dealing with trouble.
Consulting firms rank brands on value. Marketers promise to increase it.
But brand value has little to do with whether a company is famous or even profitable.
The accurate measure of brand value is the premium that consumers will spend over the generic.
What time, money or risk will they take for a valuable brand compared to the very same offering from an unknown?
Luxury goods, by necessity, have high brand value, because the generic knock-offs sell for a tiny fraction of the price. (Heinz ketchup commands a much smaller price premium. You may have heard of them, but you don’t care that much.) Familiarity is not always a proxy for high value.
New products launched by high-value brands get off to a faster start because consumers who trust them feel like they’re taking a smaller risk.
Valuable brands often get applications from potential employees and partners of higher quality than an upstart might.
And yet…
ChatGPT, Perplexity and Claude all gained enormous traction at the expense of some of the most highly ranked brands in the world.
Systems change, and user experience and the network effect often defeat brands. Plan accordingly.
In the ideal world, credentials would be awarded to all experts, and withdrawn from all charlatans.
But they don’t always line up as neatly as that.
An expert is someone who can keep a promise. Point to the results that demonstrate your skill and understanding and commitment and we’ll treat you as an expert.
Credentials, on the other hand, are awarded to folks who are good at being awarded credentials. The place you went to school or the number of followers you have online are credentials. If they help you create value, that’s great. But they’re not the same as expertise.
Twenty handwritten letters received by someone in power are worth a hundred times as much as two letters.
And when that becomes a hundred different personal letters, increasing in volume, from different people, delivered to an organization every week for a year… it’s worth a million times as much as just twenty.
For generations, humans have been entrusting their lives to computers. Air Traffic Control, statistical analysis of bridge resilience, bar codes for drug delivery, even the way stop lights are controlled. But computers aren’t the same as the LLMs that run on them.
Claude.ai is my favorite LLM, but even Claude makes errors. Should we wait until it’s perfect before we use it?
If a perfect and reliable world is the standard, we’d never leave the house.
There are two kinds of tasks where it’s clearly useful to trust the output of an AI:
Recoverable: If the AI makes a mistake, you can backtrack without a lot of hassle or expense.
Verifiable: You can inspect the work before you trust it.
Having an AI invest your entire retirement portfolio without oversight seems foolish to me. You won’t know it’s made an error until it’s too late.
On the other hand, taking a photo of the wine list in a restaurant and asking Claude to pick a good value and explain its reasoning meets both criteria for a useful task.
This is one reason why areas like medical diagnosis are so exciting. Confronted with a list of symptoms and given the opportunity for dialog, an AI can outperform a human doctor in some situations, and even when it doesn’t, the cost of an error can be minimized while a unique insight could be lifesaving.
Why wouldn’t you want your doctor using AI well?
Pause for a second and consider all the useful ways we can put this easily awarded trust to work. Every time we create a proposal, confront a decision or need to brainstorm, there’s an AI tool at hand, and perhaps we could get better at using and understanding it.
The challenge we’re already facing: Once we see a pattern of AI getting tasks right, we’re inclined to trust it more and more, verifying less often and moving on to tasks that don’t meet these standards.
AI mistakes can be more erratic than human ones (and way less reliable than traditional computers), though, and we don’t know nearly enough to predict their patterns. Once all the human experts have left the building, we might regret our misplaced confidence.
The smart thing is to make these irrevocable choices about trust based on experience and insight, not simply accepting the inevitable short-term economic rationale. And that means leaning into the experiments we can verify and recover from.
You’re either going to work for an AI or have an AI work for you. Which would you prefer?
January 22, 2025