Near-future science fiction is a fine way to consider our now. Freed from the reality of today, we can think hard about the tomorrow we’re about to live in.
Summer reads are supposed to be a bit lighter.
Technological change is making our near future a bit harder to dance with, and yet, here are some books I strongly recommend–not because they gloss over our possible futures, but because they give us the scaffolding to look hard at them while we can still make an impact.
Foundry is a thriller about semiconductors. Fast-moving and classic Peper.
When H.A.R.L.I.E. Was One is the very first novel I read about AI. I was 12 years old. Gerrold wrote this a lifetime ago, and yet it will make you think.
The Ministry for the Future is heartbreaking and life-changing. Every human should be offered a copy and encouraged to read it.
The Very Nice Box helps us think about workplace roles (with some marketing and design as a bonus).
The Last Policeman is a fabulous metaphor and a great way to get clear about how you’d like to spend tomorrow.
For more than a century, and as recently as the 1930s, phrenology was seen as a useful proxy for judging someone’s character.
Carefully charting the bumps on someone’s head, along with the slope of their forehead and other telltale signs, was seen as a thoughtful and proven way to determine whether someone was creative, honest or empathic.
Even amid the nutty pseudoscience that surrounds us, it’s pretty easy to tell that this is nonsense.
And yet, we want proxies so badly that we embraced this idea for generations, despite a lack of evidence.
We engage in this soothsaying search for proxies every time we do a job interview with someone. Unless we’re interviewing people who have interviewing as their job, there isn’t a lot of evidence that doing a great job in the interview predicts doing a great job at work.
False proxies are expensive. They also create significant social and moral hazards.
Perhaps hanging up this poster is a good way to remind us not to fall into that trap.
The thunderstorm doesn’t know we exist. Rain dances and wishes are ineffective at bringing or preventing a storm, because it isn’t caused by our actions.
It’s tempting to mistake metaphorical weather for a response to us. When someone cuts us off in traffic or doesn’t engage with us the way we might hope, it’s easy to take it personally.
But the weather would be there with or without you.
There are two useful questions:
The first is whether we’re signing up to be in weather conditions that aren’t safe or helpful.
And the second, when the tables are turned, is to ask ourselves, “Am I being someone else’s weather right now?” Because that’s something we do have control over.
Dan Dennett explained that the intentional stance began as a survival mechanism. It’s important to predict how someone else is going to behave. That tiger might be a threat, that person from the next village might have something to offer.
If we simply wait and see, we might encounter an unwelcome or even fatal surprise. The shortcut that the intentional stance offers us is, “if I were them, I might have this in mind.” Assuming intent doesn’t always work, but it works often enough that all humans embrace it.
There’s the physical stance (a rock headed toward a window is probably going to break it) and the design stance (this ATM is supposed to dispense money, let’s look for the slot.) But the most useful and now problematic shortcut is imagining that others are imagining.
There used to be a chicken in an arcade in New York that played tic tac toe. The best way to engage with the chicken game was to imagine that the chicken had goals and strategies and that he was ‘hoping’ you would go there, not there.
Of course, chickens don’t do any hoping, any more than chess computers are trying to get you to fall into a trap when they set up an en passant. But we take the stance because it’s useful. It’s not an accurate portrayal of the state of the physical entity, but it might be a useful way to make predictions.
There’s a certain sort of empathy here, extending ourselves to another entity and imagining that it has intent. But there’s also a lack of empathy, because we assume that the entity is just like us… but also a chicken.
The challenge kicks in when our predictions of agency and intent don’t match up with what happens next.
AI certainly seems like it has earned both a design and an intentional stance from us. Even AI researchers treat their interactions with a working LLM as if they’re talking to a real person, perhaps a little unevenly balanced, but a person nonetheless.
The intentional stance brings rights and responsibilities, though. We don’t treat infants as though they want something the way we might, which makes it easier to live with their crying. Successful dog trainers don’t imagine that dogs are humans with four legs–they boil down behavior to inputs and outputs, and use operant conditioning, not reasoning, to change behavior.
Every day, millions of people are joining the early adopters who are giving AI systems the benefit of the doubt, a stance of intent and agency. But it’s an illusion, and the AI isn’t ready for rights and can’t take responsibility.
The collision between what we believe and what will happen is going to be significant, and we’re not even sure how to talk about it.
The intentional stance is often useful, but it’s not always accurate. When it stops being useful, we need to use a different model for how to understand and what to expect.
Any fully open system of digital communication will corrode over time. Bad messages will crowd out the good ones.
The new normal: Someone finds a database of every residential property, then another of cell phones. An AI is trained to call every homeowner, every day, asking if they’re thinking of selling their home. Millions of calls an hour. The leads (one out of 40,000 calls, perhaps) are sold to real estate brokers.
Multiply this by 500 different hustlers in a dozen industries, and now the open nature of the phone is gone forever.
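A quick back-of-the-envelope sketch of the scenario above, in Python. The figures are the post’s own illustrative numbers, and the 2 million calls an hour is an assumed stand-in for “millions”:

```python
# Back-of-the-envelope math for the robocall scenario.
# All figures are illustrative, taken from the text, not measurements.
calls_per_hour = 2_000_000   # "millions of calls an hour" (assumed 2M)
hit_rate = 1 / 40_000        # "one out of 40,000 calls, perhaps"
hustlers = 500               # "500 different hustlers"

leads_per_hour = calls_per_hour * hit_rate
calls_across_hustlers = calls_per_hour * hustlers

print(f"Leads per hour for one operation: {leads_per_hour:.0f}")
print(f"Calls per hour across all operations: {calls_across_hustlers:,}")
```

Even at a one-in-40,000 hit rate, the volume makes the hustle pay, and the side effect is a billion calls an hour landing on everyone else.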
And then texts.
And of course, email. An inbox with 100,000 unread messages in it is no longer a functional tool.
Open systems come with the requirement of self-restraint and humanity. When we replace those with automated attention-stealers chasing a profit margin, the system can no longer remain open.
Permission and trust keep going up in value, precisely as quickly as selfish forces work to succeed without them.
As a search tool, Perplexity is more powerful, more pleasant and more effective.
Instead of being corrupted by invasive ads, surveillance and sneaky dark patterns, it presents you with a simple, footnoted explanation of exactly what you’re looking for. Asked and answered.
And I like that there’s a pro version that we can pay for. This makes us the customer, not the product.
Most of all, the limited scope of the promise gives AI a chance to shine. ChatGPT often comes across as both arrogant and bumbling, because it promises that it can do everything, all at once. Perplexity is simply a smart search partner without the corrosion that racing for more ad dollars will cause. At least for now.
So far, I’d give it five stars. It’s worth checking out.
The fence near the train tracks is a boundary. You can go near it without risk. The electrified third rail, on the other hand, is a limit. If you touch it, you’re done.
Boundaries can give us room to innovate and thrive. Budgets, schedules and specifications all exist to show us where the safe areas are. Sure, go to the edges and challenge the boundaries, that’s why they’re there.
But limits aren’t boundaries. Limits are the end, the danger zone, the thing to avoid.
Some people bristle at boundaries. They’d like to have a project with no budget and no deadline. The problem with living without boundaries is that the limits sneak up on you, and then, boom, it’s over.
We shouldn’t always color inside the lines, but creative work is better when there are lines.
Why do we buy the pitch of the snake oil salesman, the flim-flam man, the con artist, the demagogue or the trickster?
As our modern world becomes more informed and more rational, we see an increase (not the expected decrease) in scams, hustles, and chaos. There are Jokers and Riddlers on every corner, and our inboxes and mailboxes are filled with schemes and manipulations. None of them would succeed if we didn’t support them.
What’s the attraction of these shortcuts?
Human culture is fueled and remade by insurgents. Successful art, innovation, and technology make promises that at first, are hard to distinguish from selfish cons like perpetual motion and pyramid schemes. The emperor has no clothes, but wouldn’t it be nice to believe that he did?
The conflicting forces of complacency, greed, and despair are some of the conditions that can lead us to get tricked.
Complacency is a cousin of boredom. When things feel safe, our ennui might give us an itch for adventure.
Greed is the engine of capitalism and a component of status, and it tends to scale–people with more want even more, and they want it right away and without a lot of effort.
And despair is a lack of hope, a feeling that the existing paths can’t possibly offer what we need.
The good news is that we don’t fall for every scam, and we’ve gotten better at being resilient in the face of broken promises. It’s culture that pushes us to find a shortcut, but it’s also culture that can save us from the next one.
Being surrounded by a community that sees and tells the truth, that establishes a standard for keeping promises and that applauds long-term generative thinking is a resilient way forward. Connection helps us find traction, and forward motion toward better.
We get to choose which community narrative we want to absorb. And we get to choose whether we want to share those ideas with those we lead and connect with. We pick a neighborhood to live in, and we can pick a culture to be part of.
No whining, no shortcuts, no hustles. The long run matters. Honor the rules that protect people who aren’t in your shoes, because you might be them one day.
If that’s the circle you’d like to be part of, join one, start one, talk about it, and don’t stop.
Plenty of creative pundits are decrying the speed and cost of creating pretty good work with an AI. It can often draw, write and compose as well as a mediocre freelancer, sometimes better.
But why were there mediocre freelancers?
The system that pushed us to turn our writing into oatmeal and our art into paint by numbers was here long before OpenAI showed up.
When the bar is raised, it challenges each of us to do what we already had the power to do–exceed the minimum.
June 12, 2024