Consequences, Corporate Silence, Launching Pandora, AI Panic, Climate Fantasies, Slush
Conversations of the Week: February 20, 2026
Consequences!
I woke up to the cheering news that Andrew Mountbatten (the Andrew formerly known as Prince) has been arrested. On his birthday! On suspicion of misconduct while in public office tied to his relations with Epstein. Everyone I know is delighted.
In the US, as it approaches its 250th birthday, we are less keen on consequences for the powerful. But, nonetheless, they are starting to unfold, at least for some.
Hollywood agent Casey Wasserman and Goldman Sachs general counsel Kathryn Ruemmler have both resigned. So has Kimbal Musk, from the board of Burning Man (who knew there was such a thing?).
Larry Summers and Leon Black are out of Harvard and Apollo respectively, old news at this point.
Hanging on? Bill Gates, Les Wexner, Howard Lutnick, Steve Tisch.
Signs of accountability in Dubai, too. Sultan Ahmed bin Sulayem is out at DP World.
Can we hope that this is just getting going? I feel like this is going to be slow, festering, and inexorable.
Corporate silence. For everything else, there’s Mastercard.
A Bloomberg piece on corporate silence, the death of corporate citizenship, and what it might take to change this was thought-provoking (read my discussion on the same themes from a few weeks ago, here).
At the end of the day, corporations follow public opinion far more than they shape it:
Many American companies, for example, were happy to do business with the Nazis until Japan bombed Pearl Harbor. Lots of major corporations preferred not to weigh in on the Civil Rights Movement until television coverage of the sickening violence visited upon nonviolent protesters throughout the South in the early 1960s forced their hand. It took decades of pressure from workers and consumers before international outrage over the imprisonment of Nelson Mandela persuaded some American companies, including Coca-Cola Co., to stop doing business with apartheid-era South Africa. In 2021 the US was so uniformly aghast after Trump supporters stormed the Capitol on Jan. 6 that even some of the president’s most ardent backers in corporate America publicly called upon him to step aside.
These events are examples of what you might call a corporate state of exception—moments in which the country is so widely united in its outrage that corporations have no choice but to accede to demands to act, or at least to speak, even if their words and their underlying corporate behavior don’t match. The corporate world’s decades of promises on social responsibility were never to be taken at face value. But it was nonetheless useful for society that businesses felt compelled to make them—those promises were, if nothing else, a reflection of what executives believed society required. What corporations’ behavior tells us now couldn’t be clearer: They’ve surveyed the risks and decided their advantage still lies with Trump.
For now.
Mastercard is one consumer-facing brand that is making that calculation. I was quoted in Nancy Levine Stearn’s Impactivize piece on Mastercard as one of the sponsors of Trump’s Freedom 250, a Christian nationalist hijack of America’s birthday. Stearns argues that Mastercard has quite a bit to lose reputationally speaking by donating, given that they are “a consumer-facing company with about 33,000 employees worldwide [and have] made a uniquely robust commitment to inclusivity, sustainability, community and belonging initiatives — the sorts of programs that Trump has waged war against.”
Alison Taylor, Clinical Associate Professor of Business & Society at New York University Stern School of Business, told Impactivize in an interview: “We are paying much more attention to brands that we think of as progressive having betrayed their commitments than we are to car racing companies or tech firms, where we’re not expecting very much.”
I wonder if Deloitte employees are making much of their company’s sponsorship of the same event?
Leadership during authoritarian times
Wednesday’s watch for paid subscribers (free preview of the first 10 minutes) is a conversation with Ron Carucci, author of nine (!!!) books and leader of Navalent, on leadership during authoritarian times.
Ron was a great supporter when I ran Ethical Systems, and has long been a dear friend. In December, Ron published a piece on LinkedIn titled “Historical Echoes: Patterns of Democratic Erosion in Past and Present” that examines how the Trump regime is following a clear authoritarian playbook.
Ron said he had been hesitant to post his thoughts on this for a long time, but eventually decided to go ahead. I knew I wanted to explore this with him. Our conversation then broadens out to include leadership, voice, and hope in 2026. It was a delight!
A transcript of our conversation is available alongside the video.
Follow Ron for updates on his latest articles and perspectives.
A tipping point for AI, the peak of the hype cycle, or both?
You may have read the viral post on AI from Matt Shumer, Something Big Is Happening. Everyone seems to be talking about it. Bloomberg’s Parmy Olson has some admirably clear analysis, and concludes: “AI is trading on vibes and anecdotes.”
Here’s a bit more color:
Of the 4,783 words in Something Big Is Happening, none point to quantifiable data or concrete evidence suggesting AI tools will put millions of white-collar professionals out of work any time soon. It is more testimony than evidence, with anecdotes about Shumer leaving his laptop and coming back to find finished code or a friend’s law firm replacing junior lawyers.
Some critics claim the author has made exaggerated claims in the past about tech, but that is beside the point. A single compelling story about AI has created ripples of worry just when the market has become so narrative-driven that it’s giving investors whiplash. One minute AI is overhyped and the next we’re on the verge of the singularity…
It’s worth retaining a healthy dose of skepticism about the speed of this transformation, and remembering that those who spread the most viral claims about it will likely benefit the most. Anthropic Chief Executive Officer Dario Amodei grabbed headlines when he predicted AI would wipe out half of all entry-level white-collar jobs in the next one-to-five years, while Microsoft’s AI head Mustafa Suleyman took things further last week, saying that “most if not all” professional tasks would be automated within 18 months.
Questionable decisions abound for those who only listen to the rhetoric. A Harvard Business Review survey of more than 1,000 executives found that many had made layoffs in anticipation of what AI would be able to do. Only 2% said they’d cut jobs because of actual AI implementation. Swedish fintech firm Klarna Group Plc had to rehire humans last year after its move to replace 700 customer service staff with AI led to a decline in quality.
To put it more simply, tech people tend to imagine that every problem is tech-shaped. But most problems are human-shaped!
Reader Cecyl Hobbs had an astute comment on the knee-jerk AI-related layoffs:
That HBR finding is the key line: 98% of AI-related layoffs happened in anticipation of capability (or as cover for “over-hiring”), rather than in response to implementation.
That’s your human-shaped problem in the form of governance and leadership, irrespective of AI capabilities. When companies eliminate developmental pathways before verifying that AI can replace the functions those pathways served, it’s tantamount to infrastructure demolition without modeling replacement cost.
The Klarna example shows what happens when you can recover quickly (customer service roles, functioning hiring market). The mistake many organizations are making now is eliminating the roles where people learn to become managers and directors — which will lead to them discovering in 2029 that you can’t hire externally when every board approved the same cuts.
It is worth asking: what succession scenarios did boards model when they approved these restructurings? Most seem to have modeled none.
I was particularly struck that most of the comments fell into one of two camps:
1. AI is a bullshit stochastic parrot/terrible for the environment/a scam.
2. People who don’t agree with Shumer are just Luddites who haven’t used the tech.
In my household, we have been discussing this and experimenting with Claude Code non-stop, and we are in neither camp. I can see that for developers, the new AI models are transformational and amazing, as if your dog suddenly started talking to you. But for anyone who hasn’t spent their entire career trying to communicate with computers, the benefits are real, but still prosaic.
Former BSR colleague Lindsey Andersen shared this useful paper:
Many of your followers might not be super plugged into the AI research scene, so I'm sharing this must read piece from two excellent AI researchers called "AI as Normal Technology." One of their core arguments refutes Shumer's view (which is common among the tech bro set) who as you rightfully say tend not to appreciate the messy human and systems elements of tech adoption: "we think that transformative economic and societal impacts will be slow (on the timescale of decades), making a critical distinction between AI methods, AI applications, and AI adoption, arguing that the three happen at different timescales."
A climate fantasy world
Did you know that the cost of insuring a US home has jumped 69% in six years? The Trump administration has murdered all climate-related regulations, even while insurance companies recognize the intensifying natural disasters from climate change and pass the cost along to consumers. What’s dumbest about this is that it isn’t actually good for business:
Perhaps surprisingly, the regulatory rollback is also likely to be terrible for the oil and gas industry, which has been able to leverage the Clean Air Act to block other state and local litigation, but now can’t. So the American Petroleum Institute is equivocating.
The rollback has not met with resounding support from many of the industries that it would directly affect. Over the past decade, oil majors have begun to face a wave of lawsuits from cities, states, and municipalities holding them accountable for climate damages incurred from extreme weather. A bedrock foundation of the industry’s defense against these lawsuits, ironically, lies in the EPA’s authority to regulate greenhouse gases: Oil companies have argued that states and cities cannot take them to court over climate change, given that the federal government controls climate regulation under the Clean Air Act.
Here’s some reading:
The Insurance Crisis Is About to Get Even Worse
Wired: The Fight Over US Climate Rules Is Just Beginning
I will leave the final word to this Vermont professor:
“I don’t see any plan, any strategy, any end game,” says Pat Parenteau, a professor of environmental law at Vermont Law and Graduate School. “I don’t see anything from this administration, just fuck everything up as much as you can. You can print that.”
Governing Pandora
I had a delightful time this week at the launch of a new book on technology and governance, Governing Pandora, and even got to have a fireside chat with the author, dear friend Andrea Bonime-Blanc. The book is for leaders and boards, and explores how governance needs to transform now that exponential technology is here. We talked about broligarchs, bravery, and agency. Here we are afterwards, with another dear friend, Alice Korngold:
Apart from that, it sure has been February forever. I have been complaining nonstop about how gross New York is right now, and it was nice to see that New York Magazine agrees with me! Dog owners, please give some thought to the aftermath when all this melts!
Have a lovely weekend, and watch your step!
Alison XX