
Dear Friends,

“I used to have students who bragged to me about how fast they wrote their papers,” declared William Deresiewicz in a speech he delivered at West Point in 2009. “I would tell them that the great German novelist Thomas Mann said that a writer is someone for whom writing is more difficult than it is for other people. The best writers write much more slowly than everyone else, and the better they are, the slower they write. James Joyce wrote Ulysses, the greatest novel of the 20th century, at the rate of about a hundred words a day—half the length of the selection I read you earlier from Heart of Darkness—for seven years. T. S. Eliot, one of the greatest poets our country has ever produced, wrote about 150 pages of poetry over the course of his entire 25-year career. That’s half a page a month. So it is with any other form of thought. You do your best thinking by slowing down and concentrating.”

The speech, which is brilliant, is called “Solitude and Leadership,” and was reprinted in The American Scholar. His point was that real ideas, and truly creative thoughts, require space and time. Our first thoughts usually aren’t original or even particularly interesting. He delivered it at the dawn of the social media and multitasking age, as the Iraq War dragged toward its end. He wanted the next generation of military leaders to develop the cognitive skills that would help them avoid future wars of folly.

The speech popped to mind twice this week. The first was while I was reading a lovely New York Review of Books article about the physicist Luis Alvarez, one of the most prolific inventors of the 20th century. As a child, according to the piece, “Alvarez’s father had told him to ‘think crazy’ in the afternoons, and once a week Alvarez would sit for two hours in his living room and close his eyes and think of problems to solve. A colleague said of him that he might have a hundred ideas in a day, of which ‘fifty were probably useless, another twenty-five too difficult to do, and among the remaining twenty-five one or two would be worth a Nobel Prize.’”

The second time it popped to mind came on Friday, when I was discussing AI and the risks to our cognition with Azeem Azhar, Nita Farahany, Eric Topol, and Rohit Krishnan. We’re all enthusiastic about what AI can do, and Rohit and Azeem are running wild experiments with it. But we’re also worried about the cognitive offloading that Nita, in particular, warns about. If we use AI to make medical choices, or to write our emails, or to prepare our speeches, will we get worse at all those things? And then what happens when we cross the line where more than half of what we read on the internet is created by AI and not by humans? One image that will stick with me: while Azeem has written over 100,000 lines of code with his agents this year, he still writes his essays out longhand with a fountain pen.

The best podcast I listened to this week was The Vergecast, in which Nilay Patel, who is good on everything but especially on questions of tech and the law, blasts the speech police at the FCC for the threats that persuaded CBS to censor Stephen Colbert. You should definitely watch Colbert’s interview with James Talarico, which he was allowed to do only on YouTube, since it’s not regulated by the FCC. Then read Elaine Godfrey’s profile of the aspiring Texas senator. (“You need to run for President,” Joe Rogan told him. “We need someone who’s actually a good person.”) And here’s Frederick Douglass’s remarkable speech from 1860 on why free speech matters in America. “Liberty is meaningless where the right to utter one’s thoughts and opinions has ceased to exist. That, of all rights, is the dread of tyrants. It is the right which they first of all strike down. They know its power. Thrones, dominions, principalities, and powers, founded in injustice and wrong, are sure to tremble, if men are allowed to reason of righteousness, temperance, and of a judgment to come in their presence.”

I very much enjoyed this profile of Tom Junod, which should inspire you both to buy his forthcoming book about his father and to revisit some of his classic essays, like The Falling Man. And perhaps because my feed was so filled with stories about The Washington Post, I stumbled upon this brilliant 2005 lecture that Jeff Bezos gave about innovation.

My favorite book I’ve read this month is Maintenance: Of Everything by the wise and indefatigable Stewart Brand, who is probably best known as the founder of the Whole Earth Catalog. The book is a brief history of the world, organized around different ideas for how our things—our motorcycles, boats, and armies—are maintained. And my favorite magazine profile is this stunning story by Robert F. Worth about Bashar al-Assad and the collapse of his government in Syria. Assad, it turns out, was as incompetent as he was savage. And he was also utterly addicted to Candy Crush. He probably could have used some time for deep thinking.

The Most Interesting Things in Tech

The biggest debate in tech this week has probably been what AI is going to do to jobs. Andrew Yang thinks we’re cooked. Annie Lowrey has a smart piece pointing out that a white-collar job apocalypse will hurt the economy in ways we don’t fully understand. My favorite AI economist, Erik Brynjolfsson, suggests that we might be finally seeing an AI productivity boost. And Josh Tyrangiel has a superb, thorough look at all sides of the debate. It’s also the most delightfully readable long essay you’ll find on AI. “After a rollout that could have been orchestrated by H. P. Lovecraft—’We are summoning the demon,’ Elon Musk warned in a typical early pronouncement—the AI industry has pivoted from the language of nightmares to the stuff of comas. Driving innovation. Accelerating transformation. Reimagining workflows. It’s the first time in history that humans have invented something genuinely miraculous and then rushed to dress it in a fleece vest.”

Meanwhile, there’s been much speculation lately around the idea of putting AI data centers in space. It sounds crazy and it might be. But it’s not entirely impossible. And a paper from researchers at Microsoft, the University of Southern California, and the University of Pennsylvania proposes a smart new training method for AI to learn from its mistakes. They call it “experiential reinforcement learning,” and it mimics the way we learn from our mistakes by introducing a self-reflection stage into the evaluation process. This is a good reminder that there are lots of ways to improve AI that don’t require big increases in compute—or the use of space. 
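The newsletter doesn’t reproduce the paper’s actual algorithm, but the core loop it describes—attempt a task, evaluate the result, generate a self-reflection on what went wrong, and carry that lesson into the next attempt—can be sketched in a few lines. This is a toy illustration of the general self-reflection idea, not the researchers’ method; every function name here is hypothetical.

```python
# Toy sketch of a self-reflection training loop: attempt -> evaluate ->
# reflect -> retry, with reflections carried across episodes.
# All names are illustrative stand-ins, not from the paper.

def attempt(task, reflections):
    # Stand-in "policy": start from a naive guess, then apply whatever
    # corrections earlier reflections suggested.
    guess = task["naive_answer"]
    for note in reflections:
        guess += note["correction"]
    return guess

def evaluate(task, answer):
    # Scalar error signal; zero means the episode succeeded.
    return task["target"] - answer

def reflect(error):
    # The self-reflection stage: turn the raw error into a reusable
    # lesson instead of discarding the failed episode.
    return {"lesson": f"answer was off by {error}", "correction": error}

def train(task, max_episodes=5):
    reflections = []  # experience memory shared across episodes
    for episode in range(max_episodes):
        answer = attempt(task, reflections)
        error = evaluate(task, answer)
        if error == 0:
            return answer, episode, reflections
        reflections.append(reflect(error))
    return answer, max_episodes, reflections

task = {"naive_answer": 7, "target": 12}
answer, episodes, notes = train(task)
print(answer, episodes, len(notes))  # converges after one reflection
```

The point of the sketch is the shape of the loop: the extra signal comes from re-using failures, not from more compute per attempt—which is why approaches like this are cited as ways to improve models without scaling up training runs.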

Finally, there’s a fascinating new study on dream hacking, which might allow us to work through problems while we sleep. It’s a very cool idea! It’s also kind of terrifying when you think about how it might be used by bad actors. And here’s a step-by-step video I did on how to keep AI models from training on your data. Just because they help you doesn’t mean that you need to help them by letting them gorge on your medical records.

Events and Podcasts

Here are some recent podcasts I’ve been a guest on and upcoming events I’m taking part in.

Thank you for reading and have a wonderful, and reflective, weekend! Make sure to spend two hours sitting in total solitude thinking of the wildest ideas you can.

Cheers, N

I hope you enjoy this newsletter. Please continue to forward it to anyone else who might enjoy it. They can sign up here.
