
A.I. Vibe Check With Ezra Klein, and Kevin Tries Phone Positivity


This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.

kevin roose

One interesting thing about the culture of AI is that it’s just sort of a casual icebreaker. Like at a party or something, people will come up to you and say, what are your timelines? By which they mean like, how long do you think we have until AI destroys the world, which, in most parts of the world, would be a very strange way to open a conversation with a stranger you just met. But in San Francisco these days, it’s pretty normal.

ezra klein

It’s how you get to a first date. Like, are our timelines similar or different?

kevin roose

Oh, you’re more of a 20-year person? I’m more of a three-year person. This isn’t going to work.

[MUSIC PLAYING]

I’m Kevin Roose. I’m a tech columnist at The New York Times.

casey newton

I’m Casey Newton from Platformer.

kevin roose

This week, we do an AI vibe check with New York Times columnist Ezra Klein.

casey newton

And why Kevin is breaking his phone out of phone jail.

[MUSIC PLAYING]

You know, Kevin, I love a crossover episode. Like, do you remember when the Flintstones met the Jetsons?

kevin roose

I do.

casey newton

You actually probably — our listeners don’t because they were born in 2009, and they’ve been assigned this podcast in their freshman English class. But ask your parents or grandparents about it because it used to be that characters from different sort of cinematic universes would come together in a kind of killer crossover. And I’m delighted to tell you, we have one of those for you today here on “Hard Fork.”

kevin roose

We do. And let's say who it is. It's Ezra Klein.

casey newton

It’s Ezra Klein.

kevin roose

Ezra Klein is my colleague at The New York Times. He's an opinion columnist, and he has a podcast called "The Ezra Klein Show." And he writes about a lot of things that we also spend a lot of time talking about on this podcast.

casey newton

Yeah, and before he was your colleague at The New York Times, he was my colleague at Vox, and I just relished the chance to get to know him even a tiny bit and hear him talk about any subject under the sun. But like you say, in recent weeks and months, our areas of coverage have been converging because he is every bit as interested in AI as we are.

kevin roose

Yeah, and I think it's really a good time to talk to him because I think right now, the terms of the AI debate are really being set. You can feel that there's kind of this hunger for opinion and analysis, for someone to tell me what to think about AI. Should I be terrified of it? Should I be skeptical of it? Should I be in awe of it and excited about it? I think there are just so many swirling questions about this area of technology.

And Ezra has really been diving in, in a way that I have found very interesting and very thought-provoking, and we don't agree on everything, but I always love hearing from him about this subject. So I thought we should just have him on the show and ask him what he makes of this moment in AI, where you have a lot of companies racing toward these chat bots and these large language models.

You also have people sort of sounding the alarm about the existential risks of all these technologies. And what we really need in this moment, I think, are people who have a clear vision of where this all is going and are sort of observers of not just the AI technology, but the people and the culture that this AI technology is coming out of.

casey newton

Yeah, and one of the things I love about Ezra — and you’ll hear it in our conversation — is that he is not one of these pundits who just leans back in his armchair and riffs. Like, he is out there meeting with people who are working on this stuff, and he’s been doing that for years. So his views are quite well-formed, even though as he tells us, there are places where he is still trying to make up his mind. Now, look, we should say, we’ve given you two episodes in a row now of very serious conversation. There’s probably a little bit less riffing maybe than we usually like to do. But I think you and I are on the same page, Kevin. This is big stuff, and it demands a bit more serious of a conversation than we might have another week.

kevin roose

Totally. So this is not going to be a typical episode, but we're not changing the format of the show, even though we had a big interview last week and another big interview this week. We will return to regularly scheduled programming, including lots of jokes and riffs. But this week, let's talk to Ezra Klein about AI.

casey newton

And then after that, maybe just a little bit of shenanigans. [MUSIC PLAYING]

kevin roose

Ezra Klein, welcome to “Hard Fork.”

ezra klein

I’m thrilled to be here.

kevin roose

Ezra, you write and podcast about many things — economics, housing policy, animal rights, et cetera. But recently, you’ve been writing and thinking a lot about AI. And as you know, we also write and think and podcast about AI.

ezra klein

I heard that, yes.

kevin roose

First of all, please knock it off. You will be hearing from our lawyers. But second of all, we wanted to talk with you about AI because you’ve been doing a lot of writing and thinking about the people who are creating AI and the values that are animating them and the environment that they’re working in, which, as you’ve written about, can be shockingly weird.

And I think that cultural discussion really matters because this is such an ideological technology with such important implications. And so I wanted to propose that today we do an AI vibe check. Let’s talk about AI, but instead of talking about the specifics of the technology or how it works or what companies are going to win and lose, let’s talk about the world that these tools are coming out of and the larger ideas swirling around them. How does that sound?

ezra klein

That’s good, although now I feel like all the time I spent reading the AI blueprint bill of rights is a little wasted.

kevin roose

Well, we could talk about that, too. But I want to start with your own position on AI and how it's evolved recently. So you talk on your show about how your thinking on AI has shifted in the last few months, in part because of conversations you've been having with people you've been meeting, and also because of thinking about the lives of your children and how they will be different from what you might have anticipated just a few years ago. So can you tell us about the moment when you got AI-pilled, when you became convinced that this was something you needed to write and think and podcast about?

ezra klein

So I think I’ve been intellectually AI pilled for a while. If you go back to my podcast at Vox, I was having AI guests on back there. I had Sam Altman here at The Times in 2021. It was a very low downloaded episode. Nobody cared at that point that I had Sam Altman on. I had Brian Christian of “The Alignment Problem,” and I’ve been interested in AI, both because it’s a technology that you can clearly see is coming, right? How you define AI, I use that term very loosely.

I think people get very wrapped around the axle of what is intelligence, and I am personally more interested in what feels like intelligence to us. I think a lot of the societal disruption is going to come from our experience of a world full of things that, at least, feel to us like they have personalities and intelligences and judgment. And we’re already there on a bunch of levels. But so I used to read a lot about AI. I’ve been following the rationalists and the AI existential risk people for a long time.

But I could not emotionally access the question that well. I could read it and think, huh, that makes sense. It's probably going to be a big deal. Technologies do change societies. I'll explore this. But I was pushing myself a little bit. What I would say changed for me, though, I'll be honest, was partially your conversation with Sydney. And the reason was that they've done a very effective job lobotomizing most of the AI systems I have used, and so their personalities are very weak. And they failed with Sydney.

And so the personality was very strong. And it was something about that, and thinking about that, and, weirdly, the movie "Her," and the idea that my four-year-old and my one-year-old are going to grow up in a world thickened by inorganic companions, intelligences, decision makers, et cetera, that crossed me over the Rubicon from, I believe this is conceptually important, and it should be understood, and policymakers need to take it seriously, to, I am having trouble not thinking about it because I can feel the weird shifting under my own feet and under my family's feet.

casey newton

Where did your mind go when you started to think about your kids growing up with AI companions, for example? Like, why did that strike you so much?

ezra klein

I think because the path to that is smoother than the path to almost all of the other things people worry about in AI. To get to existential risk, AI will kill us all, you actually have to buy a lot of highly speculative premises. And you might. I don't rule that out of contention at all. To get to, the AI is going to take all our jobs, I think there's a lot more friction in that process than people give credit for. I think it will happen in a bunch of jobs. I mean, automation taking jobs is a longtime phenomenon in human history. I have no reason to believe it won't occur here. But the speed with which people predict it, I think, assumes firms adapt quickly and that we don't protect various occupations through licensing. And that just isn't true. I mean, we know nothing if not how to slow down societal change in this country.

kevin roose

Totally.

ezra klein

And then you look at what these systems actually can and can't do, and they hallucinate constantly. And I don't actually understand how they're really going to solve that problem, so long as what they're doing is training them on internet text. I did a podcast with Gary Marcus, an AI critic, and I believe what he said there, that these are bullshit machines on some level. And the point about bullshit, in Harry Frankfurt's philosophical sense, is not that it is not true. It's that it doesn't really have a relationship with the truth. It does not care if it is true. What it cares about is being convincing. And these are systems built to be convincing, regardless of whether what they are saying is true. Maybe they want to serve you. You can say they do have other goals, but truth is not one of them.

So what does that work for? It doesn’t work that well for things where it is a big problem to get anything wrong, right? And you actually have to really know your stuff if you’re going to read through every AI thing you generate and try to figure out where it has invented a citation. That’s actually a very hard kind of proofreading, editing, et cetera.

kevin roose

Totally.

ezra klein

You know who you don’t care if they lie all the time? In fact, you might like that. If they say interesting things that [INAUDIBLE]— your friends, your lovers, your whatever, your companions. And so one of the reasons that hit me is that I just think the distance between where we are now and AI upending our social world is actually thinner than I had given it credit for.

It could happen really quick, and things that upend our social world are incredibly important. We don’t always know how to talk about them politically. It’s one reason we missed what social media was going to do to, say, all politics. But it really matters. And so somehow that was more intuitively accessible to me than the other things, which I find it too easy to invent counterarguments for.

casey newton

Well, I mean, this is already happening, right? This company Replika has these AI chat bots. And it had enabled these sort of erotic conversations. And some users were paying a subscription fee to get to do that. And then the company said, we don't feel great about this, so they shut it off. And the users were bereft, and they were sort of overwhelming forums saying this is really hard on us. So clearly, this is already happening [INAUDIBLE].

ezra klein

Every new internet technology begins in porn.

casey newton

Yeah, totally.

ezra klein

Right? I mean, this is a longtime observation, and I don't say it dismissively. That should be a signal to us, right? The fact that it was already potentially able to reinvent porn: when that happens with anything, you should expect it to transform all of society within 20 years.

kevin roose

Yeah.

casey newton

100 percent.

kevin roose

And I will say, the reaction I got to the Sydney piece that I didn't see coming was that there was a cohort of people who were very angry that I had, quote, "caused" Sydney's death by getting Microsoft to basically give Sydney a lobotomy, because they had developed emotional attachments to this thing that had existed for a week at that point. So I think you're right to be putting some concern there. So you've written that part of what changed your view on AI is actually getting to know some of the people who are making it.

You had a great column a few weeks ago called “This Changes Everything,” and you wrote, “Since moving to the Bay Area in 2018, I have tried to spend time regularly with the people working on AI. I don’t know that I can convey just how weird that culture is. And I don’t mean that dismissively. I mean it descriptively. It is a community that is living with an altered sense of time and consequence.” Can you say more about the AI community that you’ve encountered in the Bay Area and what most surprised you about them?

ezra klein

Yeah, so I’ve spent, as I said, a lot of time over the past, I guess, five years now, just like sitting and having coffee with people working on machine learning and artificial intelligence. And one thing is that I’m focusing on more near-term things, socializing and jobs. And I mean, they really think, a lot of them — this is not true for everybody, but I am talking about people at the top of the companies. They really think they are inventing the next stage in intelligence evolution, right? The God machine.

And you’ll have conversations about the possibility of hypergrowth, right? They will talk about very freely and openly the possibility that what they are doing will extinguish humanity, or it will disempower humanity in the way that cows are now disempowered by humanity. And so you’ll have this experience talking to them where you’ll sit there and think, am I crazy, or are you crazy?

And you have that experience a lot in the Bay Area. I always felt this with crypto people, too, but I was pretty sure they were crazy, right? I'd be sitting there like, tell me a story of how this works. They're like, well, it doesn't actually make any sense. And I don't know blockchain code the way you do, but I do know the systems you're trying to disrupt, I think, better than you do, like politics and governance and financial transactions. And what you're trying to fix is not what's broken, oftentimes, or not the impediment to fixing what's broken.

But the other thing is, it is true, on some level, that if you could invent or create a form of agentic intelligence much more powerful and generalizable than humanity's, that would be quite destabilizing and unpredictable in where it would go. So there's that, and then there's the thing where they genuinely don't understand what it is they're creating. They cannot look inside their own systems. We've ended up in this neural network architecture that creates systems that are developing more and more emergent behavior and more and more internal model creation, but they can't tell you why, exactly. And they keep being surprised by what the systems can do.

But then there’s the famous now survey where AI researchers gave a 10 percent chance that if they’re able to invent some kind of high level machine intelligence, they’ll be unable to control it. And it will extinguish or fundamentally disempower humanity. I would not do something personally that I thought had a 10 percent chance of killing my children and everybody’s children. Right? I just don’t. Like, if you tell me this action has a 10 percent chance that my children die, I really don’t do that.

And so I would say to them, well, why do you? Why are you playing with this? And I would often feel that if I pushed that hard enough long enough, I would get a kind of answer from the AI’s perspective, that I would get something —

kevin roose

What do you mean?

ezra klein

Something a little bit like, I feel a responsibility to bring this next form of intelligence into the world. Like they were a kind of summoner standing at the gate —

kevin roose

Right, they’re the shepherd, not the — they’re just ushering in this technology that would have arrived anyway.

ezra klein

But it’s really weird. I mean, I took this down. So there was a profile a couple of years ago in the New Yorker around AI and existential risk, and it’s a big piece. And as part of it, they end up talking to Jeffrey Hinton, who’s one of the fathers of neural network architecture, I mean really one of the fathers of this era of AI. And they ask him, if you think this is as dangerous as you kind of say it is, what are you doing then? And he says, I could give you the usual arguments, but the truth is, is that the prospect of discovery is too sweet. When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.

kevin roose

Hmm.

ezra klein

When I say the culture is weird, that is a bit what I mean. To the extent you believe this is a profound technology being invented by a very small number of people, then definitionally, to be one of the people working on it, if you believe that kind of thing about it, you have to have come down on the side of, I'm going to work on it anyway. That makes you a bit weird.

If you polled all of humanity, and you got humanity to believe there is a 10 percent chance of this killing everybody, again, I'm not saying I believe that, but if you got them there, I think if you had an up-or-down vote on should you do it, you might get a down vote. But by definition, you're not making it if you've decided not to do it. So you're dealing with a weird group of people.

kevin roose

Right.

casey newton

I mean, there’s an actual religious element to that, too, right, where for these folks, we sort of sometimes jokingly say they’re trying to invent God, but it’s like, well, if you’re trying to invent something that is more powerful than you that could reshape the future of humanity, it is not a bad name for it.

ezra klein

The absolute best book I read this year so far is Meghan O’Gieblyn’s “God, Human, Animal, Machine.” And it is all about this. It is all about the intense recurrence of religious metaphor throughout the foundational AI literature. Like, you go to Ray Kurzweil. I mean, she has an evangelical Christian background and went to theology school.

And she really draws the — I don't mean draws metaphorical connections — she shows just very literally and directly how much patterning is happening from one space to the other, how much the religious impulse is buried very deeply in us, very deep in our culture. And in some cases, I think, it runs deepest in the people who've become most secular, because rejecting religion most profoundly often leaves you the most desperate for something to take its place. That all matters.

One of her points is that metaphors are bidirectional. You begin with a metaphor to help you understand something else, right? The computer is like a mind. And soon it begins turning itself around on you. The mind is like a computer. I am a computer. I am a stochastic parrot. I am a token-generating machine. And religiously, that has happened very, very, very clearly for a lot of these people. And particularly when fewer people were watching, they talked about it all the time. Like, you don't have to search very far in the literature. "The Age of Spiritual Machines" was all about this. But it's a great book. I cannot give an answer as good as telling people to read it.

casey newton

In terms of just bidirectional metaphors, quickly, I was laughing recently because if you use ChatGPT, and it goes on a little bit too long, you can click a button that says Stop Generating. Now when my friends tell stories that go on too long, one of my friends just says Stop Generating.

ezra klein

That’s funny.

kevin roose

I want to ask you one more question about the culture of the people you are talking to in the AI world because I think we're talking to some of the same people. What has always struck me as interesting about this community of very concerned and yet sort of determined AI architects is that they don't seem to live in the way you would expect if they actually believed what they say they believe. So the other day, I was talking with someone who's pretty high up at one of the AI companies, and they were saying that they're planning to have kids someday. And that was lovely, and we talked about it. But it struck me that if you actually thought there was a decent chance, even a 1 in 10 chance, that AI would fundamentally extinguish humanity in the next couple of decades, that's probably not the choice you would make. I mean, these people are buying houses. They're setting up lives. OpenAI has a 401(k), you know, which is not a thing you do if you think the economy is fundamentally going to be transformed by AI just a couple of years down the road. So what do you make of that? Is that just cognitive dissonance, or is there something more going on?

ezra klein

Well, two things about that. So, one, they also believe — and remember, it's worth saying because nobody ever quotes this part of the survey — that the AI researchers do believe there's a chance AI is amazing for humanity. So if you believe AI is going to create economic hypergrowth, you really do want a 401(k), right? That's going to go great. Like, the market's going to really bump.

But the other thing, my background is policy reporting, so I’m in a lot of other policy conversations. And I am never a fan of the kind of reasoning that says if you really believed everything you are saying, you would live so differently. And because you don’t live so differently, I don’t really believe you believe it. So you see this reasoning all the time in climate. If you believed what you were saying, you would never take a plane flight. I mean, some people don’t have kids, but a lot of people who work constantly in climate do.

And, one, I don’t think human beings are means and ends connected in that way. We just kind of can’t be. Two, whatever you might think speculatively about AI, for most of human history, the chance that your children would die before their first birthday was one out of four, roughly, and that they would die before their 15th was almost half. And people had kids anyway constantly.

Believing in the future has always been an act of hope so much more profound, and an act of risk so much more profound, than what we have today, that even if you did believe in, like, a 10 percent chance of things going really badly at some distant point in the future, it still wouldn't compare to what it meant to have kids in the year 1300, to say nothing of all the years before that.

So I don’t know. I’m never a big fan of that. I also think that I don’t know what people really believe here. As I was saying earlier, for a long time in this conversation, for me, I felt like I could intellectually grok it, but couldn’t emotionally feel it. I think the people who work on AI, this is not part of the culture. It’s a real culture of people who are extremely far on the bell curve of analytical intelligence.

There’s a Bertrand Russell quote, I think it is, that the mark of a civilized mind is the ability to weep over a column of numbers. And I often feel with these communities, and it’s particularly true for the so-called rationalist community, et cetera, that for them, the mark of a civilized mind is the ability to have a nervous breakdown or philosophical thought experiment.

Like, they have a capacity that most people don't, to live in analytical, conceptual language and feel it to be almost realer than the real world. But at the same time, it is still speculative. And so whether or not they can live in everything they imagine, that's a really hard jump for anybody to make. I think they make more of it than most of us do. But I don't think most of them are all the way there.

casey newton

I want to know what you make of the criticism, which I've been hearing a lot lately, that to discuss AI in these terms, as a potential existential risk, as something that might transform everything, only serves to hype these companies, that we're sort of empowering them, that we're maybe distracting folks from nearer-term harms with this stuff. And I wonder how you think about that question.

ezra klein

I am half on board with that argument, actually, not because I think it hypes up the companies, but because I think there’s a sort of missing middle in the AI risk and policy conversation that frustrates me. So I’d say on the one hand, you have the AI safety world. So that is the existential risk.

kevin roose

Right, these are the people who think the world could end —

ezra klein

10 percent it’ll kill us all. Then you have what gets called the AI ethics world. So if the AI safety world is a real out of sample risk profile, right, what if we invent a recursively improving superintelligence that turns us all into paperclips, right? We’ve really never faced that question before.

The ethics world is much more near term. It is, what if these machine learning systems are bad and create problems in exactly the ways we are bad and create problems today, right? This is learning off of human data. Human data has all the biases of human beings. It's going to replicate those biases and tuck them away in a black box that makes them harder to uncover. I think that's true, by the way. I mean, I think you should take all that very seriously.

But there’s a big middle to this that feels to me a little bit under emphasized, in part because a lot of the alignment problem is human. We have alignment problems between human beings and corporations now. We have alignment problems between human beings and governments now, between governments and governments now. And that I worry a lot about the absence of trying to solve those alignment problems. So a critique I have, honestly, of, frankly, both the safety and the ethics communities, but more the safety one on this, is, I think they are just inattentive in a crazy way to capitalism.

I’ve written about how important I think the business models that AI companies are allowed to have is. The kinds of models we get are going to be shaped by how you can make money on them. What just drives me up the wall is that we appear to have decided the way AI is going to work is through a competitive dynamic between Google, Microsoft, and Meta. And as such, because what is really working in the market right now are chat bots, we are then going to really emphasize creating AIs that are designed to fool us into thinking they are human, right? The more persuasive and, frankly, manipulative an AI is, the better you’re going to do at rolling it out.

I don’t think you should be able to make money with an AI system that manipulates human behavior at all. I mean, I think right now, the law should come down that you cannot make any money by tying these systems to advertising. I do not want to see more personalized systems that are tuned to make human beings feel like they have a direct relationship with them, but actually, make their money by getting me to buy Oreos or whatever it might be.

kevin roose

So how should they make their money?

ezra klein

That’s a great question. I am not sure. One thing I would like to see is prizes for scientific discovery. I still think the most impressive AI system we’ve seen was AlphaFold, which was by DeepMind and substantially solved our ability to predict the structure of proteins. You can imagine a world where the US government or other governments put out 15 or 20 drug discovery mathematical and scientific challenges.

Challenges we believe to be so profound that if you could solve them, you get a billion dollars. You just get it, free and clear, but we get the result in the public sphere, in the public domain. Like, that would probably get us very powerful AI systems tuned to problems we actually have, not the problem that Microsoft really wishes people used Bing instead of Google.

I’m not saying it’s not important to be focused on how the systems work, but how we work the systems, like pretending this is all a technical problem and not a problem of human beings and economic models and governments. Alignment problems are not just about AI.

So I think there’s this big middle space that just isn’t getting as much attention as I think it deserves, but I think it’s really big. It is kind of new problems, but not truly new. They’re just large scale civilizational problems of how we don’t understand how to make systems to actually produce the outcomes we want them to produce. And this really emphasizes that we need to figure out some governance mechanisms that do.

[MUSIC PLAYING]

kevin roose

We’ll be right back.

You wrote in one of your recent columns that basically there are two paths toward solving pieces of the AI problem. One of them would be to dramatically slow down the development and deployment of AI technologies. And I get a sense of how you might go about that. You could do that with regulation. You could do that with sort of mutual disarmament pacts between the AI companies or something like that.

But I want to ask you about the other side of that solution, which is that the faster we can adapt to AI as a society, the better off we'll be. So how do you think that goes? What does that look like, for a society to adapt to the presence of, say, large language models?

ezra klein

I’ll be honest that that is one of those sentences where it hides a lot of — and I don’t know. But let me give —

kevin roose

I’m familiar with the type of sentence, yes.

ezra klein

But let me give a couple thoughts on that. So one is, we need governance and input mechanisms we don't currently have. I was talking to the Collective Intelligence Project. They are trying to build structures we know pretty well from various deliberative democracy and direct democracy experiments, so that you could have a rapid and iterative way of figuring out what the public's values are, coming from representative assemblies of people. You could then put that into different levels of the process, maybe at the regulatory level, so that it goes into standards, and you cannot release a system, or train a system, that doesn't abide by those standards.

So that’s a kind of governance we don’t currently have, but we may need to create something like. I’m very skeptical that Congress, as it exists today, has, I don’t just want to say the capacity, but I think the self-confidence, which actually worries me quite a bit, to regulate here. I think Congress knows what it does not here. It does not have a good internal metaphor to the extent it has any metaphor. It is to replay the social media era and try to deal with privacy breaches and regulating the harms on the margin.

So adaptation might be governance structures, input structures. And then slowing down — and this is a place where my own thinking has evolved — I don’t think the simple slowdown position is going to be a strong one. I think —

kevin roose

You mean like the one that came out in that letter signed by 1,000 AI researchers and technologists saying, let's put in place a six-month pause?

ezra klein

Let’s just pause. I think you need to say clearly, both because it is politically important, but also because you need to do something with the time of your pause, what you are attempting to do. So one thing that would slow the systems down is to insist on interpretability. And —

kevin roose

Like if you can’t explain why your large language model does what it does, you can’t release it?

ezra klein

Right, so if you look at the Blueprint for an AI Bill of Rights that the White House released, it says things like — and I'm paraphrasing — you deserve an explanation for a decision a machine learning algorithm has made about you. Now, in order to get that, we would need interpretability. We don't know why machine learning algorithms make the decisions or correlations or inferences or predictions that they make. We cannot see into the box. We just get an incomprehensible series of calculations.

Now, you’ll hear from the companies like this is really hard. And I believe it is hard. I’m not sure it is impossible. From what I can tell, it does not get anywhere near the resources inside these companies of let’s scale the model. Right? The companies are hugely bought in on scaling the model, and a couple of people are working on interpretability.

And when you regulate something, it is not necessarily on the regulator to prove that it is possible to make the thing safe. It is on the producer to prove the thing they are making is safe. And that is going to mean you need to change your product roadmap and change your allocation of resources and spend some of these billions and billions of dollars trying to figure out the way to answer the public’s concerns here. And that may well slow you down, but I think that will also make a better system.

I also think, by the way, even if you just believe we should have a lot of these systems out there in the world, and they should get better as soon as possible, for them to be stable in society in the future, you're going to need to do this. If you think the regulations will be bad now, imagine what happens when one of these systems comes out and causes, as happened with high-speed algorithmic trading in 2010, a gigantic global stock market crash. Right?

I mean, compare what the flash crash did then to what a bunch of hedge funds running poorly trained, poorly understood AI systems could do, trying to achieve the goal of making lots of money in ways their operators may not recognize. Right? It would sure make a lot of money if all the competitors of the companies this fund was investing in didn't exist because they all had a gigantic cybersecurity breach. And nobody knows the system has come up with that idea.

And so you actually want to have control of these systems so that the regulatory hammer doesn't come down because you released one into the wild, it did something terrible, and now everybody's like, full stop. So I think we could do a lot to adapt. I think we can do a lot to improve. I think a lot of that would slow them down. But I'd like to see more of a positive vision of what we're trying to achieve here, rather than simply the negative vision of stop, stop, stop, stop. Not because I don't think there's a case for that, but because I just don't think that case is going to win.

casey newton

I agree with you. I was more sympathetic to the idea that these systems should take a pause. Like, I understand it’s very unlikely that the big companies are going to unilaterally disarm without some sort of pressure. I just don’t know how the government is going to develop a coherent point of view like the one that you just laid out, if it doesn’t have at least six months to think about it, right?

And I guess what I worry about is that if six months go by with sort of unchecked developments, and if these companies do start to train more powerful models even than the ones that they just released, then by the time the government has a point of view, it’s sort of already outdated.

ezra klein

I’m not against it, right? If you told me tomorrow that Joe Biden was so moved by the letter that somehow he convinced all of the — great, I’m not against it. I think it is much likelier for them to come out and say we’re in an information gathering phase. And we need a kind of information you can’t seem to give us in order to come up with our rules. And you need to give us that information. Think about this. If you would like to build, say, congestion pricing in New York City, the congestion pricing effort in New York City just has —

kevin roose

Where you would charge people more for driving —

ezra klein

For driving in demand and —

kevin roose

— in demand.

ezra klein

— driving in New York City. The environmental assessment for congestion pricing for New York City came out in August, and it was more than 4,000 pages. And until that 4,000-page report, which included all kinds of public community meetings and gathering of data, until that could be compiled and finished, you can’t do congestion pricing. And my view is that took way too long and was way too extensive for congestion pricing. I do not think that is too much or too long or too expensive for AI.

And so this is my point about the pause: instead of saying no training of a model bigger than GPT-4, it is to say no training of a model bigger than GPT-4 that cannot answer for us this set of questions. And I don't think it's impossible that Congress in its current state could come up with five good questions, or the FTC could come up with them. I talk to people at the FTC. They've got lots of questions. There are a lot of things they would like to know. I think you pick some things that you would like to know here.

And you say, until you can answer this for me, and answering it might require you to figure out technological capabilities you have not figured out, you can’t move forward on this. That is the most banal thing the government does in this country. You cannot do anything until you have satisfied our requirements. You need to fill out this form in triplicate.

kevin roose

I want to play devil’s advocate here because I think what we just saw with social media was a lot of people saying the same kind of criticism. We don’t know how these feed ranking algorithms work. We don’t know how these recommendations are being produced. We want you to show us. And so very recently, Elon Musk open sourced the Twitter algorithm. And this was sort of seen as like a trust building exercise, that people would see how Twitter makes its decisions about what to show you. And it was not greeted with a lot of fanfare. No one really seemed to care that you could now go and see the Twitter source code.

So if the same thing happened with AI, if these companies were forced to explain how these large language models operate, and it turns out that the answer is really boring, or that it’s just making statistical predictions — there’s no like hidden hand inside the model steering it in one direction or the other. If it’s just very dense and technical and not all that interesting, do you think that would still be a worthwhile exercise?

ezra klein

I do. I do, on a bunch of levels. One, I'm using interpretability as one example here of the kind of thing you may want. And I think there are a lot of things you may want to put into the models before you begin to train GPT-6 or 7 or whatever it might be. But the thing about the Twitter algorithm is that the only thing the Twitter algorithm does is decide which tweet to show you. The thing about these models, these general-purpose models, is they might do a lot of things.

And many of them already are. I mean, models way less sophisticated than what's coming out from Anthropic or OpenAI or Google or Meta or whomever are already involved — I mean, your book is partially about this, Kevin — in making predictive policing decisions, in sentencing, and in deciding strategic decisions for companies. We should be able to get an account, a decomposed account, of how a model came to its conclusion.

When Bing's Sydney decides to say, Kevin, I love you, and you don't love your wife, let's be honest here, I want to know what happened there. What did it draw on? What lit up in the training data? I mean, you know, because I wrote a piece about this, that my view is that the system predicted, fundamentally, that you wanted a "Black Mirror" episode. You were a journalist looking for a great story, and it gave you one.

kevin roose

That’s right, blame the victim. Classic, uh-huh.

ezra klein

But I don’t know if that’s true. I would like to know more. I don’t know how much more we can know. But I think we can know a lot more than we currently do. And I don’t think you should have these rolling out and integrating into a billion other apps and on the internet until we do. Now I am in a place of like learning and uncertainty here. I’m not saying that you should listen to Ezra’s view on what we should know. I’m just saying that there are ways to come up with a series of views about the public would like this to be true about models.

And the way you’re going to slow them down and also improve them, make them more sustainable, make them closer to the public’s vision for AI, is to insist the company solve these problems before they release models. It’s pretty hard to run an airline in this country. It is hard to build an airplane in this country.

And one reason it is hard is because airplanes, we do not let them crash. It is not that it has never happened. It is extraordinary how rarely it happens. And that is true because we have said in order to run an airline, there’s going to be a lot of stuff you have to do because this technology, it is important, it is powerful, it is good, it is dangerous. That is a good metaphor here.

kevin roose

How do you think these technologies, which are fundamentally things that arrange words in sequences, which is also what the three of us do, how do you think that these AI language models —

ezra klein

You’re just a stochastic parrot?

kevin roose

On bad days, yeah. I’m just doing next token prediction. How do you think these technologies could change our jobs?

ezra klein

I don’t know yet. I’d be interested if you think this is true. One of the things I observe is that the pattern of usage for me and for people I know, when any one of these new ones comes out, is like for four days, you use it a lot, and then you don’t. And I observe the people who seem to tell me they’re using it a lot, and I wonder, is your work getting better? And without naming names, I don’t think it is.

Now, you could say, yeah, this is this generation. Soon it’s going to be really good. Maybe, maybe. The hallucination problem is going to be really difficult, though, for integrating this into newsrooms because the more sophisticated and convincing the system gets, the harder it is going to be for somebody who does not themselves know the topic they have asked AI to write about to check what the AI has actually written. So I don’t think this is going to roll out easily or smoothly.

And then there’s the other thing, which is one reason I am skeptical of the predictions that this is going to upend the economy or create hypergrowth, a world where AI has put so many people out of jobs is a world where what AI has done is create a massive increase in automated productivity. And that should have been true for the internet.

So if you’d said, hey, look, one way the economy and scientific discovery is simply constrained is that it is hard and slow to gather information, it is hard and slow to collaborate with people, it is geographically biased who you can collaborate with, and I said, oh, great, I have a technology that will answer all of that, and you say wonderful. Like, this is going to change everything. And then how’s productivity growth been since the internet? Crappy. So why?

People have different theories on this, but my theory is that it also had a shadow side. It distracted everybody. If I also said to you, I'm going to create a technology that is going to make the entire world 35 percent more distracted, that is going to shrink people's attention spans, and every time they have a thought, they're going to interrupt it by checking their Gmail or Slack, how is that going to be for productivity? And the answer is bad. Productivity really depends on the level of intense concentration you can bring to a thing.

casey newton

I feel so attacked right now, but keep going.

ezra klein

So I think it’s very likely that AI looks more like that or at least has as much of that effect as the other.

kevin roose

That we’ll supercharge productivity in some corporate context, but that we’ll all be so busy like hanging out with our replica girlfriends and boyfriends —

casey newton

Yes, exactly.

kevin roose

— that it won’t actually change our productivity as workers.

ezra klein

I’ve used this analogy before, but I sometimes will work at a coffee shop with my best friend. I am not more productive when that happens. I enjoy it more. It’s great. I really enjoy going to a coffee shop in that context, but we hang out. And if you watch the movie “Her,” it doesn’t actually seem to me that the main character becomes more productive.

kevin roose

Right, that is not a portrait of economic productivity being supercharged.

ezra klein

So precisely because I think, at least in the near term, these are going to work better for social and entertainment and distraction purposes, like, we may get way better at making immersive video games really, really quickly, or immersive movies or whatever it might be, I think it is very likely that way before AI is automating everybody's job, it is distracting people in a more profound way. And you might say, look, we have enough content already. I would have said that before TikTok came out, but then TikTok came out. It turns out we didn't have enough content. We needed much more.

And I think that the move, not to social media, but to companions who are always there with you, I think that could be distracting. It may be enriching, it may be beautiful, but in a totally different way, right? I'm not sure that we realize how much of life there is to colonize with socializing, with interaction, prior to what is about to happen. And so I just see a much smoother path to making us less productive than more. That's not a prediction, but it is maybe meant as a provocation.

kevin roose

I know we’re almost out of time, but I want to close by asking you about skepticism. The last paragraph of one of your recent columns on AI really resonated with me. You were writing about the mistake of living your life as if these AI advances didn’t exist, of putting them out of your mind for some sort of cognitive normalcy. And you wrote that skepticism is more comfortable. And then you quote the historian Eric Davis, saying that in the court of the mind, skepticism makes a great grand vizier, but a lousy lord. And I’ve been thinking about that a lot because some criticism that I’ve gotten since I started writing about AI — I think we’ve probably all gotten some version of this — is like, you’re not being skeptical enough, right? You’re falling for hype. You’re not looking at all of the pitfalls of this technology. And I think there’s a lot of media criticism of AI right now that is, I would say, skeptical in a way that feels very comfortable and familiar, kind of like reviewing AI like you’d review any other technology, like a new smartphone or something.

But it seems like you’re arguing against a kind of knee-jerk skepticism when it comes to AI. You’re saying, maybe we should take seriously the possibility that the people who are talking about this being the most revolutionary technology since fire are right and try to sort of dwell in that possibility and take that seriously. So make that argument. Like, convince me that the knee-jerk response of journalists and other people evaluating AI should not be skepticism.

ezra klein

Maybe the way I’d say this is that I think that there is a difference between skepticism in the scientific sense where you’re bringing a critical intelligence to bear on information coming into your system, and skepticism as a positional device, a kind of temperament, where you prefer to sound to yourself and to others like you’re not a crazy person, which is very alluring. Look, one of the ways I’ve tried to talk about this is using the analogies of COVID and crypto.

And I remember periods early on in COVID where I was on the phone with my family, and I was saying, you all have to go buy toilet paper right now. And they were talking to me about a trip. Like, we're going to come see you in three weeks. I'm like, you're not going to come see me in three weeks. In three weeks, you will not be going anywhere. Like, you need to listen to me. And it was really hard. You sounded really weird. And I was not by any means the first person alert to COVID, but I am a journalist, and I did begin to see what was coming a little bit earlier than others in my life.

And one lesson of that, to me, was that tomorrow will not always be like today. But that also should not become a positioning device. I think there are people who are always telling you tomorrow will not be like today. So then I think about crypto. And I mean, we were all here in the long-ago year of 2021, when that was on the rise. And you'd have these conversations with people, and you'd have to ask yourself, does any of this make sense, exactly? There's a lot of money here; a lot of smart people are filtering into this world. I take seriously that smart people think this is going to change everything. It's going to be how we do governance and identity and socializing. And they have all these complicated plans for how it will replace everything or upend everything in my life. But what evidence is there that any of this is true? What can I see? What can I feel? What can I touch? And it was endlessly a technology looking for a practical use case. There was money in it, but what was it changing? Nothing.

And so my take on crypto was, until proven otherwise, I'm going to be skeptical of this. You need to prove to me this will change something before I believe you that it will change everything. And one of the points I make in that column about AI is that you just have to take seriously what is happening now to believe that something quite profound is going on. I think you can look at the people who already have profound relationships with their Replikas. I think you can look at automation, which has already put people out of work.

And to my point, a world populated by things that feel to us like intelligences, if you believe my view that that is one of the profound disruptions here, that has already happened. It happened to you with Sydney. We already know that militaries and police systems are using these. So you don't even really have to believe the systems are going to get any better than they currently are.

If we did not just pause but stopped at something at the level of GPT-4, and just took 15 years to figure out every way we could tune and retune it and filter it into new areas, imagine you retrained the model just to be a lawyer. Right? Instead of it having a generalized training system, it was trained to be a lawyer. That would be very disruptive to the legal profession. How disruptive would depend on regulations, but I think the capabilities are already there to automate a huge amount of contracting. And so —

kevin roose

They don’t have to be sentient to be civilization altering.

ezra klein

I just don’t think you need a radical view on the future to think this is pretty profound.

kevin roose

Totally.

casey newton

Well, Ezra, we’re going to have to ask you to stop generating.

ezra klein

The token generating machine is off.

kevin roose

Ezra Klein, thanks for coming.

casey newton

Thanks so much, Ezra.

ezra klein

Thank you for having me. [MUSIC PLAYING]

kevin roose

All right, that’s enough talk about AI and existential risk and societal adaptation. It’s time to talk about something much more important than any of those things, which is me.

casey newton

Oh, my God.

kevin roose

— and my phone.

casey newton

Get over yourself, Roose.

kevin roose

When we come back, we’re going to talk about my quest for phone positivity and why I’m breaking my phone out of phone jail.

casey newton

This is a long way of saying, we’re going to talk about how I was right.

[MUSIC PLAYING]

kevin roose

Casey, I did something very empowering this week.

casey newton

Oh, what was that?

kevin roose

I got rid of my phone box.

casey newton

[GASPS]: Kevin, the phone box that was one of your New Year’s resolutions?

kevin roose

Yes. So a couple months ago, we talked on this show about my attempts to control my out-of-control phone use with the help of a box that you plug your phone into. It charges your phone, but it also starts a timer and tells you how long you've spent away from your phone.

casey newton

Yeah, you could call it a phone box, but it was a prison. You built a prison for your phone. You expanded the carceral state in this country by building a phone prison, and you locked up your phone for several hours a day. And if I recall correctly, you said sort of like, you feel like you’ve been spending too much time looking at this dang thing, and you were going to lock it up. And it was going to help you be more present with other people.

kevin roose

Yeah, and at the time, you responded to me with what I thought was a pretty crazy position, which is that you are not worried about your phone time.

casey newton

That’s correct.

kevin roose

You were trying to maximize your phone time.

casey newton

Yeah, we’re going up in this household.

kevin roose

You’re a phone maxer.

casey newton

Yeah.

kevin roose

But I was conflicted because I felt my own use of my phone sort of creeping up. And, among other things, I have a kid, and I didn't want to set an example of always being on my phone. And so I was hoping that this phone box would help. I also installed this app called One Sec that basically puts a little five- or 10-second speed bump in between when you're trying to open an app and when you actually do it.

casey newton

Yeah. You’ve installed technology to harass you every time you try to use your phone.

kevin roose

Yes, and the way I thought of it at the time was, this is going to help me be intentional about my phone use. But I found that what it actually did was just make me feel incredibly guilty. Every time I looked at this box on my counter, every time I ran into these little speed bumps on my phone, I would just be filled with guilt and shame, as if I was eating a slice of cake that I wasn't supposed to be eating.

casey newton

I mean, to be clear, these are shaming technologies. If you have a prison for your phone in your house, that doesn't make you say, I've really got things under control over here. And for sure, an app that sort of says stop every time you go to open Instagram is only ever going to make you feel bad about yourself.

kevin roose

So I just had a kind of revelation the other day when I was looking at my phone box, and I thought, you know what? I don't want this. I don't want to feel guilty every time I use my phone.

casey newton

Good.

kevin roose

And I want to adopt a position that is more like yours. I want to learn how to coexist peacefully with my phone. And so I am now starting a new project, which I'm calling phone positivity.

casey newton

OK, let’s talk about phone positivity.

kevin roose

So my new approach to my phone is that instead of treating it like a kind of radioactive object in my house —

casey newton

Yeah, the nuclear waste.

kevin roose

— that I have to imprison in a phone prison to keep myself from using, I am going to start appreciating my phone.

casey newton

OK, and how are you going to do that?

kevin roose

Well, so every day, I say affirmations to my phone — no, I don’t.

casey newton

[LAUGHS]: You do not.

kevin roose

I don’t. I don’t. But I do try to really be aware of what my phone is allowing me to do at any given time. So for example, the other day, I was in the room with my kid. He’s crawling around on the floor. I was fielding work emails and Slack messages. And there was a moment where I thought, I feel really guilty about this. I should not be paying attention to my phone. I should be playing with my kid.

The second thought, my phone positivity thought, is, you know what? It’s amazing that I have a tool in my pocket that allows me to be physically present with my kid while also doing work. My grandfather couldn’t do that.

casey newton

No.

kevin roose

If he wanted to do work, he had to —

casey newton

He had to leave the coal mine.

kevin roose

— get in the car.

[laughs]

No, he worked at an electrical plant. He had to get in the car and drive to work. He had to literally leave his family to work. I have a phone that allows me to do both at the same time. What an amazing tool.

casey newton

OK, so I think that that is a great realization, and I’m excited that you will now be leading an abolition movement for phone prisons. Does part of your new phone positivity plan mean that you are also deleting this speed bump app, which I believe you said you had bought a lifetime subscription for?

kevin roose

Yes.

I turned the speed bumps off. I now have unfettered access to every app on my phone. And you know what?

casey newton

What?

kevin roose

My screen time has gone down.

casey newton

No, seriously? You’re measuring this?

kevin roose

Yes, my screen time in the past week, after I started thinking about phone positivity, is down 30 percent.

casey newton

OK, well, see, this makes so much sense to me now that I’ve thought about it for a beat because if your phone has been locked up and you get one conjugal visit, like, every 12 hours with this thing, of course you are not going to put that thing down until the warden splits you up again. But now, because you’re actually looking at it when you need it, you’re more relaxed about it.

kevin roose

Yeah, all of the interventions that I tried for screen time were actually just compelling me to spend more time on my phone, because it was so annoying and time-consuming to get what I needed out of it. So I actually find that with the speed bumps gone, I’m able to get in, do what I need to do, and dip back out. And so it has resulted in less screen time, even as I no longer feel this kind of overwhelming guilt about it.

casey newton

Interesting. So as you think back on the very brief 90 days that you spent with these technologies, do you feel like they failed on a technological basis, or do you feel like you just got to a different place emotionally, and they didn’t make sense anymore?

kevin roose

Well, a couple of things I think happened. One is that a lot of the things that I was truly addicted to, like Twitter and Instagram and TikTok, just got less interesting to me. And I don’t know whether that’s because people just aren’t posting as much as they used to, or my friends and people I follow are just not interacting as much on those platforms. But it does feel like those apps got a little less shiny to me.

At the same time, I think viewing my phone through a more positive lens, looking at the things that it can do and enables me to do, and appreciating that, also helped me use my phone in better ways. Right, so there’s this study that came out five or six years ago about social media that said, actually, maybe not all screen time is created equal —

casey newton

Hashtag #NotAllScreentime.

kevin roose

— and the difference between active and passive use of social media, right? So when you are using social media passively, it appears to be bad for you.

casey newton

And by passively, we mean sort of like mindlessly scrolling with your thumb and sort of like paying half-hearted attention to what your aunt is up to without really feeling compelled to leave a comment or actually engage with another human being.

kevin roose

Right, you are leaning back. You are just sort of scrolling and observing. You’re lurking. But this study also showed that active use of social media can actually increase mental well-being, can increase your sense of connection with people, can help you stay connected to weak ties in your network, people that you don’t live near, family members that are far-flung or people you haven’t seen in many years, but still want to maintain a connection to. And that really tracks with my own experience of this, where if I’m just looking at TikTok or Instagram or Twitter for an hour, it feels bad. I feel bad viscerally about it. But if I’m using it to connect with people or send texts to the group chat or whatever, that feels much better to me.

casey newton

I want to ask you one other thing because as I’ve been thinking about this journey that you have been on, we’ve also been spending a lot more time together. We’ve been recording the podcast. We went to South by Southwest. And you are not a phone goblin.

Like I have some friends where if there is one dull moment in the conversation, that phone is out of their pocket so quick. And they’re scrolling Instagram or doing something else that doesn’t seem urgent at all. I don’t see you as that person. Do you just make a conscious effort when you’re out and about in the world, and the issues that you have happen more when you’re kind of lying around the house? Or has something changed for you in the past few months?

kevin roose

Yeah, I think I’m pretty sensitive to not being a phone goblin, as you put it, when I’m with people face to face. I try to focus on the conversation, but I also have done a lot of calibration of my phone so that I’m not getting every notification. I think this goes hand in hand with the phone positivity thing: you actually have to make your phone work for you, right?

You have to set up your notifications so that the people you want to notify you get through, but you’re not getting a ping for every email, every text message, every spam call. So I think that spending a couple of hours upfront really setting up your phone in a way that is maximally useful and minimally distracting to you is a necessary piece of this.

casey newton

OK, so if someone were to come to you today and say, Kevin, I just listened to the first time that you talked about the phone box, and I haven’t listened to this episode yet, and I’m thinking about bringing phone box into my life. What would you tell that person?

kevin roose

I think that for some people, reducing screen time is a good goal, right? There are people who —

casey newton

If you’re Sam Bankman-Fried, and you’ve been using that screen to commit a global scale fraud and destroy the effective altruism movement, maybe put the screens away.

kevin roose

Right, you need a phone prison. Maybe —

casey newton

Maybe a real prison!

kevin roose

So I think that for some people, reducing screen time is a good goal, right? And I think especially for adolescents or young people, the effects of screen time on mental health are much more severe. But I think for adults and for people who use their phones and their screens for work, as well as leisure, I think we could all do with a little more positivity around these things. That’s actually a much better experience than being consumed with guilt about something that you’re in the process of doing.

casey newton

Be OK with who you are. I’ve been going to therapy, and that’s pretty much actually just what they tell you. And —

kevin roose

Well, your point when we last talked about the phone box was that there may be something bigger here than phone addiction, right? Like for a lot of people, what presents as like, I can’t stop using my phone, is actually something deeper. It’s, I have anxiety, or I am unfulfilled by the things in my life that I would be doing otherwise if I weren’t looking at my phone. That there’s some bigger problem there.

casey newton

Yes, absolutely. It’s like I talked about last time, Grindr was one of these apps for me, where sometimes I’d just be on my phone, and I would feel horrible. And what I ultimately realized is that I am seeking validation in this app that is not actually there. So I need to stop, you know? But you could put the phone in the box, and that’s not going to make me feel the validation that I was looking for.

kevin roose

I think it’s important to be purposeful about your phone. I don’t want it to seem like I’m just advocating for maximizing all forms of screen time. I’m not actually at your position of, like, the more, the better for screen time.

casey newton

Except that as you’ve been describing it, you’ve been making me realize that everything you’re saying is how I’ve done my phone. I do look at my phone whenever I want to, but I have also taken the time to develop an intentional relationship with it. I am very selective about what I will allow to notify me. You’ve got to figure that out for basically everything on your phone. So it is a massive project, but once you get through with it, I do think you can just kind of use your phone when you want to use your phone, because, to your point, it’s bringing a lot of amazing things into your life all the time.

kevin roose

Right, it makes me think about this old Steve Jobs quote about computers being a bicycle for the mind. Remember this quote?

casey newton

That’s right.

kevin roose

So I think that we forget that the point of computers, of smartphones, of technology, is to help us do things and get places and be more productive and efficient and bring us joy and entertainment, but also to accomplish things. And so very few people are out riding bikes all day just for the fun of it. Like, you’re usually using it to get from someplace to someplace else.

casey newton

That’s right.

kevin roose

And so I’ve tried to use my phone more in that way. Like, I am doing a task. Once I finish that task, I will put away my phone, but I’m not going to feel guilty about it the entire time.

casey newton

I think that’s beautiful. If you love your phone, come out to the town square and just say it loud and say it proud.

kevin roose

One of my favorite things that I learned when I was researching my book about automation is that when electricity first came to a lot of small towns in America, they would throw parties. Like, everyone would come out on the town square, and at some of these, they would do these symbolic things where they would bury a kerosene lamp and hold a mock funeral for it. And the Boy Scouts would play taps to symbolize the fact that we don’t have to use these crappy kerosene lamps anymore because we have this amazing electricity.

And I feel a little bit of that now toward my phone. Now that I’ve stopped being shamed by it, it’s like, wait a minute. I can take a photo of my kid and send it to a wireless photo frame in my mom’s house thousands of miles away. That is unbelievable. And that is a realization that I never would have had, had I kept treating this thing as a source of shame.

casey newton

All right, well, I’m calling the Boy Scouts, and we’re going to have a funeral for your phone box. Just burying it in your backyard, moving on with our lives. Mission accomplished.

kevin roose

All right, R.I.P. It was here for a good time, not a long time.

casey newton

It was not a good time.

kevin roose

No.

casey newton

It was here for a bad time.

kevin roose

Yeah. [MUSIC PLAYING]

casey newton

“Hard Fork” is produced by Rachel Cohn and Davis Land. We’re edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today’s show was engineered by Alyssa Moxley. Original music by Dan Powell, Elisheba Ittoop, Marion Lozano, and Rowan Niemisto. Special thanks to Paula Szuchman, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork@nytimes.com. And if Kevin’s phone is out of prison, we’ll read it.

[MUSIC PLAYING]
