Scientist Destroys Elon Musk's Mars Fantasy: ‘BS’

New episode of The Nerd Reich podcast!

“Mars is terrible. The gravity is too low. The radiation is too high. There’s no air — and the dirt is made of poison ... You would just die.”

Adam Becker

In this episode of the Nerd Reich podcast, scientist Adam Becker destroys tech billionaire delusions like Mars colonization, “God-like” artificial intelligence and eternal life.

Becker is an astrophysicist and journalist. His new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control The Fate of Humanity, is a must-read if you want to understand the unscientific delusions driving today's tech billionaire insanity.

Click below and listen to what he has to say. If you give Becker 30 minutes, he’ll change the way you see the future — and fill you in on who’s trying to steal it, and why.

Full transcript below.

The Nerd Reich is made possible by the generous support of paid subscribers. If you can, please join hundreds of fellow readers in becoming a paid subscriber today. Click here to join us!


Transcript: Here’s Why Elon Musk’s Mars Fantasy Is BS | Scientist Destroys Billionaire AI, AGI and Space Hype

Interview with Adam Becker (Transcripts are auto-generated and may contain errors)

Gil Duran: Elon Musk says he'll put a million people on Mars and make humankind into a multi-planetary species. Adam Becker says that's 100% Silicon Valley bullshit. I'm Gil Duran and this is the Nerd Reich podcast.

In this episode, Adam Becker calls BS on Silicon Valley dreams like Mars colonization, artificial general intelligence, the singularity, and more. Becker is an astrophysicist, a journalist, and the author of an essential new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control The Fate of Humanity. He says a lot of the most hyped claims of tech billionaires, space colonies, godlike AI, eternal life, have no scientific basis whatsoever. Becker says they're pure sci-fi fantasy to make tech billionaires feel important and powerful.

Instead, Becker says tech oligarchs are creating imaginary problems that only they can solve instead of dealing with the very real problems they're helping to create — like poverty and climate destruction. In reality, Becker says Mars would be a living hell for human beings. And there's no scientific evidence to suggest that a “godlike” super intelligence is even possible. Who's right? The astrophysicist or the billionaires? Let's find out. Here's my talk with Adam Becker.

Hello, Adam Becker, and welcome to the Nerd Reich.

Adam Becker: Thanks for having me, Gil. It's a pleasure to be here.

Gil Duran: So I really enjoyed your book, More Everything Forever. And I think it's a book that everyone needs to read if they want to understand what's going on in the world right now as these tech billionaires try to take control and push us towards some strange future that they have in mind. But let's start a little earlier than your book. You're an astrophysicist turned journalist, but your journey didn't start in a lab. It started with Star Trek and with a life-changing discovery at the library. Tell us a little bit about your origin story and how you decided to become an astrophysicist.

Adam Becker: When I was a kid, I was really into dinosaurs. And then, one day I went to the library and the library was out of dinosaur books. And instead there were the books next to it about space. And I just kind of never really looked back after that. I started reading those books and was hooked and just felt passionate about space and physics.

And it just seemed like the most interesting thing I'd ever seen. And I was also reading and watching a lot of science fiction as a kid. And it seemed like the future had to be in space. That's what was going on on Star Trek. That's what was in all of the books. And so I really believed for a long time that the future had to be in space. And that was part of the motivation for me as well. And as I got older, my interests broadened, became more subtle. And, over time, I eventually started to realize that the future is not in space. Space is really interesting. And I went and did my PhD in astrophysics, but our future is not in space.

Gil Duran: Everyone's heard the term “astrophysicist,” but what does it actually mean in practice? What does an astrophysicist study and what does an astrophysicist do?

Adam Becker: There's a broad range of things. The particular area that I did my PhD in is cosmology. So I was looking at the entire universe at the very largest scales, trying to figure out what was happening during and right after and maybe even a little before the Big Bang and how much we could learn about that by looking at the way stuff is arranged in the universe right now. And in practice, that involved writing a fair amount of computer code to do a lot of statistics on various data sources and forecasting what we might be able to learn from future data.

Gil Duran: You're an expert on this stuff, and I wanted to lay that groundwork for people. You're not just some guy with opinions on the internet. You're an astrophysicist. You got a PhD. You've had a long fascination with these issues, and they've led you to the conclusions that you share in your book. So let's dive into your book, because you take a wrecking ball to some of Silicon Valley's most cherished delusions, from AGI to the Singularity to Mars colonies to immortality. But you begin the book with Sam Altman, the CEO of OpenAI.

And from scanning eyeballs in downtown San Francisco to publishing techno-utopian manifestos, he's become a symbol of tech’s unchecked ambition. In his essay from 2021, I believe, “Moore's Law for Everything,” Altman outlines a world essentially ruled by a single AI-backed corporation. This is his vision of the future: that AI will create a world where one corporation can have control over everything. Walk us through that vision first and what it reveals about 21st century billionaire ideology or psychology.

Adam Becker: It's really a very revealing essay. And what he claims is that there's this inevitable explosion of AI power coming very soon. And that it will lead to AI that can do literally everything for us. He says everything will become cheaper, everything will become more widely available, everything will become easier, and more abundant. And he doesn't really give an explanation for why that should be true other than AI is going to make itself smarter and smarter, and eventually it will become smarter and more capable than all of humanity combined.

And that argument is really key to a lot of these techno-utopian visions. Altman's specific vision is also relatively common in the tech industry. What he lays out in the essay explicitly is that AI companies will be able to provide everything for everyone, and so there would need to be appropriate regulation to allow the wealth of those companies to be spread around the world. But if you read the essay more carefully, and look at the subtext and at other things that Altman has said, the essay sort of takes on a different meaning. It becomes clear that what Altman is actually proposing is that one company, and he hopes his company, OpenAI, will essentially accumulate all of the world's wealth. In exchange, it will provide all goods and services for everyone in the world, and it will replace all currency with non-controlling quantities of stock in that company. Effectively, that would make the CEO the king of the world, or at least make the board of directors the kings of the world, in charge of everything, or at least in charge of the AI that provides everything to everyone. And this is terrifying, but it also makes some sense out of the seemingly sudden turn to the right that tech billionaires have had recently, which is really not that sudden at all.

Gil Duran: Early on there was this talk of Universal Basic Income becoming normal as jobs disappear and AI takes over which is part of their fantasy. But now it seems like it's shifting from this idea of benevolent nerd overlords who will give everybody money to really what Altman lays out there is a totalitarian corporate vision. Is there any other use in their minds for technology besides acquiring this mass power? The subtext of your book is that everything these guys do seems to be based on justifying their own lust for absolute power over humanity, over politics, over the future, over everything.

Adam Becker: I mean, there is this drive for power and money that these tech billionaires have that can't be sated any other way. And in fact, I don't even think they'd be satisfied if they got what they wanted through this futuristic scenario, which is not actually going to come to pass. Even if such an AI were possible to build, and they built it, and it did, in fact, take over the entire economy of the world, I don't think that would be enough for them, right? The world is not enough for them. And in line with that being the title of an actual James Bond movie, these guys are kind of James Bond villains.

Gil Duran: So let's go deeper into this question of AI, AGI, and that techno-religious idea called the Singularity. Today, AI and AGI, which is artificial general intelligence, this god-like AI that they claim is either a few years or a few decades from being built, these things are portrayed as imminent and inevitable. But you argue that this is largely a myth, a marketing myth. Tell us, what is AGI? How does it supposedly differ from LLMs and why do you think it's kind of bullshit?

Adam Becker: The first question is sort of the key one there. What is AGI? And the answer is: Nobody really has ever given a good answer to that question. It's mentioned in a prominent way in the OpenAI charter. And the definition that they give there, I think, is quite revealing because it's really pretty impoverished, right?

The vague idea of AGI that people have when they talk about it is a machine that can do what humans do, a machine of human-level intelligence. That's pretty vague, because what does human-level mean? What does intelligence mean? In the OpenAI charter, they try to fix that problem by saying an AGI is something that can reproduce any economically productive activity that humans engage in.

And that's a really, really weird way of defining what AGI is. It's both vague again, and really, like I said, impoverished. It's a really bad way of describing what it is that humans do, right?

We engage in all sorts of important activities that are not economically productive, right? Like taking a walk with a friend, or biking down a hill, a really steep hill, like the ones we have here in Berkeley. Economic productivity is not the be-all and end-all of human intelligence. And this also speaks, I think, to some of the marketing terms at work here, right? Now I'm going to sound really old. When I was a kid, AI meant sort of what AGI means now. It meant human-level intelligence of some kind. AI was Commander Data from Star Trek, right? Or maybe HAL 9000, something like that.

The systems we have now, which are not like that, are called AI. And this science fiction dream, that's AGI. That's the real definition of AGI, as far as I'm concerned: that science-fictional dream. And that's why it's so hard to define. It's sort of necessarily vague, because it's based on these fictional fantasies. Meanwhile, we have these automatic text and image generation engines that they are calling AI that are nothing of the sort. And so they have to tell us that AGI is coming soon, because they have this belief that if they can just get something like the human-level intelligences out of science fiction, then that will get them to this semi-religious thing, the Singularity, where once you have a human-level AGI, it will use its powers to improve itself and make itself better than human, and even better than that. And eventually you end up with a superintelligent AGI, something that is far beyond the capacities of any human, or even humanity as a whole.

But what does that mean? What is intelligence? All of this runs on this idea that intelligence is a single monolithic thing, a number that can be dialed up or down. And also on the idea that somehow more intelligence will mean that you can build a more intelligent machine.

And all of these are not really justified when these arguments are made. But if you dig down, as I do in the book, you find that there are very good reasons to believe that none of that is true and that none of this is coming. And yet Sam Altman says it's coming in the next 40.

Gil Duran: One big argument in favor of the existence of AGI seems to be that it justifies giving trillions of dollars, potentially, to the people trying to create AGI. Is that why some of these leaders cling so hard to this dream, despite the lack of evidence, the lack of a definition of what intelligence is? And given the fact that most people are very impressed by LLMs and all these new tools that have come out, it can seem like superintelligence. I've got a friend who claims, my God, it's developing a personality and it's lying and it's doing all of these things. And you see those kinds of narratives in the media. So why are they clinging so hard to this dream and pushing this? Is this just like a messianic narrative? Is it optics? What do you think is the real purpose of this? Or do they truly believe it?

Adam Becker: It's a good question. I mean, I think there's some mix of that among all of the different people in this industry, but I think by and large they really believe it, right? Because it is, as you said, very advantageous for them. It justifies giving them lots of money, and that makes it more likely that they really believe it, not less likely, right? Because it's easier to believe things that are advantageous to you, that mean that good things should happen for you, and that also justify the things that you want to do anyway. Right. And it also provides a sense of purpose and meaning. It provides justification for pretty much anything that they'd ever want to do, including just their sort of endless pursuit of more. So the answer, in a way, is sort of all of the above: they really believe it, and it provides anything that they'd ever want in terms of meaning and justification. It even gives them a sort of sense of moral superiority, that what they're doing is the best thing that they could be doing. They're doing it for all of us, because they're trying to bring about this technological utopia. And that's just not true, but it's very seductive, especially if you're a tech billionaire who's already sort of feeling that your money is justification that you know best.

Gil Duran: It seems to be a big problem with these guys. And I've actually been watching Altman for a while and I don't trust him at all, for some very simple reasons. One, I believe he considered running for governor himself, governor of California, which seems just ludicrous based on how dumb he seems about how the world works. I won't go into full detail there. Maybe we'll have a full Sam Altman episode at some point.

But a few years ago, this guy named Michael Shellenberger ran for governor of California. Michael Shellenberger is this right-wing propagandist who is on Fox all the time, spreading all kinds of disinformation narratives constantly, and really just attacking the Democratic Party and liberal democracy in general, although he used to be a far-left Berkeley activist. So Michael Shellenberger runs for governor, and Sam Altman maxes out to Michael Shellenberger's campaign. He gives the full $32,400 that you can give.

And there are two problems with this. One, Michael Shellenberger. If that's your idea of who should be in charge of things, that's pretty shocking. That's pretty extreme. But two, this was during the recall election of Governor Gavin Newsom, and anybody who had a brain, anybody who understood politics on an educated level, anybody who was familiar with data or polls, would have known that there was no way Gavin Newsom was going to lose that recall.

There were some headlines that got clicks by suggesting otherwise, but anybody knowledgeable, especially someone as wealthy as Sam Altman, would have known that it wasn't gonna go anywhere. So the decision to invest the maximum amount in a failed campaign for a right-wing propagandist, to me, tells me everything I need to know about Sam Altman.

And so the idea that this is the man who wants to save the world and has good ideas for how to do that is, to me, false and ludicrous on its face. A couple of years ago, he was comparing himself to Oppenheimer. Part of the thing with these guys is they want to create a big problem that they get to solve, and be the biggest geniuses in history. And because they have so much money, they can just do that. They can just pursue these ideas. To me, that's what makes it particularly terrifying. As we have talked about previously on the podcast, it's become a sort of religion for billionaires who want to feel God-like. And speaking of religious ideas, let's talk about the Singularity for a moment.

This idea that at some point everything will be so optimized and under control by technology, or out of control through technology, that human agency and action will largely be rendered irrelevant, and we will sort of merge somehow with the machines. Am I explaining that correctly? Tell us briefly, what is the Singularity as you understand it from an astrophysicist's point of view?

Adam Becker: The basic idea behind the Singularity is the thing I was talking about before, this idea that we'll get machines, AIs that will improve themselves and make themselves smarter and smarter and smarter until they are super intelligent and have essentially godlike capabilities to reshape, create, destroy, and that will lead to a fundamentally different way of being for human civilization, which is dominated by these machines. And it's based on this idea of like continual exponential growth or super-exponential growth in technological capability. It's based on this idea of intelligence as a monolithic thing that you can dial up or down. And there's a bunch of other assumptions that go into it as well.

And they just don't work. Those assumptions are incorrect. It all falls apart if you breathe on it wrong. And yet this is sort of a core piece of the ideology of these tech billionaires and of the subcultures and pet intellectuals that they fund.

Gil Duran: Jaron Lanier once called the Singularity a “particularly kooky” new religion. So is it a real scientific milestone or is it just modern day messianism?

Adam Becker: It is a fairly religious worldview. I mean, the Singularity people will say, well, just because it sort of looks like a religion doesn't mean that it's wrong. And I'm like, well, that's fair. It's not that it's wrong because it looks like a religion. It's that it looks like a religion, and it's wrong. The assumptions that it's built on just don't work. Intelligence doesn't work that way. Technology doesn't work that way.

There is essentially no reason to think that any of that is going to happen anytime soon, or possibly ever. And it's also true that if you want to talk about great periods of technological change happening really, really quickly in human history, the current moment is actually not even the fastest moment of technological change in the last couple hundred years. That was probably in the late 1800s, in the full swing of the Industrial Revolution, when we went from not having any sort of long-distance communication faster than a horse or a boat to having telegraphs and radio and even TV in the span of one human lifetime. We haven't seen anything like that lately. And that's not because of some sort of failure. It's because those are the changes that were easily made technologically once we started figuring out how to do that. And that's not to say that the internet and mobile phones and social media aren't important. They're just not as important as, like, widespread electricity or radio.

Gil Duran: Here's a quick question from a listener…

Producer Note: A quick note from the Nerd Reich producers, we do take listener questions for the podcast. Just sign up for the newsletter at thenerdreich.com to find out some of our upcoming guests and submit your questions.

Gil Duran: The question: “In the chapter called ‘Machines of Loving Grace,’ you mentioned the U.S. would need to exponentially increase science funding to reach the singularity. Why are tech billionaires now slashing budgets?”

Adam Becker: Yeah, I don't even think that we could reach a Singularity with exponentially increasing spending. I don't think that the Singularity makes sense. It is true that there are some good arguments about diminishing returns of scientific and technological research over the last few decades, sort of similar to what I was just talking about. And that makes some sense, having to do with low-hanging fruit and stuff like that. It's still not totally clear that that's completely true, but it seems reasonable. It is definitely true that for something like Moore's Law, the exponential increase in the number of transistors that you can fit onto a single chip, it required spending more and more money just to get another doubling of the number of transistors over the last few decades. And of course, now that trend is over, or about to be, depending on who you ask and how you define it.

But why are they slashing the budgets, given that this is sort of the engine of productivity that all of them have benefited from and made their billions from? It's because they don't understand how the world works. They don't understand that this is the engine of productivity that they've benefited from. They also think that it's okay to sort of pull up the ladder behind them, that they'll be okay if everything just collapses, that their money will insulate them. And that's not true either. So yeah, I don't want to say they're doing it because they're stupid, but I will say they're doing it because they do not understand how the world works or what the benefit of fundamental research is.

Gil Duran: “Moore's Law for Everything” … more like “Sloppenheimer.” Let's jet over to Mars, because this is a really interesting part of the book. Hollywood and billionaires alike paint Mars as humanity's next home, a place where we'll live on the surface and have greenhouses and take little walks on the Martian soil. You argue that Mars colonization is largely bullshit, a fantasy, not a plan. Why is Mars colonization a dead end? What does the science say versus the hype? And most importantly, what horrors actually await humankind on Mars? Because you had some real details.

Adam Becker: Yeah. Mars colonization, sending large numbers of people to live and work and have families on Mars. That is largely bullshit because Mars is terrible. Mars is just an awful place. The gravity is too low. The radiation levels are too high. There's no air and the dirt is made of poison. And these are not even all of the problems.

Musk likes to talk about Mars as a lifeboat, a sort of backup for humanity in the event of a major disaster here on Earth, like an asteroid strike. And he's talked about asteroid strikes specifically. This is laughable for two main reasons. First of all, Mars gets more asteroid strikes than Earth does. Second, the biggest and most devastating asteroid strike that we know of in the geological record here on Earth is the one that happened about 66 million years ago that killed off all the dinosaurs except for the birds.

And that was not the worst mass extinction that we have record of, but it was the fastest one. It was the only one caused by an asteroid, and the only one that happened in, essentially, one day. That's the worst day in the history of complex life on Earth, the worst single day. And that day, six hours after the asteroid hit, when there were widespread wildfires and smoke and all sorts of horrible stuff blotting out the sky, and a pulse of heat that cooked animals alive at the surface, that was a better and nicer and more hospitable climate for complex mammalian life than any point in the history of Mars. And we know that because mammals survived that day. That's why we're here right now. There are no mammals, from then or now, that would survive unprotected on the surface of Mars for more than a few minutes at most without a spacesuit.

And I'm pretty sure that there were no spacesuits on Earth 66 million years ago. You would just die. Saliva would boil off of your tongue as you asphyxiate on the surface of Mars without a spacesuit, because the air pressure is so low and there's no oxygen. There's no real easy way to fix that problem. Musk has talked about terraforming Mars, making it more like Earth by nuking the polar ice caps on Mars to get that ice into the atmosphere. But there's not enough material in those polar ice caps to create a livable atmosphere on Mars. It's just not there.

Gil Duran: That dumbass couldn't even cut two trillion dollars from the federal budget. I don't think he's going to nuke Mars into livability. There was a particularly memorable part of the book that I really spent some time with, because it really changes your perception of all this Mars talk. It was the process women would have to go through, the horrific process, to give birth on Mars. Can you give us a glimpse of that? Because I think it shows how dumb all of the talk, and even the journalism, about Mars tends to be.

Adam Becker: One of the many things we don't know, that we would need to know in order to build even a small colony on Mars, much, much smaller than what Musk wants to build, is what effect Martian levels of gravity would have on the human gestation process. Mars has about a third of the surface gravity of Earth, and we don't know whether or not you can actually have a full-term pregnancy and give birth with that level of gravity. We do not know. The experiments have not been done, and it would be very difficult to do those experiments even with animals without actually sending them to Mars. Ultimately, you would need to test it on humans as well. And who would want to be the test case for that? If you don't want to do that, then what you've got to do, if you're pregnant on Mars, is live in a centrifuge that gives you Earth-level gravity for nine months. And that's not great. That would be incredibly isolating. And the centrifuge would need to be very big; otherwise you would get really bad vertigo, because your head and feet would experience different levels of gravity.

It's just not a workable plan. And then even if that works, we do not know what effects living in Martian gravity has on the growth and development of babies and children. So what are we going to do? Are we going to leave them in a centrifuge for 18 to 21 years? I don't think so. And yet Musk wants to put a million people on Mars in order to make it self-sufficient in the event of some disaster here on Earth, even though there's no disaster that could befall Earth that would make Mars a better option. And this is one of my favorite parts of the whole thing: a million people isn't enough for that. If you want to have a completely self-sufficient colony on Mars, and Musk has said it needs to survive even if the rockets from Earth stop coming, you need a large enough economic base to sustain a high-tech industrial civilization, because you can't just be a hunter-gatherer on Mars. You have to have a high-tech industrial civilization. The economic base you need to support a civilization like the one we have now is at least half a billion to a billion people. So that's 500 to a thousand times more. We're not putting a billion people on Mars. That is never going to happen.

Gil Duran: That seems to be really at odds with the impression we get from the media. There seems to be a lot of hope, both about AI and about Mars colonization, in journalism. There's a lot of credibility given to these tech salvation fantasies. What's going on? Why are journalists largely making this sound reasonable, when the way that you explain it, as a scientist who's also a journalist, makes it pretty clear that we're probably better off living underground here than trying to go live underground on Mars? There is one place, and you say this in the book, where human life is entirely possible. And we're in that place. So why do we get these fantasies through media, through movies, through journalism?

Adam Becker: It's a good question, right? Because there are some really obvious reasons why you can't go live on Mars or anywhere else in the solar system. There are obvious reasons why you're not going to go live outside the solar system. The speed of light will limit you. There are places on Earth that are far more hospitable than anywhere else in the solar system, like, say, Antarctica in the polar night, or the bottom of the ocean. And we don't live there. And yet people want to go to space. And I think that this is because there's been this science-fictional idea of what space will be like and how hospitable it is that has not really been accurate, or updated with modern information from our extensive robotic exploration of the solar system over the last few decades.

At this point we have sent robotic probes to every planet in the solar system, and to a large number of the moons in the solar system. We know a lot more than the writers in the golden age of science fiction in the forties and fifties did. And yet the vision of the future in space that was developed by science fiction in that era has proven surprisingly durable in the face of data showing that you can't do that. Back in 1940, people thought that there might be jungles on Venus, because they knew it was shrouded in clouds. Now we know that that's not true, and that the surface of Venus is arguably the most hellish place in the entire solar system, with surface temperatures hot enough to melt lead and pressures equivalent to something like a mile under the ocean. It'll crush you and boil you to death at the same time. Space is an incredibly harsh place. And I think that that is a bummer. So people don't dwell on it, because it's more fun to think about the other ways things could have been.

Gil Duran: And it does seem like some of these guys, and it's not just Elon Musk, it's also Jeff Bezos, got their idea for colonizing space when they were boys. And they really haven't let go of that. The only thing that's changed is they have all of this money to pursue this fantasy, which allows them to sort of be in rockets and be in the news and enjoy something that most people are never going to get to experience, which is going in space.

Of course, Musk hasn't had the courage to climb onto one of his own rockets. That's where I have to actually give a little bit of props to Jeff Bezos. At least he has risked himself on his own technology. So it seems like Mars colonization isn't really a backup plan. It's a billionaire escape fantasy that ignores Earth's real crises. There's another crisis that these guys are terrified of, and that is the crisis of death, which we all confront.

And spoiler alert, everybody on the Earth today is going to die someday. No one is going to live forever. Not even Bryan Johnson. Not even Elon Musk, not even a baby that just got born, right? We are not going to solve that problem, especially not anytime soon.

And again, it's one of these things where, what does solving that problem even mean? If some part of your DNA is still alive in a robot a million years from now, that's not really you. That's not being able to walk to the corner store and get an ice cream, or ride your bike along the bay. What even is life when you become a robot? So from brain uploads to digital immortality to transhumanism, this idea of people merging with machines or becoming greater than human seems to have become a part of the billionaire endgame plan. And again, you say it's bullshit, a delusion. What's driving this obsession with eternal life? Is this Silicon Valley's ultimate midlife crisis?

Adam Becker: I think it is in a way. I will say, it's interesting that you say you give Bezos props for trusting his own technology and going up. And this is something I almost never say. I want to give Elon Musk props for knowing not to trust his technology because it would be a mistake to trust it and go up. And that I think probably comes from his fear of death. It's completely natural to be afraid of death. I'm afraid of death. I don't want to die. Not anytime soon.

But I think that it just goes beyond a normal level for these guys, right? And it goes back to that stuff I was talking about before, this desire for control, that nothing could ever possibly be enough. Death is the final loss of control, right? And it's the great leveler, the great equalizer: everybody dies. And these guys do not see themselves as level or equal with the rest of us. And if you don't believe me when I say that, by the way, if you think that I'm just sort of irresponsibly psychoanalyzing these guys from a distance, I suggest going and taking a look at what a particular tech mogul, Larry Ellison, has said about death. He's very, very explicit about this. It's just not acceptable to them that they will have to give everything up and lose everything at the end of their lives when they die. They just want to keep going, because nothing could ever be enough. And it is a real question about, like, what does it mean to live forever? Because yeah, sure, there's life extension, and we live longer than we used to. And that could continue to a point, but eventually everybody dies.

We are our bodies. These bodies cannot be extended past a certain limit. There are certain hard biological limits that we are probably already running up against. And if not, we will soon. And you can't just transfer a consciousness out of the body. We don't haunt our bodies. We are our bodies.

Gil Duran: Everyone is going to die except for Keith Richards. And that's because you can't always get what you want. We'll be right back after these messages.

Producer Note: Adam Becker's fantastic book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity, is available now at bookshop.org, Barnes & Noble, local bookstores, and wherever books are sold. He's also on Bluesky under his handle, Adam Becker. And honestly, there's a lot of cool stuff on his website with an excellent name, freelanceastrophysicist.com. And now back to the conversation, and Gil's most inspirational return from break in the history of this pod.

Gil Duran: So, back to death. Explain to us, as a scientist, why we don't get to live forever.

Adam Becker: There are fundamental limits, both biological and physical, on this, right? Biologically, yeah, we are our bodies. Our bodies deteriorate over time. This is just the way that life, especially animal life, works. It may be possible, like I said, to extend our lifespans and our healthspans, maybe quite a bit longer than we already have, although there's some very suggestive evidence that that's not possible. We don't know for sure. But there's this physical limit as well. There's this quantity called entropy, which is usually thought of as a kind of disorder, and which we know always increases. It is, in a lot of ways, the most ironclad law in physics, maybe in all of science: entropy always increases.

And this not only ensures that any living being will eventually sort of accumulate a kind of disorder in their cells that will lead to their death. It also ensures that the universe itself will eventually die. And yes, that's on a much, much, much longer time scale. That's trillions upon trillions upon trillions upon trillions of years away.

But for people who truly want to live forever and a lot of these guys truly want to live forever, even that's not enough. The world's not enough. The universe is not enough.

Gil Duran: Even a thousand years wouldn't be enough. Some of these guys are worried about the eventual heat death of the universe. They flatter themselves to think that that's the problem they're trying to solve. And it does seem like they're just in some delusion, probably one largely driven by copious amounts of psychedelic drugs, as far as I can tell, but that they have to worry about that instead of worrying about the problems we have now. Let me go to a question from a listener.

“Gen X tech billionaires can't live forever. How do you think they'll handle aging and death?”

 Well, I kind of think we're seeing that happen right now with this sort of clinging to delusions, but what's your quick answer on that?

Adam Becker: I think they'll handle it badly. The sort of Bryan Johnson copium that we're seeing is going to become even more widespread than it already is among the billionaire class. I think that in some ways it's going to be the usual stuff of like there's probably just going to be a lot of plastic surgery. There already has been for some of them. And I think that they're going to freak out.

And the form that that freakout takes, I'm not sure. I mean, we're already seeing some of it. I think there's also going to be a lot of denial: the AGI is coming soon, it'll save me. And they might believe that right up until the end. And if that comforts them, fine. The problem is that they then make business decisions and influence policy decisions based on this pseudo-religious delusion.

Gil Duran: Jesus is a tried-and-true way to have faith till the end. Or you can just believe until the end, and you kind of win, right? Then you find out one way or another. I also find their lack of faith disturbing, especially given the amount of psychedelic drugs many of them have done. They don't think there is something beyond the realm of physical existence. They're not open to the idea that maybe it's all more mysterious than we assume. They have to cling to this thing they can't have instead of living in peace with the thing that we do have.

And it seems very strange to me that they would choose to torture themselves and the rest of us in this way. But a big takeaway, and you say this in your book very clearly, is that these ideas all reduce everything to a technological problem, one that they can solve, excluding everyone else from the equation. That these ideas are profitable and will make them richer and align with their bottom line and their fantasies of perpetual capitalist growth, which extends to the human organism and life. Everything must keep growing. Everything must keep going or it's a failure. And most importantly, they offer this seductive promise of transcendence and it gives them an excuse to ignore all of the other problems in the world, all of the other people in the world and all of the real imminent crises here on earth that they are partly or largely now creating. And it seems like you do an amazing job of encapsulating that.

And you give a very specific solution to the problem. And I'm not going to tell our listeners what that is, because you really got to buy the book. You really got to read it. This is an important book, probably the most important book I've read in a few years. And I've been reading a lot of books about this stuff, but this really encapsulates where we are at this moment. And to understand why the billionaires are acting crazy and trying to do all these destructive things, you have to understand the basic delusions that they're pursuing.

Thanks so much for writing this book, Adam. And can you leave us with some kind of final words of hope as we spin our readers out into the world with these very dark ideas in their brains?

Adam Becker: Yeah, absolutely. And thank you for promoting the book. I think it's an important book too, but I'm biased. And yeah, I've got a message of hope. One of the things that these billionaires want to do with these visions of the future that they're imposing on us is to make us believe that there's one or maybe two possible futures and that's it. That either there's this utopia or some sort of extinction or dystopia and that's it. Musk has actually said this very explicitly.

Others have said this explicitly, that these are the only options. I believe, and there's good reason to believe this based on history, based on science, based on sociology, that the future is open. There are so many ways that this could all go, so many ways that the future of our lives and the lives of those who come after us could go.

We are not doomed, nor are we promised salvation. The future is what we make of it. And there are so many different ways that we could make that future.

Gil Duran: Thanks for joining us.

Adam Becker: Thank you.