AI Empire and Tech's Quest for Power
“We are now seeing a corporate empire and the US government, in its own empire era as a state empire, each trying to subsume the other. They currently have a tenuous alliance...but the alliance is happening because the state is trying to use Silicon Valley for its empire building and Silicon Valley is trying to use the state as its empire building asset. So each one is trying to ultimately be the dominant one that ends up on top and can direct the other.” — Karen Hao
In this episode of the Nerd Reich podcast, I am joined by journalist Karen Hao and tech critic Roger McNamee. Hao is the author of a best-selling new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. McNamee is a legendary Silicon Valley investor turned critic who has warned that tech companies are destroying democracy.
Click below to watch our fascinating conversation about AI, Sam Altman, Elon Musk, and tech’s quest for endless power, wealth, and empire.
This podcast is made possible by the generous support of paid subscribers. If you can, please join hundreds of fellow readers in becoming a paid subscriber today. Click here to join.
TRANSCRIPT: The Nerd Reich Podcast: Empire of AI - Dreams and Nightmares in Sam Altman's OpenAI
Transcripts may contain typos and errors, and may be lightly edited for clarity and readability.
Gil Duran: Are the AI apps on your phone right now mere productivity tools, or are they weapons designed to give tech billionaires more power? Welcome to the Nerd Reich podcast. I'm Gil Duran.
An explosive new book called Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI makes it clear that tech's current approach is a destructive effort to colonize the future.
Our guests today: Journalist Karen Hao, author of Empire of AI, a brand new bestselling book that exposes the truth behind OpenAI's quest for dominance — and Roger McNamee, a Facebook investor turned Silicon Valley critic who is warning, for a new reason, that big tech is destroying democracy.
They're pulling back the curtain on the battle between billionaires like Sam Altman and Elon Musk to control the future of AI and the world—a world where Congress is trying to kill any AI regulation, even as every major company is racing to deploy artificial intelligence tools, all while they warn those tools might kill us.
But Hao's book also illuminates the struggles of Kenyan workers forced to moderate the most traumatic content imaginable for poverty wages, and she writes about communities from Chile to Arizona defending their local resources from digital colonialists. AI dystopia isn't just in the future—it's also an empire nightmare happening right now.
So who are the AI empire builders making billions off of AI hype? What do they really want? And what happens when they get control of our future?
Here's my conversation with Karen Hao and Roger McNamee.
Gil: Karen, Roger, welcome to the Nerd Reich. Karen, your new book pulls back the curtain on AI development and the company very much defining this moment—OpenAI—in ways that will make many in tech deeply uncomfortable. And Roger, you've been an outspoken critic of Silicon Valley going back to your own awakening experience with Facebook, which is why I thought it was notable that you've been a tremendous advocate for Karen's book, even blurbing it. You said in your blurb: “With a cast of scientists, scammers, and scoundrels, Empire of AI documents the hype campaign that caused the world to fall in love with a technology whose immediate harms are legion and benefits remain unproved.”
So let's talk about the people in that world, because I think that was the surprise to some in your work, Karen. Your book isn't just about billionaire oligarchs—it's also about everyday people in places like Colombia, Kenya, Arizona, and Chile pushing back, raising alarms, struggling to get by in a world where they're dependent on these companies that don't really care about them, or maybe about anybody. Tell us about some of these other characters who are caught up in this story and what their struggles tell us about AI. I think we get this idea sometimes that these billionaires are creating magic machines and we don't know about the other people who are involved in that.
Karen Hao: Absolutely, and thank you so much, Gil, for having us, and Roger for being here. Roger has been incredibly supportive of my work and my book for so long, so I'm really grateful to him.
The title of my book, Empire of AI, is a nod to this argument that these new companies—we need to think of them as empires, new forms of empire. And the reason why I go to all of these communities that you described is because you cannot tell the story of an empire by just staying in the power center of that empire. You have to go to the far reaches of empire to see how the technologies that are created within Silicon Valley really start to break down for the majority of the global population.
So in Kenya, for example, I went to speak with workers who had been contracted by OpenAI during a time when OpenAI was moving from a more fundamental research orientation to commercialization. The company realized that if it put text generation models into the hands of millions of users, it could run into a PR crisis with the models spewing toxic, hateful speech. To head that off, OpenAI went to Kenya to contract workers to build a content moderation filter that would wrap around these models and block anything unsavory the models said before it reached the user.
What that meant for the Kenyan workers was they were doing this detailed task: they were reading reams of the worst content on the internet, as well as AI-generated content where OpenAI was prompting AI models to imagine the worst content on the internet. They were sorting it into these detailed taxonomies: Is this hate speech? Is this harassment? Is this violent content? Is this sexual content? To what degree is this content violent? To what degree does this sexual content involve abuse, or the abuse of children? All so that the filter could be taught to block these different categories of content.
Like in the era of social media, these content moderators were deeply traumatized by their work. It not only broke down their spirits—it broke down their families and communities and the people that depended on them. This is just one of the many stories that I highlight to show this technology is not magic. There's a profound level of labor exploitation that's happening. There's a profound level of environmental and public health harms that are happening to develop these technologies.
Ultimately, you begin to see the logic of empire when you center those stories, because there is no logical basis for why those workers are paid a few bucks an hour while AI researchers at the center of power are paid million-dollar compensation packages. Both kinds of work are fundamental to the functioning of this technology. The only basis is an ideological one, which is that this world should be a hierarchical one and that there are some groups that have a god-given or nature-given right to be superior and others who are born inferior.
Roger McNamee: First of all, I want to just compliment Karen for that extraordinary description of the core problem—that essentially what we have in this generation of what is called AI is a business where there are 10,000 mostly white men who are benefiting and 8 billion people around the world who are being either exploited at a minimum or directly harmed by the success of those 10,000.
This is what I would describe as the end state of the evolution of the tech industry that's been taking place since 2009. Prior to 2009, for the first 35 years of my career and for the roughly 20 years of Silicon Valley's existence before that, tech was about empowerment. It was about productivity. It was about positive values.
But when the financial crisis hit in 2009, they got this idea: "Wow, we can use data and the free capital that was available from 0% interest rates to change our model, and we can become predatory. We can exploit the weakness of others using data and use that to essentially take control of everything."
The part that makes me so angry is how long it took me to understand. I first observed it in 2010 and told my partners when I looked at Uber and Lyft—looking at that whole generation of ride-sharing guys—I said, "Oh my god, this is totally exploitative and it's all based on breaking the law."
Congress had decided Silicon Valley should be protected from interference, so we're not going to create new laws against them, but we also didn't enforce any of the old laws. Silicon Valley got used to this, and then after 2009 it just basically said, "Look, we're going to break the law with impunity."
Here we are with a thing in AI that's based on: "Okay, we're going to basically end any effort to control climate change. We're going to use up scarce water in places where water is really precious. We're going to steal every copyright. We're going to steal everybody's personal data. And we're going to do all that in order to unemploy tens of millions of people. And in order to make it work, we're going to exploit hundreds of thousands of people in the global south who are used essentially as feedstock to make all this work for the benefit of roughly 10,000 white guys."
I'm sitting there going, "Hmm." That's why I'm so glad Karen wrote this book.
Gil: Let's switch to the main character, in some ways, of the book—Sam Altman. Karen, you describe Altman in the book as someone with a stunning ability to persuade people and get his way, which sets him apart from a few of his key peers in AI right now—Elon Musk and Mark Zuckerberg—whom people have learned not to trust, or to view in a more negative light these days. Yet underneath this boyish "boy wonder" facade, there's something else: a ruthlessly ambitious streak that often ends up turning lots of people close to him against him. People have tried to push him out of companies more than once. You even report that Elon Musk felt he had been manipulated by Altman, and Altman of course was mentored by Peter Thiel, who was part of the PayPal mafia with Musk.
So there are some connections here, and obviously you've got a whole book that seeks to understand Sam Altman and his key relationships—but what did you learn about Altman's specific motivations in his quest for power?
Karen: The question of Altman's motivations is really hard, because one of the things I found most fascinating while reporting this book is that I spoke to over 90 OpenAI people, as well as several more people who were close to Altman but had not worked for OpenAI. No matter how long someone had worked with him, no matter how closely someone had worked with him, they could not articulate what Altman's motivations and beliefs actually were.
What makes Altman so persuasive comes down to three key ingredients. One: he is really great at telling these compelling stories of the future that persuade people, as you mentioned, to join him on a quest or to give him capital for a quest. Two: he is really good at understanding what people need, how to motivate them, and how to push them towards joining the quest. And the third thing that people often reference is that he has a loose relationship with the truth.
So that's what makes him so persuasive—he can say what people need to hear to get them to join whatever journey he needs them to go on. Because of those three things, when I asked people, "What does Altman believe?" they would often just say to me, "Well, I think he believes what I believe." Except that different people would have polar opposite beliefs and still be saying, "I think he believes what I believe."
Over time, the reason why some people who were very close to him then end up feeling this unease and then inevitably anger towards Altman is because they feel played by him. They start to feel that his actions and where he's ultimately pushing the company, pushing the trajectory of AI development, is actually diverging from what they thought he believed, what he told them he believed.
That is kind of the heart of a lot of the OpenAI drama. It is also a really important dynamic in how AI has ultimately ended up in this "scale at all costs" paradigm of AI development—because of Altman's ability to persuade and also the polarizing nature of his character.
Gil: When I read that description of him in your book, it made me think—I do a lot of studying of psychology and persuasion, and there's a tactic called mirroring where you just kind of nod and tell people what they want to hear, and on some level you even mirror or mimic their body posture, because it sends this subconscious persuasive signal that this person wants to agree with you. So that suggested to me that he's probably quite familiar with the technique of mirroring.
Roger, this isn't your first rodeo seeing a boy wonder CEO trying to save or change the world. Any thoughts on Altman and what Karen just said?
Roger: From 1956, when Silicon Valley was created by the AT&T consent decree, until 2009, it was a completely reasonable thing for people to trust that whatever Silicon Valley created was going to make their life better. That was a completely reasonable hypothesis. Since 2009, I think you can make the case the industry has been so predatory that one should not trust a single thing that they have done nor believe a single thing that they have said.
When Karen talks about a "loose relationship with the truth," that's just a beautiful euphemism, because the underlying premises of many of the things that have happened since 2009 are just nonsense on their face. Look at crypto, for example—it's based on the blockchain. One thing you know about databases is that they're supposed to become more efficient as they scale, and yet with the blockchain, every transaction costs more than the ones before it because the chain gets longer and longer and longer. So the premise is just ridiculous.
The same thing is true here. With generative AI, the idea is that we can apply statistics to a training set and the mean value will give you something that is intelligent. That is such an obviously flawed concept. And then when you throw into the mix the fact that the training sets are not just biased because the data is biased, but also filled with nonsense, you realize that there are use cases for generative AI that make sense in the hands of domain experts, but the notion that you can apply it generally—that you can use it for search, that you can use it for chatbots and things that actually affect people's lives—that is so obviously not true.
You have to ask yourself, how in God's name did this guy persuade these people? How is it that they were able to take him around the world and meet heads of state as though he was actually doing something important? To me, we're going to look back on this thing and people are going to wonder, "Wow, how did everybody fall for this thing?"
Microsoft is obviously a huge factor in that—they legitimized this company. Google clearly could have killed OpenAI in the crib in 2022 had they simply pointed out the obvious: that you cannot apply this to search successfully. But all these CEOs in Silicon Valley are billionaires completely isolated from normal people, and they're really competitive with each other. So Google, for whatever reason, just had Microsoft envy and decided to chase them into the space.
Altman had a lot to do with all of that, and I look at it and I just go, it's unbelievable to me because it is so obviously BS. It's stunning that they've gotten away with it for as long as they have.
Now you see in the field that almost all the news reports are of the products failing. There are more than 300 legal cases that have included citations that were made up. You've had chatbots telling people to commit suicide. You've had all these search results that are obviously ridiculous. The technology clearly doesn't work—not in the generalized case. It clearly works for domain experts in their areas of expertise, but 95% of the use of these products is in places where people don't know the topic and can't evaluate the results. It's being used in schools as a substitute for actually learning how to think, how to write, or how to reason. There's no good outcome that comes from any of that.
Gil: It's kind of scary. The children in my family beg me to show them AI. They know about it, they want to get on there and use it. They see it like a toy or something, and it's scary what's going to happen to this next generation when they have access to these tools. I forbade it myself, but we'll see what happens in the long run.
It seems to me part of the whole fascination with these guys is the archetype of Steve Jobs that they all try to project—this idea that everything will change, that technology will change everything. But we saw the same projection from Elizabeth Holmes, and that seems closer, Roger, to what the actual model is than the Steve Jobs model. And it seems to be just this kind of quest for power as well.
Roger, so let's unpack that dynamic a bit. Sam Altman, CEO of OpenAI, once considered running for governor of California, and there was also a time when Mark Zuckerberg was very awkwardly exploring the idea of running for president. We increasingly see a trend in which money and power aren't enough for these guys—they want direct political power too. And it doesn't always work out, as we're seeing right now, at least for the moment, with Elon Musk. Roger, what's the thinking here, and why isn't the money enough for some of these guys? Why do they need more than that?
Roger: My hypothesis—I don't really know the answer, but my hypothesis is that you have a whole generation in Silicon Valley that was essentially raised on dystopian fiction and video games, and really, in many ways, their emotional development was so affected by that that they didn't go on to develop empathy or many other emotional tools that allow you to navigate a complex world.
The one thing I'll never forget was in the days when I was a mentor to Mark Zuckerberg, which was 2006 to 2009. That came apart when the company got to a quarter of a billion users and I pointed out to him that that was pretty much the limit of what you could do in English-speaking countries with an ad-based model, and that if he wanted to get bigger than that, he was going to have to start to do business in places he shouldn't want to be under terms he shouldn't want to have.
He says to me, "Roger, I'm not just going for a billion users—I'm going for two or three billion."
I'm like, "Mark, that's crazy. Why would you do that?"
And I just said, "Look, I can't be part of that."
The thing is, I think these guys have figured it out—and Musk is really the guy who put it all together. Musk realized you could combine tech power with state power, and once you did that, you created something that might be irreversible. Musk's whole idea is to replace the civil service with AI. That's a category error of the most extreme kind. The whole point of government is to do the things capitalism doesn't do well.
All of this stuff, I think, is just terrifying. The other guys all took baby steps towards this, and it took Musk to do the giant leap. Because Mark, for a long time, said, "Hey, I got three billion users. I'm bigger than any country. You guys can't tell me what to do." That was his whole way of handling regulators over the last roughly 10 years.
You look at it now and you go, "Wow." Musk—I mean, I've been screaming about the threat of big tech to democracy for nine years, and it never occurred to me that somebody would figure out how to combine tech power with state power and do it in one shot—literally overnight as opposed to having to do it in steps. I'm embarrassed that I got that wrong.
Gil: Karen, let's talk about Elon Musk for a second, since it's kind of hard to avoid him right now. He's currently attacking Donald Trump after cozying up to him. They've had a major falling out. And of course Musk is a co-founder of OpenAI who also had a major falling out with Sam Altman, which is why he started his own AI company. Now these fights he has with his colleagues are all playing out again, but on a much bigger scale. Musk and Altman both wanted to be CEO and it became a struggle; now it seems that Musk and Trump both wanted to be president and that became a struggle. So Musk helped Trump win the presidency, and now we're seeing this dramatic explosion.
Tell us about the relationship between Altman and Musk, and how do you think Musk's exit will factor into Altman's relationship with Trump, given that Altman is so good at cozying up to power and telling people what they want to hear, and now there's a space that has just opened up?
Karen: I'll talk a little bit about the individual dynamics of these people, but I also want to talk about what I think this symbolizes because there's a bigger picture thing happening that I think is quite dangerous.
So Musk and Altman—on paper they look like mirror opposites. Musk is someone who seizes power through coercion; Altman is someone who persuades people to cede power. Musk seems like the guy who lashes out, and Altman seems really contained and disciplined. It also manifests in the way they deal with legal things. Musk is very much a legal offense player, and Altman is very much a legal defense player. You look at OpenAI's structure—it's just all these nested entities, and that in itself makes it confusing to understand what on earth is going on with this company.
But ultimately, they're just using different tools to do the same thing, which is to accrue more wealth, accrue more influence, accrue more resources towards their particular vision of the future. Throughout OpenAI's history, it isn't just Musk and Altman that clash—it is all of these other former executives at OpenAI that clash with Altman because they have different ideological visions for how they want to shape AI and how ultimately then that AI will shape the world in their own image.
The thing that happened with Musk and Trump, I think, is emblematic of this bigger picture that goes back to my title "Empire of AI." If we look at the history of empires, one of the analogies that I have been pointing to that I didn't put in my book but is extremely apt for the current moment that we're in with the Trump administration is the British East India Company, which was a corporate empire that ultimately ended up being nationalized by a state empire—the British Crown. That is when the Indian subcontinent went from being ruled by a company to being a formal colony of the British Empire.
We are now seeing a corporate empire and the US government in its own empire era as a state empire each trying to subsume the other. They currently have a tenuous alliance. As Roger pointed out, there is an alliance between state and tech power that is unprecedented, but the alliance is happening because the state is trying to use Silicon Valley for its empire building and Silicon Valley is trying to use the state as its empire building asset. So each one is trying to ultimately be the dominant one that ends up on top and can direct the other.
Trump and Musk are the first illustration of this happening. It was highly predictable that at some point it would break apart, because of exactly what you said—each one tried to gain power over the other, and then ultimately Trump won out and booted out Musk, saying, "No, absolutely not. I'm president. I'm number one."
Now Altman has this space, and it's going to be the same dance. But Altman—I think he's quite clever in continuing to persuade people to keep ceding him power, so I think he might actually have more longevity than Musk had with his typical tactics. But it is still the exact same thing.
Silicon Valley is now so deeply influenced by thinkers who talk about the politics of exit—about this idea that democracy doesn't work anymore and that ultimately the better way to organize society is through corporations run by CEOs. That is ultimately the endgame that Silicon Valley has: they're trying to use the US government, while they have this alliance, to build hardware and software all around the world, striking deals in the Middle East, striking deals in other places to lay down infrastructure and ultimately get to escape velocity—what Roger was saying Mark was trying to achieve back in the day. They're trying to get to a point where they become bigger than countries themselves and then take over the US government, take over democracy.
We will once again see this play out as Altman tries to do the same dance that Musk did. But the bottom line is that whether Trump or Altman ultimately wins out, both versions are highly dangerous, because no one is trying to preserve democracy in either pathway. Both of these powerful entities—the state power and the corporate power—are trying to ultimately move past democracy and return to an age of empire.
Gil: We do a lot of talking on this podcast about these exit ideas and the network state, and Altman is definitely a part of that, although it seems like he's more in the Thiel model. It's hard to imagine Altman going on Twitter and accusing Donald Trump of being in the Epstein files. He seems like he'll take a much more diplomatic route; he understands that to be in proximity to power means you've got to lick a lot of boot and hold your tongue. I feel like that makes him more dangerous than the guy who goes off on Twitter. I think it'll allow him to continue having access to extremely powerful spaces for longer.
Roger, let's go a bit macro on this. You've been arguing for years that tech self-governance is a failed experiment, and now they're trying to get into governing everybody else. Part of the reason it's hard to nail these guys is that they just shift definitions and reality around to suit their current positions. And Karen says that very clearly in her book—the definitions always shift. What is openness? What is transparency? What is it—a nonprofit or a for-profit? They just shift it all around. And it seems like any federal regulation in the next few years is unlikely to do what it needs to do in order to bring these guys to heel. What would genuine accountability look like?
Roger: I spent nine years trying to bring about regulation, and it all starts with the data, so it has to start with privacy. There's an NGO called the Electronic Privacy Information Center (EPIC), on whose board I serve, that is leading the effort to make this happen. It is an incredible struggle, and yet that's where it begins.
I actually think that we as individuals have way more power than we realize. Silicon Valley is actually a series of overlapping monopolies, and if you know the novelist Cory Doctorow, he has a term for the business model—he applies it mostly to social media, but it really applies to all centralized cloud-based apps—which is this idea of "enshittification."
You start with a product that is immensely appealing. It does something that changes people's lives for the better, and they get completely hooked. The vendor does no monetization until everybody is addicted, and then brings in advertising or some other form of monetization. In that process, they enshittify the experience of the early users, but they do it really gradually, and again, these people are addicted, so they tolerate the enshittification.
Then in the second stage, after they've gotten to great profitability, they realize they can get even more, because if they enshittify the experience of the advertisers, they can just print money. That's the phase we've been in for the last five years. If you use a product like Google search, Microsoft Office 365 or Google Apps, or Facebook or Instagram, the thing you notice immediately is that the experience today is just dreadful. These products are simply horrible.
The thing that just boggles my mind is that it hasn't occurred to anyone that these guys are incredibly vulnerable. All you have to do is recognize that the industry hit a fork in the road in 2009 and left empowerment and productivity behind in favor of exploitation and extraction. All you need to do is go back to 2009, pick up where they were, and reinvent all of the core products and move them forward from there. So there's essentially no technical risk and huge market opportunity, but nobody has attempted to do that.
I think this is something that will happen outside the United States because the behavior of the integrated tech-state power in the United States is such a giant threat to the European Union, to Canada, to countries around the world that they have an enormous incentive to do that.
When I look at this, AI is funded by the profits from those monopolies, and there is a race going on now: can you convince the guys in the Middle East to fill the void that's going to be left when all of those monopolies collapse under their own weight? Because if you use Google for search, you're getting garbage out of it. If you're using Microsoft Office or Google Apps, your experience is just horrible.
So I do expect that this is an unstable situation, and I don't know how it's going to come down, but there is a race going on, and we as individuals should just say no. Just stop using the products. I was expecting young people to do it, but I actually think it's going to be people in businesses. The really vulnerable products are Office 365, Google Apps, Google search, and Gmail, because they're all bogus, and there are substitutes for all of them.
[Producer's Note]: A quick note from the Nerd Reich producers: "Enshittification" is a great word that can describe any number of things—cars, platforms, smaller burrito bowls. Tell us how it applies to something in your life and tag our Bluesky at http://nerdreich.bsky.social. Back to the pod.
Gil: Karen, let's talk about the would-be colonizers of the future. Empire—that's the main metaphor, though maybe it's not even a metaphor, really. It's a very literal term. You use it to describe AI development, and that's not accidental language. Break that down for us. When you say empire, what are you actually describing, and how does the AI empire specifically work?
Karen: There are four different features that I point to that are the parallels between what I call empires of AI and empires of old.
The first one is that they lay claim to resources that are not their own, but they interpret the rules to suggest that those resources were always their own. That refers to the data that these companies scrape from the internet. The people who put that data online never gave informed consent for having their personal photos or their thoughts taken and used to train models that might ultimately constrain their future economic opportunity. But companies will say, "Well, it's in the public domain. It's totally fair game." And all that intellectual property that these artists and writers created—"That's fair use. We're using it under fair use."
The second feature is that empires exploit a lot of labor. That refers not just to these companies contracting workers all over the global south and in economically vulnerable communities to help produce their technologies, such as through content moderation, data preparation, and data cleaning for just a few bucks an hour, as I talked about, but also to the fact that their technologies are ultimately labor-automating technologies. OpenAI's definition of artificial general intelligence is "highly autonomous systems that outperform humans at most economically valuable work." So they are explicitly saying that their intent is to outperform people at the jobs they usually get paid to do. There's labor exploitation going into the creation of this technology, and then the technology itself perpetuates labor exploitation.
The third thing is that empires monopolize knowledge production. So in the past decade, what we've seen is because AI companies have become so resource-intensive, they can afford to pay researchers these million-dollar compensation packages that I mentioned. So the top AI researchers in the world used to mostly work for academia or independent research labs. They now mostly work for AI companies, and that means the fundamental science that underpins our public understanding of how AI works and its limitations is being filtered through what is good or bad for the empire. That's effectively the equivalent of all climate science being predominantly done by researchers working for oil companies. You're obviously not going to get an accurate picture.
The final feature of empire is that empires always engage in this narrative that there are good empires and evil empires, and that they, the good empire, need to do all this resource extraction and all this labor exploitation in order to be strong enough to beat back the evil empire. I talk throughout my book about how OpenAI consistently identifies new evil empires to hold up. Originally the evil empire was Google. Now, increasingly, the evil empire is China. And the idea is that they, as the good empire, are ultimately civilizing the world. They're not engaging in exploitation—they're actually bringing progress and modernity to everyone and giving humanity this gift of bringing the whole human race to heaven instead of damning it to hell.
That is literally the language that they use these days. They talk about building digital gods. They talk about heaven and hell. And that is quite a profound echo of the way empires of old used to describe themselves as well.
Gil: So we're surrounded by technology. It's hard to escape, no matter how hard we try. We're being sucked into the consumer funnel of these companies. Most people who listen to this conversation will do so on YouTube, which is owned by Google. Many liberal and progressive writers are on Substack, which means the venture capital firm Andreessen Horowitz is profiting from their work while supporting the Trump administration.
Roger, these are powerful systems whose success depends on exploitation and harm in many ways, yet we're in a moment when everything is being pitched as AI-enhanced. It's hard to escape. Any program you get now is like "AI! Now with AI! Your phone, now with AI!" But some people do try to escape. You, for instance, don't use certain products. Is there any ethical way to use these products from large tech companies, and why don't we see venture capital backing projects that are good for the public?
Roger: I don't think there's any ethical way to use AI. I don't think there's any ethical way to use Uber and Lyft. I don't think there's any ethical way to use crypto. I think these are products that are rotten at their core.
The thing is, in the case of Uber and Lyft, or DoorDash, they're incredibly convenient, and we have been trained for the last 70 years in America in particular to choose convenience as the first-order feature of anything that we use. So we seek out convenience, and we seek it out literally the way lemmings seek out cliffs. It's really terrifying.
The thing is, we've been manipulated to believe that convenience is always the best path forward. That's the thing driving kids to use ChatGPT instead of learning how to write—it's convenient. It saves you time.
The thing that I would argue—and listen, I think Karen is absolutely brilliant, I think this book is amazing, and her whole way of describing empire is really important—is that there's a fifth element I would like to add, my own personal one: empire is essentially about concentrating wealth in the hands of a tiny number of people by exploiting literally everyone else.
When you get into conversations about democracy, I think people don't really understand what this is really about: human rights. Are you going to be a human being with agency? Are you equal to everyone else, or are you somehow going to be lesser than others?
I think that the tech industry has a very clear plan, which they do not hide any longer. They used to hide it, but now they're really open about it, which is: "You little people, you have no rights. We're going to take whatever work you have, whatever product you have created, whatever it is that is your will, the thing that you do for a living—we're going to take that away from you for our benefit."
I look at all of these things and I simply point out to everybody: you may have trouble imagining what life would be like if you don't have the right to vote, or you don't have the right to reproductive freedom, or you don't have the right to health care, or you don't have the right to a job, or any one of a gazillion other things. You may have trouble imagining that, but guess what? You probably ought to work really hard at imagining it because that's not a hypothetical. That is literally the game plan that these guys are on.
So if you're using Microsoft Office 365 or Google Apps, you must assume that everything you're putting into those systems is being processed by those companies for AI, and that there is some prompt that will regurgitate your private information or your company's strategic information, in whole form, to the prying eyes of somebody else. I'm looking at this and going, "Why would anybody put up with that?"
I used to have a Microsoft Exchange server for me, for my wife, and three people who work with us—an insane expenditure. Microsoft deprecated the thing every year until it broke because they wanted to switch us to Office 365, and I refused to go. I'm sitting there going, "If everybody did what I do, this whole thing would be completely different," but they don't.
Gil: One thing that's been interesting to me to watch as someone who worked in politics for a long time is that people I've known from California Democratic politics for years—people who worked really hard on progressive policies, people who worked really hard to get Kamala Harris elected—have all now gone to work for a lot of these tech companies. In fact, two people popped up at OpenAI immediately after the Kamala campaign. This has been kind of surprising to me. I know a lot of people who are in the middle of some of these pro-tech things, so the ethics get very blurry when people see a path to their own personal advancement.
Roger: Shame on you. The revolving door between Silicon Valley and Democratic politics has been going on since the Clinton administration. If you're just noticing it now—I mean, seriously, let's just go back and look at the list. Kamala Harris's brother-in-law is at Uber. You look at this and you just go, "The Democratic Party sold its soul to Silicon Valley." It did that in the Clinton administration, and it was really bad under Obama.
Obama had a Federal Trade Commission case against Google that could have nipped surveillance capitalism in the bud in 2013, and Eric Schmidt, who had been at least the figurehead chairman of the campaign, prevailed and they killed it. The notion that the Democratic Party is helpful on these issues is laughable. The Democratic Party largely created Elon Musk. Who made him richer? Who made him the richest man in the world?
People that I've had a long, great relationship with, like Adam Schiff—one of my favorite members of Congress—in order to get elected to the Senate, took a ton of money from the crypto guys and became an advocate. Nancy Pelosi became an advocate. What is up with that? These people know better, but the incentives of politics in America are to take the money, so nobody is working for us.
The question is, when are we going to insist that our politicians work for us?
Gil: I think I agree with you that it's a long-standing problem. I think what's become different lately is that the overt harms and imperial ambitions of these companies have become very pronounced, and that's still not going to stop Democrats from going directly into it. In my view, the co-optation and corruption of the Democratic Party is an even bigger threat than the Republican Party. We know the Republicans are already there, but the Democrats—people who were supposedly fighting for a different future—are actually going for the exact same future, just with different language and a different take on certain issues that we might all agree on.
But Karen, you recently described generative AI as "fruit from a poisoned tree," and that's a legal concept about evidence obtained through illegal means. What are the poisoned roots of AI development that can't be wished away with better intentions? What would you say to those currently using AI products as consumers about what they're actually participating in?
Karen: One of the things I want to clarify is that I use that phrase specifically to describe Silicon Valley's conception of AI development, which is the "scale at all costs" paradigm. This is the kind that is scraping all the English-language data on the internet. This is the kind of AI that is leading to the mass proliferation of data centers and supercomputers, which is then creating environmental, public health, and freshwater crises all around the world.
But there are other types of AI technologies where I would describe them as task-specific, focused tools that can target computational problems that would in fact be beneficial in different spaces, such as the mitigation of climate change, improving healthcare access, improving educational outcomes, and things like that.
The reason why I call Silicon Valley's conception of AI "fruit from a poisoned tree" is exactly that: there are just so many social, environmental, and labor harms along this technology's production supply chain, and then ultimately, as I mentioned, the technology itself is also labor-exploitative and is leading to detrimental effects like the erosion of critical thinking in schools.
Every time someone uses these tools, they are helping to perpetuate that imperial ambition. As Roger mentioned, if everyone stopped using these technologies, these companies would have to change. They would have to change in the same way that the fashion industry, which also used to have hugely exploitative practices and lots of labor and environmental harms, changed when consumers shifted en masse and created new markets for sustainable and ethically sourced fashion.
Roger's exactly right that people forget how much power we have. All of the resources that are used to create these technologies—the data, the land, the energy, the water—and all of the spaces that these companies need to deploy their technologies into—schools, hospitals, government agencies—these are actually all owned by individuals, by the public, or by communities.
That data is our data. If we stopped using these tools, they would stop getting our data. Artists and writers who are suing these companies now and saying, "You can't just take our intellectual property"—that is them reclaiming ownership over their data. Teachers and students who are objecting to the idea of AI being deployed willy-nilly into the educational environment and saying, "Can we figure out under what terms we deploy AI so that it actually fosters creativity and critical thinking?"—that is them pushing back and starting to deny Silicon Valley access to a space that is collectively owned.
If everyone did that all along the supply chain of AI development and deployment, we would get to a place where we would start having more broadly beneficial AI technologies. But we need to, as Roger said, recognize that convenience—the convenience of these tools—is the way that Silicon Valley is greasing the wheels for the perpetuation, fortification, continuation of the empire.
Roger: The core issue here is that in Silicon Valley, the play was to spend so much money that they would crowd out literally everyone else. So the thing that Karen's talked about is hypothetically true, but in practice, all of the engineers who are the leading thinkers in this area have gone to work for the empire, and so the rebellion, such as it is, is not particularly well-funded and it's not making that much progress.
As I look at it—and I think this is a really important point to keep in mind—it's not just that the old-tech monopolies are very vulnerable. The big five in generative AI are vulnerable because they have five essentially identical development efforts chasing what I suspect will be at most three viable opportunities, and maybe only two.
So you look at this—if Musk is out with Trump, then maybe he loses this, but he put himself in the pole position to be the guy for the US government, and he may still have that because his people are still in there. Well, you've got four other guys competing for that. Two of them are almost certainly toast, and probably relatively quickly.
Now, what are the knock-on effects from—remember, the industry will have spent $600 billion by the end of this year on this stuff, so at least $200 billion of that is going to be a write-off in the next year or two because there just isn't going to be space for five of them.
The Chinese showed, "Hey, with a little ingenuity and stealing a little IP here and there, we can do it for less than 1% of the cost," because it turns out that LLM technology is actually a commodity. It's not very valuable. Why? Because it doesn't do much. It's not the answer. It's not the path to AGI. In fact, there is no path to AGI—certainly not in our lifetimes.
You look at this—the whole thing is just smoke and mirrors, more smoke, more mirrors. The politicians have sold out completely, CEOs have capitulated completely, and the press has capitulated completely. So it's like there's nobody left but us. So if we don't act, these people are going to win something, and whatever it is that they win is not going to be good because the products aren't any good.
Gil: Karen, I went to your talk at the Commonwealth Club, and I noticed something interesting in the book signing—you had a little stamp that you put in the book cover that said "AI Free." Tell us a little bit about your personal relationship with technology.
Karen: I do not use generative AI tools. I don't use ChatGPT. I don't use Claude. I don't use any of them. Part of it is that I investigate these companies, so I'm absolutely not going to just give them my data. The only time I touch these tools is with fake email addresses and a fake account, to understand new features when they're released, so I can continue doing my work.
I do use predictive AI tools, and this is what I was referencing when I was saying task-specific, smaller AI models that are used to tackle useful things. So the stamp says "Gen AI Free," not "AI Free," because I did use predictive AI for some parts of my reporting.
I wanted to figure out how expensive OpenAI's furniture was, because I was trying to explain the shift that happened when they went from being a nonprofit to being a very well-backed, Microsoft-funded company. So I used Google's reverse image search, which is a predictive AI tool. I took photos of OpenAI's chairs from back in the day, and a screenshot of the chairs in the office they upgraded to, and I ran them through reverse image search to try to figure out what the prices were.
It turned out that the original chairs were around $2,000 each, and the upgraded chairs were going online for around $10,000 each. So I ended up adding that detail to the book to describe, in a very concrete way, the escalation of wealth this company was experiencing. As Roger mentioned, ultimately the empire metaphor is really about the accumulation of wealth accruing to the top to the detriment of the majority of the world.
So I do sometimes use predictive AI tools in my life, but yeah, I avoid any of the generative AI tools that are made specifically by these companies.
Roger: May I push back on that slightly? So you use the term "predictive" like somehow that category of AI is okay. Let's remember that predictive is only as good as its training set, and I believe the four largest use cases of predictive AI have been predictive policing, mortgage review, hiring, and—I'm drawing a blank on what the fourth one was, but let's just look at the first three.
So predictive policing is essentially using data filled with bias in order to justify the overpolicing of Black and brown communities. It has done exactly what the buyers want. The police departments want the bias because it essentially absolves them of responsibility for overpolicing.
You look at banks—digital redlining. That's the same thing. Biases are built into the data sets, and it absolves the banks of responsibility for denying mortgages to Black and brown people, immigrants, and women. The same thing happens in the job market.
I remember what the fourth one was—it's moderation of social media, where it's been an abject failure. So in the four largest use cases of predictive AI from an economic point of view, it has produced huge social harm.
Now, there are obviously other use cases, in drug discovery and other places, where smaller data sets have produced positive outcomes, and the same thing is true in generative AI—in the hands of a domain expert with a highly curated data set, you can produce something useful. But that's not what these people are trying to do. They're claiming they have the answer to everything, that this is literally Douglas Adams's 42. The notion that we believe anything these people are saying is just unbelievable.
Karen: That is a very good pushback. Absolutely. Roger is right that just because something is predictive doesn't mean that it's suddenly okay. For me, any technology that emerges out of Silicon Valley's quest to create "everything machines" is inherently corrupted. I don't think that is true for predictive—there are certain predictive AI technologies that could be beneficial if all the conditions apply, as Roger said: they're developed and deployed in ways that are highly responsible and held accountable. But of course, there are also many, many different ways to abuse predictive technologies.
But I do not think anything that comes out of this quest to build everything machines has that same opportunity to be ethically deployed. It seems like the focus on AGI distracts us from the very real and present harms that loom much closer to our reality than "Oh, we have to save everybody from this monster that, by the way, we're trying to create," which we probably can't create anyway, but which we're trying to create in order to save you all from it. That seems to be the logic.
Gil: Roger, you've been in this fight longer than most people. When you look at the current moment, do you see a genuine reckoning on the horizon, or have we fallen further down the well than ever?
Roger: If there's a reckoning coming, it's going to come from the industry's own failures. So the monopolies in social media, in enterprise applications and search and all that, are poised to break down. The big five in generative AI are going to have a day of reckoning because there isn't room for five winners in that category.
I think the failures that will result from the people actually using this stuff are going to be spectacular, and they're already terrible, but I think they're going to become much more so. So I think the big hope is that these things are coming, and if we can persuade people, then when the collapse comes from the social media monopolies and the enterprise monopolies and the failures of a couple of the generative AI guys, then we have a chance to have a real reckoning here.
But to expect it from government? I don't think there's a chance. To expect it from corporate CEOs, from the customers? I don't see a chance. To expect it from journalism? Not a chance. If it weren't for Karen and a handful of other people, we wouldn't know anything about this because most of journalism is just stenography for the PR department.
Gil: Karen, is Roger right? We started this conversation talking about the people—regular people being affected by all of this. What did you see while writing your book that gave you hope? In the book, you do go into some ideas for the future. I never like to give those ideas away in the interview, because people have got to buy the book—it's an important book, it's highly readable, and when you buy books like this, it sends a signal to the publishing world to publish more of them. So we're not going to tell you the ending here, but give us a little glimpse of what gives you hope in this David and Goliath struggle against this dark vision of an empire future.
Karen: The thing that gave me hope was that I met so many people along the way who you would assume had no agency, but who fundamentally remembered that they did, and who remembered it much better than people who actually have far more agency.
I'm talking about the Kenyan workers that I met on the ground. I'm talking about the Chilean water activists I spoke to, who pushed back aggressively against data center expansion in their communities.
Objectively, they're at the bottom of the global power hierarchy. These are poor communities in poor countries combating a wealthy enclave in the wealthiest country. It is extraordinary that they, first of all, knew they had agency, and second of all, then made so much noise that they got international media attention.
The reason why I even went to visit them and report on them as part of my book was because there had already been significant reporting on these communities, so I knew that they existed. I think we just need to remember that spirit. We need to capture the spirit that they had and their willingness to fiercely protect their resources, their labor, their dignity, and that is ultimately what is going to help get us out of this hole.
Gil: Thank you both for joining us on the Nerd Reich podcast.
Karen: Thank you so much for having us, Gil.
Roger: Thank you.
Producer: The Nerd Reich podcast is produced and edited by RR Robbins. It's written and hosted by Gil Duran. Become a paid subscriber to the newsletter today at TheNerdReich.com. It's really helpful if you write us a review on Spotify or Apple Podcasts, and if you subscribe to our channel on YouTube. It's possibly less helpful promoting us on LinkedIn, Pinterest, or Tinder, but hell, give it a shot.
Today's final words are from FDR: "The liberty of a democracy is not safe if the people tolerate the growth of private power to a point where it becomes stronger than their democratic state itself."
See you next time.