In this episode of Democracy and Destiny, host Ciara Torres-Spelliscy, a law professor and Brennan Center fellow, examines the rise of “crypto bros” and the corrupting influence of unregulated digital money in American politics. She covers the sentencing of former Congressman George Santos for wire fraud and aggravated identity theft, including his fraudulent FEC filings, and later speaks with Harvard Law professor Lawrence Lessig about the dangerous intersection of AI, social media, and campaign finance. They unpack how engagement-driven algorithms polarize the electorate, the broken promises of copyright law in the digital age, and the looming threat of AI-manipulated elections.
[Music]
This is Democracy and Destiny with Ciara Torres-Spelliscy.
[Supreme Court audio clip] “I have the per curiam opinion and judgment to announce on behalf of the court, Buckley against Valeo.”
[Clip from Nixon Presidential Library] John Dean to Richard Nixon: “We have a cancer within close to the presidency that's growing.”
[Supreme Court audio clip] “In case 08-205 Citizens United versus the FEC Justice Kennedy has the opinion of the court.”
[Supreme Court audio clip] “The First Amendment's core purpose is to foster a vibrant political system full of robust discussion and debate.”
[Supreme Court audio clip] “There is no right more basic in our democracy than the right to participate in electing our political leaders.”
[Supreme Court audio clip] “With fear for our democracy I along with justices Kagan and Jackson dissent.”
Ciara Torres-Spelliscy: Welcome to the show. I'm Ciara Torres-Spelliscy. I'm a law professor at Stetson Law School in Florida and a fellow at the Brennan Center for Justice at NYU School of Law. I work at the intersection of election law and corporate law. This show was inspired by my third book, Corporatocracy: How to Protect Democracy from Dark Money and Corrupt Politicians, published by NYU Press on Election Day 2024. I realize that in today's busy world, reading a 300-page book is not on everyone's to-do list, but even as a law professor I have time to listen to radio shows and podcasts when I'm commuting to campus or walking my dog. So here we are: this is “Democracy and Destiny.”
Today's episode is about the new money in politics, especially from crypto bros. In a few minutes I will be joined by my guest, Professor Larry Lessig from Harvard Law School, who will talk about the intersection of technology, law, and campaign finance. But first, let's start with today's pay-to-play.
[Music]
The term pay-to-play comes from the radio payola scandals of the 1950s and 1960s, when record companies would pay radio stations to play their music; hence it was literally pay-to-play. Today the phrase pay-to-play is shorthand for all kinds of political corruption, especially when government contractors or others with business pending in front of the government pay bribes to public officials to get a private benefit, like a lucrative no-bid contract or approval of a corporate merger. One of the things I learned while writing Corporatocracy is that political corruption is prosecuted frequently, but the media just doesn't report it as often as other things, like celebrity news. That leaves the public with the misimpression that corrupt politicians or shady government contractors are getting away with crimes all the time. And they are not. So I swore to myself that if I ever had a news-generating platform, I would highlight that political corruption can be met with serious legal consequences, including fines and jail time.

So our example of pay-to-play today is from the DOJ. According to the Department of Justice, ex-Congressman George Santos was sentenced to 87 months in prison for wire fraud and aggravated identity theft. Santos filed fraudulent FEC reports, embezzled funds from campaign donors, stole identities, charged credit cards without authorization, obtained unemployment benefits through fraud, and lied in reports to the U.S. House of Representatives. Former Congressman George Anthony Devolder Santos was sentenced by Judge Joanna Seybert. Santos pleaded guilty in 2024. U.S. Attorney John Durham said, "George Santos was finally held accountable for the mountains of lies, theft and fraud he perpetrated. For the defendant it was judgment day, and for many of his victims, including campaign donors, political parties, government agencies, elected bodies, his own family members and his constituents, it's justice." “His lengthy prison sentence is a just ending for a weaver of lies who believed he was above the law,” said Nassau County District Attorney Donnelly.

The Party Program Scheme: During the 2022 election cycle, Santos was a candidate for the United States House of Representatives. Nancy Marks, who pleaded guilty on October 5th, 2023 to related conduct, was the treasurer for his principal congressional campaign committee. Santos and Marks devised and executed a fraudulent scheme to obtain money for the campaign by submitting materially false reports to the Federal Election Commission. Santos and Marks agreed to falsely report contributions and loans to the FEC. In fact, Santos and Marks both knew that the individuals named had neither made the reported contributions nor given authorization for their personal information to be included in such false public reports. These falsely reported loans included one for half a million dollars when in fact Santos had less than $8,000 in his personal and business bank accounts. As a result of qualifying for the program, the congressional campaign received significant financial support.

The Credit Card Fraud Scheme: Santos devised and executed a fraudulent scheme to steal the personal identity and financial information of contributors to his campaign. He then repeatedly charged contributors' credit cards without their authorization. Because of these unauthorized transactions, funds were transferred to Santos's campaign and to his own bank account in furtherance of the scheme. Santos sought out victims he knew were elderly persons suffering from cognitive impairment or decline.
[Music]
Ciara Torres-Spelliscy: Our next segment is Corruption Junction. I have been writing about money and politics for two decades. I was inspired to write the book Corporatocracy because of the events of January 6th at the U.S. Capitol. One way to think about this book is that it's the Supreme Court's horrible Citizens United decision meets the horrifying events of January 6th. So that we are literally on the same page, let me read a short excerpt from Corporatocracy.
[Reading from Chapter 1 of Corporatocracy.]
[Music]
Now we get to the heart of the matter, which is the problem of money and politics. My guest Professor Lawrence Lessig is a professor of law at Harvard Law School. He is the author of several books, including They Don't Represent Us; Republic, Lost; and How to Steal a Presidential Election. You may be the only person I'm interviewing for this series who is a technology expert, so I'm glad to have you here today to speak about the state of American democracy and to help us think through how technology policy intersects with money and politics. Welcome to this section of the show, which is called Corruption Junction.
Professor Larry Lessig: It's great to be here. Thanks for having me.
Prof Ciara Torres-Spelliscy: Where did you grow up?
Professor Larry Lessig: So I grew up in the kind of Kentucky part of Pennsylvania, a town called Williamsport. I lived there from about age six until I went to college.
Prof Ciara Torres-Spelliscy: And what inspired you to study law?
Professor Larry Lessig: It was actually my uncle. In 1974, he was asked by the House [of Representatives] to be the lawyer who would explain to members of the House why Nixon was guilty of high crimes and misdemeanors. On a weekend in August before Nixon resigned, he came up to visit us. He took me for a long walk and he said to me, “you know, law is the only discipline, or the only field, where your power comes through reason, not through money or force.” There was something deeply attractive about that idea.
Prof Ciara Torres-Spelliscy: So you and I have crisscrossed paths many times since 2010 when the Supreme Court decided Citizens United. There is this democracy circuit for academic, nonprofit, and philanthropic symposia where we have often been in the same room. So I remember seeing you in Los Angeles, in New Orleans, in New York, and you were even kind enough to give a keynote speech at my law school…
Professor Larry Lessig: I remember yeah.
Prof Ciara Torres-Spelliscy: Yeah. At Stetson. And then I saw that one of your latest TED talks was in Berlin. So I have to ask: do you actually enjoy all the travel that comes with your life?
Professor Larry Lessig: So actually, I used to, before COVID. I traveled every week, and then COVID taught me I don't like to travel, because it turned out I really loved being at home.
Prof Ciara Torres-Spelliscy: What do you do to keep yourself sane during these trying times?
Professor Larry Lessig: Well, the only sanity is my kids. I think anybody who has a clear sense of just what's at stake can't help but be terrified. And especially lawyers, when you see the extraordinary transformation that the administration is trying to effect. It's hard to explain it to people who aren't lawyers. But you know, for those of us who see it, you can't unsee how profound and significant it's going to be.
Prof Ciara Torres-Spelliscy: So if you could go back to around 1990, how would you have wanted the internet to be regulated, or not regulated, back then?
Professor Larry Lessig: The thing that broke the internet wasn't conceivable by anybody back then, because the thing that's broken the internet is the engagement business model, which is driven by AI, which is focused on the platforms [so they] can rent you out to the highest bidder. And they've been enormously successful in that: in developing technologies that force us to obsess with our machines and spend as much time as we possibly can on our machines so that they can rent us out. All of that was inconceivable, you know, in the 1990s, because the technology wasn't there. The machines were not powerful enough. Nobody had demonstrated quite how it worked. This is before Google had even been born, and certainly before the founders gave up on their fundamental commitment not to have advertising at the core of Google. I think the point is changing your preferences to turn you into the sort of person who can be easily predicted. So people sometimes think that the internet is a thing that's existed in the public's mind since the 1990s till today. But the internet's actually been very many things. And the thing we ought to understand it as today is not a technology, but a business model. And recognize that business model has imposed extraordinary externalities on society as it's turned us into extremists who hate each other so that they can sell more ads.
Prof Ciara Torres-Spelliscy: Yeah, one of the things I tell my law students is: “you are the product. When you are on these big social media networks, you are the product. They want to sell your information to advertisers so they can better target ads at you.” And this is true also in the political realm: that they want to be able to target political messages that will appeal to you.
Professor Larry Lessig: But what's important about that is to emphasize that it's not just passive. It's not just that they're kind of understanding who we are and feeding us what we want. The really critical thing, which you know Stuart Russell wrote about, is that the machines are learning how to change our preferences so that they can serve us ads better. And what Max Fisher's book The Chaos Machine demonstrates is that if you look at the spread of social media across major democracies, you can see that the more social media spreads, the more polarized and radicalized the public becomes. Because the social media companies realize that if they can turn us into right-wing or left-wing crazies, we're more reliable products. We're more easily manipulated and steered into whatever content they can rent to the highest bidder. So it's a kind of manipulation of us that the technology is effecting, not just serving us what, you know, we actually would choose to watch.
Prof Ciara Torres-Spelliscy: One of the first places I encountered your work was when I was a Columbia Law student. At the time, what had just happened was the auctioning of spectrum that was going to be used by cell phone companies. Was there a better way to distribute that spectrum that would have been more small-d democratic?
Professor Larry Lessig: Oh, absolutely. That was a critical mistake, even though it was progress over the old model for allocating spectrum. And from the beginning there were skeptics about this, like Ronald Coase, who said that, you know, there's no more reason to license spectrum than there is to license where you live on your property. You could sell spectrum just like you could sell property. And if you sold spectrum, then people could resell it to the people who value it the most, and you would get a kind of efficient spectrum market just like you would get an efficient property market. But what Coase didn't recognize, because it wasn't actually common or understood at the time, was that there was another way to allocate spectrum, which is to use technologies that effectively share the spectrum space, and share it in a radically efficient way. This is basically the technology that Wi-Fi works on, but Wi-Fi has actually been allocated a pretty terrible slice of the spectrum. This alternative, ultra-wideband spectrum allocation, would basically say: anybody can be using the spectrum, and the technology would be smart enough to figure out how to share its use, which would facilitate a wider group of people being able to use the spectrum than just those who are able to pay to buy access to it. And that “spectrum as a commons” would have been a far superior way to allocate spectrum than auctioning it off. Of course the government wouldn't get as much money, but the economy would be much richer, because there would be much greater competition about exactly how best to deploy and facilitate use of this spectrum resource.
Prof Ciara Torres-Spelliscy: What is the biggest flaw in the Digital Millennium Copyright Act?
Professor Larry Lessig: The biggest flaw is the decision to entrench a particular business model for copyrighted material. And that business model is the idea of facilitating the lockdown of content to authorize quote unquote “sales,” but it turns out it's not really “sales”; it's basically licenses for a period of time of access to that content. And locking up any other use, and making illegal the ability to hack or work around the technical measures that have been deployed to give copyright owners maximal control over their copyrighted material. And the reason that's a mistake is not because one should be against copyright. I think copyright's incredibly important. But because it entrenched businesses who ultimately were not so interested in the artists. You know, at the beginning of the internet, everybody was afraid of Napster, or ways of sharing content for free. And how were the artists going to get paid? But I think if you talk to most artists and you ask them, “how much are you being paid by Spotify for access to your content?” they would tell you, “not very much.” You know, pennies on the hundreds of thousands of listens to that content. And that's because the system essentially locked in access through these huge players, rather than facilitating much more dynamic IP-competition business models to support artists as well as access to their work. I think the best example of this is that way back in the day I became friends with Sean Parker, who at the time was at Napster. Eventually he became one of the earliest Facebook investors. But in his days with Napster, he was a deep advocate of changing copyright laws to facilitate greater competition in business models for building and developing artists. About a decade after I first met him, I was on a panel with him. At that point, he had become a board member of Spotify. I was astonished that he was so unreflective about this transformation, but he said, "You know, I've now decided that I think copyright laws are really good." And of course what he was saying is that the way they are made it extremely hard for any other business to try to compete with Spotify. And of course, now he had a vested interest in Spotify being the number one platform for access to music. And so that's not surprising, if you're interested in it for the money. But if you're interested in it for the way it facilitates support and protection of artists, I'm not sure this entrenching of the power of the labels or the platforms actually made any sense.
Prof Ciara Torres-Spelliscy: What is Creative Commons?
Professor Larry Lessig: We were in the middle of the copyright wars, where it seemed like there were just two sides to the war: either you were for copyright or against it. We recognized there was actually a third position, which is: not people who opposed copyright, but people who wanted to create work and wanted to assure that it could be shared and built upon freely. So rather than “all rights reserved” or “no rights respected,” this perspective was “some rights reserved and some rights given over to the public,” to make it easy for the public to know how they could build upon and share the work. So we launched these licenses as a way to give people the opportunity to show through their acts, and to make true, that they wanted their creative work built upon and shared. And very quickly, people rallied to the recognition that this in fact was what they thought about copyright.
Prof Ciara Torres-Spelliscy: Let's take a short break. [Music] And we're back.
Professor Larry Lessig: You know, if you're in the education business or if you're a scholar, platforms like Wikipedia are committed to the idea that the people contributing to their platform, people who are sharing their content, want to guarantee that others are free to share that content too. These licenses provided an infrastructure for doing that, and have since become the infrastructure for the open access publishing that has become central to the way many academic journals work, and central to Wikipedia.
Prof Ciara Torres-Spelliscy: When I was at the Brennan Center, we would release our reports using Creative Commons licenses. I don't know if they still do that, but it was important to me when I was there. Facebook starts in 2004; personalized Google search starts in 2005, as does YouTube; Twitter starts in 2006. These entities created the information landscape that most Americans live in. So is the information silo problem on social media a problem for American democracy?
Professor Larry Lessig: Yeah, it's a severe problem. But I really think there were stages of that problem. And, you know, at the early stage you would focus on the fact that these were siloed platforms. It was hard to move content between them. So it was not the old internet, where there was basically one internet and everything on it. It was like four or five separate internets, each a walled garden for its owner to prune and make flourish as best it could. The real change for each of these platforms was recognizing the full potential of AI as a driver of content. You know, at the beginning of social media, what drove content, what made something viral, was a certain artistic skill of the person who created it. It was humans who were genius at figuring out a meme, or an image, or a turn of phrase that would then inspire others to want to click and reshare it. That creativity was extremely valuable for a whole bunch of people. Think of a place like BuzzFeed, which hired an incredible number of social media creators. What made them great was that they had a special talent, and that talent was to capture people's attention and get them to act for you by sharing the content with others, and thereby drive attention to whatever the ultimate advertising objective of the platform was. But what social media discovered was that humans were good, but machines were orders of magnitude better. And so, they increasingly deployed technology to manipulate the content, to turn the audience into a more reliable, more predictable product that they could then lease out to people who wanted access to people's attention. Right? So you know, Mark Zuckerberg famously said, "Look, I'm happy to respect privacy. I'm happy not to keep any of the data the services are going to gather on people," because he didn't need to: once the AIs saw the data, they learned what kind of person you were. Like, you know, it turns out we're not very different and not very complicated, especially if they can turn us into radicalized lefties or radicalized right-wingers. And so the platforms increasingly did that, because they had a single objective. Their objective was to maximize engagement. Now, the byproduct of that engagement has been destructive to our democracy, just like the byproduct of the fast food industry has been destructive to our collective health. The fast food industry is out there just trying to find a way to sell more potato chips. It does that by constantly experimenting with a mix of salt, sugar, and fat to find the most addictive, most compelling processed food that it can. And ultraprocessed food does this in an even more aggressive way. But as they do that, what they've done is produce a public which is incredibly sick, because this food turns out not to be good for humans, even if it's the food that's the most addictive. Well, the same thing is happening with social media. These are companies that are in the business of engineering attention, and they do that through computer technology that increasingly learns the best way to capture and direct attention. The most troubling fact about that capture is that the best way to do it is to basically change our preferences. And as they change our preferences, the collective consequence, the externality from this, is that we become a society that understands each other less, is more polarized, more committed to our own view, and more able to get a constant feed of information that confirms our view.
It's a business model whose externality is the weakening, maybe the destruction, of democracy. And when you think about it, I don't think there's ever been a point in the history of humanity when the business model of media was against democracy. [The] byproduct of this business model of media is an incoherent, radically underinformed public, even though we're spending more of our time looking at content than at any time in our history. So that's a product of the business model, and the business model is enabled by the technology, but it's distinct from the technology.
Prof Ciara Torres-Spelliscy: Sort of reminds me of the “Yellow Press” and Hearst helping start the Spanish-American War. So Facebook, Google, YouTube, and Twitter are all just tools. You could use them to share cat videos; you could use them to undermine democracy. As Ani DiFranco says, "Every tool is a weapon if you hold it right." How much do you worry about online political disinformation?
Professor Larry Lessig: So I worry a lot about it. But I want to question the framing, because by calling it “a tool,” it makes it seem like there's an intentional actor behind it for a political purpose. If you're telling me we're talking about X and Elon Musk, I'm willing to believe you've got an intentional actor there deploying the tool of X for a political purpose. But the biggest consequence to our politics doesn't come from particular people deploying a tool in one way or another. And it certainly doesn't come from users of this quote “tool” deploying it one way or the other. You've got businesses that are deploying their platform to maximize engagement, with the consequence that we are radicalized. But their objective is just to make money. You know, Mark Zuckerberg doesn't want to destroy American democracy; it's just that that's the only way he can make money, so that's what he does. And when we use the tool, and we “like” the things we like, and we steer away from things we don't like, and we yield to the constant push or pull of these platforms, I think it's overplaying our role to say that “this is now our tool.” No. We are the tool of it. And so I think we need to recognize the loss of agency that we individually have, and the loss of social agency over these companies, because this business model is just so incredibly lucrative to them that they literally can't resist it. I was honored to be the lawyer for Frances Haugen when she came out as the Facebook whistleblower. The Facebook Files are filled with examples of engineers recognizing the severe harm the platform was doing to different demographics of their users, and recommending solutions to Zuckerberg about how to make the platform less poisonous for young girls, or less poisonous for political speech because [there would be] less promulgating of misinformation. And the single dimension that Zuckerberg would apply to each of these proposed changes was: how would it affect engagement? And if it reduced engagement, it was not going to happen. Now, that wasn't because he liked what it was producing. It's just that he was not willing to pay the price to remove what it was producing, because that price was the price of reducing engagement, which is how he measured the value of his company, how Wall Street measures the value of his company. I think we need to recognize we're kind of lost in the face of this technology and this business model. Typically we would say, “well, this is a role for government to step in” and figure out how to force these companies to internalize the externalities they're imposing. How could we tax the business model, or make it so the business model is no longer the model they choose? But of course, we don't have agency over our government, because our government itself is captured by the influence of the extraordinary money that's inside our political system. So that's how we're doubly bound.
Prof Ciara Torres-Spelliscy: Here I wanted to focus a little bit more on artificial intelligence. Ethan Mollick is one of my friends, but there's a lot of sunlight between me and Ethan on AI. He's quite the evangelist, and I think we should pump the brakes. So should AI be trained on copyrighted work?
Professor Larry Lessig: So I think absolutely. We should be free to copy. We should be free to train AI on all copyrighted work. That doesn't mean copyright owners shouldn't get compensated. But they shouldn't be compensated through the copyright system. I think we need to create a sui generis way to ensure compensation for work that AI has been trained on. But I think it's a basic commitment of copyright and the Enlightenment that work that's created is out there to be learned from. And you know, we're just at a stage where the most important entity learning stuff is going to be AI. And it's really learning stuff, you know. AI reads. AI has done the reading. I mean, I have interrogated AI about my own work, and it is smarter than I am about my own work. It has better insights about what I'm saying and the implications of what I'm saying. And certainly [it has] better insights than anybody on my faculty. So I think we need to accept that the future is one where this will be the medium through which intelligence is generated. Sure, let's make sure that people who create get compensated for it. But let's not start building blockages that stop the machine from understanding the knowledge or the culture we're producing. Because what that's going to do is make sure that the only culture that gets reported, or gets folded into this, is the culture that is most commercially valuable. It's exactly the way to reinforce the way these algorithms change preferences. You know, in the early 2010s, the thing we were all obsessing about was Chris Anderson's work on “the long tail,” and how what the internet did was open up the opportunity for all this diverse content to be accessible for the first time. And this was a great success. The great thing about Napster was this archive of the widest diversity of musical content that anybody had ever seen. Well, the latest measurements of the long tail show that what AI algorithms are doing is replicating the days of, you know, the “Casey Kasem Top 40.” Right? The algorithms are increasingly steering us back to the very same small universe of content that we had in the days before the internet. And that's not surprising, again because, you know, to the extent they can turn us into boring music listeners, there's a better, clearer market to be marketing to. The thing to worry about here is that we're going to build rules into the system that will force culture into that narrow space, and I certainly don't support that.
Prof Ciara Torres-Spelliscy: Are you worried about AI picking up human biases and magnifying them?
Professor Larry Lessig: Absolutely. Absolutely. It's an intractable problem. But it's, in part, an engineering problem and, in part, a foundational problem. We're not going to get rid of it. But, in part, it's engineering: there are clever and important ways that we can avoid and minimize some of it. At least the difference between that bias and the bias that's built into our system right now is that we can see that bias. We can track it, and we can recognize its manifestation. The bias of federal district court judges is much harder to track or to understand.
Prof Ciara Torres-Spelliscy: What are positive and negative uses of AI with respect to our democracy?
Professor Larry Lessig: The positive uses are the extent to which it encourages and facilitates people having a less targeted understanding of reality. It angers Elon Musk that his AI is not sufficiently right-wing or white nationalist. There are all these demonstrations of how Grok gets reality right, and that reality is inconsistent with Elon, with Grok directly calling out Elon as one of the biggest misinformation purveyors on the internet. So to the extent we're talking about platforms like that, it could be a complement, with an “e”, to democracy. It could help us have a better foundation, a better basis for understanding reality. But the worst thing AI is doing to democracy it's already done, because it's AI driving the algorithms behind social media. Those algorithms are incredibly destructive. So I think that's a terrible use of AI. And there's a great piece by Aza Raskin, whose great sin in life was inventing the “infinite scroll,” but he's doing his penance by co-founding the Center for Humane Technology with Tristan Harris. Aza describes 2024 as “the last human election.” And by that, he means that by 2028 the following is completely plausible. Imagine people start deploying AI bots designed just to engage with people in a playful, fun way. You could say, “let's pick a target demographic: white men under the age of 30.” Okay. With that target demographic, let's start deploying AI-generated “OnlyFans” personas. And the objective of these AI-generated OnlyFans personas is to induce this target demographic to spend as much time as they can with the AI. Use your imagination to imagine how in fact they do that, but as they do that, they develop models of the target audience they are engaging with. Those models help them understand how they might manipulate that target audience. So they could spend three years talking about whatever, and then, come time for the election, they could begin to leverage their knowledge, their persuasive authority, their withholding of affection or attention, in exchange for the target doing what they want it to do. So, you know, once you describe it, you're like, WHOA. There's nothing stopping that right now. I've just described existing technologies. We're not talking about technology that tries to fool you, because the people using these video-generated AIs on OnlyFans are not confused about whether there is a human on the other side. They're just enjoying the engagement, so that's why they engage. It's completely plausible that this could be being built right now. Which means it must be being built right now. There must be people doing it right now. And it's completely invisible to the political process. You could be doing it, generating your huge following. You are now capable of manipulating them as you wish, and you don't have to file a single FEC report for what you're doing. And then, at the very end, you use your persuasive authority, just like, you know, Taylor Swift, God bless her, tried to use her persuasive authority to turn her base against Donald Trump. Using persuasive authority is just what famous people have been doing since time immemorial. But the difference between Taylor Swift and these AIs is that there's one Taylor Swift, and her connection to your psyche is very weak, and different for different people. But there'll be 10 million of these AIs, and they will know you precisely.
And they will know exactly what buttons they need to push, and they will push those buttons. And who knows what the product of that will be? It depends on the arms race between one side and the other in deploying this technology.
Prof Ciara Torres-Spelliscy: Last Supreme Court term, I was expecting two blockbuster First Amendment cases about social media: Murthy v. Missouri and Moody v. NetChoice. And instead we got a punt and a remand. What did you think of Murthy v. Missouri? Should the White House be allowed under the First Amendment to pester big social media platforms to change their content moderation policies?
Professor Larry Lessig: The problem is, at that level of generality it's hard to answer. I think what is frustrating about the particular context of these cases is they all grow out of a catastrophic national threat. We all know that the history of the First Amendment is a history that's contingent upon whether we are facing a catastrophic national threat. So, in the middle of World War I, the First Amendment looks different from what it looks like in, you know, the middle of the 1970s. And that's the way it should be. To the extent you've got a government saying, "Hey, what you're saying is actually leading people to do things that threaten their health, and we'd like you to stop," I don't have any problem with that. If instead you're saying, "Hey, what you're doing is advancing an ideology that I don't agree with, a set of political values that we don't agree with," or, you know, if you deploy the kind of extortion-racket strategies which the current administration is deploying against all sorts of institutions, including the press (see, for example, CBS being threatened with a $20 billion lawsuit on the basis of a legal claim that has zero plausibility in American law, given the First Amendment), then I'm much more troubled by it.
Prof Ciara Torres-Spelliscy: Do you have any thoughts on Moody v. NetChoice, which decided that editing news feeds is like editing the New York Times?
Professor Larry Lessig: The way that case was written makes it sound like, all the way down, we're going to have this kind of First Amendment blocking the ability to regulate the environment of news feeds, or the environment of these social media platforms. And if that's true, then we're in trouble. I don't quite think that's necessarily what it set up. And I do think the Court is going to be open to understanding the distinctive character of the AI-driven platform content that news feeds represent. I do think there'll be more space for government regulation than we've seen right now. But I worry that the First Amendment is going to become an insuperable barrier to doing sensible regulation in this context.
Prof Ciara Torres-Spelliscy: Should the courts apply the company town case Marsh v. Alabama to either social media platforms or something like the metaverse?
Professor Larry Lessig: I certainly have believed that. Zephyr Teachout and I wrote an amicus brief in the context of the Texas and Florida cases [NetChoice v. Paxton and Moody v. NetChoice], arguing for, you know, thinking more along the lines of a Pruneyard standard, and thinking about how we understand these platforms as themselves a kind of governmental context, so that they should be able to create environments that are respectful of the values you're trying to establish. But I don't think we're there yet.
Prof Ciara Torres-Spelliscy: Did the Supreme Court get it right in upholding the TikTok ban?
Professor Larry Lessig: I mean, on the surface, absolutely, because it was a regulation of a foreign company in the context of American political speech. And that, traditionally, has been completely permissible. The problem with it, the challenge to it, is that it wasn't really about security, and it wasn't really about foreign-ness. What we know from the people on the Hill, in fact from Mike Gallagher, the Congressman who was successful in pushing this along, is that the only reason Congress passed this is that there was too much pro-Palestinian speech on TikTok. And they didn't like the fact that all of these terrible images of Palestinian children were spreading so broadly among the youth in America. If ever there was a case for testing the question whether the ulterior motive is the real motive, I think this should have been that case.
Prof Ciara Torres-Spelliscy: If you could defenestrate a legal concept out of American law, what do you think should be in the dustbin of history?
Professor Larry Lessig: The idea that Citizens United gave us super PACs. My friend Bernie Sanders is constantly out there saying “super PACs are the end of American democracy.” True. “And therefore we have to overturn Citizens United.” False. Because Citizens United did not give us super PACs. It was SpeechNow, a D.C. Circuit case decided three months after Citizens United, that gave us super PACs. And it did so on the basis of an obvious logical mistake. That case was never appealed to the Supreme Court, because the attorney general didn't believe it was going to be a significant category of speech. Once you point that mistake out and get a chance for a court to finally consider it, once you get that case before the Supreme Court, I predict the Court is going to recognize that SpeechNow was wrong and that super PACs are not constitutionally mandated. And indeed, right now, most of my work is with people in Maine who have succeeded in getting an initiative passed that bans super PACs in Maine, which was immediately challenged by a group funded by Leonard Leo. We recruited Neal Katyal to intervene to defend the law, and we're arguing to uphold the law. And the arguments to uphold the law are, I think, overwhelming. And I don't think the court has any reason not to uphold the idea that states and the federal government have the right to limit the size of contributions to an independent political action committee, even if they don't have the right to limit how much that committee spends. So we've got to give people a sense… got to give people hope. Because when you say “what you've got to do is overturn Citizens United,” anybody who knows anything knows there's zero chance we're going to overturn Citizens United. But when you say to the Court, “look, embrace the logic of Citizens United,” and what follows from that logic is that Citizens United does not entail super PACs, then I think there's a fighting chance of removing 80% of the problem simply through the Court correcting a lower court's mistake.
Prof Ciara Torres-Spelliscy: Is cryptocurrency inherently anti-democratic, in the small-d sense?
Professor Larry Lessig: No, I don't think it's inherently anti-small-d-democratic. There's a lot of fraud and, you know, manipulation and corruption deployed in the current infrastructure or regime of cryptocurrency. I mean, look at the president running memecoins, where he's auctioning off access to himself in the White House to the top holders of his own memecoin, which has led to his personal wealth doubling in the first 100 days of his administration. I mean, all of that is corrupt for a million reasons unrelated to the architecture of cryptocurrency. It's corrupt because we don't know who's buying these crypto coins. There's no, you know, FEC requirement that you disclose who the owners of these coins are. And there's a very simple way that foreign governments can curry favor with the White House: by buying huge amounts of the coin and then letting the White House know that they are the ones purchasing it, so that there's a favor to be returned. So I think it's used right now in an incredibly corrupt and terrible way. Just like, you know, it's used to facilitate the buying and selling of illegal content, like child porn or drugs. That's certainly true of the technology, but I don't think the technology is inherently evil or inherently bad. And I think if we had a well-regulated infrastructure for cryptocurrency, it would be a valuable addition to the finance mix.
Prof Ciara Torres-Spelliscy: So, my favorite Scalia quote says, "Requiring people to stand up in public for their political acts fosters civic courage, without which democracy is doomed." What do you think would foster civic courage today?
Professor Larry Lessig: I think the most inspirational acts today would be people willingly exposing themselves to criticism by engaging with people they disagree with. I think the most poisonous reality of politics today is that we are given the opportunity to live within our own bubbles and praise ourselves, because no one within our bubbles questions what we believe. But that dynamic is deeply destructive of democracy. So I think we ought to be encouraging and rewarding people who put themselves out there in a context where they're not surrounded by people they agree with, and who work hard to try to find common ground nonetheless.
Prof Ciara Torres-Spelliscy: Well, thank you so much for being here today.
Professor Larry Lessig: Sure. My pleasure. Thanks for having me.
[Music] Let's take a short break. And we're back. As someone who spends her time focused on political corruption, it's easy to end up with a dim view of humanity and to get dispirited about everything. But one thing that has kept me happy and sane over the past eight years is my 100-pound chocolate labradoodle. So with your indulgence, and to lighten the mood, let me share my life motto with you, which is “loves dogs, hates corruption.”
One of the bad things about living in Florida is hurricane season. It's a lot to be at the tip of the spear experiencing climate change. When I first moved down to Florida, I remember asking other professors at Stetson, “does Tampa get hit by hurricanes?” and the response I got at the time was that the Tampa Bay area hadn't been hit with a major storm in 80 years. And that was true when I asked the question. But in the time that I've been here, it has been hit by three major hurricanes.
One of my only regrets with having a 100-pound chocolate labradoodle is that he is simply too big to fly. That means he rides in the car with us. One of the web pages we use when we're traveling is called “Bring Fido.” They have a great list of dog-friendly hotels, restaurants, and dog-friendly parks. When Hurricane Irma hit, that was a wild experience, because the entire state ran out of gas. So by the time we decided that it would be wise for us to evacuate, it was actually no longer safe to leave, because we would risk getting stuck on a highway with no gas. We were in our home when Irma hit our house. That was one of the scariest nights of my life, and then after the storm we didn't have power at my house for a week, and my law school didn't have power for two weeks. After Hurricane Irma, as a family we tend to get out of Florida if a forecast says that a hurricane is likely to hit nearby. What that has meant is driving out of Florida for 8 to 10 hours; whoever is in the back seat has the 100-pound doodle snuggling them. His favorite was a hotel in Atlanta, because there was a big roomy couch he could chill out on while we obsessively watched the news to see how the hurricane was impacting our area. I must say, if you have to experience the stress of outrunning a hurricane, then I highly suggest taking a big snuggly dog along with you, so that there's something positive to an otherwise miserable experience. Okay, now back to business.
[Music]
Now we get to our final segment, The Fix Is In. Many of the problems with our democracy seem unfixable, but that is not true. These problems were created by human beings, and they can be solved by human beings. We can improve laws and practices at the local, state, and federal level. One of the topics we discussed today was the role of technology in harming our democracy. Concern about the power of big tech crosses party lines. If this is a topic that you care about, ask your member of Congress and your two senators to enact laws that reinforce democracy through technology. As you are using your right to petition the government, you can also encourage your congressional representatives to keep the First Amendment in mind. You should be choosy about what platforms get your attention. If a platform has given up on fact-checking or has turned into a free-for-all of misinformation, you can choose to spend your time elsewhere. Just remember that democracy is worth defending, and a little truth goes a long way.