DAVE DAVIES, HOST:
This is FRESH AIR. I'm Dave Davies, in today for Terry Gross.
Last week, the New York Post published a potentially damaging story about Hunter Biden, son of the Democratic presidential nominee, based on emails the Post said were provided by former New York Mayor Rudy Giuliani and originally harvested from a laptop computer left in a Delaware repair shop. There were enough questions about the authenticity of the emails that most mainstream media declined to publish the story, but it's the kind of content that can spread like wildfire on social media.
In a remarkable move, Twitter on Wednesday banned users from sharing links to the story because it said the emails may have been hacked and contained private information. It reversed course two days later after Republicans accused Twitter of censorship. But the episode illustrates a question our guest, Emily Bazelon, has been thinking about. In an age when questionable, perhaps even fabricated, content can sweep through the digital world unchecked, does our traditional commitment to unfettered free speech still serve democracy?
In the cover story for this week's New York Times Magazine, Bazelon surveys the impact that lies and conspiracy theories, sometimes promoted by foreign actors, can have on our political discourse, and she explores how other countries think differently about free speech and its relationship to a healthy democracy.
Emily Bazelon is a graduate of the Yale Law School and a journalist. She's a staff writer for The New York Times Magazine and the Truman Capote Fellow for creative writing at Yale Law School. She's also the author of two books. She joins us from her home in New Haven, Conn.
Emily Bazelon, welcome back to FRESH AIR.
EMILY BAZELON: Thanks so much for having me.
DAVIES: You open your piece with a story that began making the rounds some months back among right-wing voices on the Internet that there was a plan by the forces of Joe Biden to stage a coup to take over the government in connection with the November election. First of all, what was the basis of this claim?
BAZELON: Right, so this is a concocted claim, and the sort of kernel at the center of it was a project called the Transition Integrity Project, a group of about a hundred academics and journalists and pollsters and former government officials and former campaign staffers. They started meeting over the summer to kind of game out various scenarios for the November election. And so they were basically stress-testing American democracy - in the event that President Trump wins, in the event that Vice President Biden wins - to see what could happen in various scenarios.
DAVIES: And in the event there's a contested result - right? - and a long, nasty count.
BAZELON: Yes, exactly, especially in the event that there's a contested result and litigation and other possibilities.
And so in one of their several scenarios, Biden wins the popular vote but loses the Electoral College. And so in that hypothetical case, they imagined the Democrats would get desperate and might consider encouraging California and the Pacific Northwest to threaten to secede as leverage to pressure Republicans to expand the size of the Senate.
So Rosa Brooks, who was one of the organizers of this project, she's a law professor at Georgetown. She published an essay where she mentioned this threat to secede in one sentence in an essay in The Washington Post. And the next day, you see someone named Michael Anton, a former national security adviser to President Trump - he has an article called "The Coming Coup?" And based on Rosa Brooks' characterization of what the Transition Integrity Project was doing, he starts saying that Democrats are laying the groundwork for revolution.
And then you see that article take off in extremist online communities. There is a podcast maker named Dan Bongino, who's a big Trump supporter. He makes videos about it. One of them has the tag, "They are telling you what they are going to do!" His videos pull in millions of views.
Then you see the story migrate to a right-wing website called Revolver News. Revolver News starts to spin up the idea that Norm Eisen, who participated in the Transition Integrity Project and is a longtime Democratic lawyer in Washington - that he's at the center of this supposed coup.
And from there, Tucker Carlson features someone talking about this concocted story on his show. And then you see it just go viral on social media and get picked up by lots of groups, including, like, a county Republican organization in Oregon. So it is a perfect kind of story because it pulls in both traditional media, in the form of Fox, and also social media.
And then you see President Trump get involved. He tweets in praise of Revolver News, and then he tweets, quote, "The November 3 election result may never be accurately determined, which is what some want." And that's a kind of typical, dark, slightly vague, foreboding kind of warning from President Trump that further perpetuates this coup narrative. And then Trump later retweets someone talking about a coup with regard to Nancy Pelosi.
So you see from this hypothetical project that was really meant to be a kind of academic exercise about the election this whole set of conspiracy theories on the right that get a lot of play in the media, on social media and then from the president himself.
DAVIES: Right. This was essentially a scenario tossed out in a brainstorming session, but it takes on this whole new life. And what's interesting is it's not just spreading on Facebook and Twitter, right? It is reinforced and repeated by people who are, in some respects, media elites and political elites, right?
BAZELON: Yeah. And that's actually become typical. You know, sometimes when you think about these disinformation campaigns, meaning spreading conspiracy theories or lies in a kind of deliberate way for a political goal, you think of them as starting at the fringes of the Internet, like on 8chan or in some thread on Reddit, and then migrating more into the mainstream.
But there's a center at Harvard, the Berkman Klein Center, that's been really studying these networks of disinformation. And what they have found is that since President Trump's election, these campaigns often begin or are centered in the more elite media, because President Trump, now that he's president, has the power to really use that media and make it a kind of party press. So this is, in their words, a more elite-driven, mass-media-led process than it was before the 2016 election.
DAVIES: And you also write that this spreading of, you know, lies and conspiracy theories isn't meant to win the battle of ideas, but kind of prevent the battle from being fought. What do you mean?
BAZELON: Well, I mean that there is just so much information that it's overwhelming - the distortions, the anger encoded in them. And these campaigns are as much about creating chaos and confusion as they are about any particular idea or set of facts.
And I think that part of the goal here is to make people who are nonpartisan just kind of exhausted and skeptical and cynical about politics writ large. So I don't necessarily think that the people spinning this notion of a Democratic coup really believe that's happening. It's more this idea that you're sowing distrust. You don't know what's happening around you. Any source of information could prove to be true or false. A conspiracy theory might sound outlandish, but who knows?
And when you start raising those kinds of question marks in the minds of voters, it becomes really hard for them to sort out - and exhausting, in a way that discourages people from participating in the democracy.
DAVIES: So we have had this idea for a long time that, you know, free speech is precious and that the answer to bad speech or hateful speech is not censorship, but more speech. And you say that a number of scholars are beginning to rethink this. Why? Why aren't just aggressive fact-checking and, you know, more speech which gets at the real truth the answer to this?
BAZELON: Yeah, it's a great question. I mean, free speech is a precious right. I think the question I have right now is whether this very sunny vision that there's a marketplace of ideas, and the better ideas are going to win out - whether that's really true, especially in the Internet era. So one of the problems online is that we know from research that lies often go viral faster than truth, probably because there's something novel about them.
And we also have this reality on the Internet where the algorithms that platforms like Facebook use - they are set to maximize what - let's call it engagement. They want to keep you online. And what they've learned over the years is that the content that tends to keep people clicking and sharing is hot content. It's content that generates outrage. And that is not necessarily very healthy for a democracy. There's also nothing neutral about it. It's about the profits of these tech companies. It's not about any kind of notion of, like, the best content rising up and getting the most attention or the most convincing argument.
And so the scholars who I've been talking to for my piece - what they were concerned about is that we have this First Amendment that is very, very good at protecting us against government censorship. We have lots of rules set up; the government can't interfere with your speech. And there are really good reasons for that. You don't want the government telling you what you can say. If the government thinks an idea is bad, shuts down debate, and turns out to be wrong, that's not good for democracy.
But in this age of a flooding tide of disinformation, should we have a First Amendment that can also figure out how to address that particular problem - the problem of, you know, troll armies online that are trying to distract people and confuse them and often really damage people's reputations? And so the question is, you know, once you have people using speech as a tool to suppress other speech, what do you do about the fact that that's very challenging for our American First Amendment to really address?
DAVIES: You know, we've had the First Amendment for, you know, more than 200 years, but for a long time, it didn't really function to protect free speech. There was a lot of official suppression of ideas regarded as unpatriotic or troublesome. But in this century, we've really come a long way - haven't we? - towards insisting that even the most loathsome speech is to be protected - Nazis marching in Skokie, Ill., right?
BAZELON: Yes. That is the American tradition - that it's much better to air ideas and address them than it is to try to force them underground, because then people will hold onto them, and they will also feel aggrieved at the government for not letting them speak. That is absolutely the American theory of protecting, for example, hate speech.
DAVIES: And we've also seen the notions of free speech taken in a different direction, where political spending is defined as speech, which is protected. And corporations can adopt some of the protections of the First Amendment. Has that distorted the way it works?
BAZELON: Yeah. So starting in the 1970s, you see the Supreme Court start to protect corporate campaign spending right alongside individual donations. And, you know, in a world after Citizens United, the 2010 Supreme Court decision, you really have corporate spending on elections treated on par, in terms of the protection of the First Amendment, with the shouting of protesters. And I think that what you see here is that the Supreme Court has required the state to treat alike categories of speakers, meaning corporations and individuals. And that goes way past preventing the government from discriminating based on the viewpoint or the identity of an individual speaker, right?
It means something really different to say that a corporation has the same right to speak, the same right to give money to candidates, as an individual - especially because corporations have proved to have such vast resources and to be really good at spending what's called dark money, money in political races where we can't trace its origin. We don't really know who is speaking so loudly to us with this paid political speech.
And so one of the law professors I talked to for my piece, Robert Post, who teaches at Yale Law School - he was arguing that the real problem with this corporate power, with the way the First Amendment has been weaponized by corporations, in the words of Justice Elena Kagan, is that we've lost sight of the idea that the purpose of free speech is to further democratic participation.
DAVIES: Let me take a break here. Let me reintroduce you. We are speaking with Emily Bazelon. She's a staff writer for The New York Times Magazine. Her cover story in this week's issue is "The Problem Of Free Speech In An Age Of Disinformation." We'll continue our conversation after the short break. This is FRESH AIR.
(SOUNDBITE OF AHMAD JAMAL'S "THE LINE")
DAVIES: This is FRESH AIR, and we're speaking with Emily Bazelon. She's a staff writer for The New York Times Magazine. Her cover story in this week's issue explores how well our traditional view of embracing unfettered free speech serves democracy in the digital age. The story is titled "The Problem Of Free Speech In An Age Of Disinformation."
You know, one of the things that I think we've long relied upon to deal with inaccuracies in the media is that, you know, our media would compete not simply for, you know, ratings or circulation but on accuracy. I mean, if there was - somebody had a story out there which got something drastically wrong, another paper or electronic outlet might take some pride in setting the record straight. Does that just no longer happen?
BAZELON: Well, that does still happen in the mainstream media and actually in the liberal media. And I'm going to go back to some of the researchers from the Berkman Klein Center at Harvard. So they did this huge study of 4 million news stories and all the links they had on the Internet in 2016 and 2017 for the most part. And what they found is that what you're describing, this kind of competition among media outlets for factual accuracy - they call it a reality check dynamic, and they say we can see that happening all the time in the mainstream media. The New York Times gets something wrong. The Washington Post comes along and says, nope, here's why that's a mistake.
And that reality check dynamic - it still allows for significant partisanship, right? Like, by no means is the media perfect or perfectly balanced, but it is a significant constraint on disinformation. The problem is that in this right-wing media ecosystem that we were talking about earlier, especially Fox and talk radio hosts like Rush Limbaugh, you don't have that same kind of competition for factual accuracy. Those outlets don't really challenge each other on facts, and they're much more likely to pick up and kind of recycle a false narrative than they are to debunk it or even to admit themselves that they've gotten something wrong.
DAVIES: You know, this will strike some listeners as, you know, a partisan comment. You're saying there's independent evidence that the conservative media are less interested in factual accuracy?
BAZELON: Or at least they're behaving that way. And, yes, I'm sure it will strike some of your listeners as partisan. And I wish it wasn't true. It would be much better for our democracy if we didn't have this asymmetry, where disinformation is propagated in the right-wing media and not in the mainstream and liberal media, because then we could address it all together. But this is the reality that research has really shown and confirmed - that we have these two different modes of operation. And so I feel, as a journalist, like I have to report those facts, even though they are politically inconvenient.
DAVIES: Do these social media platforms have any responsibility for the content that others post on them? Can they be sued for a libelous claim?
BAZELON: So the social media platforms only have the obligations they give themselves. And they cannot be sued for libel or face other kinds of civil suits. They benefit from a provision called Section 230 of the Communications Decency Act, which Congress passed in the mid-'90s. And the idea was, hey, we have this new thing, this Internet. We want to help it grow. And so we're going to try to encourage websites and Internet service providers to do some content moderation by not holding them liable if they miss stuff. What actually happens, though, as a result of not having any kind of civil liability for libel is that anything goes on these platforms, to the extent that the platforms want to allow it to remain there.
So the analogy that helps me with this is that, you know, functionally, social media in a lot of ways is like a public square in America and across the world right now. But legally speaking, it's not that at all. It's a private zone. It's like a mall where, you know, the people who own the mall can hire the police and decide the rules. But if something goes wrong in the mall, they're not liable. And so that kind of has its own skewing effect in terms of what kind of speech is promoted and amplified.
DAVIES: Right. And when you consider the financial model of the social media companies, particularly Facebook, it doesn't ensure that all voices are heard equally. In fact, it tends to promote stuff that's loud, that's controversial - right? - that's sexy.
BAZELON: Yes. And that is the way they make money because that kind of content we are all much more likely to click on and share with our friends. And then that keeps us online, and we produce data. And the social media companies then sell our data to advertisers. And that's their business model. So I think what's important here is to recognize that these algorithms - there's nothing neutral or the slightest bit public-interest-oriented about them. They are doing what's helpful to the social media platforms to turn a profit. And so the platforms make decisions about what content to promote based on what they think is going to go the most viral. And really, there has been little constraint, legally speaking, on what they can do. They have incentives to remove content like spam and pornography that could drive users away. But that's not a responsibility that comes to them from the government.
DAVIES: Right. And just so we understand this, the immunity from libel suits - that comes from the Communications Decency Act of 1996. And what was the kind of public-interest principle that protected them from this kind of liability?
BAZELON: I mean, it was really this idea that it was so important to have the Internet grow that you wanted to make sure people couldn't sue the ones setting up the platforms and providing content on the Internet, because you didn't want to stifle its growth. And one of the concerns people raise about libel suits is this - say I come along and post something that's really mean about you, but it might be true, and I post it on, say, Twitter. And then you go to Twitter, and you say, that's libel; I'm going to sue. Twitter could look at that and say, well, we don't know if this is true or not. We don't want to pay to litigate this and defend it. So we're just going to take it down. And so that's the concern. It's sometimes called the heckler's veto. The people who complain are going to get a lot of content taken down, and that's going to promote overcensorship. So that's what Section 230 of the Communications Decency Act was supposed to prevent.
DAVIES: We're going to take a break here. Let me reintroduce you again. Emily Bazelon is a staff writer for The New York Times Magazine. Her cover story in this week's issue is titled "The Problem Of Free Speech In An Age Of Disinformation." We'll talk more after this short break. I'm Dave Davies, and this is FRESH AIR.
(SOUNDBITE OF TAYLOR HASKINS' "ALBERTO BALSALM")
DAVIES: This is FRESH AIR. I'm Dave Davies, in for Terry Gross. My guest is Emily Bazelon. She's a graduate of Yale Law School and a staff writer for The New York Times Magazine. Her cover story for this week's issue explores how well Americans' commitment to free, unfettered speech serves democracy in the digital age. It's titled "The Problem Of Free Speech In An Age Of Disinformation."
This year, some of the social media giants have taken some steps to advise people about content that might be inaccurate or troubling or disputed. For example, Twitter and Facebook did - took some steps in the cases where President Trump would make inaccurate claims about fraud in mail-in voting. What did they do?
BAZELON: Yeah. So Twitter and Facebook and YouTube have gotten worried about the integrity of the election. And, you know, Facebook in particular was blamed in some ways for the 2016 election because it turned out that Russian operatives had used the site to put up a bunch of paid ads and potentially depress the vote. So the social media platforms, they don't want that to happen again. On the other hand, they really don't want to seem like they are partisan or suddenly policing a lot of speech. So they've created pretty narrow categories of content that they think could interfere with voters, in some way really mislead people about how to vote or could mislead people about the results of the election after it takes place. And so with those narrow categories of content, they are removing some things, but they have an exception if you're a newsworthy figure. So President Trump, he is newsworthy and they have the idea, well, we're not going to take down these posts. You know, he's the American president. People should hear what he has to say. But we're going to put on what they call informational labels. We're going to tell something about the context for what Trump is saying.
And what's fascinating to me is that when you look at the sort of trajectory of what these fact-checking labels look like, they started out being very mild. So take a false post where Trump said, you know, we'll never know if the ballots are counted accurately - the election is going to be a disaster. Facebook started out by saying you might want to visit our voting information center, which is really not much of a check, right? It doesn't tell you outright that what Trump is saying is wrong. Now they've migrated to a label that says voter fraud is extremely rare across voting methods. So that's a more direct challenge to what Trump is saying. I should also note, though, that there is lots of misleading information about the election that's not getting labeled or still has the milder labels on it. So, you know, the people who study this and track disinformation online are still pretty concerned about the effect of these disinformation campaigns on the platforms.
DAVIES: Right. There was one case where Facebook removed the page of a group called FreedomWorks, which had a post involving LeBron James that made it seem as if he were discouraging mail-in voting. So they took that down. But Republicans and conservatives have complained and said, you know, this is in effect bias and censorship by the social media giants. What kind of impact has that had?
BAZELON: Yeah. I mean, I think conservative groups have been very effective, or tried to be, in working the refs (ph), and so the post about LeBron James you were referring to, that was a paid political ad. Twitter decided last year, you know what? We're not going to have paid political ads anymore. But Facebook said, we think this is part of free speech. And so they have allowed groups to put false and misleading information in these paid ads. They've now very recently said that after the election, they are going to temporarily suspend paid advertising about politics. They're worried about the results of the election not being accurately reported. And I just think it's really interesting that at a moment when they think the most is at stake for the democracy, they have decided that paid political speech is no longer something they feel good about promoting. That is a step they have taken kind of despite conservative opposition to any kind of fact-checking or content moderation. And so you see a kind of dance here, where the social media companies want to be seen as more responsible - they can see that this disinformation is spreading, especially on the right - but then, how far can they go before they risk being regulated or yelled at loudly by Republican politicians and government officials?
DAVIES: So, you know, the First Amendment scholars and analysts who you have talked to for this story, when they look at the steps that social media platforms have taken, you know, putting labels on things that the president says and the stuff about the Post story, how do they regard all of this? Do they think that it's a meaningful check on the spread of bad information?
BAZELON: The good news here is that there is pretty much a consensus among researchers that fact-checking can significantly reduce people's belief in false information. That doesn't mean that it's, you know, perfect and works for everyone, because it doesn't. But it does seem to do some good for some people, at least in the short term. So, you know, I think what you've seen more and more from the mainstream media is the use of headlines or context in a story to say something's not true. So rather than simply reporting that President Trump has said something, if the media can see right away that there is a falsehood, they are more likely now to call it out. And you can also see that with chyrons on television - right? - where sometimes the words underneath what someone is saying are actually challenging them even as they're talking.
The question is whether these fact-checking labels on social media can be made effective enough to do exactly that kind of thing. And there's an increasing push among people who study disinformation for stronger language and stronger visual cues - so, like, an exclamation point, a warning, something that really tells people, what you're seeing here is probably wrong. And both Facebook and Twitter now, in certain contexts, if you share something that they know is false, they will send you a note to let you know that you've shared false information. So it'll be interesting to see if that has an effect on people and makes them less likely to share more false information in the future.
DAVIES: Gosh, I've never gotten one of those notes (laughter).
BAZELON: You're probably not sharing conspiracy theories.
DAVIES: All right. Let me reintroduce you again. We are speaking with Emily Bazelon. She's a staff writer for The New York Times Magazine. Her cover story in this week's issue is "The Problem Of Free Speech In An Age Of Disinformation." We'll continue our conversation in just a moment. This is FRESH AIR.
(SOUNDBITE OF WES MONTGOMERY'S "FOUR-ON-SIX")
DAVIES: This is FRESH AIR, and we're speaking with Emily Bazelon. She's a staff writer for The New York Times Magazine. Her cover story in this week's issue explores how well our traditional view of embracing unfettered free speech serves democracy in the digital age. The story is titled "The Problem Of Free Speech In An Age Of Disinformation."
You know, you write that the principle of free speech has a different shape and meaning in Europe. How is it different?
BAZELON: So the Europeans, you know, because of the experience of the rise of fascism in the 1930s and the Holocaust and Nazism, they are much more leery of this notion that the good ideas are always going to win out in the marketplace of ideas. And so they treat free speech not as an absolute right from which all other freedoms flow but as a really important right that they're also balancing against rights like democratic participation and the integrity of elections. We see the same kind of balancing in countries like Canada and New Zealand, and it allows high courts, for example, to let states punish people who incite racial hatred or deny the Holocaust. Germany and France also have laws that are designed to prevent the widespread dissemination of hate speech and election-related disinformation. And so, you know, when I was talking to people in Europe about this, they were pointing out that the Nazis and other fascist governments were originally elected. And so in Europe, there's this much more acute historical understanding that democracy needs to protect itself from anti-democratic ideas. And I think it's because of that different ethos - of protecting democracy - that Europe has accepted more restrictions on speech.
DAVIES: And of course, that means the government then has to look at the speech and make a judgment - right? - what actually harms democracy? And you get into analyzing some very specific content. And I think to a lot of Americans, that would seem, gosh, that's just a quagmire. How does it work in Europe?
BAZELON: Well, that's a great question. I think, first of all, there is an emphasis only on hate speech or disinformation that really reaches a wide audience. So it's not like if you just, you know, say something to someone else, you're suddenly going to be prosecuted. These are pretty unusual instances. One thing I was really interested in looking at was the difference between the American election in 2016 and the French election in 2017. So in both cases, Russian operatives come in with a big hack that is trying to influence the results. In the U.S., this is the hack that the Russians give to WikiLeaks and then gets doled out to maximum effect over the summer and fall to damage Hillary Clinton. And the media just covered all of those stolen materials. I'm talking about the hacked emails from the Democratic National Committee. And there was very little discussion in the United States like, hey, are we being a tool of the Russians by covering all of this? I think, you know, the mainstream media just thought, well, we need to pass this information along to voters and let them decide whether it's true.
So the same play gets run in France a few days before the French presidential election in 2017. There is a hack. It looks like it's Russian operatives. It's of the party of Emmanuel Macron, who is running for the French presidency. And the French media says, you know what? We're not going to cover this. Now, this is partly because the French have a blackout law, where the day before an election and the day of the election itself, the media doesn't cover stories. That would be totally unconstitutional in the United States. But if you take a step back and look at a law like that - a blackout law, which is actually pretty common in other democracies around the world - the idea is that a last-minute story could be false, it could be a conspiracy theory, and there would be no effective way for the other side to respond.
The Macron hack actually came to light hours before the blackout law came into effect. So the French media could have said, hey, you know, this is fair game. We're going to cover it, let the chips fall, let people decide. They'll just have to figure out for themselves whether it's true or not. But they did not do that. They restrained themselves. And I think that difference there just says a lot about how European countries balance free speech rights against other values in a way that just would seem pretty unthinkable in the United States.
DAVIES: Right. And I'm interested in how the Internet played into this because were that to happen in the United States, I mean, even if mainstream outlets decided, well, I'm not going to touch this, you know, it might go wild on the Internet. Is there any evidence that disinformation spreads and thrives less well in Europe than it does here?
BAZELON: Yeah, there actually is. I talked to a couple of researchers who were trying to map the spread of political lies and conspiracy theories in France and Germany, much the way the Berkman Klein Center at Harvard did around the 2016 election and the first year of Trump's presidency. And they just didn't find anything like the large-scale feedback loops between right-wing media and social media that we have in the United States. They were actually kind of sheepish about it because they didn't have much that was publishable. They had just found a much smaller kind of conspiracy mongering, like, in a particular corner of YouTube rather than this big map of political disinformation, where you could see all these outlets linking to each other and getting a huge audience.
DAVIES: You write that our information crisis is not inevitable or insoluble. What can we be doing?
BAZELON: Well, I think that we are going to watch the Europeans really take the lead on this in the next couple of years. You know, they already have the kind of publicly funded broadcast and journalism. We could follow them down that path any time we want. But I think they are also going to start figuring out how to do more to regulate the Internet platforms. I don't think they're going to tell the platforms what to do. Like, I don't think there's going to be some, you know, EU department of content moderation. But I think what they are going to do is use their law of competition - we call it antitrust law - to look at whether just having these platforms be so huge, especially Facebook, which now owns Instagram and WhatsApp, whether that in itself is part of the problem, because Facebook basically has, like, a near monopoly on the social media platform of the type that it is.
And then I think that the Europeans also are going to start demanding a lot more transparency. So one thing that really shocked me in reporting this story is that at the moment, independent researchers have no way of knowing how prevalent disinformation or hate speech actually is on these sites. They can't see inside the data to know whether fact-checking is working, how it works, what's going viral exactly how fast. There are all these basic questions about the social media information landscape, and the companies just haven't opened up their data to answer them. And so I think the EU will probably start there.
And then they may start talking about regulating the algorithm in some way - you know, this idea that if you're a known spreader of viral disinformation, if you have that track record, and a post starts to spread like wildfire, maybe it should stop until it's vetted. Maybe we want a kind of circuit breaker in there so that we don't have these viral episodes of disinformation that the companies can actually see as they're happening.
DAVIES: Right. And those are all kinds of things that just wouldn't, you know, wouldn't sit well with somebody who's deeply committed to the First Amendment in this country, I think, as we now understand it. You know, you could remove these social media platforms' immunity from libel suits. One simple thing - you know, political ads online don't - they don't have to have that disclosure statement that says who funded them, right?
BAZELON: Yes. There is an obvious opportunity to close a loophole the same way that we require, you know, TV and print and radio. When you have a paid political ad in those mediums, at the end, you have to say who paid for the message. It seems obvious that we could have the same kind of legal requirement for online political ads in the United States.
DAVIES: You said earlier you think the Europeans may take the lead on dealing with this issue. And, of course, the Europeans use these American-owned social media platforms a lot. Are they beginning to regulate them in ways so that the platforms themselves might just find it simpler to make changes all across the board?
BAZELON: I mean, I think that's a good question. So when you look at the laws against spreading hate speech and disinformation online that Germany and France have passed, you see more content getting taken down - not a ton more. And you see the social media companies starting to just negotiate in those countries. And there's been some friction. There have been fines of Facebook in Germany, for example, and complaints among the Germans that Facebook isn't doing what it's supposed to be doing.
There is this interesting migration, though, where just in the last couple of weeks, Facebook and Twitter started banning denial of the Holocaust on their platforms worldwide, which was previously something they were only doing when they were legally required to do it in European countries. So that's one small example of how you could see European laws and norms start to affect speech in other parts of the world.
DAVIES: Emily Bazelon, thank you so much for speaking with us again.
BAZELON: Thanks so much for having me.
DAVIES: Emily Bazelon is a graduate of Yale Law School and a staff writer for The New York Times Magazine. Her cover story for this week's issue is "The Problem Of Free Speech In An Age Of Disinformation."
Coming up, Kevin Whitehead reviews the new album by Diego Urcola, which also features saxophonist Paquito D'Rivera. This is FRESH AIR.
(SOUNDBITE OF DAVID NEWMAN AND RAY CHARLES' "HARD TIMES")
Transcript provided by NPR, Copyright NPR.