
Transcript of Gonzalez v. Google Supreme Court oral arguments on February 21, 2023

Appearances:

  • Eric Schnapper, Esq., Seattle, Washington; on behalf of the Petitioners.
  • Malcolm L. Stewart, Deputy Solicitor General, Department of Justice, Washington, D.C.; for the United States, as amicus curiae, supporting vacatur.
  • Lisa S. Blatt, Esq., Washington, D.C.; on behalf of the Respondent.

 

CHIEF JUSTICE ROBERTS:

We’ll hear argument this morning in Case 21-1333, Gonzalez versus Google. Mr. Schnapper.

MR. SCHNAPPER:

Mr. Chief Justice, and may it please the Court: Section 230(c)(1) distinguishes between claims that seek to hold an Internet company liable for content created by someone else and claims based on the company’s own conduct. That distinction is drawn in each of the three sections of the statute.

First, Section 230(c)(1) is limited to claims that would treat the defendant as a publisher of third-party content. The statute uses “publisher” in the common law sense. The Fourth Circuit decision in Henderson correctly interprets the statute in that manner and concludes that it involves two elements: the claim must be based on the action of the defendant in disseminating third-party content, and the harm must arise from the content itself.

Second, Section 23– 230(c)(1) is limited to publication of information provided by another content provider, which is often referred to as third-party content. The statutory defense doesn’t apply insofar as a claim is based on words written by the defendant or other content created by the defendant. In some circumstances, the manner in which third-party content is organized or presented could convey other information from the defendant itself, as the government notes.

Third, Section 230(c)(1) only applies insofar as a defendant was acting as an interactive computer service. Most entities that are interactive computer services do other things as well. This Court technically is an interactive computer service because of its website. It does other things, as it is doing today. Conduct that falls outside that line of activity is outside the scope of this statute. A number of the briefs in this case urge the Court to adopt a general rule that things that might be referred to as a recommendation are inherently protected by the statute, a decision which would require the courts to then fashion some judicial definition of “recommendation.”

We think the Court should decline that invitation and should instead focus on interpreting the specific language of the statute. I welcome the Court’s questions.

JUSTICE THOMAS:

Mr. Schnapper, just so we’re clear about what we’re — your claim is, are you saying that YouTube’s application of its algorithms is particular to — in this case, that they’re using a different algorithm to the one that, say, they’re using for cooking videos, or are they using the same algorithm across the board?

MR. SCHNAPPER:

It’s the same algorithm across the board.

JUSTICE THOMAS:

So — so what is — if — if it’s the same algorithm, I think you have to give us a clearer example of what your point is exactly. The same algorithm to present cooking videos to people who are interested in cooking and ISIS videos to people who are interested in ISIS, racing videos to people who are interested in racing. Then I think you’re going to have to explain more clearly, if it’s neutral in that way, how your claim is set apart from that.

MR. SCHNAPPER:

Surely. The — if I might turn to the practice of displaying thumbnails, which is a major part of what’s at issue here, the problem — and the issue is not the manner in which YouTube displays videos. It actually displays, as you doubtless know from having looked at, these little pictures, which are referred to as thumbnails. They are intended to encourage the viewer to click on them and go see a video. It’s the use of algorithms to generate these — these thumbnails that’s at issue, and the thumbnails, in turn, involve a — involve content created by the defendant.

JUSTICE THOMAS:

But the — it’s basing the thumbnails — from what I understand, it’s based upon what the algorithm suggests the user is interested in. So, if you’re interested in cooking, you don’t want thumbnails on light jazz. You — so the — it’s — it’s — it’s neutral in that sense. You’re interested in cooking. Say you get interested in rice — in pilaf from Uzbekistan. You don’t want pilaf from some other place, say, Louisiana.

The — so the — I don’t see how that is any different from what is happening in this case. And what I’m trying to get you to focus on is if — if the — are we talking about the neutral application of an algorithm that works generically for pilaf and — and it also works in a similar way for ISIS videos? Or is there something different?

MR. SCHNAPPER:

No, I think that’s correct, but — but our — our view is that the fact that a — a — an algorithm is neutral doesn’t alter the application of the statute. The statute requires that one work through each of the elements of the defense and see if it applies. The — the lower courts, in a couple of cases, have said that — really disregarding the requirements of the — of the defense, that as long as an algorithm is neutral, that puts the — the conduct outside the — within the protection of the statute. But that’s not what the statute says. The statute says you must be acting — you must be — the claim must treat you as a publisher.

CHIEF JUSTICE ROBERTS:

Well, but, I mean, the — the — the difference is that the Google, You — YouTube, they’re still not responsible for the content of the videos or — or text that is transmitted. Your focus is on the actual selection and recommendations. They’re responsible that a particular item is there but not for what the item — item says. And I don’t — I — I think part — it may be significant if the algorithm is the same across — as Justice Thomas was suggesting, across the different subject matters, because then they don’t have a focused algorithm with respect to terrorist activities or — or pilaf or something, and then I think it might be harder for you to say that there’s selection involved for which they could be held responsible.

MR. SCHNAPPER:

The — the — the statute, I think, doesn’t draw the distinction that way. The – the claim here is about the encouragement of — of — of users to go look at particular content. And that’s the JASTA claim that we’ll hear about tomorrow. And the underlying substantive claim is encouraging people to go look at ISIS videos would be aiding and abetting ISIS. More on that tomorrow. But, if that’s an actionable claim, then the conduct here would fit within it, the – because certain individuals would be shown these thumbnails, which would encourage them to go look at those videos.

JUSTICE KAGAN:

So I think you’re right, Mr. Schnapper, that the statute doesn’t make that distinction. This was a pre-algorithm statute. And, you know, everybody is trying their best to figure out how this statute applies, the statute which was a pre-algorithm statute applies in a post-algorithm world. But I think what was lying underneath Justice Thomas’s question was a suggestion that algorithms are endemic to the Internet, that every time anybody looks at anything on the Internet, there is an algorithm involved, whether it’s a Google search engine or whether it’s this YouTube site or — or — or a Twitter account or countless other things, that everything involves ways of organizing and prioritizing material.

And — and that would essentially mean that, you know, 230 — I guess what I’m asking is, does — does — does your position send us down the road such that 230 really can’t mean anything at all?

MR. SCHNAPPER:

I — I — I don’t think so, Your Honor. The question — as you say, algorithms are ubiquitous, but the question is what does the defendant do with the algorithm. If it uses the algorithm to direct — to encourage people to look at ISIS videos, that’s within the scope of JASTA. It’s not different than if back in 1999 a lot of clerks somewhere at Prodigy did this manually and just had a bunch of file cards and they figured out who was interested in what. The statute would have meant the same thing there that it does now. It’s automated, it’s at a larger scale, but it doesn’t change the nature of what they’re doing with the algorithm. So —

JUSTICE SOTOMAYOR:

Can I — I’m sorry, finish.

MR. SCHNAPPER:

The — the — the brief — I think the brief for Respondent points to a number of uses of algorithms, for example, to pick the cheapest fare or things like that. That’s just outside the scope of the statute. The algorithm is being used there to generate additional content. So the question is what you do with the algorithm. The fact that you did it with an algorithm doesn’t give — yield a different result than if you had a lot of hard-working people in a — in an office somewhere doing the same thing.

JUSTICE SOTOMAYOR:

You seem —

JUSTICE KAGAN:

Well, I — I — I guess I —

JUSTICE SOTOMAYOR:

Oh.

JUSTICE KAGAN:

— I — I take the point — if — if I could?

JUSTICE SOTOMAYOR:

No, no, go ahead.

JUSTICE KAGAN:

You know, I take the point that there are a lot of algorithms that are not going to produce pro-ISIS content and that won’t create a problem under this statute, but maybe they’ll produce defamatory content or maybe they’ll produce content that violates some other law.

And your — your argument can’t be limited to this one statute. It has to extend to any number of harms that can be done by — by speech and — and so by the organization of speech in ways that basically every provider uses.

MR. SCHNAPPER:

Well, if I might turn to the example of — you said — you referred to an algorithm that produces defamation. I may be paraphrasing that wrong. If the — if the — let’s say the algorithm generates a recommendation — a — a — a face — a thumbnail that on its face is — is benign, it just says interesting information about Frank, you go there, and it’s defamatory. The defendant’s not responsible — or excuse me — the defense applies to the video itself that you saw. The question would be whether the thumbnail was actionable. And under — in most circumstances, thumbnails aren’t going to be actionable.

In addition, the — the thumbnails typically include a snippet from a — a video or a text or whatever. If the snippet itself were defamatory, again, the defense — the statutory defense would apply because what was being displayed was third-party content. And so the statute still applies there.

JUSTICE ALITO:

Suppose that Google could — YouTube could display these thumbnails purely at random, but if it does anything other than displaying them purely at random, isn’t it organizing and presenting information to people who access YouTube?

MR. SCHNAPPER:

Yes, but —

JUSTICE ALITO:

All right.

MR. SCHNAPPER:

— that doesn’t put it within the scope of the statute —

JUSTICE ALITO:

Well, does that — constitute publishing?

MR. SCHNAPPER:

Yes. So they would —

JUSTICE ALITO:

It does?

MR. SCHNAPPER:

— they would be publishing the thumbnail.

JUSTICE ALITO:

Right.

MR. SCHNAPPER:

But — but, if the — if the thumbnail isn’t itself — if — if the — if the — the way they’re using it is — is — is encouraging people to engage —

JUSTICE ALITO:

Well, that’s a different question, though, isn’t it? I — I don’t know where you’re drawing the line. That’s the problem.

MR. SCHNAPPER:

Oh, I see, I see, I see.

JUSTICE ALITO:

That’s the problem that I see.

MR. SCHNAPPER:

Oh.

JUSTICE ALITO:

Unless you’re — you’re saying that the publication requirement is satisfied under all circumstances unless the thumbnails are presented purely at random.

MR. SCHNAPPER:

It’s publication even if it’s at random, but the — but the — the — the injury in the hypothetical we’re talking about, about ISIS, doesn’t follow from the content of the thumbnail. The thumbnail would typically be fairly benign. The harm comes —

JUSTICE ALITO:

Yeah, but in every instance, in those instances where the thumbnail is benign, that’s not a concern for purposes of this case, but in all those instances where some plaintiff might have some cause of action based on the content of the video that has been posted –

MR. SCHNAPPER:

There would have to be a cause of action, as we assert there is in JASTA, for encouraging people to go look at the video. That’s a fairly uncommon form of cause of action. The cause of action — insofar as the plaintiff asserts a cause of action based on the video itself that you’ve been sent to, that’s within the scope of the defense.

JUSTICE JACKSON:

Is that because of the way in which you’re interpreting the statute? I mean, can we — can we back up a little bit and try to at least help me get my mind around your argument about how we should read the text of the statute? I took your brief to be arguing and that of those who support you that the statute really is about one kind of publishing conduct, and that is the failure to block or screen offensive content.

Am I right about that? In other words, what you say is covered by Section 230 and that Google could, like — could rightly claim immunity for is a claim that there was something defective about their ability to screen or block content, that the content is up there and you should be liable for it?

MR. SCHNAPPER:

I — I think we — we’ve — I — I think that’s not our claim.

JUSTICE JACKSON:

Okay.

MR. SCHNAPPER:

I think we are trying to distinguish between liability for what’s in the content that’s on their websites that you could access and actions they take to encourage you to go look at it.

JUSTICE JACKSON:

Yes, yes, that’s your claim. I’m just trying to —

MR. SCHNAPPER:

If you encourage it, then we’re —

JUSTICE JACKSON:

— understand how you read the statute. Your — the statute, you say, covers only scenarios in which the claim that’s being made is that there’s offensive content on the website, that you didn’t take it down, that, you know, you failed to screen it out, but if you’re making a claim that you’re encouraging people to look at this content, that’s something different, that’s the claim you’re making, and it’s not covered by the statute.

MR. SCHNAPPER:

That’s our — that’s the distinction –

JUSTICE JACKSON:

All right.

MR. SCHNAPPER:

— we’re trying to draw. I mean, it — the distinction is illustrated by the e-mail in the Dyroff case, which — which is the precedent that — that got us here in the Ninth Circuit. In that case, there was a, I think, 26-word — 26-word e-mail from the website to an individual which read something like there’s something new that’s been posted to the question where can I buy heroin in Jacksonville, Florida. To access it, use this URL or use this URL. It’s our contention that that is outside the protection of the statute.

JUSTICE JACKSON:

But is that really different — I guess I’m trying — so they would argue, I think, that even assuming that the statute only covered the kinds of things that you say it covers, you know, defective blocking and screening, meaning there’s still offensive stuff on your website and you should be liable for it, I think they would say that to the extent your claim is talking about their — their algorithm that presents the information, it’s really the same thing, that you’re — that it reduces — it’s tantamount to saying we haven’t, you know, blocked this information, it’s still on the website, because algorithms are the way in which the information is presented.

MR. SCHNAPPER:

So, if I may make clear, as I may not have done that well, the distinction we’re drawing, our claim is not that they did an inadequate job of block — of keeping things off their — their computers that you can access from — from outside or from failure to — to block it. That’s the — that’s the heartland of the statute. What we’re saying is that insofar as they were encouraging people to go look at things, that’s what’s outside the protection of the statute, not that the stuff was there.

If they stopped recommending things tomorrow and — and all sorts of horrible stuff was on their website, as far as we read the statute, they’re fine. It’s the recommendation practice that we think is actionable.

JUSTICE SOTOMAYOR:

Can I break down your complaint a moment? There — the vast majority of it is paragraph after paragraph after paragraph that says they’re liable because they failed to take ISIS off their website. I think, as I’m listening to you today, you seem to have abandoned that and — and are saying they don’t have to take it off their website.

MR. SCHNAPPER:

That —

JUSTICE SOTOMAYOR:

Am I correct about that?

MR. SCHNAPPER:

That’s exactly right. That — that —

JUSTICE SOTOMAYOR:

So that can’t be —

MR. SCHNAPPER:

— is the way we’ve framed the question presented.

JUSTICE SOTOMAYOR:

So that can’t be —

MR. SCHNAPPER:

We did not advance that claim.

JUSTICE SOTOMAYOR:

So you’re abandoning that claim, so that can’t be aiding and abetting. So I think I’m listening to you, and the only aiding and abetting that you’re arguing is the recommendation, correct?

MR. SCHNAPPER:

That’s correct.

JUSTICE SOTOMAYOR:

You’re not arguing that they’re — some of these providers create chat rooms or put people together, users together. You’re not claiming that that’s part of what you’re arguing about? The social networking, I want to call it.

MR. SCHNAPPER:

Well, that’s not at issue in this case.

JUSTICE SOTOMAYOR:

It’s in —

MR. SCHNAPPER:

Face —

JUSTICE SOTOMAYOR:

— tomorrow’s case? All right.

MR. SCHNAPPER:

Face — if I can be more specific —

JUSTICE SOTOMAYOR:

All right. So you’re limiting — you’re limiting your —

MR. SCHNAPPER:

— Facebook – Facebook does that.

JUSTICE SOTOMAYOR:

All right.

MR. SCHNAPPER:

Facebook recommends people —

JUSTICE SOTOMAYOR:

Right.

MR. SCHNAPPER:

— which is very difficult to find within the four walls of the statute. Google’s created a lot of things but so far not —

JUSTICE SOTOMAYOR:

But you’re not claiming that in this case?

MR. SCHNAPPER:

Not in — it’s not what —

JUSTICE SOTOMAYOR:

You’re just focusing —

MR. SCHNAPPER:

No. This is about content. It not about —

JUSTICE SOTOMAYOR:

This is about content. And I just want to focus your complaint so I understand it very clearly. You’re saying the — the YouTube or the “Next up” feature of the algorithm that says you viewed this and so you might like this, it’s the “you might like this” that’s the aiding and abetting?

MR. SCHNAPPER:

Uh —

JUSTICE SOTOMAYOR:

What — what part of what they’re doing? Because, I mean, you — whoever the user is types in something, they get an ISIS video, you say that’s okay — they can’t be liable for you, the — me, the viewer, looking at the ISIS video. But the Internet providers can be liable for what?

MR. SCHNAPPER:

Okay. So they’re – they’re —

JUSTICE SOTOMAYOR:

For showing me the next video that’s similar to that?

MR. SCHNAPPER:

All right. They’re – it would be helpful perhaps if I distinguish between two kinds of practices that — that go on at YouTube. The complaint doesn’t describe them in detail, but we’re fairly familiar with them. So what we can talk —

JUSTICE SOTOMAYOR:

I’m glad, but I’m going to have to look at the complaint because it can only survive if the complaint is adequate. So you’re going to have to tell me where in the complaint you’re saying this if I’m going to think about holding them liable. So —

MR. SCHNAPPER:

I’m about three questions —

JUSTICE SOTOMAYOR:

— you’re going to have to separate out the two things then.

MR. SCHNAPPER:

Okay. I’m about three questions behind. Let me —

JUSTICE SOTOMAYOR:

All right.

MR. SCHNAPPER:

— let me try and do my best here. So what we’ve been talking about up until now is the use of — of thumbnails to encourage people to look at content — people who haven’t clicked on any video yet. And our contention is the use of thumbnails is — is the same thing under the statute as sending someone an e-mail and saying: You might like to look at this new video. Now the “Up next” feature is a different problem, and the problem there is — is that when you click on one video and you picked that one, YouTube will automatically keep sending you more videos which you haven’t asked for. That, in our view, runs afoul of a different element of the statutory defense, which is that they be acting as an interactive computer service. And when they go beyond delivering to you what you’ve asked for, to start sending things you haven’t asked for, our contention is they’re no longer acting as an interactive computer service.

JUSTICE SOTOMAYOR:

All right. So, even if I accept that you’re right that sending you unrequested things that are similar to what you’ve viewed, whether it’s a thumbnail or an e-mail, how does that become aiding and abetting? I’m going back to Justice Thomas’s question, okay, which is, if they aren’t purposely creating their algorithm in some way to feature ISIS videos, if they’re — I mean, I can really see that an Internet provider who was in cahoots with ISIS provided them with an algorithm that would take anybody in the world and find them for them and — and do recruiting of people by showing them other videos that will lead them to ISIS, that’s an intentional act, and I could see 230 not going that far.

I guess the question is, how do you get yourself from a neutral algorithm to an aiding and abetting?

MR. SCHNAPPER:

Right.

JUSTICE SOTOMAYOR:

An intent, knowledge. There has to be some intent to aid and abet. You have to have knowledge that you’re doing this.

MR. SCHNAPPER:

Yes.

JUSTICE SOTOMAYOR:

So how do you get there?

MR. SCHNAPPER:

So the — the — if — if it automatically plays it, that — as we’ll see tomorrow, that by itself isn’t going to satisfy aiding and abetting. Aiding and abetting requires knowledge that it’s happening. So the elements of the aiding and abetting claim, which we’ll be talking about tomorrow, address the question you’re asking. If — if this was teed up, if they didn’t know it was happening, even if the other elements of an aiding-and-abetting claim were present, they would not be liable for aiding and abetting.

CHIEF JUSTICE ROBERTS:

Thank you, counsel. Just one short question. Your — your friend on the other side presented an analogy that she thought would be helpful — a bookseller that has a table with sports books on it, and somebody comes in and says, I’m looking for the book about Roger Maris, and the bookseller says, well, it’s over there on the table with the other sports books.

Isn’t that analogous to what’s happening here? You type in ISIS —

MR. SCHNAPPER:

I’m not sure — I’m not sure where that — that gets us. I mean, it wouldn’t be any different than sending an e-mail saying that.

CHIEF JUSTICE ROBERTS:

Well, we’ll figure out where we get — it gets us in a minute. But I just want to know if you think that’s a good — a good analogy.

MR. SCHNAPPER:

I — I — I’m a little concerned to know where it’s taking me. It’s an analogy of – (Laughter.)

MR. SCHNAPPER:

— it’s an analogy of sorts.

CHIEF JUSTICE ROBERTS:

That’s what we call — that’s what we call questions.

MR. SCHNAPPER:

But — but I still – I mean, I’m going to — at some point, I’m going to go yes, but you still have to fit it within the four walls of the statute. Perhaps you could — you could tell me what lies ahead. I think I could — I mean, sure, it’s an analogy of sorts, but – (Laughter.)

CHIEF JUSTICE ROBERTS:

What lies ahead is, I give up, Your Honor.

MR. SCHNAPPER:

— but I would like to know what it leads up to. Yes. Yeah. But –

CHIEF JUSTICE ROBERTS:

No, what lies ahead is the idea that you could look at that and say it’s not pitching something in particular to the person who’s made the request. It is recognizing that it’s a request about a particular subject matter and it’s there on the table, and they might want to look at it but they may not want to look at it. But it’s really just a 21st century version of what has taken place for a long time in many contexts, which, when you ask a question, people are putting together a group of things, not necessarily precisely answering your question. I mean, if somebody said —

MR. SCHNAPPER:

Yes — no, I — all right. I think — I think I know where we’re going here. The — insofar as I — I go to YouTube and I say show me a cat — you know, it’s a little more complicated than this — but show me — show — tell me what cat videos you have, and in responding to that, they’re —

CHIEF JUSTICE ROBERTS:

Sure. That’s an easy case. They give you a bunch of cat videos. You don’t have any complaint about something like that. In this case, if they put in something, say, show me ISIS videos, they would get a bunch of ISIS videos, and you don’t have any objection to that given the way the search was phrased.

MR. SCHNAPPER:

It — I have to answer that with precision. If I say, play for me an ISIS video, and they just directly play the video, then what they’ve done falls within the language of the statute. It’s requested, it’s purely third-party content, and I would be trying to hold them liable for displaying that content.

But what actually has happened — and this is maybe analogous to what goes on to some extent at Twitter, where they might actually literally just show you the thing. But what’s happening at YouTube is they’re not doing that. I type in ISIS video, and there are going to be a catalogue of thumbnails which they created. It’s as if I went into the bookstore and said, I’m interested in sports books, and they said, we’ve got this catalogue which we wrote of sports books, sports books we have here, and handed that to me. They created that content.

And — and — and if you publish content you’ve created, you’re not within the four walls of the statute. So —

CHIEF JUSTICE ROBERTS:

But you would not — you would not — under your theory, they would not be liable for the content of the books, they’d be liable for the catalogue?

MR. SCHNAPPER:

By — by — by providing the catalogue.

CHIEF JUSTICE ROBERTS:

Okay. Thank you. Justice Thomas, anything further?

JUSTICE THOMAS:

What if the YouTube, instead of automatically providing this list, which is hard — it’s hard for me because I don’t see this as — I see these as suggestions and not really recommendations because they don’t really comment on them. But what if you had to click on something like “For more like this, click here”? Would that also be, as far as you’re concerned, aiding and abetting or outside this statute?

MR. SCHNAPPER:

It’s — so you — you’ve played one video and they say click here to see another one?

JUSTICE THOMAS:

No, click here if you want suggestions for more like this.

MR. SCHNAPPER:

No, suggestions are — depending how it happens. Let’s say they say send me more — show me more thumbnails. It’s outside the statute. And if I might come back to an earlier part of what’s embedded in your question, we aren’t asking the Court to adopt a rule that’s about recommendations versus suggestions. What we’re suggesting — what — what we’re arguing is — is that this — is that you take the normal standards in each of the elements and you apply it to what’s going on. It doesn’t — it doesn’t matter if they’re encouraging it.

If — if — in terms of aiding and abetting, if someone comes to me and says what’s al-Baghdadi’s phone call — phone number, I’d like to call him, and I give him the phone number, I’m aiding and abetting even if I — I don’t say, and I hope you’ll join ISIS.

Whether we label it a recommendation or not on our view is not the issue here. We tried to say that in our brief.

JUSTICE THOMAS:

Thank you.

MR. SCHNAPPER:

Was that responsive? I’m not —

JUSTICE THOMAS:

Well, it’s responsive, but I don’t understand it. (Laughter.)

JUSTICE THOMAS:

You called — I mean, if you called Information and asked for al-Baghdadi’s number and they give it to you, I don’t see how that’s aiding and abetting. And I don’t understand how a neutral suggestion about something that you’ve expressed an interest in is aiding and abetting. I just don’t — I don’t understand it.

And I’m trying to get you to explain to us how something that is standard on YouTube for virtually anything that you have an interest in suddenly amounts to aiding and abetting because you’re in the ISIS category.

MR. SCHNAPPER:

Well, again, I’ll be answering that probably again tomorrow, but what you describe without more probably wouldn’t. But, as you’ll — as we’ll learn tomorrow, the circumstances are far different than that, that these — YouTube and these other companies were repeatedly told by government officials, by the media, dozens of times that this was going on, and they didn’t do any — they did almost nothing about it. That’s very different than providing one phone number through Information.

JUSTICE THOMAS:

Well, I mean, did —

MR. SCHNAPPER:

So it goes to the scope of JASTA, not to 230.

JUSTICE THOMAS:

So we’ve gone from recommendation to inaction being the source of the problem. And this is what I’m — you know, the — I understand you’re putting it in context, but I — it’s hard for me also to understand where this obligation to take specific actions can lead to an aiding-and-abetting claim.

MR. SCHNAPPER:

Well, the interconnection in this case is that — that we’re focusing on the recommendation function, that they are affirmatively recommending or suggesting ISIS content, and it’s — and it’s not mere inaction. Mere inaction might work under aiding and abetting, but we’ll get there tomorrow, but — but the claim that we’re focusing on today is that, in fact, they’re affirmatively recommending things. You turn on your computer and the — and the — the — the computers at — at YouTube send you stuff you didn’t ask them for. They just send you stuff. It’s no different than if they were sending you e-mails. That’s affirmative conduct.

CHIEF JUSTICE ROBERTS:

Justice Alito?

JUSTICE ALITO:

I’m afraid I’m completely confused by whatever argument you’re making at the present time. So, if someone goes on YouTube and puts in ISIS videos and they show thumbnails of ISIS videos, and don’t — don’t — don’t tell me anything about the substantive underlying tort claim, if the person is — if — if YouTube is sued for doing that, is it acting as a publisher simply by displaying these thumbnails of ISIS videos after a search for ISIS videos?

MR. SCHNAPPER:

It is acting as a publisher but of something that they helped to create because the thumbnail is a joint creation that involves materials from a third party and a URL from them and some other things.

JUSTICE ALITO:

So, if YouTube uses thumbnails at all, it is acting as a publisher with respect to every thumbnail that it displays?

MR. SCHNAPPER:

Yes. Yes. They’re — they’re publishing the thumbnails. And the question is, are the thumbnails third-party content, or are they content they’ve created? And the problem is they are joint content.

JUSTICE ALITO:

Yeah, I mean, if that’s your argument, then you’re really arguing that — that this statute does not provide protection against a suit that is in substance based on the third-party-provided content.

MR. SCHNAPPER:

No, we’re — we’re basing the — I’m sorry. I don’t mean to be so —

JUSTICE ALITO:

Okay.

MR. SCHNAPPER:

That — that — that they — the particular business model they have involves using this — these thumbnails, which are materials they’ve in part created, to — to — to operate. Let me —

JUSTICE ALITO:

So they shouldn’t use thumbnails at all? If they want protection under the statute, they shouldn’t use thumbnails?

MR. SCHNAPPER:

Let me — let — that’s — that’s the problem they have with the way the statute’s written. So, if I — if I may give a —

JUSTICE ALITO:

Is there any other way they could organize themselves without using thumbnails? I suppose, if you type in “I want ISIS videos,” they can just put ISIS video 1, ISIS video 2, and so forth.

MR. SCHNAPPER:

That’s the technical problem they have.

JUSTICE ALITO:

Well, would that be acting as a publisher if they did that?

MR. SCHNAPPER:

Yes, but they’d be publishing third-party content because the video itself is the content. If I might — if I might respond –

JUSTICE ALITO:

Okay. I just — I – I — I have one final question. It’s a technical question and probably better addressed to Ms. Blatt. Is it your contention that everybody who uses YouTube and searches for a video involving a particular subject will be automatically presented with thumbnails that are related to that regardless of that user’s YouTube setting, preferences, preferences that YouTube allows you to —

MR. SCHNAPPER:

I — I — I don’t — I don’t know. The practices are too varied. I don’t know. But –

JUSTICE ALITO:

You don’t know if somebody uses YouTube, they can — can — do they have — is there a function that allows them not to be presented with similar videos?

MR. SCHNAPPER:

I — I don’t know. I mean, I’ve gone onto — on YouTube and never seen that, but I — I wouldn’t —

JUSTICE ALITO:

Uh-huh. Okay.

MR. SCHNAPPER:

The functions there are widely varied. But if I might make a broader point about the way you framed that question?

JUSTICE ALITO:

I — I think you — you answered my question. Thank you.

CHIEF JUSTICE ROBERTS:

Justice Sotomayor?

JUSTICE SOTOMAYOR:

I — I do. This has gone further than I thought, or your position has gone further than I thought. No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

And I thought that you started by telling me, if I put in ISIS and they just give me a download of information, the Internet provider is not liable, correct, under (c)(1)? I just read to you (c)(1), correct?

MR. SCHNAPPER:

It — it depends what the information is they give you.

JUSTICE SOTOMAYOR:

If they give me everything that has —

MR. SCHNAPPER:

If they give you information they created —

JUSTICE SOTOMAYOR:

No, they have —

MR. SCHNAPPER:

— they’re not protected.

JUSTICE SOTOMAYOR:

So you are going to the extreme. Assume I don’t think you’re right, I think you’re wrong, that if I put in a search and they give me materials that they believe answers my search, no matter how they organize it, that they’re okay. Do you survive — does your complaint survive if I believe 230 goes that far?

MR. SCHNAPPER:

So it depends on what materials they present you with. If — if all they presented you with — Twitter would maybe be a cleaner example — is materials created by third parties, they — what they’ve published is third-party materials, and they’re good. If they present you with things that they wrote, at the other extreme, then they’re not protected because what they presented is not third-party content.

JUSTICE SOTOMAYOR:

So why do you think the thumbnails are — I type it in, they give me a thumbnail of everything they think answers my inquiry, the suggestion box.

MR. SCHNAPPER:

Yes.

JUSTICE SOTOMAYOR:

Why are they liable?

MR. SCHNAPPER:

Because a thumbnail is not exclusively third-party material. It’s a joint operation, and you can find — if you look at the thumbnail, it’ll have a picture, which comes from the third party, it has an embedded URL, which comes from the defendant, and it might have some information below the —

JUSTICE SOTOMAYOR:

The URL tells you where to find it, correct?

MR. SCHNAPPER:

Sorry?

JUSTICE SOTOMAYOR:

The URL tells you where to find it? It’s a computer language that tells you this is where this is located?

MR. SCHNAPPER:

Yes, but it is information within the meaning of the statute. This is no different than an e-mail which writes it out for you.

JUSTICE SOTOMAYOR:

If I don’t accept your line —

MR. SCHNAPPER:

Yeah.

JUSTICE SOTOMAYOR:

— assume that you’ve lost on that — with that line.

MR. SCHNAPPER:

Yes.

JUSTICE SOTOMAYOR:

I gave you an example earlier of an Internet provider working directly with ISIS and doing an algorithm that — teaching them how to do an algorithm that will look for everybody who is just ISIS-related. There’s more of a collusion in the creation than a neutral algorithm. How do I draw the line between not accepting your point about the thumbnails and going to the other extreme of active collusion? Because there has to be a line somewhere in between. It can’t be merely because you’re a computer person that you can create an algorithm that discriminates against people. You have no problem with that, right? If a — if a –

MR. SCHNAPPER:

The writing of the algorithm would probably constitute aiding and abetting —

JUSTICE SOTOMAYOR:

Exactly. If you write one that discriminated against people or a user, you’re probably going to be liable.

MR. SCHNAPPER:

I’m not sure, as we describe it, it would fall outside the four walls of the defense. If you write an algorithm that — that in response — that in — that – the — the way you implement it’s —

JUSTICE SOTOMAYOR:

If you write an algorithm —

MR. SCHNAPPER:

— going to put you outside the defense. Yes.

JUSTICE SOTOMAYOR:

— if you write an algorithm for someone that, in its structure, ensures the discrimination between people, a dating app, for example, someone comes to you and says, I’m going to create an algorithm that inherently discriminates against people, it won’t match black people to white people, Asian people to Hispanics, it’s going to discriminate, you would say that Internet provider is discriminating, correct?

MR. SCHNAPPER:

I would — what they did — the way the distinction played out would be important, though. They would — you know, if — if they’re — they would have to fall outside of one of the elements of the claim. It’s hard to do this in the abstract.

JUSTICE SOTOMAYOR:

All right.

CHIEF JUSTICE ROBERTS:

Justice Kagan?

JUSTICE KAGAN:

Mr. Schnapper, can I give you three kinds of practices and you tell me which gets 230 protection and which doesn’t? So one is the YouTube practices that you’re complaining of, and we know you think that that does not get 230 protection. A second would be Facebook or Twitter or any entity that essentially prioritizes items. So you’re on Facebook and certain items are prioritized on your news feed, or certain tweets are prioritized on your Twitter feed, all right, and that there’s some algorithm that’s doing that and that’s amplifying certain messages rather than other messages on your feed. That’s the second. And then the third is just a regular search engine.

You know, you put in a search and something comes back, and in some ways, you know, that’s one giant recommendation system. Here’s the first item you should look at. Here’s the second item you should look at. So are all three of those not protected, or what happens to my second and third? Are they protected or not protected? And if they’re — and if they are protected, what’s the difference between them and your practices?

MR. SCHNAPPER:

Certainly. So let me– let me start with the search engine. The – the — there’s a lot of discussion on search engines, but there’s not a specific provision in the statute that says search engines are protected. The question is, do they fit within the language of the statute? So, if I ask a search engine for stories about John Doe and it gives me a list and, if I click on one of them, it turns out to be defamatory, they’re not liable because they —

JUSTICE KAGAN:

Well, they just gave it to you. It’s the first thing. They just prioritized it. They think it’s really a great one to click on.

MR. SCHNAPPER:

The — the mere fact– there are three — multiple questions here. First, are they liable just because what you — you — you clicked on turned out to be defamatory? The answer we think is no. Secondly, what if the snippet that they took from the John Doe document said John Doe is a shoplifter? And the answer is they’re not liable because they didn’t write that. It’s publishing third-party content. The third question is, could they be liable for the way they prioritize things? And the answer is I think so. It’s going to depend how — what happened. And the example, I could —

JUSTICE KAGAN:

So even all the way to the — to the straight search engine, that they could be liable for their prioritization system?

MR. SCHNAPPER:

Yes, there was — let me–

JUSTICE KAGAN:

Okay.

MR. SCHNAPPER:

If I might continue —

JUSTICE KAGAN:

No, I appreciate the — the — go ahead. I’m sorry.

MR. SCHNAPPER:

Those are the facts which led the European Union to fine Google 2 billion euros, because they used prioritization to wipe out competition —

JUSTICE KAGAN:

Okay. So here’s —

MR. SCHNAPPER:

— for things they were selling.

JUSTICE KAGAN:

Yeah, so I don’t think that a court did it over there, and I think that that’s my concern, is I can imagine a world where you’re right that none of this stuff gets protection. And, you know, every other industry has to internalize the costs of its conduct. Why is it that the tech industry gets a pass? A little bit unclear.

On the other hand, I mean, we’re a court. We really don’t know about these things. You know, these are not like the nine greatest experts on the Internet.

(Laughter.)

JUSTICE KAGAN:

And I don’t have to — I don’t have to accept all Ms. Blatt’s “the sky is falling” stuff to accept something about, boy, there is a lot of uncertainty about going the way you would have us go, in part, just because of the difficulty of drawing lines in this area and just because of the fact that, once we go with you, all of a sudden we’re finding that Google isn’t protected. And maybe Congress should want that system, but isn’t that something for Congress to do, not the Court?

MR. SCHNAPPER:

Well, I — I think the– the — the — the line-drawing problems are real. No one minimizes that. I think that the task for this Court is to apply the statute the way it was written. And if I might return to a point that Justice Alito made, much of what goes on now didn’t exist in 1996. The statute was written to address one or two very specific problems about defamation cases, and it drew lines around certain kind of things and it protected those. It did not and could not have written — been written in such a way to protect everything else that might come along that was highly desirable. Congress didn’t adopt a regulatory scheme. They protected a few things. It will inevitably happen, it has happened, that companies have devised practices which are maybe highly laudable, but they don’t fit within the four walls of the statute.

That will continue to happen no matter what happens — what you do. And the answer is, when — when someone devises some new — some new practice that may be highly desirable but doesn’t fit within the four walls of the statute, the — the industry has to go back to Congress and say: We need you to broaden the statute because you wrote this to protect chat rooms in 1996, and we want to do something that doesn’t fit within the statutes. And — and using thumbnails would be a perfect example of that.

JUSTICE KAGAN:

Thank you.

CHIEF JUSTICE ROBERTS:

Justice Gorsuch?

JUSTICE GORSUCH:

Mr. Schnapper, I just want to make sure I understand, as you say, the statutory language and how this case fits with it, and if we could start with Section 230(f)(4), which defines the term “access software provider.” It includes, among other things, picking, choosing, analyzing, or digesting content. And we might in another world in our First Amendment jurisprudence think of picking and choosing, analyzing or digesting content as content providing, but the statute seems to suggest that’s not what it is, it’s something different in this context, in this statutory context, and it’s protected.

Do you agree with that?

MR. SCHNAPPER:

No. Let — and I — if I might explain why?

JUSTICE GORSUCH:

Briefly.

MR. SCHNAPPER:

I’ll do my best. The — the language that you refer to in Section (f)(4) doesn’t apply here.

JUSTICE GORSUCH:

No, I — I — I — we’ll get to that in a minute. But let’s just take that as given, okay, that I think that what, say, Google does in picking, choosing, analyzing, or even digesting content just makes it an access software provider. Let’s take that as given. And so that would normally be protected activity.

But (f)(3) carves out a scenario where you become a content provider, and that’s something different in my mind to picking, choosing, analyzing, or digesting content, okay? Let’s just take those two premises as given.

MR. SCHNAPPER:

Okay.

JUSTICE GORSUCH:

All right? You got to do something beyond picking, choosing, or analyzing or digesting content, which is what search engines typically do, even as I understand it. You’ve got to do something beyond that. As I take your argument, you think that the Ninth Circuit’s Neutral Tools Rule is wrong because, in a post-algorithm world, artificial intelligence can generate some forms of content, even according to Neutral Rules. I mean, artificial intelligence generates poetry, it generates polemics today. That — that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected. Let’s — let’s assume that’s right, okay? Then I guess the question becomes, what do we do about YouTube’s recommendations?

And — and as I see it, we have a few options. We could say that YouTube does generate its own content when it makes a recommendation, says up next. We could say no, that’s more like picking and choosing. Or we could say the Ninth Circuit’s Neutral Tools test was mistaken because, in some circumstances, even neutral tools, like algorithms, can generate through artificial intelligence forms of content and that the Ninth Circuit wasn’t sensitive to that possibility and remand the case for it to consider that question. What’s wrong with that?

MR. SCHNAPPER:

Well, it’s not our theory, but it’s –

(Laughter.)

MR. SCHNAPPER:

If the alternative is what Ms. Blatt will be telling you, I’ll —

JUSTICE GORSUCH:

I’m not asking you, you know, hey, I’ll win at any cost.

MR. SCHNAPPER:

No, there’s nothing wrong with it.

JUSTICE GORSUCH:

I’m asking you what’s — what’s — whether that is a correct analysis of the statutory terms you keep referring us to —

MR. SCHNAPPER:

Yes.

JUSTICE GORSUCH:

— or whether it is not.

MR. SCHNAPPER:

Yes, yes, yes. As – as we’ve said, this now is close to something we set out in our brief, which is that the — that the algorithm could create things on its own. It could create a catalogue of ISIS videos, which would be analogous to a compilation under Section 10 of the Copyright Act. A compilation is a distinct entity, it’s copyrightable, even if the elements of it were not. So, yes, absolutely, the software could create something like that. It would not be third-party content, and, therefore, it would fall outside the scope of the statute.

JUSTICE GORSUCH:

Thank you.

CHIEF JUSTICE ROBERTS:

Justice Kavanaugh?

JUSTICE KAVANAUGH:

Just to pick up on Justice Gorsuch’s questions, the idea of recommendations is not in the statute. And the statute does refer to organization and the definition, as he was saying, of interactive computer service means one that filters, screens, picks, chooses, organizes content. And your position, I think, would mean that the very thing that makes the website an interactive computer service also means that it loses the protection of 230. And just as a textual and structural matter, we don’t usually read a statute to, in essence, defeat itself. So what’s your response to that?

MR. SCHNAPPER:

My response is that the text doesn’t apply here. Let me explain why. The — the element in the — the list in — in (f)(4) refers to only one of the three kinds of interactive computer services in (f)(2). In (f)(2) — and this is — this is on page 26 of the petition appendix. (f)(2) says an interactive computer service means — and there — it gives you three candidates, you’ve got one of them – an information service, a system, or an access software provider. Now YouTube is one of the first two. It doesn’t — it’s not a software provider. The definition in (f)(4) only delineates who is an access software provider. It doesn’t apply to who’s an information system or service. And that was Congress’s choice.

Congress didn’t say you’re an interactive — you’re a service, an information service or a system if you do those things. It said you’re only — those things only bring you within the four walls of interactive computer service if you’re — if you’re a software provider. And — and that made sense in the context of what was happening in 1996. In 1996, if you wanted to go online, you would typically sign up with CompuServe or Prodigy and they would literally give you diskettes. They would sell — they would be selling you software.

And — and this provision in (f)(4) is about that activity. That’s not what’s happening here.

JUSTICE KAVANAUGH:

Well, just — just to go back to 1996 and maybe pick up on Justice Kagan’s questions earlier, it seems that you continually want to focus on the precise issue that was going on in 1996, but then Congress drafted a broad text, and that text has been unanimously read by courts of appeals over the years to provide protection in this sort of situation and that you now want to challenge that consensus.

But the amici on the other side say: Well, to do that, to pull back now from the interpretation that’s been in place would create a lot of economic dislocation, would really crash the digital economy with all sorts of effects on workers and consumers, retirement plans and what have you, and those are serious concerns and concerns that Congress, if it were to take a look at this and try to fashion something along the lines of what you’re saying, could account for.

We are not equipped to account for that. So are the predictions of problems overstated? If so, how? And are we really the right body to draw back from what had been the text and consistent understanding in courts of appeals?

MR. SCHNAPPER:

Well, I — our position is that the text doesn’t — doesn’t say this. With regard to the issue of what we’ve come to call recommendations, this isn’t a longstanding, well-established body of precedent. It’s really three decisions: the decision in this case, the Dyroff decision, and Force. And — and of the eight justices to —

JUSTICE KAVANAUGH:

What about the implications then? Go to that, the implications for the economy, that you have a lot of amicus briefs that we have to take seriously that say this is going to cause a lot of economic dislocation in the country.

MR. SCHNAPPER:

I mean, I’d say a couple things in response to that. The first one is, on a close reading of the amicus briefs, it’s clear that they are urging the Court to hold that a wide variety of different kinds of things are protected. They’re — they’re inviting the Court to adopt a rule that recommendations are protected and that whatever they’re doing would qualify as a recommendation. The — you can’t —

JUSTICE KAVANAUGH:

Well, I think they’re saying a recommendation is a recommendation, something express. And your – your whole thing is the algorithms are an implied recommendation. And they’re saying: Well, they’re not an express recommendation. That — that — so –

MR. SCHNAPPER:

I’m – we don’t challenge – yes.

JUSTICE KAVANAUGH:

But, in any event, we focus on the question.

MR. SCHNAPPER:

Yes. Yes.

JUSTICE KAVANAUGH:

Do you — do you dispute the — the basic point?

MR. SCHNAPPER:

I think — I think —

JUSTICE KAVANAUGH:

And so —

MR. SCHNAPPER:

We — we do, on — on a couple grounds. One of them is that I’m not sure all these decisions – these briefs are distinguishing as we have today between liability because of the content of third-party materials and the recommendation function itself. A — a distinction between more and less specific suggestions —

JUSTICE KAVANAUGH:

What would the difference be in liability, in damages?

MR. SCHNAPPER:

I’m sorry, between which two things?

JUSTICE KAVANAUGH:

The third-party content and the recommendation.

MR. SCHNAPPER:

Well, most of the time the recommendation isn’t going to —

JUSTICE KAVANAUGH:

Like how would the money at the end of the day differ if you are successful?

MR. SCHNAPPER:

It might not be. But most recommendations just aren’t actionable. I mean, there — there is no cause of action for telling someone to look at a book that has something defamatory in it. JASTA, the statute we’re talking about tomorrow, is unusual in that recommendations could run you afoul of the statute, but there are very few claims that are like that. So it’s — it’s a very different kind of situation.

It’s — the implications of this are limited because the kinds of circumstances in which a recommendation would be actionable are limited.

JUSTICE KAVANAUGH:

Thank you.

CHIEF JUSTICE ROBERTS:

Justice Barrett?

JUSTICE BARRETT:

I’d like to take you back, Mr. Schnapper, to Justice Sotomayor’s questions about the complaint. It seems to me–

MR. SCHNAPPER:

The complaint in which case? I’m sorry.

JUSTICE BARRETT:

In tomorrow’s case, in the Taamneh case, the Twitter case, and this one.

MR. SCHNAPPER:

Pretty much.

JUSTICE BARRETT:

So they’re both relying on the same aiding-and-abetting theory. So, if you lose tomorrow, do we even have to reach the Section 230 question here? Would you concede that you would lose on that ground here?

MR. SCHNAPPER:

No. The — there was a motion to dismiss in tomorrow’s case on JASTA grounds. It didn’t get decided. So, if we lose tomorrow, they’ll be — the defense will be free in this case to — to move to dismiss, but we’d be entitled to try to amend the complaint in this case to satisfy whatever standard you establish tomorrow.

JUSTICE BARRETT:

Okay. Let me ask you this. I’m switching gears now. So Section 230 protects not only providers but also users. So I’m thinking about these recommendations. Let’s say I retweet an ISIS video. On your theory, am I aiding and abetting and does the statute protect me, or does my putting the thumbs-up on it create new content?

MR. SCHNAPPER:

I — we don’t read the word “user” in — that broadly. There’s not been a lot of litigation about this. We — we think the word “user” is there to deal with a situation in which one entity accesses a — a — a server, YouTube, for example, and then someone else uses that entity, like when I go to FedEx Office, FedEx Office is the user that is accessing my e-mail, and the statute protects them when I look at the FedEx computer and find the defamatory —

JUSTICE BARRETT:

Well, let’s say that I disagree with you. Let’s say I’m an entity that’s using the service — the service, so I count as a user. You know, my computer is accessing the servers when I retweet the image. On your theory, could I be liable under JASTA for aiding and abetting without — do I lose 230 protection –

MR. SCHNAPPER:

Right. Right. Right.

JUSTICE BARRETT:

— if I created new content?

MR. SCHNAPPER:

The problem — whether it’s enough for JASTA is a separate —

JUSTICE BARRETT:

Okay. Right. Fair enough.

MR. SCHNAPPER:

The question is, is it outside of 230?

JUSTICE BARRETT:

Is it outside of 230.

MR. SCHNAPPER:

Right. And our view is the statute doesn’t mean anyone who’s a user who re — who tweet — who — who conveys third-party libel is protected. If you — let’s say that you — you read a book, and it says John Doe is a shoplifter, and you send an e-mail that says John Doe is a shoplifter, you’re using, you know, the Internet. You’re using the — the e-mail system. But nobody thinks that — that Section 230 gives — is a blanket exemption for defamation on the website as long as you’re quoting somebody else. Retweeting is a very automatic way of doing it, but if you start down that road, you’d end up having to hold that — that anytime I send a defamatory e-mail, I’m protected as long as I’m quoting somebody else. And I don’t think anybody would —

JUSTICE BARRETT:

Well, I guess I don’t understand — I mean, let’s see, I guess I don’t understand logically why your argument wouldn’t mean that I was creating new content if I retweeted or if I liked it or if I said check this out. Why —

MR. SCHNAPPER:

Well — well, you —

JUSTICE BARRETT:

— why wouldn’t that?

MR. SCHNAPPER:

— you would be, but I’m advancing an argument that gets to the same place, which is you’re – you’re not a user within the meaning of the statute just because you use — you go on e-mail or — or YouTube or — or on Twitter.

JUSTICE BARRETT:

Let’s say I disagree with you. Let’s say that I think you’re a user of Twitter if you go on Twitter and you’re using Twitter and you retweet or you like or you say check this out. On your theory, I’m not protected by Section 230.

MR. SCHNAPPER:

That’s content you’ve created.

JUSTICE BARRETT:

That’s content I’ve created. Okay. And on the content creation point, let’s imagine — it seems like you’re putting a whole lot of weight on the fact that these are thumbnails, and so it’s something that YouTube separately creates.

MR. SCHNAPPER:

Yes.

JUSTICE BARRETT:

What if they just screenshot? They just screenshot the ISIS thing. They don’t do the thumbnail. Then are they –

MR. SCHNAPPER:

That’s — that’s pure third-party content.

JUSTICE BARRETT:

That’s pure third — so this is just about how YouTube set it up?

MR. SCHNAPPER:

That’s — that’s — that’s correct in this context. And it gets back to the conversation we were having earlier about this is a new technology that didn’t exist in 1996, and rather than ask Congress to write the statute to cover it, they just went ahead and did it.

JUSTICE BARRETT:

Okay. And last question, turning to the statutory text. So it seems to me that some of the briefs in this case are focusing on what it means to treat someone as a publisher, treat an entity as a publisher. You’re not really focusing on that and the traditional editorial functions argument. I mean, you’re really focusing on the content provider argument, correct?

MR. SCHNAPPER:

No. Well, we’ve advanced views as to each element of the claim. Our–

JUSTICE BARRETT:

But today you’ve really been honing in on this are you actually creating content or just presenting third-party content.

MR. SCHNAPPER:

Well, I’ve been answering — that’s where the questions —

JUSTICE BARRETT:

Yes.

MR. SCHNAPPER:

— have taken us, but — but — but our — our view would be that you’re not being treated as a publisher of the video just because you — you publish the thumbnail.

JUSTICE BARRETT:

Okay. Thank you.

MR. SCHNAPPER:

You’re not being harmed by the thumbnail.

JUSTICE BARRETT:

Thank you.

CHIEF JUSTICE ROBERTS:

Justice Jackson?

JUSTICE JACKSON:

So I guess — I guess I’m thoroughly confused, but let me — let me try to — let me try to understand what your argument is. I think that the confusion that I’m feeling is arising from the possibility that we’re talking about two different concepts and conflating them in a way. I thought that Section 230 and the questions that we were asking in this case today was about whether there was immunity and whether Google could claim the defense of immunity and that that’s actually different than the question of whether whatever it does gives rise to liability. That is, is there liability for aiding and abetting? That’s tomorrow’s question. And to the extent that you keep coming back to this notion of creating content or whatnot, I feel like we’re conflating the two in a way that I’d like to just see if I can clear up from my perspective.

Your brief says that the immunity question, Section 230(c)(1)’s text is most naturally read to prohibit courts from holding a website liable for failing to block or remove third-party content. And I read the arguments in your brief and I read what you said about Stratton Oakmont and the sort of background, and so I thought your argument was that the — that you can only claim immunity, Google, if the claim that’s being made against you is about your failing to block or remove third-party content.

To the extent we are making a claim about recommendations or doing anything else, any of the, you know, hypotheticals that people have brought up, that’s outside of the scope of the statute because, really, the statute is narrowly tailored in a way to protect Internet platforms from claims about failing to block or remove, right? I mean, that’s what I thought was happening.

All right. So, if that’s true, then all the hypotheticals and the questions about are you aiding and abetting if Google, you know, has a priority list or if there’s recommendations, maybe, but that’s not in the statute because we’re just talking about immunity. We’re just talking about whether or not you’ve made a claim for failing to block or remove in this case today related to Section 230.

Am I doing too much of a separation here in terms of how I’m conceiving of it?

MR. SCHNAPPER:

Well, let me articulate what — what the contention is that we are advancing, and I think it’s not quite the way you described it. The contention we’re advancing is that a variety of things that we’re loosely characterizing as recommendations fall outside of the statute.

JUSTICE JACKSON:

Why?

MR. SCHNAPPER:

Because, in some of them, the defendant’s not being treated as the publisher; because, in some of them, third-party content’s being — content is being created by the defendant; because, in some of them, the defendant’s not acting as an interactive computer service.

JUSTICE JACKSON:

I see. So I — I thought — I thought you were — the answer to why was because the statute is limited, because the statute only focuses on certain kind of publisher conduct, and to the extent that – that they’re doing anything else, recommending or whatever, that’s not going to be covered by this statute.

But you’re sort of saying, well, let’s look at what they’re actually doing and it may fit in or it may not. You’re not sort of hewing very closely to the understanding of the original scope of the statute in terms of what it is trying to immunize these platforms against.

MR. SCHNAPPER:

I — I — I think we’re trying to do that in somewhat more of a particularized way, that is, to — to identify– to work our way through each of the three specific elements of the statute, each tied to particular language, to —

JUSTICE JACKSON:

But I’ve got to tell you I don’t see three elements in this. I mean, part of me — part of this is all the confusion, I think, that has developed over time about the meaning of the statement in the statute, right? I don’t see three elements. I see literally a sentence, and the sentence in my view reads as though they’re trying to actually direct courts to not impose publisher liability, strict publisher liability, against the backdrop of Stratton Oakmont. So there’s like some — somehow we’ve gotten to a world in which we’ve teased out three elements and we’re trying to fit it all into that, when I thought there was sort of a very simple, sort of straightforward way to read the statute that you articulate in your brief, which is this is really — this statute, (c)(1), is really just Congress trying to not disincentivize these platforms for blocking and screening offensive conduct.

And so what they said is let’s look at (c)(1). Let’s have (c)(2). Let’s have a system in which a system — a platform is not going to be punished, strict liability for just having offensive conduct on their website and, if they try, if they try to screen out, we’re not — we’re going to say you won’t be responsible for that either. That’s (c)(2).

But it really doesn’t speak to whether you do a recommendation or whether you have an algorithm that does priorities or any of these other things. That’s how I thought that — that at least I was looking at the statute in light of its purposes and history and — and — and Stratton Oakmont and all of that, in which case I think you would win unless your recommendations argument really is just the same thing as saying they are hosting ISIS videos on their website.

MR. SCHNAPPER:

Well, I — I think — I think we do have to be drawing that distinction. But with regard to your question about the three elements, the — the text does take you there. It says, if you track the briefs, probably of either side, the — part of what we’re arguing about is the meaning of “treat as a publisher” because that’s the first couple of words of the statute. Then we’re arguing about did they create the content, because publisher has to be of — has to be of information provided by another content provider. So we have to parse out the meaning of that. And then it refers to the defendant as an interactive computer service. And we have to parse out the meaning of, well, what does that mean? So we — we are forced to — this — this — the language of the statute has those three components. And it — although the overall purpose is I think as you described it, the language is more complex and particularized.

JUSTICE JACKSON:

Thank you.

CHIEF JUSTICE ROBERTS:

Thank you, counsel. Mr. Stewart.

ORAL ARGUMENT OF MALCOLM L. STEWART FOR THE UNITED STATES, AS AMICUS CURIAE, SUPPORTING VACATUR

MR. STEWART:

Thank you, Mr. Chief Justice and may it please the Court: I’d like to begin by addressing the Roger Maris hypothetical because I — I think it illustrates our position and limits on our position.

Imagine in a particular state there was an unusually protective law that said no book seller shall be held liable on any theory for the content of any book that it sells, and then the scenario that the Chief Justice described occurred: the person was asked where is the Roger Maris book and said it’s over on that table with the other sports book — books.

Now, if the book seller was sued for making that statement, our position would be there’s no way textually that the immunity statute would apply. This is a statement about the book, not the contents of the book. Now, the statement “the book is over there” is so obviously innocuous that it might seem like pedantry to quibble about whether the dismissal of the suit should be based on immunity or on failure to state a claim.

But a court, in thinking about the possibility of harder cases down the road, should distinguish carefully between liability for the content itself and liability for statements about the content. And the one other thing I would say is, if the consequence of saying “it’s over there” was that the book seller lost its immunity for the content of the book, that would be a big deal. But our position on 230(c)(1) is nothing like that. Our position is that the Internet service provider can be sued for its own organizational choices, but the fact that it makes organizational choices doesn’t deprive it of the protection it receives for liability based on third-party content.

I welcome the Court’s questions.

JUSTICE THOMAS:

Well, I’m still confused. But what if the book seller said it’s over there on the table with the other trustworthy books?

MR. STEWART:

I mean, I think at that point you would be asking could it conceivably be an actionable tort to describe the book as trustworthy.

JUSTICE THOMAS:

Well, we’re putting a lot of weight on organization. But doesn’t it really depend on how we’re organizing it and on what the basis of the organization is — for example, we could say this set — you could organize it on the basis of what’s more trustworthy than — than something else.

MR. STEWART:

I think that might matter with respect to whether there was substantive liability under the underlying cause of action. It — it shouldn’t matter for purposes either of the hypothetical immunity statute I described, which focuses exclusively on the contents of the books, or of 230(c)(1).

Now, Mr. Schnapper said in a colloquy earlier that he thought the allegations in his complaint are basically the same as those in the Twitter complaint. And the government is arguing in Twitter that those allegations are not sufficient to state a claim under the Antiterrorism Act.

So our — our interest in 230(c)(1) is not in allowing this particular suit to go forward. It is in preserving the distinction between immunity — protection for the underlying content and protection for the platform’s own choices.

JUSTICE THOMAS:

Well, I — I just think it’s just going to be difficult. How would you respond to Justice Gorsuch’s hypothetical about the artificial intelligence creating content — organizational decisions?

MR. STEWART:

I think the organizational decisions could still be subjected to a suit. Whether you think of them as recommendations or simply as the platform — the operation of the platform, it’s still the platform’s own choice. And if you ask how did a particular video wind up in the queue of a particular individual, it — it could be some — some sort of artificial intelligence that was making that choice, but it would have to do with the — YouTube’s administration of its own platform.

It wouldn’t be a choice made by any third-party who had posted it, because third parties who post on YouTube don’t direct their videos to particular recipients. And — and I — I do want to emphasize this — this theory, this rationale applies even in the most mundane circumstances.

For instance, if you do a Google search on the name for a famous person and you misspell the name slightly, you still get lots of content about that person. Google knows that it’s smarter than we are and it knows that — more about what we want than the literal terms of our search might suggest. I went to the Court’s website and used the docket search function and typed in Google and left off the — the final E and I got a message that said no items find — found. In order to call up the docket for this case, you have to spell Google exactly right.

Now, the choice between those two modes of operating the platform, it’s extraordinarily unlikely, almost inconceivable that it could ever give rise to legal liability, but those are choices made by the platforms themselves. They are not choices made by any third-party. They just don’t implicate 230(c)(1). And the choice — any conceivable lawsuit about the decision to use one mode of operation rather than another, presumably, would be dismissed on the merits. But —

JUSTICE KAGAN:

I — I think the problem, Mr. Stewart, with minimizing what your position is is that in trying to separate the content from the choices that are being made, whether it’s by YouTube or anyone else, you can’t present this content without making choices. So in every case in which there is content, there’s also a choice about presentation and prioritization. And the whole point of suits like this is that those choices about presentation and prioritization amplify certain message — messages and thus create more harm.

Now I appreciate what you’re saying is like, well, that doesn’t mean that you’re going to have liability in every case, but — but — but still, I mean, you are creating a world of lawsuits. Really, anytime you have content, you also have these presentational and prioritization choices that can be subject to suit.

MR. STEWART:

Let — let me say a couple things about that. The first thing I would say is you could make substantially the same argument about employment decisions; that is, in order for YouTube to operate, it has to hire employees. But Ms. Blatt acknowledges in the — the brief that employment decisions wouldn’t be shielded by 230(c)(1) if there was an allegation of unlawful discrimination, for instance.

So the fact that the platform has to make some sorts of organizational choices doesn’t mean it’s immune from suit in the rare instance where it might make a choice that violates some other provision of law.

The second thing is that the concerns we have in mind are things like this: imagine a hypothetical job-matching service like Indeed, where job applicants can post their qualifications and potential employers can post their own listings and the website will match them up.

And suppose it came to light that the job — the job search mechanism was routing the high-paying, more professional jobs disproportionately to the white applicants and the lower-paying jobs to the black applicants even when the qualifications were the same. At — at a general level, you could describe that as choices about which content would go to which users. But when we saw that kind of stark impropriety in the criteria that the platform was — was using, I think we would say there has to be — assuming it violates applicable law, 230(c)(1) really shouldn’t be protecting that. That’s not — the complaint we have here is not about the content itself or the presence of the third-party job postings on the platform. The complaint is about the use of illicit criteria to decide which users will get which content.

And our point is, in the more innocuous cases or in the borderline cases where the criteria seem a little bit shaky but it’s not clear whether they violate any applicable law, that — that choice ought to be made based on the law that the plaintiff invokes as the cause of action. And the Court ought to be determining does the use of those criteria violate that law? And it —

CHIEF JUSTICE ROBERTS:

Well, I was just going to say, your — the problem with your analogies is that they involve — I don’t know how many employment decisions are made in the country every day, but I know that, whatever it is, hundreds of millions, billions of responses to inquiries on the Internet are made every day. And as Justice Kagan suggested, under your view, every one of those would be a possibility of a lawsuit, if they thought there was something that the algorithm referred that was defamatory, that, you know, whatever it is, exposed them to harmful information. And so maybe the analogy doesn’t fit the particular — particular context.

MR. STEWART:

I mean, I think it is true that many platforms today are making an enormous number of these choices. And if Congress thinks that circumstances have changed in such a way that amendments to the statute are warranted because things that didn’t exist or that weren’t on people’s minds in 1999 have taken on greater prominence, that would be a choice for Congress to make. But —

CHIEF JUSTICE ROBERTS:

Well, but choice for Congress to make — I mean, the — the amici suggest that if we wait for Congress to make that choice, the Internet will — will be sunk. And so maybe that’s not as persuasive an outcome as it might seem in other cases.

MR. STEWART:

I — I think the main thing I would say is most of the amici making that projection are making it based on a misunderstanding of our position; namely, they are misunderstanding our position to be that once YouTube recommends a video or once YouTube sends a video to a particular user without the user requesting it, YouTube is liable for any impropriety in the content of the video itself. And that’s not our position.

Our position is that YouTube’s own conduct falls outside of 230(c)(1). It’s unlikely in very many instances to give rise to actual liability.

JUSTICE KAVANAUGH:

Why not? Why – why — why wouldn’t it be liable? Explain that.

MR. STEWART:

I think the reason — the reason we would say is for — for — in this case, in particular, to — to look ahead a little bit to the — the Twitter argument tomorrow, there were questions at the beginning of Mr. Schnapper’s presentation about the role that neutrality played in the analysis. And our view is neutrality is not part of the 230(c)(1) analysis. But it’s a big part of the Antiterrorism Act analysis because we say a person is much more likely to be liable for aiding and abetting if it is doing — kind of giving special treatment to the primary wrongdoing, if it is taking —

JUSTICE KAVANAUGH:

Well, you — keep going.

MR. STEWART:

And — and — and so, if it is, in fact, the case that YouTube is applying neutral algorithms, is simply showing more ISIS videos to people who’ve shown an interest in ISIS, just as it does more cat videos to people who’ve shown an interest in — in cats, that’s much less likely to give rise to liability under the Antiterrorism —

JUSTICE KAVANAUGH:

And much less likely — I’m not sure based on what. You seem to be putting a lot of stock on the liability piece of this, rather than, as Justice Jackson was saying, the immunity piece. And I’m just not sure, you know, if we — if we go down this road, I’m not sure that’s going to really pan out. Certainly, as Justice Kagan says, lawsuits will be non-stop —

MR. STEWART:

I —

JUSTICE KAVANAUGH:

— on defamatory material, which there’s a lot of, that is out there and finds its way onto the websites that host third-party conduct.

MR. STEWART:

And — and —

JUSTICE KAVANAUGH:

There will be lots of lawsuits. You agree with that?

MR. STEWART:

I — I wouldn’t necessarily agree with there would be lots of lawsuits, simply because there are a lot of things to sue about, but they would not be suits that have much likelihood of prevailing, especially if the Court makes clear that even after there’s a recommendation, the website still can’t be treated as the publisher or speaker of the underlying third-party content.

JUSTICE KAVANAUGH:

Well, just bigger picture, then, to the Chief’s question, isn’t it better for — to keep it the way it is, for us, and Congress — to put the burden on Congress to change that and they can consider the implications and make these predictive judgments? You’re asking us right now to make a very precise predictive judgment that, don’t worry about it, it’s really not going to be that bad. I don’t know that that’s at all the case, and I don’t know how we can assess that in any meaningful way.

MR. STEWART:

I — I think, with respect, that that — that characterization of the existing case law overstates the extent to which courts are in agreement that platform design choices —

JUSTICE KAVANAUGH:

Assume they are. Assume the status quo is against you in the law. And you’re asking us, well, the status quo is wrong, okay, and this Court is the first time we’re getting to look at it. But don’t worry about the implications of this because it’s really all going to be fine, there won’t be many successful lawsuits, there won’t be really many lawsuits at all. And I — I don’t know how we can make that assessment.

MR. STEWART:

I think if the Court thought that kind of the interpretive question, looking at the plain language of the statute, was on a knife’s edge, that it was an authentically close call, then, yes, the Court could — and if the Court perceived the existing case law to be basically uniform, the Court could give some weight to the interest in stability.

But I think, for us, neither of those things is true.

JUSTICE BARRETT:

Mr. Stewart – oh, sorry. Please finish.

MR. STEWART:

I was — I was going to say the statutory text really is not — it may have a little bit of ambiguity at the margins, but it is very clearly focused on protecting the platform from liability for information provided by another information content provider, not by the platform’s own choices. I’m sorry, Justice Barrett?

JUSTICE BARRETT:

No, no, no, I’m sorry. So speaking of this question of what are the implications of this, and Justice Jackson’s points about liability and immunity overlapping, it seems like one of the responses to should we worry about this is, well, it’s going to be the rare kind of claim that could be based on recommendations.

So speaking of that, what is the government’s position, if you have one, on whether, if the plaintiffs below lose tomorrow in Twitter, should we just send this back? Because there isn’t — I mean, you said the government’s position is that there is no claim. So —

MR. STEWART:

Certainly, our position — we haven’t analyzed the — the Gonzalez —

JUSTICE BARRETT:

Right.

MR. STEWART:

— complaint in detail, but that is our position as to the Twitter complaint. And Mr. Schnapper said he doesn’t perceive a material difference between the two. Now, presumably, the Court granted cert in both cases because it thought it would at least be helpful to clarify the law both as to the Antiterrorism Act and as to Section 230(c)(1). But if the Court no longer believes that or if it resolves Twitter in such a way that it seems evident that its decision on the 230(c)(1) issue wouldn’t ultimately be outcome-determinative in Gonzalez, then it could vacate and remand for further analysis of the ATA question. That would be a permissible — I mean, a possible course of action.

JUSTICE BARRETT:

Okay.

CHIEF JUSTICE ROBERTS:

Thank you, counsel. We’re talking about the prospect of significant liability in litigation. And — and up to this point, people have focused on the ATA because that’s the one point that’s at issue here. But I suspect there would be many, many times more defamation suits, discrimination suits, as — as some of the discussion has been this morning, infliction of emotional distress, antitrust actions.

I — I mean, it — I guess I’d be interested to understand exactly what the government’s position is on the scope of the actions that could be brought and whether or not we ought to be — I mean, it would seem to me that the terrorism support thing would be just a tiny bit of all the other stuff. And why shouldn’t we be concerned about that?

MR. STEWART:

Let me just address the — the potential causes of action that you mentioned. For defamation, even if somebody is suing about the recommendation, 230(c)(1) still directs that the platform can’t be treated as the publisher or speaker of the underlying content. And so the question —

CHIEF JUSTICE ROBERTS:

Well, right. But it’s — it’s — defamation law is implicated if you repeat libel, even though you didn’t originally commit defamation.

MR. STEWART:

If you repeat it, and so if YouTube circulated videos with a little blurb saying — and I think one of the amicus briefs describes this hypothetical scenario — if you repeated it with a little blurb saying this video shows that John Smith is a murderer, then, yes, there would be liability. But —

CHIEF JUSTICE ROBERTS:

But there wouldn’t be if you just repeated it without any commentary? Normally, it would be if you’re the newspaper and you just publish something, so and so’s a shoplifter, the newspaper would be liable for that.

MR. STEWART:

No, we think it should be analyzed as though it were an explicit recommendation. And so if Google had posted a message that said we recommend that you watch this video, now the recommendation would be its own content. But in answering the question can it be held liable for defamation, you would ask: Can a person under the law of the applicable — of the relevant state be held liable for recommending content that is itself defamatory, if the recommender does not repeat the defamatory aspects of that content in the course of the recommendation? And our understanding is that at least under the common law the answer to that would be no, that simply saying you should read this book that turns out to be defamatory would not be a basis for defamation liability.

I think the same would basically be true of intentional infliction of emotional distress. That is, unless you could show that the platform was acting with the intent to cause emotional distress by circulating the video, there would be no liability. And the fact that the third-party poster may have met the elements of that offense wouldn’t carry the day.

With respect to antitrust, if you had a claim that a particular search engine had configured its results in such a way as to boost its own products or to diminish the search results for products of the competitor and if that were found to be a viable claim under the antitrust laws, there would be no reason to insulate the provider from liability for that.

CHIEF JUSTICE ROBERTS:

Now that’s – that’s a broad overview of a lot of different areas of law, but, certainly, the law is not established the way you’re suggesting, I — I think, in any of those areas.

MR. STEWART:

But I guess the question is, what did Congress intend to do or what did it do when it passed this statute?

And Congress didn’t create anything that was — that even resembled a — an all-purpose immunity, immunity for anything a platform might do in the course of its functions. It focused very precisely on information provided by another information content provider.

CHIEF JUSTICE ROBERTS:

Thank you, thank you. Justice Thomas? Justice Alito?

JUSTICE ALITO:

In the government’s view, are there any circumstances in which an Internet service provider could be sued for defamatory content in a video that it provides?

MR. STEWART:

I think —

JUSTICE ALITO:

Third-party video.

MR. STEWART:

— I think the only – given our understanding of the — the common law, I think the only way that would happen is if the third-party provider, in circulating the video, added its own comment that incorporated the defamatory gist of the allegations.

And as the Chief Justice was pointing out, it is true that under common law, if you repeat somebody else’s defamatory statement but say what it is, that you can be held liable for that.

JUSTICE ALITO:

I mean, imagine the most defamatory — terribly defamatory video. So suppose the competitor of a restaurant posts a video saying that this rival restaurant suffers from all sorts of health problems, it — it creates a fake video showing rats running around in the kitchen, it says that the chef has some highly communicable disease and so forth, and YouTube knows that this is defamatory, knows it’s — it’s completely false, and yet refuses to take it down. They could not be civilly liable for that?

MR. STEWART:

That — that’s our — I mean, we think that Zeran — Zeran was not exactly a defamation case, but it fit within — pretty closely within that profile. That is, Zeran was the early Fourth Circuit case in which a person posted a video that purported to be from another person and subjected that other person to complaints and harassment that seemed justified to — to the people who were doing it.

JUSTICE ALITO:

Well, did any — did any entity have that scope of protection under common law?

MR. STEWART:

No, not — no, I don’t believe so. And that was the point of (c)(1). The point of (c)(1) was to say —

JUSTICE ALITO:

Well, it was at least to — to shield Internet service providers from liability they — excuse me — based on their status as a publisher.

MR. STEWART:

I — I wouldn’t put it as —

JUSTICE ALITO:

But even a distributor wouldn’t have immunity if it knew as a matter of fact that this material that it was distributing was defamatory, isn’t that right?

MR. STEWART:

I mean, that — that — that is right. I think we would think of the distributor as a subcategory of publisher, but, yes, the book seller would not be strictly liable. And, obviously, Justice Thomas —

JUSTICE ALITO:

You really think that Congress meant to go that far?

MR. STEWART:

We — we do, but, obviously, that is — if we’re arguing about whether the failure to take something down is actionable if it is done knowingly and with an understanding of the contents, then that — that’s a very different argument from the one that we’ve been having up to this point.

That — that would be saying that the statute should be construed —

JUSTICE ALITO:

But that is your — but that is your position?

MR. STEWART:

Our position —

JUSTICE ALITO:

That is the government’s position, is it not?

MR. STEWART:

— our position — yes, our position is that if the — if the wrong alleged is simply the failure to block or remove the third-party content, that 230(c)(1) protects the platform from liability for that, whether it’s based on a strict liability theory or on a theory — theory of negligence or unreasonableness in failing to take the material down upon request.

JUSTICE ALITO:

The Internet service provider wants to — really has it in for somebody, wants to harm this person as much as possible, and so posts extraordinarily gruesome videos of a family member who’s been involved in an automobile accident or something like that.

MR. STEWART:

Well, when you use the verb “posts,” that — that’s a different analysis. That is, if YouTube created —

JUSTICE ALITO:

No, it’s provided by somebody else, and YouTube knows that it’s – knows what it’s — what it is, and yet it puts it up and refuses to take it down.

MR. STEWART:

Yes. Our view is, if the only wrong alleged is the failure to block or remove, that would be protected by 230(c)(1). But — but that’s — the 230(c)(1) protection doesn’t go beyond that. And the theory of protecting the — the website from that was that the wrong is essentially done by the person who makes the post, the website at most allows the harm to continue.

And what we’re talking about when we’re talking about the — the website’s own choices are affirmative acts by the website, not simply allowing third-party material to stay on the platform.

JUSTICE ALITO:

So an express recommendation would potentially subject YouTube to civil liability. So they put up — they say, watch this ISIS video, spectacular, okay, they could be liable there?

MR. STEWART:

Yes, if the other elements —

JUSTICE ALITO:

If it’s expressed. What if it’s just implicit? What if it’s the fact that they put this up first and therefore amplify the message of that?

MR. STEWART:

Again, you would have to ask — they — they could potentially be held liable for that, but you would have to ask whether the elements of the relevant tort have been shown. And with respect to the ATA, those elements include scienter, causation of the relevant harm, et cetera. If you were looking at another cause of action, you would look at those elements. And I think part of our reason for preferring that most of the work be done at the liability stage rather than the 230(c)(1) stage is, rather than do a kind of undirected inquiry into whether this seems neutral enough, you would be looking at a specific cause of action and asking but for 230(c)(1), would this be an actionable tort under —

JUSTICE KAGAN:

Let me just make sure I understand. Let’s talk about defamation and an explicit recommendation, go watch this video, it’s the greatest of all time, okay? But it does not repeat anything about the video. It just says go watch this video, it’s the greatest of all time. And the video is terribly defamatory in the way Justice Alito was describing. Now is the provider on the hook for that defamation?

MR. STEWART:

The two things I would say are that depends on the defamation law of the relevant state, and, as we say in the brief, you should analyze that as though the platform was recommending in the same terms a video posted on another site. So, if it would give rise to defamation liability under the law of the relevant state to give that sort of glowing recommendation of content posted on a different platform, then there’s no reason that YouTube should be off the hook by virtue of the fact that the material was on its own platform.

JUSTICE KAGAN:

And — and now it’s —

CHIEF JUSTICE ROBERTS:

Thank you. Justice Sotomayor, anything further?

JUSTICE SOTOMAYOR:

Let’s assume we’re looking for a line because it’s clear from our questions we are, okay? And let’s assume that we’re uncomfortable with a line that says merely recommending something without adornment, you suggest, we — you’re — you might be interested in this, something neutral, not something like they’re right, watch this video, because I could see someone possibly having a defamation action if they said — if I said that video is right about that person.

I could see someone saying that I’m spreading a defamatory statement, correct?

MR. STEWART:

I mean, we — we don’t understand the common law to have operated in that way, but, obviously, the laws vary from state to state and a particular law — state could adopt a law to that effect.

JUSTICE SOTOMAYOR:

All right. How do we draw a line so we don’t have to go past the complaint in every case?

MR. STEWART:

I mean —

JUSTICE SOTOMAYOR:

And I think that’s where my colleagues seem to be suffering. And I understand your point, which is there is a line at which affirmative action by an Internet provider should not get them protection under 230(c) because that seems logical. The — the example I used earlier, the dating site, they create a search engine that discriminates. Their action is in creating the search engine. And I would think they would be liable for that. So tell — tell me how we get there.

MR. STEWART:

I guess whether they would be liable would depend on the applicable substantive law, which could be a federal law or it could be a state law. And those questions, obviously, are — are routinely decided at the motion to dismiss stage. That is, with respect to the search engine choices that I described earlier, do you include misspellings or not? The plaintiff would still have to identify a law that was violated by the choice that the search engine made and would have to allege facts sufficient to show a violation of law.

And — and suits like that could easily be dismissed at the pleading stage. But it would at least predominantly be a question of the adequacy of the allegations under the underlying law.

CHIEF JUSTICE ROBERTS:

Justice Kagan?

JUSTICE KAGAN:

I guess I thought that the claims in these kinds of suits are that in making the recommendation or in presenting something as first, so really prioritizing it, that the — the provider is — is amplifying the harm, is creating a kind of harm that wouldn’t have existed had the provider made other choices. Are you saying that that — that is something that could lead to liability or is not?

MR. STEWART:

I think it is something that could lead to liability, but, again, it would — you would have to establish the elements of the — of the substantive law. And so kind of the hypothetical we’re concerned with, and the hypothetical that I — I think would come out the wrong way in our view under Respondent’s theory, is imagine a particular platform had been systematically promoting third-party ISIS videos — promoting in the sense of putting them at the top of people’s queues, not of adding their own messages — in order to enlist support for ISIS.

If that was the motivation and you could show the right causal link to a particular act of international terrorism, then that could give rise to liability under the ATA.

JUSTICE KAGAN:

And you’re not saying that the motivation matters for 230; you’re saying that the motivation matters with respect to the — the liability question down the road, right?

MR. STEWART:

Exactly. Exactly.

CHIEF JUSTICE ROBERTS:

Justice Gorsuch?

JUSTICE GORSUCH:

Mr. Stewart, I just again kind of want to make sure I understand your argument, and so I’m going to ask you a question similar to what I asked Mr. Schnapper, which is the Ninth Circuit held that any information a company provides using neutral tools is protected under 230. That’s at 34a of the — of the petition.

And your argument is that this neutral tools test isn’t in the statute. What is in the statute is a distinction on the one hand between interactive computer service and access software providers and on the other hand content providers.

And when we look at that, the access software provider is protected for picking, choosing, analyzing, or even digesting content. So 230 protects an access software provider, an interactive computer service provider, who does any of those things, whether using a neutral tool or not. They — they can order, they can pick, they can choose, they can analyze, they can digest however they wish and they’re protected, even those — even though those editorial functions we might well think of as some form of content in our First Amendment jurisprudence, but, here, they’re shielded by 230.

And then your argument, I think, goes that none of that means that they’re protected for content generated beyond those functions. And it doesn’t matter whether that content is generated by neutral rules or not. That content is actionable whether the — and one could think of content generated by neutral rules, for example, by artificial intelligence. And another problem is that it begs the question of what a neutral rule is. Is an algorithm always neutral? Don’t many of them seek to profit-maximize or promote their own products? Some might even prefer one point of view over another.

And because the Ninth Circuit applied the wrong test, this neutral tools test, rather than the content test, we should remand the case for reconsideration under the appropriate standard. Is that a fair summary of your position? And, if not, what am I missing?

MR. STEWART:

I think the thing — the aspect of that we would disagree with is we don’t think that the definition of “access software provider” means that an entity is immune from liability for performing all of those functions. The statute makes clear that even if you perform those sorting, arranging, et cetera, functions, you still fall within the definition of “interactive computer service,” and you are still entitled to the protection of (c)(1).

But the protection of (c)(1) is protection from liability for the third-party content. And so, if you perform those sorting functions in a way that was otherwise unlawful, you could be on the hook for that. And that — that takes me back to the hypothetical about the job placement service that discriminates based on race. The — the allegation of the job placement — of that job placement service is not that it created any of its own content. The allegation would be that with respect to third-party content provided by the firms that were looking for employees, it had used an impermissibly legal — a legally impermissible criterion to decide which content would be sent to which users. And that wouldn’t be protected by (c)(1) because imposing liability wouldn’t hold the platform — wouldn’t treat the platform as the publisher or speaker of the third-party content.

JUSTICE GORSUCH:

Thank you.

CHIEF JUSTICE ROBERTS:

Justice Kavanaugh?

JUSTICE KAVANAUGH:

First, to follow up on Justice Alito’s question, the distributor liability question, my understanding is that issue is not before us at this time, right?

MR. STEWART:

That’s correct.

JUSTICE KAVANAUGH:

And your position, though, or your response to him suggested that if we were addressing that, the reason that falls within 230 is because the distributor at common law or at least by 1996 was treated as a secondary publisher in the circumstances described there. Is that —

MR. STEWART:

That’s basically correct, yes.

JUSTICE KAVANAUGH:

Okay. Then focusing on the text of the statute and following up on Justice Gorsuch’s question, it seems to me that the key move in your position as I understand it is to treat organization through the algorithms as the same thing as an express recommendation. Is that accurate?

MR. STEWART:

I don’t — I don’t think we would put it quite that way. That is, in some instances, if the operation of the algorithm causes particular content to appear in a particular person’s queue that the person hadn’t requested, then that person might perceive it to be a recommendation at least to the effect that you will like this based on what you have seen before. So algorithms can have that effect. I don’t know that we would equate the two. I think we would say more the recommendation is simply one instance of the platform potentially being held liable for its own content rather than the third-party content.

JUSTICE KAVANAUGH:

And if the algorithm prioritizes certain content, that becomes the platform’s own speech under your theory of 231, correct — or 230?

MR. STEWART:

I don’t know that we would call it the platform’s own speech, but it’s the platform’s own conduct, the platform’s own choice. And so, if — if it violated antitrust law, for instance, to prioritize search results in a particular way, whether or not you thought of that as speech by the — the platform, it would be the platform’s own conduct. Holding it liable for that sort of ordering wouldn’t be treating it as then publisher or speaker of any of the third-party submissions.

JUSTICE KAVANAUGH:

So the other side and the amici say that happens — that’s what the — and Justice Kagan’s question, that’s happening everywhere.

MR. STEWART:

And —

JUSTICE KAVANAUGH:

And, therefore, 230 really becomes somewhat meaningless, and you’ve read what makes the definition of “interactive computer service,” including organizing, to be a self-defeating provision that really does nothing at all.

MR. STEWART:

No, I think — I mean, I think, if — if it is happening everywhere, that is, if search engines are using a wide variety of mechanisms to decide how content should be ordered, that —

JUSTICE KAVANAUGH:

Do you disagree with that? I mean, that’s all —

MR. STEWART:

No, I — no, I agree with that.

JUSTICE KAVANAUGH:

Okay.

MR. STEWART:

And I think that’s probably because there are very few, if any, laws out there that direct Internet service providers to order the content in a particular way. If a particular legislature wanted to say it will now be a violation of our law to give greater priority to search results of companies that advertise with you, then the question whether that could violate the Commerce Clause, the question whether it could violate the First Amendment, those would be live questions. They wouldn’t be 230(c)(1) questions because the state’s attempt to impose liability on that rationale would not be an attempt to hold the platform liable as the publisher or speaker of the third-party content.

JUSTICE KAVANAUGH:

Thank you.

CHIEF JUSTICE ROBERTS:

Justice Barrett?

JUSTICE BARRETT:

I want to ask you the question that Mr. Schnapper and I went back and forth about, thumbnails versus screenshots. What would the government’s position on that be? So, if there were screenshots on the side, his objection seemed to be that it was Google’s content because YouTube creates these thumbnails.

MR. STEWART:

And that — that was one aspect of Mr. Schnapper’s theory that we disagreed —

JUSTICE BARRETT:

Disagreed.

MR. STEWART:

— with in the brief. That is, we thought that it’s basically the same content, the same information either way, even if in the one instance Google is creating a URL and in the other instance it’s not.

JUSTICE BARRETT:

So, for purposes of this case, is there any difference — let’s imagine that the Google algorithm when you search for ISIS prioritizes videos produced by ISIS in search results. I’m not talking about being on YouTube. Content produced by ISIS, as opposed to articles, if you’re just looking for articles about ISIS, they could be critical of ISIS, they could be all kinds of things, but in the search result rankings, you first get the article — the articles written by ISIS, videos made by ISIS. Is that the same thing as this case then?

MR. STEWART:

I think that would be the same thing as this case because we would say the fact that the videos appear in that order is the result of choices made by the platform, not the choice of any person who posted an ISIS video on the platform. And Congress — it was very important to Congress to absolve the platforms of liability for the third-party content, but it didn’t try to go beyond that. The likelihood that the platform would be held liable just for that seems very, very slim, but it would not be a 231 — 230(c)(1) question, it would be a question under whatever cause of action the plaintiff invoked.

JUSTICE BARRETT:

Okay. And then what about users and retweets and likes, the question I asked Mr. Schnapper about that. So, you know, I gather 230(c) would protect me from liability if I simply retweeted. On Ms. Blatt’s theory, on your theory, if I retweet it, am I doing something different than pointing to third-party content?

MR. STEWART:

I mean, I think, honestly, there hasn’t been a lot of litigation over the — the — the user prong of it, and those are difficult issues. I think 230(c)(1) at the very least would say just by virtue of having retweeted, you can’t be treated as though you had made the original post yourself. But, with respect to you retweet, can the retweet itself be grounds for liability, I — I’m not sure, and I doubt that there would be much of a common law history to draw upon.

JUSTICE BARRETT:

So you — but the logic of your position, I think, is that retweets or likes or check this out, for users, the logic of your position would be that 230 would not protect in that situation either, correct?

MR. STEWART:

I — I think it would — I think that’s more or less the case. The — the one difference I would point to between the user and the platform is the user who reads a tweet is typically making an individualized choice, do I want to like this tweet, retweet it, or neither, whereas the — the platform decisions about which video should wind up in — in my queue at a particular point in time, there’s no live human being making that choice on an individualized basis. It’s being — those choices are being made on a systemic basis.

JUSTICE BARRETT:

Thank you.

CHIEF JUSTICE ROBERTS:

Justice Jackson?

JUSTICE JACKSON:

Yes. So can — can you help me to understand whether there really is a difference between the recommendations and what you say is core 230 conduct? I mean, I get — I get and I’m holding firm in my mind that 230 immunity, Congress intended it to be directed to certain conduct by the platform and that conduct is its failure to block or screen the offensive conduct, so that if the claim is this offensive content is on your website and you didn’t block or screen it, 230 says you’re immune. I get that.

I guess what I’m trying to understand is whether you say and plaintiff says, Petitioner in this case says, well, what they’re really doing in the situation in which they display it under a banner that says “up next” is more than just providing that content and failing to block it. They are promoting it in some way.

And I — I’m really drilling down on whether or not there is actually a distinction in a world of the Internet where, as Ms. Blatt and others have said, in order to be a platform, what you’re doing is you have an algorithm, and in the universe of things that exist, you are presenting it to people so that they can read it.

Why — why is that — even though it’s — you know, you call it a recommendation or whatever, why is that act any different than being a publisher who has this information and hasn’t taken it down?

MR. STEWART:

I mean, I think I would say, in — in the situation that 230(c)(1) was designed to address, the decision whether the material would go up on the platform was not that of the platform itself, it was the decision of the third-party poster. And Congress said, once that has happened, you also can’t be liable for failing to take it down. But, with respect to what prominence you give it, that’s the result of your own choice, not the third-party poster.

Now, in most circumstances, it won’t make a difference because the recommendation won’t be actionable. And so what we are concerned with is the — the hypothetical that I suggested earlier. You have —

JUSTICE JACKSON:

Yes. I mean, I get the — I get the liability piece and all of the — the parade of horribles will depend on whether or not they can actually be held liable for organizing it in a certain way. And you say they probably can’t. And others say they might be able to. And that’s a separate issue. Just back on the 230 piece of it, in terms of Congress’s intent with respect to the scope of immunity, I’m — I — I guess I just want to understand why Google or YouTube, when they have a box that brings up all of the ISIS videos and tees them up, and if you don’t do anything, they just keep playing, why that’s actually different than the newspaper publisher who gets the offensive content and decides to put it on page 1 versus page 20. It seemed like Congress in 230 was saying, if you — if — if under the common law a newspaper publisher would be liable for having put it on page 1 or whatever and given it to people, we don’t want that to be the case for these Internet service companies. And so I — I don’t know that I understand fully why the fact that it’s called — that you call it a recommendation or whatever is actually any different.

MR. STEWART:

I — I guess one difference I would point to is newspaper publishers can make decisions about what will be on the front page and what’ll be in the back, but it’s going to be the same for everybody. And one of the things about why we call them targeted recommendations with YouTube is they are being sent differently to different users. And the situation we’re concerned with is what if a platform is able through its algorithms to identify users who are likely to be especially receptive to ISIS’s message, and what if it systematically attempts to radicalize them by sending more and more and more and more extreme ISIS videos, is that the sort of behavior that implicates either the text or the purposes of Section 230(c)(1), and we would say that it doesn’t.

JUSTICE JACKSON:

Thank you.

CHIEF JUSTICE ROBERTS:

Thank you, counsel.

MR. STEWART:

Thank you.

CHIEF JUSTICE ROBERTS:

Ms. Blatt.

ORAL ARGUMENT OF LISA S. BLATT ON BEHALF OF THE RESPONDENT

MS. BLATT:

Mr. Chief Justice, and may it please the Court: Section 230(c)(1)’s words created today’s Internet. (c)(1) forbids treating websites as “the publisher or speaker of any information provided by another.” Publication means communicating information. So, when websites communicate third-party information and the plaintiff’s harm flows from that information, (c)(1) bars the claim. The other side agrees Section 230 bars any claim that YouTube aided and abetted ISIS by broadcasting ISIS videos. So they instead focus on YouTube’s organization of videos based on what’s known about viewers, what they call targeted recommendations. They say that feature can be separated out because it implicitly conveys what viewers should watch or that they might like the content.

But accepting that theory would let plaintiffs always plead around (c)(1). All publishing requires organization and inherently conveys that same implicit message. Plaintiffs should not be able to circumvent (c)(1) by pointing to features inherent in all publishing. (c)(1) reflects Congress’s choice to shield websites for publishing other people’s speech, even if they intentionally publish other people’s harmful speech. Congress made that choice to stop lawsuits from stifling the Internet in its infancy. The result has been revolutionary. Innovators opened up new frontiers for the world to share infinite information, and websites necessarily pick, choose, and organize what third-party information users see first. Helping users find the proverbial needle in the haystack is an existential necessity on the Internet. Search engines thus tailor what users see based on what’s known about users. So do Amazon, Tripadvisor, Wikipedia, Yelp!, Zillow, and countless video, music, news, job-finding, social media, and dating websites. Exposing websites to liability for implicitly recommending third-party content defies the text and threatens today’s Internet.

I welcome your questions.

JUSTICE THOMAS:

Ms. Blatt, is — could you give me an example of, not a recommendation, but an endorsement similar to this that would take you beyond 230?

MS. BLATT:

Sure. So whenever you have something that’s going beyond the implicit features of publishing and you have an express statement, you have a continuum, and this continuum is this: You have something that’s the functional equivalent of an implicit message, basically a topic heading or “Up next,” all the way to the other extreme of an endorsement of the content, such that the website is adopting the content as its own.

Now, when you have that situation, the claim is fairly treating the website for publishing its own speech, and you can separate that out from the harm that’s just coming from the information provided by another.

And the danger which your hypotheticals have raised with express speech is where on that continuum any express speech may go, because unlike Google and YouTube, which are the world’s two largest sites, we don’t have a lot of endorsements and that kind of stuff, but other websites and other users use a myriad of topic headings and emojis that have different meanings that I’m not prepared — and you would have to know what they mean, like kinds of checkmarks and, I don’t know, high fives and all kinds of things.

But the basic features of topic headings, “Up next,” “Trending now,” those kinds of things we would say are core, inherent – they’re no different than expressing what is implicit in any publishing, which is we hope you read this.

CHIEF JUSTICE ROBERTS:

Well, it seems to me that the language of the statute doesn’t go that far. It says that — their claim is limited, as I understand it, to the recommendations themselves. In other words, this — this is the list of things that you might like.

But that information, the recommendation, is not provided — under the words of the statute, it’s not provided by another information content provider. It’s provided by YouTube or — or Google. And so, although whatever the liability issue may be, there’s some issue tomorrow and there are a lot of others, the presence of an immunity under 230(c), it seems to me, is just not directly applicable.

MS. BLATT:

Well, that’s incorrect because of the word “recommendation.” There is no word called “recommendation” on YouTube’s website. It is videos that are posted by third parties. That is solely information provided by another. You could say any posting is a recommendation. Any time anyone publishes something, it could be said, it’s a recommendation. Anything.

CHIEF JUSTICE ROBERTS:

Well, the videos just don’t appear out of thin air. They appear pursuant to the algorithms that your clients have. And those algorithms must be targeted to something. And they’re targeted — that targeting, I think, is fairly called a recommendation, and that is Google’s. That’s not the — the provider of the underlying information.

MS. BLATT:

So nothing in the statute or the common law of defamation turns on the degree of tailoring or how you organized it. There’s no distinct actionable message. If you say I think my readers would all be interested in this or I think the readers in ZIP code 200 would be interested in it, or you walk up to someone and say I’m going to defame someone because I thought you might be interested in it, it’s still publishing. And the other side gives you no line and no way to say in some way that would be workable or give websites or users any clarity of how you would organize the world’s information. Just think about search. There are 3 billion searches per day.

All of those are displays of other people’s information. And you could call all of them a recommendation that are tailored to the user because all search engines take user information into account. They take the location, the language, and what have you.

And I can give the example of football. Football — the same two users will enter the word “football” and get radically different results based on the user’s past search history and their location and their language because most of the world thinks of football as soccer, not the way we do. And so if you go down this road of did you target it, then you have to say how much? Was the topic heading too much? Was it okay to have a violence channel? Was it okay to have a sex channel? Was it okay to have, you know, what have you, some other channel about skinny models that you could say, well, that just kept repeating the — the channel and that made me crazy. So —

JUSTICE JACKSON:

But, Ms. — Ms. Blatt, Mr. Stewart suggests all of those kinds of questions in terms of the extent of liability for this kind of organization would be addressed in the context of liability, not — by that I mean each state — when somebody tried to claim that YouTube had done something improper in terms of pulling up those kinds of videos, that each state would then look and determine, based on their own, you know, common law, whether or not you were liable. And he posits that that wouldn’t happen very often. But we don’t know.

My question is isn’t there something different to what Congress was trying to do with 230? Isn’t it true that that statute had a more narrow scope of immunity than is — than courts have, you know, ultimately interpreted it to have and that what YouTube is arguing here today, and that it really was just about making sure that your platform and other platforms weren’t disincentivized to block and screen and remove offensive conduct — content? And so to the extent the question today is, well, can we be sued for making recommendations, that’s just not something the statute was directed to.

MS. BLATT:

So can I take this in two parts? Because I — I feel like the first part of your question is addressing what the dispute is between the parties, and the second part of your question goes much deeper, and which is, you know, beyond the question presented. But just on your first question about why not — why do you need an immunity as opposed to liability, and in our view, that’s like saying — I mean, that’s death by a thousand cuts, and the Internet would have never gotten off the ground if anybody could sue every time and it was left up to 50 states’ negligence regimes.

And let me give you an example. A website could put something alphabetical in terms of reviews, and every Young, Williams and Zimmerman, i.e., X, Y, Z, could say, well, that was negligent because you should have rated it somewhere else.

JUSTICE JACKSON:

No, I totally understand that. But I think my things are not actually different. What I’m saying is that problem that you identify, which is a real problem, the Internet never would have gotten off the ground if everybody would have sued, was not what Congress was concerned about at the time it enacted this statute.

MS. BLATT:

Well, so I — that’s correct. I mean, that’s incorrect for a number of reasons. And we can talk about what two choices you’re talking about. There’s only two arguments on the table for what you could think that (c)(1) does. And that is it simply says, you know, no Internet — interactive computer service shall be treated as a publisher. And you could think, well, there are two — two ways of looking at that. One is that you need an external law that has publication as an element.

And then, second, which I think that your question may be going to, is it only directed to eliminating forms of strict liability across all causes of action? And so both — both of those ways are highly problematic and also inaccurate, given what was happening in 1996. In terms of just looking at this as is this just talking about defamation, it plainly can’t be because the statute would be a dead letter upon inception because any defamation cause of action can be replead as negligence or intentional infliction of emotional distress.

So we think the word “treat,” which means to regard, applies whenever the claim is treating the — or imposing liability because — by virtue of publishing; in other words —

JUSTICE JACKSON:

But what do you do — what do you do with the title and the content and the context? Right? The title of Section 230 is “protection for private blocking and screening of offensive material.”

MS. BLATT:

So let me just pinpoint, then, the second one, which hopefully I won’t — we’ll get to on section (e), which is all the exceptions. But in terms of the title, Stratton Oakmont and restrictions, (c)(1) and (c)(2) are a pair. So what you have is (c)(2) is — and they work together, and if you — every time you weaken (c)(1), you make (c)(2) useless and defeat the whole point of this statute, at least in terms of cleaning up the Internet. (c)(2) is just a safe harbor and directs what happens when you take stuff down. It says nothing about what happens to the content that’s left up. And so the more any website removes material, it perversely is showing that it has knowledge or should have known or could have known about the content that was left up.

And so you have one of two things happen — that would happen and would have happened then and would happen now. The first is websites just won’t take down content. And that just defeats it — the whole point, and you basically have the Internet of filth, violence, hate speech and everything else that’s not attractive.

And the second thing, which I think a lot of the briefs are worried about in terms of free speech, is you have websites taking everything down and leaving up — you know, basically you take down anything that anyone might object to, and then you basically have, and I’m speaking figuratively and not literally, but you have the Truman Show versus a horror show.

You have only anodyne, you know, cartoon-like stuff that’s very happy talk, and otherwise you just have garbage on the Internet.

And Congress would not have achieved its purpose of — and, remember, it had in all those findings only three of which are addressing the harmful content. Most of it is dealing with having free speech flourish on the Internet, jump-starting a new industry. And it’s inconceivable that any website would have started in — I mean, one lawsuit freaked out the Congress.

JUSTICE KAGAN:

Ms. Blatt?

MS. BLATT:

Yes. Sorry.

JUSTICE KAGAN:

Just suppose that this were a pro-ISIS algorithm. In other words, it was an algorithm that was designed to give people ISIS videos, even if they hadn’t requested them or hadn’t shown any interest in them. Still the same answer, that — that — that a claim built on that would get 230 protection?

MS. BLATT:

Yes, except for the way Justice Sotomayor raised it, which is material support. So, if there’s any — I mean, there’s a criminal exception. So, if you have material supporting collusion with ISIS, that’s excepted from the statute. But, if I can just take the notion of algorithms, either they’re raising —

JUSTICE KAGAN:

But — but — but what I take you to be saying is that in general — and this goes back to Justice Thomas’s very first question —

MS. BLATT:

Yes.

JUSTICE KAGAN:

— in general, whether it’s neutral or whether it’s not neutral, whether it is designed to push a particular message, does not matter under the statute and you get protection either way?

MS. BLATT:

That’s correct. And just referring — I agree with what Justice Gorsuch said, except for he was saying that somehow the Ninth Circuit was at fault because it recognized this was an easy case.

It’s not the Ninth Circuit’s fault that the complaint said there’s nothing wrong with your algorithm. You just kept repeating the same information, independent of any content. And so we shouldn’t be faulted because his complaint doesn’t allege anything wrongful.

JUSTICE KAGAN:

No —

MS. BLATT:

But, in your hypothetical, where someone could say — and, again, this is always going to turn on the claim. But let’s just think of — I don’t know what your hypothetical would be about tortious speech, but the bookstore example, you could decide that you want to put the adult bookstore — book — adult book section separated from the kid section. That’s a “biased” choice, and I’m doing scare quotes for the transcript, but —

JUSTICE KAGAN:

Or — or have an algorithm that looks for defamatory speech and puts it up top, right, and you’re still saying 230 protection?

MS. BLATT:

So our test, when you look at the claim, and so, if you have a claim for defamation, is always going to look at the claim and say is the harm flowing from the third-party information or from the website’s own conduct or speech. And so, if I can mention the race example, that’s an excellent example of the claim has nothing to do with the content of the third-party information. It can be —

JUSTICE KAGAN:

Right. But this is the claim would have something to do with the content of the information. It would say, you know, my complaint is that you just made defamatory speech available to millions of people who otherwise would never have seen it. And you are on the hook for that. That was your choice. That’s your responsibility. Why doesn’t — why — why — why should there be protection for that?

MS. BLATT:

Well, so, if there was some sort of misrepresentation or some sort of terms of service that you weren’t going to do that — but let me give you an example of where this opens up a can of worms, because you could say that about any content, that you elevated the most recent content. I mean, search engines of all kinds, including Google Search — but all the amici briefs are telling you they have to make choices. They’ve got an indescribable amount of content, and it has to be based on something, whether it’s relevance to a user request, a search history. If it says headache, the Microsoft example, do you want something from the 18– you know, the 1300s, or do you want something that’s a little more recent? Do you —

JUSTICE BARRETT:

Okay. But what if – what if — I’m sorry, but I just want to make sure in Justice Kagan’s example, what if the criteria, the sorting mechanism, was really defamatory or pro-ISIS? I guess I don’t see analytically why your argument wouldn’t say, as Justice Kagan said, that, yeah, 230 applies to that.

MS. BLATT:

Well, I mean, it’s similar to your — your 230 case. You can make a distinction between content choices in terms of how you would organize or deal with any kind of publication, whether it’s a book, a newspaper, a television channel, that kind of stuff, and that is inherent to all publishing. But you –

JUSTICE KAGAN:

Right. So you’re saying 230 does apply to that?

MS. BLATT:

Yes.

JUSTICE KAGAN:

230 gives protection regardless?

MS. BLATT:

Yes. I hope I didn’t say something incorrect.

JUSTICE KAGAN:

230 gives protection —

MS. BLATT:

Yes.

JUSTICE KAGAN:

— regardless, whether it’s like put the defamatory stuff up top, put the pro-ISIS stuff on top, or whether it’s, you know, what — what people might consider a more content-neutral principle.

MS. BLATT:

Correct. And let me just say you have websites that are hate speech, so they may be elevating more racist speech as opposed to some other speech that talks about the equality of the races. You might have a website devoted to, you know, an interest of a certain community, like an ethnic community. So they may be saying, you know what, we don’t want to put some other kind of content — we may want to publish it, but we may want to put it further down on our algorithm. And if you said — again, this is a content distinction. If you have a claim that —

JUSTICE KAGAN:

So I can’t imagine that — and, you know, we’re in a predicament here, right, because this is a statute that was written at a different time when the Internet was completely different, but the problem that the statute is trying to address is you’re being held responsible for what is another person’s defamatory remark. Now, in my example, you’re not being held responsible for another person’s defamatory remark. You’re being held responsible for your choice in broadcasting that defamatory remark to millions and millions of people who wouldn’t have seen it otherwise through this pro-defamatory algorithm.

MS. BLATT:

I mean–

JUSTICE KAGAN:

And the question is, you know, should 230 really be taken to go that far?

MS. BLATT:

The question is can you carve out pro-defamatory as opposed to pro anything else, pro some other type of content that someone may be suing over negligence. If I can just give you an example of a TV channel. When you broadcast an excessively violent TV channel, you’re giving it a new audience that they wouldn’t otherwise have.

It’s still inherent to publishing. And if you decide to run reruns of the most sexually explicit and violently explicit, you could say that’s a bad thing, and it may be, but on your choice — but it would be protected under 230. In terms of what was happening in 1996, I strongly disagree with the notion that algorithms weren’t present based on targeted recommendations. The Center for Democracy and Technology has this wonderful history lesson of what was happening in ‘92 through ‘94 on how targeted recommendations developed. And you had something called news groups, which were for anyone using the Internet, that was sort of what people did. They signed up for a news group, and those news groups adopted the technology that is the technology that is alleged in this case.

They looked at what the user was looking at. Say the user was looking at science news. And they thought, oh, that user is also looking at some other kind of news, maybe on psychology or something. And so they would make recommendations based on your user history and that of others. Amazon, two months into 1997, introduced its famous feature, if you buy X, you might like Y, based on that technology.

So this technology was present starting in ’92. And ’92 through ’96, the Internet was definitely different, but it was kind of a mess. You still had to organize it. So there were search engines. There were all kinds of features that were organizing content because even then it was massive. It’s just now on, like, an exponentially greater scale.
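
The targeted-recommendation technique described here (look at a user’s history, find other users with overlapping histories, and suggest what those users viewed) is what is now called user-based collaborative filtering. A minimal sketch, with entirely hypothetical data and names:

```python
# User-based collaborative filtering, the '90s-era newsgroup/Amazon-style
# technique described above. All users and items here are made up.

from collections import Counter

histories = {
    "alice": {"science", "psychology"},
    "bob":   {"science", "astronomy"},
    "carol": {"cooking", "travel"},
}

def recommend(user, histories):
    """Suggest items the user hasn't seen, weighted by how many
    similar users (those sharing at least one item) have seen them."""
    seen = histories[user]
    scores = Counter()
    for other, items in histories.items():
        if other == user or not (seen & items):
            continue  # skip self and users with no overlapping history
        for item in items - seen:
            scores[item] += 1
    return [item for item, _ in scores.most_common()]

# bob shares "science" with alice, so alice is shown bob's other interest.
print(recommend("alice", histories))  # → ['astronomy']
```

The same "users who looked at X also looked at Y" pattern scales from a handful of newsgroup subscribers to a product catalog; only the data volume changes.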

JUSTICE JACKSON:

Ms. Blatt, I guess my concern is that your theory that 230 covers the scenario that Justice Kagan pointed out seems to bear no relationship in my view to the text —

MS. BLATT:

Okay.

JUSTICE JACKSON:

— of the actual statute.

MS. BLATT:

Sure.

JUSTICE JACKSON:

I mean, the — the — when we look at 230(c), it says protection for Good Samaritan blocking and screening of offensive material, suggesting that Congress was really trying to protect those Internet platforms that were in good faith blocking and screening offensive material. Yet, if we take Justice Kagan’s example, you’re saying the protection extends to Internet platforms that are promoting offensive material. So it suggests to me that it is exactly the opposite of what Congress was trying to do in the statute.

MS. BLATT:

Well, I think promoting – I think a lot of things are offensive that other people might think are entertaining, and so —

JUSTICE JACKSON:

No, it’s not about — it’s not about whether — let’s take as a given we’re talking about offensive material because that’s all through the statute, right? You don’t — you don’t disagree that Congress was focused on offensive material, that that’s sort of the basis of the whole statutory scheme. So, if we take as a given that we’re talking about offensive material, it looks to me from the text of the statute that Congress is trying to immunize those platforms that are taking it down, that are doing things to try to clean up the Internet. And in the hypothetical that was just presented, we have a platform that is not only not taking it down in the way that the statute is focused on, it is creating a separate algorithm that pushes the offensive material to the front so that more people would see it than otherwise would have.

So how is that even conceptually consistent with what it looks as though this statute is about?

MS. BLATT:

Well, so just a couple things. And, again, I — we’re on this defamatory material. The website itself does something defamatory that’s not — it’s independent of the third-party content. It’s not protected. But that same hypothetical could be said if it was on the front — the home page as opposed to you had to do a search engine first. And I don’t see anything in the statute that protects it. In terms of what I think your deeper section is — deeper concern is, the reading of the statute, I don’t think it’s coterminous with (c)(2), which is dealing with the type of offensive material, which, by the way, doesn’t mention defamation.

In terms of (c), we talked about how they work together. We talked about how it could be easily overridden if it had just said publication. The one thing we didn’t talk about was the structure in Section (e). (e) is a laundry list of a variety of exceptions under federal law to which (c)(1) does not apply, as well as (c)(2). And those exceptions make very little sense if (c)(1) is read the way you’re reading it. It would almost never apply to (c)(2).

And let’s just take federal criminal laws. It would make very little sense because those laws — almost none of them have strict liability as an element, and vanishingly few would have publication or speaking as an element. It’s in there for no other reason than that (c)(1) would otherwise apply to the information provided by another. And in terms of just the pure text, when you keep saying it’s failure to take down, I’m hearing you say what Congress wrote was treatment as a publisher. That means dissemination. That means publishing.

JUSTICE JACKSON:

So Congress didn’t say that.

MS. BLATT:

You cannot be held liable for publishing.

JUSTICE JACKSON:

If you look at the statute, it says “protection for good Samaritan blocking and screening.” If you take into account Stratton Oakmont, if — those things I thought were like a given, what — what the people who were crafting this statute were worried about was filth on the Internet and the extent to which, because of that court case and — and perhaps others, the platforms were not being incentivized to take it down, because if they were trying to take it down like Prodigy, they were going to be slammed because they were going to be treated as a publisher.

And so the statute is like we want you to take these things down, and so here’s what we’re going to do. We’re going to say that just because they’re on your — your — your website, it doesn’t mean you’re going to be held automatically liable for it. And that’s (c)(1). And to the extent you’re in (c)(2), you’re trying to take it down but you don’t get them all, we’re not going to hold you liable for it.

That seems to me to be a very narrow scope of immunity that doesn’t cover whether or not you’re making recommendations or promoting or doing anything else.

MS. BLATT:

Well, I mean, that is — what I understand the government and the Petitioner to be saying is that disseminating, even disseminating of ISIS videos, is protected. The only thing that’s not protected is whether you can tease out something about the organization and call it a recommendation when there is no express speech recommending it. It’s just the placement, where in the order the content appears.

And that same complaint could be made about search engines. So I think, under your view, search engines would not be covered because they are taking user information, targeting recommendations in the sense of they’re saying we think you would be interested in the first content as opposed to the content on, you know, 1,692,000 sections. I mean, they have millions and millions of hits for any search result. And if you think those are recommendations and the other side gives you no basis for distinguishing between search engines, then the statute is just very different than I think the one that Congress was talking about, because, again, if you’re going to look at findings and history and policy, this is about diversity of viewpoints, jump-starting an industry, having information flourishing on the Internet, and free speech.

JUSTICE BARRETT:

Ms. Blatt, what about Justice Sotomayor’s dating hypothetical? The discrimination, like, oh, we’re only going to — we’re not going to match Black people and white people, et cetera, what about that? Is that given 230’s shield?

MS. BLATT:

Absolutely not, because any disparate treatment claim or race discrimination is saying you’re treating people different regardless of the content. So if I’m — I’m going to use it like with an advertising, like I don’t know, whether I’m a woman of– I mean, that was a bad example — a woman of 30 or whatever, and whether I live somewhere, it really doesn’t matter in terms of the law that’s prohibiting discrimination. The law is indifferent to what the content is. It’s just very unhappy about any kind of status-based distinction. So we think — and the — the harm that would flow is not the third-party information. It’s the website’s conduct, whether you want to call it speech or conduct, that’s based on status.

JUSTICE BARRETT:

But what about the dating profile? I mean, isn’t that part of the content? Isn’t that part of the third-party information?

MS. BLATT:

Sure. And it’s just – you could put it a bunch of different ways. You could say, even before the profiles go up there’s a complete harm, or even if the profiles go up, it doesn’t matter. We would distinguish between the way dating sites work, which don’t work based on status but based on criteria that’s uploaded, and those are, you know, you’re matching with somebody else. The website is not saying you should only date a white person.

JUSTICE BARRETT:

Okay. Then what about news? What about an algorithm that says, you know, you are a white person, you’re only going to be interested in news about white people. And it will screen out anything that is a story featuring racial justice issues.

MS. BLATT:

Yeah, again, anything based on status, because the harm is complete independent of the information. But if a website wants to say we’re going to celebrate Black History Month, no, a white person or Black person is not going to be able to complain and say, well, I didn’t get enough white history month on your website. Those are claims that are core within treating them as publishing of the information —

JUSTICE BARRETT:

Yeah, but I guess I’m — don’t you think you’re just fighting on liability?

MS. BLATT:

No.

JUSTICE BARRETT:

I mean, it seems to me that you’re kind of going back to liability, because all of those are choices that are made independently, right? I mean, we’ve been talking about the distinction between — or — or the lack of distinction, in your view, between the content itself and the website’s choice of how to publish it. I guess I don’t see why —

MS. BLATT:

So here’s —

JUSTICE BARRETT:

— for 230 purposes.

MS. BLATT:

Here’s our test, and it’s the test the Fourth Circuit recently took in Henderson, and it’s the test the Ninth Circuit took. Let me give you an example that I think may help, with the ad revenue sharing. So this was an allegation that YouTube was giving money to ISIS. Now, this was in connection with third-party videos, third-party information. But the court said no, that is not within Section 230 because that’s independent of the information; that’s giving money to ISIS.

That kind of, whatever you think about its validity under the statute, you’re not treating them as a publisher; you’re treating them as a financer. And it’s just — and that’s the test of the Fourth Circuit too. The Fourth Circuit is looking — in that case, it was about — you know, all kinds of things were happening with third-party information, and they were trying to tease out is it the credit report, did they contribute to the credit report, was it based on the website’s failure to — to notify the employee? And what the Fourth Circuit said is the exact same thing we said, and it’s the exact same thing the plaintiff has said on four pages of its brief, four times in its brief: that you’re looking for the harm. What is the harm caused? And this case is the perfect example.

The plaintiffs suffered a terrible fate, and their argument is it’s because people were radicalized by ISIS. And if you start with the concession that the dissemination of those ISIS videos — and a claim based on that — is barred, the question is, what additional harm comes from the way it was organized?

The government just says I don’t know, let some state figure it out. That’s not very helpful to Internet companies that have to work on a national level and are posting and sorting and organizing billions upon billions of pieces of information.

JUSTICE BARRETT:

Just to clarify, this is my last point, you’re happy with the Henderson test, the Fourth Circuit test?

MS. BLATT:

Yes. I would say Henderson is like 96 percent correct. I got a little lost when they were going down the common law on publication, but the result was great. I just thought they got a little weird on the publication. But yeah, no, their test is correct, and it’s also the Ninth Circuit’s test on the ISIS revenue. It’s the exact same test we quote in our brief, and it’s the exact same test Petitioner did. And what the harm test is doing, if I could just explain it because it’s kind of shorthand, but if you take the — which I’m not sure Justice Jackson agrees with, but if you take the underlying notion that this bars treatment as a publisher, and you’re saying, well, can they get around it by the way they’re pleading it, you’re just looking to the harm, so you are saying you can’t really say that’s negligence or intentional infliction because the harm is coming from the publishing of the defamatory content.

And so what I think all these cases where the courts are correctly saying 230 does not apply to the claim, is they’re isolating the harm and saying that’s independent of the third-party information. It’s either based on the website’s own speech or the website’s own conduct that’s independent of the harm flowing from the third-party information.

JUSTICE ALITO:

If YouTube labeled certain videos as the product of what it labels as responsible news providers, that would be — that would be Google’s own content, right?

MS. BLATT:

Yes. Yes.

JUSTICE ALITO:

And —

MS. BLATT:

Yes. Can I say one thing just because —

JUSTICE ALITO:

Yeah. Sure.

MS. BLATT:

— I forgot to mention thumbnails. I’m sorry. Thumbnails aren’t mentioned in the complaint. So I was literally trying to figure out what he was talking about when I was up there because it’s just not something in the complaint. But that is a screenshot of the information being provided by another. It’s the embedded third-party speech. Okay. Sorry. Keep going.

JUSTICE ALITO:

All right. So if — but then if I do a search for today’s news in YouTube — and in fact, I did that yesterday — and all the top hits were very well-known news sources. Those are not recommendations. That’s not YouTube’s speech? The fact that YouTube put those at the top, so those are the ones I’m most likely to look at, that’s not YouTube’s speech?

MS. BLATT:

Right. But, I mean, all search engines work the same way. If you type in whatever you type in, there is an algorithm that’s deciding what content to display. It has to be displayed somehow. And what I think is going on, on YouTube, or it’s certainly going on on Google search, is they’re not going to — they’re looking at what other users looked at, how popular was it, that kind of thing. You know, is it — is that news source, you know, from Russia? Probably not going to get on the top list.

So, yeah, they’re having to make choices because there could be over a billion hits from your search, and there are a billion hours of videos watched each day on YouTube and 500 hours uploaded every minute. So it’s a lot of content on YouTube. So some of it’s based on channels. And some of it’s based on searches. But they have to organize it somehow. But that is what’s going on, I think, on your top searches — in most search engines too, and you can look at the Microsoft brief — they’re basing it on time spent on those news sites, how many users are looking at them, how relevant it is. If you’re typing in the Turkey earthquake, they might be elevating some stuff featuring that because it, you know, seems more relevant. If there’s a recent election, they might feature that. So all these kinds of decisions are being made by websites every day.
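
The ranking signals listed here (popularity, time spent, and relevance to the query) can be folded into a single score that determines display order. A toy sketch; the weights, field names, and data are all invented for illustration:

```python
# A toy version of search-result ranking: combine query relevance,
# popularity, and time-on-page into one score. Hypothetical data.

results = [
    {"title": "Turkey earthquake update", "views": 900_000, "avg_minutes": 4.0},
    {"title": "Celebrity gossip roundup", "views": 2_000_000, "avg_minutes": 1.0},
    {"title": "Turkey earthquake relief efforts", "views": 300_000, "avg_minutes": 6.0},
]

def score(result, query):
    # Relevance: how many query words appear in the title.
    relevance = sum(word in result["title"].lower() for word in query.lower().split())
    # Bring each signal to roughly the same scale before weighting.
    return 3.0 * relevance + result["views"] / 1_000_000 + result["avg_minutes"] / 2

def rank(results, query):
    return sorted(results, key=lambda r: score(r, query), reverse=True)

# The highly relevant, high-engagement pages outrank the merely popular one.
for r in rank(results, "turkey earthquake"):
    print(r["title"])
```

Real engines use far more signals and learned weights, but the shape is the same: every ordering is the output of some scoring choice, which is the point being argued about here.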

JUSTICE ALITO:

Would — would the – would Google collapse and the Internet be destroyed if YouTube and, therefore, Google were potentially liable for posting and refusing to take down videos that it knows are defamatory and false?

MS. BLATT:

Well, I don’t think Google would. I think probably every other website might be, because they’re not as big as Google. But here’s what happens. I mean, you do have that situation in Europe, but there — there’s not class actions. There’s not plaintiffs’ lawyers. There’s not the tort system. So what you would have is a deluge of people saying, you know, my — that restaurant review was — you know, you say my restaurant review, I didn’t like it. I think Yelp! does an amazing job on this, about how much they got hit and had to spend, you know, almost crushing litigation because they were being accused of being, you know, biased on reviewers.

And everyone — no matter what — they couldn’t win for losing or lose for winning, whatever the phrase is, because whoever they — whoever got reviewed, somebody was upset. And so I think those websites, they never would have happened. And they probably would collapse.

CHIEF JUSTICE ROBERTS:

Thank you, counsel. Justice Thomas, anything further? Justice Alito? Justice Sotomayor? Justice Kagan? Justice Gorsuch?

JUSTICE GORSUCH:

Ms. Blatt, I — I kind of want to return to some of the questions I asked earlier. It seems to me inherent in (c)(1) is a distinction between those who are simply interactive computer services and those who are information content providers. And so, when we flip over to (f), the distinction I — I — I glean from that is that if you’re picking, choosing, analyzing, or digesting content, which is the bulk of what you — how you describe Google’s activities in — in the search engine context, are — are protected and that content must be something more than that, providing content must be something more than that. Is — is that right in your view?

MS. BLATT:

I — I thought you were absolutely correct. And I think some of the amici’s briefs do this. In terms of if you’re looking at what is information being created or developed, there is that distinction. It can’t be that you — by sorting, you created or partially developed the information. So I think you had it exactly right. I got a little upset when you talked about a remand that somehow the Ninth Circuit got it wrong.

JUSTICE GORSUCH:

Well, let’s — let’s go there next then, because it seems to me that even under that understanding of the statute, there is some residual content for which an interactive computer service can be liable. You’d agree with that, that that’s possible?

MS. BLATT:

Not on this complaint because —

JUSTICE GORSUCH:

No, no, no, of course, not on this complaint, but in the abstract, it — it’s possible?

MS. BLATT:

Absolutely correct.

JUSTICE GORSUCH:

Okay. And then, when — when it comes to what the Ninth Circuit did, it applied this neutral tools test, and I guess my problem with that is that language isn’t anywhere in the statute, number one.

Number two, you can use algorithms as well as persons to generate content, so just because it’s an algorithm doesn’t mean it doesn’t — can’t generate content, it seems to me.

And third, that I’m not even sure any algorithm really is neutral. I’m not even sure what that test means because most algorithms are designed these days to maximize profits.

There are other examples — Justice Kagan offered some, the Solicitor General offered some — where an algorithm might be – contain a — a point of view and even a discriminatory one. So I — I guess I’m not sure I understand why the Ninth Circuit’s test was the appropriate one and why a remand wouldn’t be appropriate to have it apply the test that we just discussed.

MS. BLATT:

Because it’s not — I don’t think that was the Ninth Circuit’s test. It was one sentence that — maybe I think it mentioned it twice — that’s basically, you know, almost making fun of the complaint. The complaint doesn’t —

JUSTICE GORSUCH:

Oh, oh, okay. Okay. So we’re just disagreeing over how we read the Ninth Circuit’s opinion, but if I read it that way, then would a remand be appropriate?

MS. BLATT:

Well, I’m — I’m going to say no because I don’t understand how — how somehow that they have a bad complaint means the Ninth Circuit’s worse off when the Ninth Circuit said over and over and over you haven’t — this is just the way you’re organizing it. And the complaint never alleges there was something independently wrongful about the content. It never says these were colloquial recommendations. It just says because you previously liked this content. And one other thing. The complaint never even alleges that YouTube ever recommended to any — in terms of even displaying an ISIS video, to anybody who wasn’t looking for it. I don’t even know how you could get ISIS on your YouTube system unless you were searching for it. And the one —

JUSTICE GORSUCH:

I certainly understand your — your — your complaints about the complaint. But, if I — if — if you – you — you don’t think neutral tools — you’re not defending the neutral tools principle either as I understand it.

MS. BLATT:

I’m defending it with respect to Justice Kagan’s question, absolutely, because she’s concerned about biased algorithms, and she doesn’t have to worry about that in this case because they have neutral algorithms. They don’t allege. And what they mean by neutral algorithms is neutral with respect to content. So there’s no —

JUSTICE GORSUCH:

Thank you.

MS. BLATT:

Okay.

JUSTICE GORSUCH:

Thank you.

MS. BLATT:

Thank you.

CHIEF JUSTICE ROBERTS:

Justice Kavanaugh? No? Justice Barrett? Justice Jackson?

JUSTICE JACKSON:

So I understood you to say that 230 immunizes platforms for treatment as a publisher, which you take to mean if they are acting as a publisher in the sense that they are organizing and editing and — not editing, but organizing and — content.

MS. BLATT:

Communicating, broadcasting, which includes how it’s displayed.

JUSTICE JACKSON:

And — and would that include — I — I just want to go back to Justice Alito’s point. Would that include the home page of the YouTube website that has a featured video box and the featured video is the ISIS video?

MS. BLATT:

Right.

JUSTICE JACKSON:

That is — is covered?

MS. BLATT:

Well, maybe not because that gets into my continuum question. If you think that featured is some sort of endorsement such that the claim is actually treating the website as — and that the harm is flowing from that — the word feature, then that’s out of — 230. I think you would —

JUSTICE JACKSON:

No, I’m sorry, why? Why — why is that out of 230?

MS. BLATT:

So the whole point about what we’re saying is making sure that if you start with the assumption that the dissemination of YouTube — I’m sorry — of ISIS videos, you can’t hold YouTube liable for that, then the only question that we’re concerned about and which is so destabilizing is if you can just plead around it by pointing to anything inherent in the publication.

And the government never said what websites are supposed to do.

JUSTICE JACKSON:

No, this is not inherent in the publication.

MS. BLATT:

Exactly, it’s featured.

JUSTICE JACKSON:

So — so — so this is helpful, I mean, if —

MS. BLATT:

Yes.

JUSTICE JACKSON:

— we — we have a — a home page on YouTube and it has featured as the little title and a box, and let’s say the algorithm randomly selects videos from their content and puts them up for a week at a time, and the random video that it selected is the YouTube — is the ISIS video, and it runs when you open up YouTube for a week.

MS. BLATT:

Right.

JUSTICE JACKSON:

Covered or not covered?

MS. BLATT:

Well, it depends on whether you think it’s an endorsement of — I mean, if it said this is the Library of Congress and we feature this because we want to show you how bad ISIS is, you know, I don’t know.

The reason why I care so much about this is because, like I said, Google and YouTube don’t do this, but all the other amicus briefs are talking about they do things like that and they might have a little emoji.

JUSTICE JACKSON:

No, I guess I’m just trying — I don’t understand. I just want to know whether the — put — putting on the home page of YouTube, the decision to have an algorithm that puts on its home page various videos, third-party content, and it turns out that one of those videos is an ISIS video and the person is radicalized and they harm the Petitioner’s family.

MS. BLATT:

Yes. So that is inherent to publishing the home page. The word “feature,” actually using the express statement of feature, it — first of all, is not — the website didn’t have to do it. The owner —

JUSTICE JACKSON:

So I’m sorry, inherent to publishing, it’s covered?

MS. BLATT:

The home page.

JUSTICE JACKSON:

It’s covered?

MS. BLATT:

Absolutely, because no website — how are you supposed to — how are you supposed to operate a website unless you put a home page on, and so they have to do something. And if you could always say, well, the home page — you know, unless you’re just going to do it alphabetically or reverse chronological order, a website is always going to be sued for negligence.

JUSTICE JACKSON:

All right. So, if I — if I disagree with you, and I’m — and I’m — about the meaning of the statute, all right, focusing in on the meaning of the statute, you say if you’re making editorial judgments about how to organize things, then you’re a publisher and you’re covered. If I think that the statute really only provides immunity if the claim is that the platform has this ISIS video there and it can be accessed and it hasn’t taken it down, do you have an argument that the recommendations that they’re talking about is — is tantamount to the same thing?

MS. BLATT:

Yes, because the only basis for saying recommendations are not covered — that I saw — is the government saying it conveys a distinct implicit message that you might be interested. That is a distinct implicit message that can only — it happens every time you publish. If you publish one thing on the Internet, it conveys a distinct message of dear reader, we sat around and thought you might be interested —

JUSTICE JACKSON:

And you’re saying —

MS. BLATT:

Or we want to make money —

JUSTICE JACKSON:

You’re saying that — that there’s no — that organizational choices that put that content on the front page, on the first thing, when you open it up without typing in anything, cannot be isolated and that it’s the same thing as it appears on the Internet anywhere such that 230 applies?

MS. BLATT:

Yes, yes, and I’ll use the government’s own words. They said if you hold them liable for topic headings, you render the statute a dead letter because you have to organize the content. So if you think the topic headings are conveying some implicit message you can target out, the government said then the web can’t function. And I think we care about it because we’re big websites that have lots of information. Other websites, as all the amici briefs are saying, their whole business is organizing content to make it useful. If you need a job, you’re going to organize it by location —

JUSTICE JACKSON:

Are you aware of any defamation claim in any state or jurisdiction in which you would be held liable, you would — you would actually be liable for organizational choices like this?

MS. BLATT:

No, I’m not worried about the defamation claim. I’m worried about a products liability claim or what the government kept saying, your design choices. Those could just be a product liability claim or a negligence claim. You negligently went alphabetical or you negligently featured whatever you featured that made my, you know, kid addicted to whatever it was. And those kinds of claims happen because they’re publishing. And the whole point of getting this statute was to protect against publishing. So whatever is publishing, inherent to publishing, yeah, has to be covered.

JUSTICE JACKSON:

Thank you.

CHIEF JUSTICE ROBERTS:

Thank you, counsel. Rebuttal, Mr. Schnapper?

REBUTTAL ARGUMENT OF ERIC SCHNAPPER ON BEHALF OF THE PETITIONERS

MR. SCHNAPPER:

Thank you, Mr. Chief Justice, and may it please the Court: If I might start with my colleague’s reference to things inherent in publishing, I would just offer a cautionary note, and a review of the transcript will support this. That — that has been given an extraordinarily expansive account here.

So topic headings were characterized as inherent in publishing. You know, a topic heading could be “how Bob steals things all the time.” That’s not — shouldn’t be protected. She mentioned “trending now” as inherent in publishing, but that’s like “featured today.” You could — you could have a site that didn’t use the words “trending now.” Auto-play certainly isn’t inherent in publication. And she mentioned home pages, and you have to have a home page, and that’s fair, but you don’t have to have on the home page selected things that you’re drawing people’s attention to. The home page that I have on my desktop for Google is a box and those charming little cartoons, and there isn’t anything featured there.

One could have a — a website home page for YouTube that wasn’t promoting particular things. That’s just how they’ve chosen to do it. With regard to neutral tools, and this goes back to the point a number of you made about race, a neutral algorithm can end up creating very non-neutral rules. It’s not hard to imagine that an algorithm might conclude that most people who — who went to Spelman and Morehouse now live in Prince George’s County and, therefore, in showing videos to people who asked for videos about places to live near Washington, if they’re Black, they’ll be shown Prince George’s County; if they’re — if they’re white, they’ll be shown Montgomery County. The algorithms can create those kinds of rules. Whether — characterizing that as neutral loses its force once the defendant knows it’s happening. You know, to some extent, algorithms and computer functions can run amok, but you can’t call it neutral once the defendant knows that its algorithm is doing that. And this runs a little bit into the issue that we’ll be talking about tomorrow.

Two short points and then one closing item. With regard to Rule — Section (f)(4), I said this before, I just want to reiterate it, Section (f)(4) does not apply to systems or to information services. It only applies to software providers. The language of the statute is very specific. And with the question about the possible implications of the decision in — in Taamneh, it — it is fair — it is normal practice in the district court when there’s a motion to dismiss, to permit the plaintiff to amend, to deal with the relevant standard, and that’s exactly what we ought to be afforded an opportunity to do. Thank you very much.

CHIEF JUSTICE ROBERTS:

Thank you, counsel. The case is submitted.
