Red Tape is the newest R Street podcast about the country’s biggest problems and the surprising ways that governments (and regular people) often get in the way of solving them. It was produced in partnership with Pod People. Listen wherever you find podcasts, including Apple Podcasts and Spotify, and learn more about the podcast here.

Episode description:

The internet as we know it exists today thanks to an obscure law called Section 230. On this week’s episode, Kelli interviews Red Tape’s very own Shoshana Weissmann about Section 230’s crucial role in our digital liberty. Shoshana discusses how this legislation empowers individuals to voice their opinions freely online without fear of litigation, a pivotal element in today’s social media-driven world.

Kelli also speaks with Josh Withrow, R Street’s Technology and Innovation Fellow, about how to keep kids safe on the internet. Josh talks about how parents can create a safe and responsible online environment for their kids, and brings to light the potential pitfalls of laws designed to protect children that might inadvertently lead to greater harm.

Transcript:

Kelli Pierce:

Hey, Shosh.

Shoshana Weissmann:

Hey Kelli.

Kelli Pierce:

Are you pumped for today? It’s the episode you’ve been waiting for.

Shoshana Weissmann:

Oh my gosh. Is this the episode where I finally get to talk about my love for Section 230 of Title 47 of the U.S. Code, which was enacted as part of the Communications Decency Act of 1996, which is Title V of the Telecommunications Act of 1996?

Kelli Pierce:

Yeah. Also love the enthusiasm. You must have that title memorized, right?

Shoshana Weissmann:

Perhaps. I do love Section 230. It’s one of my favorite laws in general, and especially about the internet, and I don’t like laws a lot. I think a lot of them are badly crafted, but this one, ah.

Kelli Pierce:

I know you’re obsessed with it and I’m really glad you feel that way because I have a surprise for you.

Shoshana Weissmann:

Oh my gosh.

Kelli Pierce:

Since today’s episode’s all about the internet and we want to talk about Section 230, I couldn’t think of a better guest to have on than you, my co-host, Shoshana Weissmann.

Shoshana Weissmann:

Kelli, I’m speechless. This is a true honor. I had no idea. There’s so many people I’d like to thank. First and foremost, myself, the best person in the world. My parents, for getting dial-up in our house when I was young, so I could use the internet too much. Everyone at R Street for supporting my dreams of keeping the internet free and full of sloths. And really all you listeners out there who have come back week after week to get us to this point.

Kelli Pierce:

That was a great Oscar speech. I didn’t even have to play you off. So it’s going to be fun for sure. Shall we play the opening theme music, introduce ourselves, get right to it?

Shoshana Weissmann:

I think the music’s already playing.

Kelli Pierce:

I am Kelli Pierce, an award-winning journalist and digital media associate at R Street.

Shoshana Weissmann:

I’m Shoshana Weissmann, director of Digital Media and the monster in Kelli’s basement.

Kelli Pierce:

And this is Red Tape. All right, Shosh. You’re literally in my basement right now. We don’t actually record in the same room. We record in two different places and right now that means you in my basement.

Shoshana Weissmann:

It’s great. Like, I flew from DC to here to hike, and you are kind enough, in exchange for 12 bags of Reese’s, to let me stay in your basement.

Kelli Pierce:

That’s an absolutely true story. I have no problem with you being in my house using it as basecamp for all your hiking adventures. And also, yes, she did buy me off with a bunch of Reese’s. I’m a pretty cheap date. Before we get into your interview about Section 230, for those who don’t know, what’s Section 230 and why should I care?

Shoshana Weissmann:

Oh man, this is the law that basically makes it so other people aren’t responsible for your legal speech. So if you post something on Twitter, you’re responsible. Twitter isn’t responsible. If I go comment something horrible in the comments on your mom’s blog, she’s not liable, which is good. I’m liable. And also it stops these companies from being sued over this if it can’t be their speech to begin with.

Kelli Pierce:

So it’s a law that says that what I put on the internet is my speech, and therefore third-party platforms hosting that speech, like my mom’s theoretical blog, aren’t liable for what I say.

Shoshana Weissmann:

Yeah, exactly. The basic way I think about it is that it’s a law clarifying personal responsibility.

Kelli Pierce:

Yeah, that makes a lot of sense. But before we get into that, I want to let our listeners know that after you and I talk about Section 230, I’ll also be speaking with Josh Withrow, R Street’s Resident Fellow for Technology and Innovation, about child privacy laws on the internet, and how so many of them, while well-intentioned, can lead to really bad policy that does more harm than good for everyone. But first, the woman who made all the cool kids talk about Section 230: Shosh!

Shoshana Weissmann:

Oh my gosh, I’m ready.

Kelli Pierce:

Great. When you talk about something like limited liability for social media companies, folks, usually conservative but some on the left too, get legitimately angry that their speech is being censored through content moderation decisions. Why would it be worse for them if Section 230 goes away?

Shoshana Weissmann:

So this is something that they don’t tend to think about a lot. Like, Ben Shapiro exists because of social media. There’s so many social media stars that wouldn’t have existed without it. And it puts more power back in the hands of the people. Before social media, it was like you had Fox News, you had CNN, you had MSNBC, you had the New York Times. But if you weren’t a columnist, if you didn’t have your own show, you didn’t have this network like you do today. I only exist in the form I do because of social media. People follow me for sloths and regulatory reform. Pre-Twitter and pre-social media, I couldn’t have gotten much of a following. People might think it was funny and I might’ve had some followers, but it’s fascinating that it works this way. And the big reason is just because it takes power back from the hands of the relatively few who had it before.

It doesn’t mean that social media companies won’t make mistakes or that they won’t be biased. But there’s a constitutional First Amendment right to bias; that has nothing to do with Section 230. You take it away, they’re still allowed to be biased. That’s not what that is. People often get mad at 230 when they’re really mad at the First Amendment, which is really disappointing because we shouldn’t be hating on free speech. You want that ability for people on the left and right to be able to have platforms, for people who really want just free, open speech to have platforms. You want all of the above and let what succeeds succeed. But I think it’s just so naive to think that we’d have a better ecosystem for conservatives without social media. So many of the contacts I have on all sides of the aisle, but especially on the right, exist only because of it. And when companies fail and annoy us, we’re going to leave them and we’ll find better companies. That’s how markets work.

Kelli Pierce:

I do think that oftentimes, as you’re alluding to, people see social media as the new town square and think a company should be held accountable if someone’s unfairly shadow-banned or deplatformed or their reach is restricted. So for them, this is a First Amendment violation if that happens. What do you say in defense of Section 230 in that case?

Shoshana Weissmann:

The First Amendment doesn’t apply to platforms. The First Amendment applies to the government. If the government were running a platform and restricted their speech, yeah, you can sue the government. But someone restricting your speech online, on their platform, that’s their right to do. And Section 230 has nothing to do with that. Before Section 230 was enacted, moderation in any form, whether it’s banning, restricting, promoting, all of that was fully legal and constitutionally protected speech on their platforms, even when the content wasn’t theirs. So the reason it actually got enacted, and this is one of my favorite stories in policy, is that the Wolf of Wall Street sued Prodigy, a very old platform, in the nineties. Some guy just commented on Prodigy’s platform and said, “Hey, Stratton Oakmont, they’re scammers, they’re frauds, all this stuff.” And even though he was right, it’s libel to say that if you don’t have proof.

So Stratton Oakmont took him to court and took Prodigy to court. And the judge was like, “Yeah, this is libel. And because you, Prodigy, say that you moderate your platform and try to keep it family friendly, well, this one piece of content on your entire platform isn’t family friendly, so you’re also liable.” And what’s key there is the judge was saying, “Oh, you try to keep your users safe, and if you fail on one piece of illegal content, then you’re out of here. You’re going to be liable for it. But don’t try to protect your users and don’t try to take illegal stuff off your platform, and you’re fine.” But it’s so funny to think about: one, the guy was actually right, and he was a whistleblower. So under the previous law it was just easier to suppress whistleblowers, because I’m not sure that Stratton Oakmont would’ve sued if they could only go after that one guy.

But if they can go after the platform, that’s where the money is, and that’s where the “Hey, if you dare allow other speech on here, we’re going to sue you” comes in. And that threat’s real. Section 230 is why we can have negative reviews. Otherwise, businesses and doctors would all sue to take down negative reviews, even if it went nowhere in court. So back to here, you still have all these different platforms you can engage on, from Facebook to Instagram to Twitter, and Bluesky is becoming a thing. But it’s so wild to me that any conservative would think that the previous way things worked was better, when you had very limited outlets for conservatives. There’s still tons of conservatives on Twitter, and there were under prior leadership too.

Kelli Pierce:

Absolutely, and I will say it’s not just conservatives though. There are people that are clearly on the left side of the fence, socialists and progressives, who have the same argument that they’ve been shadow-banned or deplatformed because they’ve criticized those in power. But there’s something that’s called the moderator’s dilemma that you’ve written about. What is it, and how do government rules around this stuff make it harder for new companies, maybe with better moderation policies or practices, to enter the market?

Shoshana Weissmann:

So basically, before Section 230, the state of affairs was the moderator’s dilemma. And it meant that either platforms touched everything, so that they wouldn’t be liable for anything at all on there, or they moderated absolutely nothing, so courts wouldn’t assume they were liable for anything. A lot of people think moderation is all judgment-based calls: “Oh, do I like this? Do I not like this?” But there’s racism. There’s hate speech, and hate speech is legal speech, totally legal speech, but moderation discretion allows them to take that off. Threats, violence, child exploitation, terrorism. Terrorist speech is fully legal content. You can say pro-terrorist things and you’re protected by the First Amendment, but they want to take that stuff down. And even if they tried to take down only illegal speech, there’s often no way to know. Like in the Prodigy case, calling the Wolf of Wall Street a scammer and a fraud was ruled by a court to be libel even though it was true.

There’s all these different kinds of scenarios where you basically want them to have that discretion to create good platforms. Otherwise you’d become 4chan, and even 4chan moderates, and it’s still just this awful, awful place. Under the moderator’s dilemma, you either have an incentive to do nothing and that way avoid liability, or, the other side of that, you have to moderate everything so that it’s just rainbows and sunshine. If anyone says anything mean, that could be libelous and you just don’t know, so you want to take that down. And that’s just not a healthy place for the internet. It doesn’t mean what we have now is perfect, and it doesn’t mean it can’t be better, but Section 230 isn’t the problem. In most cases, the issues people have are sadly just with the First Amendment.

Kelli Pierce:

Also though, you hit on this sort of public safety component that people are worried about when it comes to the interwebs. You have child exploitation, like you said, terrorism. Those are really top of mind. And we’ve got to point out that child exploitation and terrorism are already illegal. It’s just about how we fight those in the new era, and that’s going to mean coming up with policies that really have nothing to do with Section 230. I do think that Section 230 can also help us fight crime, as you have explained with the Wolf of Wall Street.

Shoshana Weissmann:

Oh yeah, it allows whistleblowers to flourish. Me Too could not have happened without Section 230. Me Too stuff is effectively libelous until it’s proven in court. People can go and say, “Oh, well, you don’t have actual proof of that, so you’re just lying.” But if Weinstein and others had been able to sue the platforms, they absolutely would have, to shut them up and shut up their accusers. And even when it comes to government, you want to make sure that government dissent is allowed, that you’re able to have that kind of speech. But I think one key thing is, with whistleblowers, a lot of the speech is effectively libelous until it’s proven, and it flourishes with 230 since you can’t sue the platforms. And also giving platforms the flexibility to go after the bad actors, and even maybe help monitor it and send it to the government, can be very useful. There needs to always be more coordination, but it can also run up against bigger problems. Sometimes government wants a little bit too much information. Maybe they want the exact location of someone for whom they don’t have a warrant.

And then the government will yell at them saying, “Oh, they’re not giving us this information.” And it’s like, “Okay, well due process is still a thing.” There’s a lot of complexity in it where you want them to go after those exploiting children when you have evidence that, “Hey, this is where this is happening.” But you also don’t want them to have the ability to go after people for just any reason, that it should be when there’s real concern and real threat, and government will abuse its power. It’s a hell of a position to be put in. And the former Twitter administration did a really good job with pushing back. And different companies have different standards, but there’s this push and pull between helping law enforcement with legitimate threats and not letting them go too far.

Kelli Pierce:

We’ve talked a lot about some of the issues that people have with Section 230. What are some of the good things about it? Specifically, how does it protect free speech online?

Shoshana Weissmann:

One is it allows moderation, and it allows the internet we know today. Wikipedia, AllTrails, Zocdoc, everything everyone’s thinking of: Google, Twitter, Facebook, whatever. But for me, it’s the reason I’m able to find good doctor reviews. I mean, I have seven-plus autoimmune diseases. But when I started, it took me eight gastroenterologists to find one who believed me and who didn’t call me crazy.

Kelli Pierce:

Wow.

Shoshana Weissmann:

And crazy is their word, not mine. But these days I’ll go to Zocdoc and I’ll look at reviews and I’ll figure out, “Okay, what do they say about this doctor? Does this doctor listen? Is she attentive? Is he going to be dismissive?” And I’ve been able to find some really, really great doctors. People have recommended supplements to me and I’ll check the interactions and go for those. But without Section 230, if I give someone medical advice or if they give me medical advice, a lot of it would constitute giving medical advice without a license, which runs up against medical licensing laws. And people could go after me if they wanted in some of these cases, I guess. But it stops them from going after the platforms. I actually found out I had fibromyalgia when I googled something like endometriosis and getting sick all the time. And then I saw this forum where everyone was giving each other medical advice that would definitely be unlicensed practice of medicine.

But that’s how I found out I had fibromyalgia, because I’d never been able to figure out why I was getting sick all the time. And someone said, “Yeah, these diseases often go together, so go to a rheumatologist.” I looked up reviews, I found one. He was really helpful. He’s like, “Yeah, you super have fibromyalgia.” But without Section 230, I don’t know that any of this would’ve happened. I would’ve had to go through so many more doctors. I don’t know that I would’ve bothered if it wasn’t working. Not to mention, I can use AllTrails to look for comments and see if there have been bears in the area, if there’s known to be grizzlies, stuff like that, so I don’t die alone in the woods. There’s endless potential. It’s not to say there aren’t problems online. I just think that most of the time when people are mad at Section 230, they’re actually just mad at the First Amendment, which is a larger problem.

Kelli Pierce:

Why is Section 230 so vital?

Shoshana Weissmann:

Oh man. Section 230 is the reason we have the internet we do today. And a lot of people think you can just edit it, like, “Oh, we’ll just exclude this or exclude this.” But that’s death by a thousand cuts. And I tend to liken it more to an avocado than a bowl of M&M’s. People think, “Oh, take this exception out. Take this exception out. You still have this whole bowl of M&M’s.” But it ends up being a lot more like an avocado. You cut it open, you don’t have very long, and it all starts to rot. Because let’s say you have an exemption in Section 230 where you can sue platforms if they moderate political content. Okay, well, every racist is going to say that his content is actually political, and then that goes to court. I mean, without this law, we wouldn’t have AllTrails, we wouldn’t have Wikipedia, we wouldn’t have any social media in the forms we do. We wouldn’t have the depth and breadth of online expression we do.

And the ability of someone to just pop up a new platform whenever they wanted, which I think is an incredible thing. So I think it’s really important that people take Section 230 seriously and don’t hand-wave it away. It’s also cool that it was a bipartisan victory, that Senator Ron Wyden (then Representative Wyden, I forget that he was a representative back then) and Representative Chris Cox created this. And I think they had a lot of wisdom in mind with it. And I think we need to go by that principle in the future, making sure that liability is assigned to the person who is doing the bad thing. When people go after Section 230 for allowing bias or for allowing content they don’t like, it’s really scary sometimes to me to see how often people are just angry at the First Amendment. Section 230 is a really good law, and I don’t think laws are often crafted very well, but it’s the law that really let the internet become what it is today.

[TRANSITION MUSIC]

Kelli Pierce:

Thanks for breaking it all down for us here, Shosh.

Shoshana Weissmann:

Anytime, Kelli, am I free to go now?

Kelli Pierce:

No, I still need you to help me co-host the rest of the episode.

Shoshana Weissmann:

Oh, yeah. I got so into talking about Section 230 that I forgot we have a whole second half of the show, where you’re speaking with someone who’s not me, Josh Withrow, about the internet and child privacy laws.

Kelli Pierce:

I know you talk about that too, but it’s time for someone else to do the heavy lifting.

Shoshana Weissmann:

Josh can get swole with the heavy lifting on this issue. Let’s take a break. Red Tape from R Street will be right back.

Kelli Pierce:

Welcome back.

Shoshana Weissmann:

Okay, Kelli, I think the connection is pretty clear, but why did you choose Josh Withrow and Child Privacy Laws on the internet as your second guest today?

Kelli Pierce:

I have a kiddo. I’ve talked about that on the show before. But it seemed like a lot of laws were being passed around the country, red states and blue, that were aimed at the internet. And protecting children was always the excuse. But when you looked at what these bills actually do, as a parent I got scared.

Shoshana Weissmann:

Yeah, it’s really frustrating because I know that the intent is good in most cases. Lawmakers are really trying to protect kids, and we’re right now in Spencer Cox’s state of Utah, and I love Spencer Cox, but I don’t love his idea for how to protect kids online.

Kelli Pierce:

And this is an important conversation to listen to, even if you never ever want kids. Because what we’re doing right now might take everyone’s freedom away.

Shoshana Weissmann:

And there’s a ton of danger that comes from it that I know Josh is going to talk about.

Kelli Pierce:

How does this actually make everything worse for all of us and children’s safety online? Here’s my conversation with Josh Withrow.

[TRANSITION MUSIC]

Kelli Pierce:

All right, so Josh, why do you want children looking at porn while they try to join ISIS?

Josh Withrow:

You know, it’s funny, but I get asked that question seriously a lot. Why is it that you want to expose children to all the terrible things online? And the answer is, I don’t. I don’t have kids yet, but God willing I would like to, and this is something I think about a lot. There is a lot of terrible stuff on the internet that I absolutely do not want my son or daughter to see, and I absolutely want to do what I can to make sure that they have a safe and productive use of computers and the internet. But I also want to make sure that in protecting our kids, we don’t pass policies and laws that fundamentally destroy the way social media and the internet work. And unfortunately, in a lot of cases, legislation that sounds good to keep the kids safe has a lot of unintended consequences, because lawmakers, in their haste to do something for the children, don’t stop and actually think about how things work.

Kelli Pierce:

I asked that question to you in jest, but you really brought up a great point, because this conversation tends to devolve into that, right? Just pointing fingers and accusing folks of wanting to harm children. And I want to say, as the parent of a young child and the daughter of a retired teacher, we want to stress that threats to children’s safety and mental wellbeing do exist online.

Josh Withrow:

Absolutely. And hey, they existed in my time. I’m just of the right age that my parents first got an internet connection at our house when I was 10 years old. So I’ve had internet access for most of my conscious life. And there were threats and problems and bad things on the internet back then too. It was different. We didn’t have the social media platforms. We didn’t have the incredible level of community and connectivity quite to the same degree that we do now, but it’s always been a thing. All of the problems with people and society that happen in the real world manifest themselves digitally as you would expect that they would. And we absolutely do have to worry about things like bullying, harassment, pornographic content, and many other sorts of things that rear their ugly head online.

Kelli Pierce:

Absolutely. And for those that don’t have children and who don’t ever foresee themselves having children, this issue still matters, because if a child sees things that they’re not old enough to process yet, it can really lead to mental health issues. And if you have a child with mental health issues, that child could grow up to be an adult with a lot of problems that society will have to fix. So it is an important topic to talk about. But as you brought up, Josh, the race to protect children can lead to really bad policy that does more harm than good. What are some of the worst things you’ve seen states do in the name of protecting kids online?

Josh Withrow:

Well, let’s take the first policy that is common to a lot of the pieces of legislation, both at the state and the federal level, some of which have passed. Which is this idea that all social media sites, or in some cases all websites, have to verify exactly the age of everybody coming to their platform. And you stop and think, “Okay, well, if we want to protect kids, that makes a lot of sense. They should know who the kids are, so that way they know how to better protect them.” This is where one of those theory-versus-practice things comes into play. In order to verify and find out who all the kids are on your platform, you have to verify the age of everybody on your platform. You can’t just run this check on the kids. This isn’t like a store where you can see somebody and be like, “Oh, you look like a kid. Clearly, I need to see your ID.” And the way you need to do that is you either have to have people cough up some sort of documentary identification.

Like in the case of Utah, they literally contemplated having people send a copy of their government ID. Or you use some sort of intrusive technology like facial recognition or face scanning that supposedly only scans your face but doesn’t identify you. Or some sites have you send in a selfie or a video of yourself so that they can get a read on you and figure out if you’re most likely not a minor. All of these things are, one, intrusive, and two, they add an extra layer of friction to you actually just being able to access and use the internet. But most importantly, most of these technologies are hugely invasive of your privacy. And in a lot of cases, they also get very close to the line, or even over the line, of just de-anonymizing the internet experience. And that’s really important because the ability to just go around pseudonymously or anonymously online is one of the fundamental things that makes the internet great, and it’s sort of been baked into the architecture of it.

People don’t realize that pushing for age verification could fundamentally transform our ability to speak and act and interact on the internet without associating our real names with it.

Kelli Pierce:

Parents need to know that requiring social media companies to collect that data ensures the social media company has it. And that opens up another can of worms, or maybe even several, I think.

Josh Withrow:

Yeah, I mean, it’s kind of twofold. One is the cybersecurity privacy risk. You’re now asking these social media platforms to collect personally identifiable information, the kind of information that is actually valuable to hackers if they get it because they can use it to impersonate you and open up accounts in your name and all that kind of stuff. So in Utah’s case, they actually require the sites to store this data, so not only to collect it but then not delete it and keep it stored, which is just asking for a hack. Any cybersecurity professional will tell you that when it comes to especially personally identifiable information, the fastest, easiest way to make sure that you’re not hacked is don’t store the data if you don’t need it. And so some of these laws completely counter that. The other thing too is that there is actually value, especially in the case of kids in some instances, to at least add a barrier between immediately knowing who each person is.

Because if you are a nasty person who is going after kids, a child predator or somebody who wants to advertise something to kids that they shouldn’t be seeing, it’s harder to do that if you don’t know who the kids are right away. If you create that layer so that the kid doesn’t have to identify themselves unless they want to, it makes it harder for people to target them.

Kelli Pierce:

I don’t want my son using social media until after he’s married, right? But I also don’t want his information stored on any company’s platform or with any company unless I say so. I also hear a lot too, Josh, about politicians wagging their fingers at social media companies algorithms. And they’re blaming them for a whole host of problems. What’s an algorithm and does it put a bunch of age inappropriate stuff in kids’ feeds?

Josh Withrow:

All an algorithm is is a set of instructions that tells a program to do things. You put in an input, the algorithm spits something else out. In the case of social media, they use very complicated algorithms to recommend content to you, often based upon your personal likes and needs. And as you use a platform more, it learns more about you and then recommends more of the kind of stuff that it thinks you want to consume. Interestingly enough, anything that displays your content is an algorithm. So when you switch Facebook to just “most recent” so that it’s listing all of your friends’ posts in chronological order, that’s also an algorithm. It’s just a very simple one, as opposed to the one that they use to feed you videos that you don’t care about and whatever posts it thinks that you might like. And lawmakers tend to blame the algorithms for feeding things like overconsumption of social media or for targeting people with content that might be harmful to them.

The simple fact of the matter is that the recommendation algorithms that Instagram, TikTok, and all of these sites use are sort of garbage in, garbage out, right? They will recommend more of what you want. So if you’re searching for the kind of things that are maybe not great for you mentally or that are harmful, it’s possible that it can reinforce that. But that’s sort of a user-guided experience. Otherwise, I look for videos of people’s dogs and videos of chefs cooking, and that’s what I get fed in my feed constantly. It is a user-created experience that users are responsible for curating. The algorithm is not responsible for what you do.
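[EDITOR’S NOTE: To make Josh’s point concrete, here is a minimal sketch in Python. The posts, topics, and scoring rule are invented for illustration and are not any platform’s actual code; it just shows that a “most recent” feed and a recommendation feed are both algorithms, differing only in the rule used to sort the same posts.]

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    timestamp: int           # when it was posted (e.g., seconds since epoch)
    engagement_score: float  # likes/comments the platform has tallied

def chronological_feed(posts: list[Post]) -> list[Post]:
    # The "most recent" feed: newest posts first. Still an algorithm.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def recommended_feed(posts: list[Post], interests: dict[str, float]) -> list[Post]:
    # A toy recommender: rank by how much this user engages with each topic.
    # "Garbage in, garbage out": it feeds back whatever the user seeks out.
    return sorted(
        posts,
        key=lambda p: interests.get(p.topic, 0.0) * p.engagement_score,
        reverse=True,
    )

posts = [
    Post("alice", "dogs", 100, 5.0),
    Post("bob", "cooking", 200, 2.0),
    Post("carol", "politics", 300, 9.0),
]

# A user who mostly watches dog and cooking videos gets fed more of the same.
interests = {"dogs": 0.8, "cooking": 0.6, "politics": 0.1}

print([p.author for p in chronological_feed(posts)])          # carol, bob, alice
print([p.author for p in recommended_feed(posts, interests)])  # alice, bob, carol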

Kelli Pierce:

We all have an interest in seeing the next generation grow up healthy, mentally strong. But how do you balance that with these laws that many people also feel are infringing on parental rights?

Josh Withrow:

That’s a great way to frame it, and it’s the way that I would frame it too. But lawmakers often couch these laws in terms of empowering parents. These laws are very often telling parents in advance how lawmakers think they ought to be regulating their kids’ use online, and forcing them to do things that parents otherwise might not do. Like having to provide parental verification for every kid’s account. Like having to, as a parent, submit your ID to a social media site to prove that you’re the kid’s parent before you create them a profile. The center for… Oh, I’m blanking on the name.

Kelli Pierce:

So the center is the Center for Growth and Opportunity.

Josh Withrow:

Thank you. I can’t believe I blanked on that.

Kelli Pierce:

Oh, don’t worry. I’ve blanked many times and I’ve been reporting live on radio, so no worries there.

Josh Withrow:

The CGO at Utah State University ran a great poll of parents asking them, “Do you feel comfortable having to identify yourself to a social media site as a parent in order for your kid to create an account?” Something like 70% of those who responded said, “No, I wouldn’t like to do that.” The government in the state of Utah decided that whether they like it or not, all parents are going to have to do that anyway. It’s actually an additional burden now on parents that they might not have wanted.

Kelli Pierce:

You’ve also made the point beyond parental rights to not make surveillance of kids the default position. What do you mean by that and what harm does that thinking really cause?

Josh Withrow:

I think there’s a fundamental sort of respect between the kid and the parents that is violated when you make it the norm, that kids always know that everything that they’re doing is being watched. It sort of forecloses the opportunity of ever having a give and take trust relationship there. Where some kids, they’re getting into trouble and they’re acting out and they might need to be watched more because they’re making bad decisions for themselves, and you want to know everything that they’re doing as a parent. Other kids are pretty self-maintained and it may do them better psychologically to sort of reinforce their independence and their growth as an individual, to sort of let them do their own thing unless they’re doing something wrong, right? Every kid is different. Every family situation is different. And I think making the default on these sites that everything is being monitored in real time, that’s a really bad precedent. And I worry about what it does to the mentality of kids as well, to normalize the feeling that somebody is always watching everything that you do.

Kelli Pierce:

But let’s discuss the good. There’s some evidence that social media is good for mental health.

Josh Withrow:

There’s been tons of studies. This is a well-funded area of research now because of all the concerns about social media and kids. And if you go out and look at all of the medical and scientific and psychological studies that have been done, there is no scientific consensus that social media is a net negative for kids at all, because of the positives that balance out all of the negative anecdotes that you hear about. Social media is a great way for kids to build a sense of community and connection with other people, particularly if maybe they live in a small town or live in a place where fewer people share their interests. They can go and find communities that share some of their same interests and values online. And it often provides a creative outlet that they wouldn’t have had. They can build an audience for themselves, start becoming entrepreneurial at a younger age by putting their talents and their goods out there.

And interestingly, even some of the reports that have been put out there as proving that social media is bad for kids show otherwise. You remember the Facebook whistleblower Frances Haugen. She was testifying before Congress, and they said that this was proof that social media is just harmful for kids. But if you actually looked at the research that she was citing that Facebook had done, it showed that, yeah, there was something like a quarter of teens who self-reported that they thought that social media was a net negative. And something like 60% of them or more, I think it was actually more than that, said that they thought that social media was a net good.

Kelli Pierce:

Wow.

Josh Withrow:

But of course, the media doesn’t ever focus on that part. They find the anecdotes which do exist, and I’m not trying to pretend that they don’t. They find the anecdotes of the bullying and the harassment and the negative experiences that social media can enable, and focus only on those instead of the fact that social media is a useful tool that helps a lot of us stay connected with our families and friends and improves our lives.

Kelli Pierce:

And something also to keep in mind is that there are parental controls in these social media platforms, a lot of them, and some of these came about because parents were asking for them.

Josh Withrow:

Yeah, this is something I’ve been repeatedly putting out there, and it’s important for two reasons to note that these parental controls already exist. The first reason is because a lot of these parental controls that are built in, both at the hardware or device level and also at the software level on Facebook or Instagram or Twitter, are more effective and powerful at granularly controlling what your kid experiences online than anything these laws would force these companies to create. These tools already exist to control, in great detail, what kind of content and what websites your kids can go to, and how much screen time they’re allowed to have. A lot of this is already built into your devices, and to the extent that it’s not built into your devices, you can download software that is specifically parental control software to make these things happen. It may be that a lot of parents aren’t aware of the level of control they already have within their own devices in order to make their kids safer online, and that might be an education gap that does need filling.

And maybe there’s even a policy solution there in terms of education and making sure people know that’s out there. But there’s also a legal reason why this is important. Because all of the software and all of these parental controls are already out there, the courts have been pretty consistent in ruling, even up to the Supreme Court level, that doing things like making access to social media contingent on parental controls or mandatory age verification that compromises legal identity is unconstitutional. Because it’s a restraint on anonymous speech, or a restraint on kids’ access to speech platforms, that is more restrictive than existing solutions that are already out there. Justices have specifically cited the existence of parental controls as a reason why it’s not justifiable to mandate these things from the government level.

Kelli Pierce:

So many parents obviously don’t know that these exist because there’s so much freakout. But also, the courts have said, “Hey, look, A, these exist, but B, we don’t want to take away constitutional rights even from our youngest citizens.” It really is important there. However, there are many parents who are like, “I’m not going to give my kid access to social media ever.” I joked earlier, I don’t want my kid on social media until after he is married, and I’m only half joking there. Right? Is that really the right strategy? Is that really possible? Or can we teach kids what to do if they see something wrong online?

Josh Withrow:

I think it’s like with everything else in life. There are risks that are inherent with allowing your kid access to social media, whether supervised or not. And it’s up to every parent to evaluate what that means to them and what level of access that they’re going to allow. It’s kind of like the same thing with free-range parenting. A lot of these laws, I think are analogous to the sort of people who are calling the police on somebody for letting their kid wander around in the neighborhood without an adult. And I’m not making the case that you should be a free-range parent and let your kids just do anything unsupervised online. I wouldn’t do that with my kids. I’m not advocating for that. But I think there’s an analogy there in that what do we do to keep our kids safe and help them learn how to be safe, responsible individuals in the real world?

We teach them things, we talk to them. “Don’t play in traffic. Don’t wander into the woods alone. Check in at home from time to time. Be home by sunset. Don’t get into that stranger’s car, especially if he offers you candy.” All of these life lessons that we teach our kids to help them be safe in a world that is not perfectly safe. And our kids are going to grow up to be adults in an information-saturated world where computers and the internet are going to be central to their success in life. And at some point in time, they’re going to have to learn how to navigate that, including the pitfalls, and how to manage that with their lives and their mental health and be responsible citizens. And the question is, at what age are they ready to start absorbing that? And that’s subjective.

Kelli Pierce:

If you’re talking to a parent and they’re worried about social media, what do you say to them to maybe calm their fears?

Josh Withrow:

I would say, one, remember that there’s already an entire generation of kids that grew up extremely online and we’ve turned out reasonably fine. But two, the tools are out there. Part of the responsibility of being parents is that we have to take a little bit of time to inform ourselves of the tools that are available to us. And I’m always happy to point people to what those are and try to help them navigate this complex world of controlling their kids’ access to the internet and what they can see. But don’t be afraid of it. There’s a lot of wonders on the internet. I learned many of the skills that have defined my life on the internet. Take the good with the bad, learn to protect them as best as you can, and it’s the same as anything else in life.

[TRANSITION MUSIC]

Shoshana Weissmann:

So I like that Josh had a lot of different sides of the issue here because it is really complex. Like, I myself wrote 12,000 words on this, intending to write 2,000 words max. It’s a really, really big issue. And I think Josh did a good job of hitting a lot of sides of it.

Kelli Pierce:

Yeah, absolutely. And folks, you’ve got to check out Shoshana’s series. I mean, she really looked at, like Josh, all the angles on it. And that’s at rstreet.org. And I also think it’s good that Josh addressed how much we all want to protect kids online. Again, even if you never want kids, we have an interest as a society in making sure they’re protected. Children who access material that’s too mature for their age level can develop into adults with problems that society as a whole deals with. The road to hell is paved with good intentions. And if we get the details of these laws wrong, which it seems like we are, freedom of speech is curbed in a big way and kids will be even more vulnerable online. I’m very passionate about that.

Shoshana Weissmann:

Yeah. And I think a lot of these laws trying to protect kids very earnestly are just not going to work the way they intend. And that really matters, because when it comes to kids, you have to get the details right. That’s when it’s most important to, and unfortunately, sometimes there can be the least incentive to at those times.

Kelli Pierce:

Absolutely. We do have to think of the children, but we also have to think bigger. And people can fight back against these companies. A lot of times government’s coming in and saying, “Oh, well, we need to protect you from the big bad Twitter,” right? Actually, people have fought back, and Josh talked about that in terms of putting in parental controls and things. The market has responded to people, and I think we forget about that as well.

Shoshana Weissmann:

Oh yeah. And even past Supreme Court opinions have talked about how parental filters can avoid certain First Amendment problems while being more effective. And I think that’s still true today, because you can have filters at all different levels of technology so that your kid is guarded in the right way for you, and it’s going to be different for everyone.

[TRANSITION MUSIC]

Kelli Pierce:

Absolutely. Well, Shosh, how are you feeling? You talked a lot today.

Shoshana Weissmann:

I talk a lot, a lot of days. I like talking, especially about Section 230, but I’m very ready for quality Shoshana hiking-and-marmot time.

Kelli Pierce:

And I hope you get as much hiking and marmot and whatever time you want here in Utah. And thankfully, you can talk about it online even if you say something completely wrong. Which, to be clear, did you say anything completely wrong on today’s episode?

Shoshana Weissmann:

Not completely. We’ll have to leave that to the fact checkers. Hey, fact checkers, if you hear anything that I said wrong about Section 230, come and let me know over on Twitter and I’ll tell you why you are wrong.

Kelli Pierce:

Oh boy. And she will.

Shoshana Weissmann:

What’s on the next episode?

Kelli Pierce:

On the next episode, we’re talking with two R Street experts about why a lot of people can’t get homeowners insurance anymore, in two states in particular, on opposite sides of the country and with very different politics.

Shoshana Weissmann:

Let me guess the states, Vermont and Utah?

Kelli Pierce:

Okay. We don’t talk about Utah on every podcast episode, but I’m thinking a bit more coastal.

Shoshana Weissmann:

I know. I’m just thinking of hiking states, good states with mountains. I know it’s got to be Florida and California.

Kelli Pierce:

Exactly correct. I’ll be speaking with Caroline Melear in Florida about the challenges of getting flood insurance there. And then we’ll move over to California to talk to Steve Greenhut about how Californians are having a hard time getting insurance for all sorts of reasons. But I’ll leave you with a cliffhanger, so you’ll come back for the next episode to find out what they are.

Shoshana Weissmann:

I’ll be back, Kelli. Don’t you worry. I have to. It’s my job.

Kelli Pierce:

You’re also staying in my basement, kind of like a hostage. And I love that it’s your job to also do this. Until next time, see you Shosh.

Shoshana Weissmann:

See ya.

Kelli Pierce:

Red Tape is produced by R Street in partnership with Pod People.

Shoshana Weissmann:

To learn more about the work we’re doing at R Street, follow us on LinkedIn and on Twitter, and our Twitter is @rsi.

Kelli Pierce:

And for more resources and information on the topics we explore today, you can check out rstreet.org.

Shoshana Weissmann:

Also, if you’ve enjoyed listening to today’s episode, the best thing you can do is share Red Tape with a friend or an enemy.

Kelli Pierce:

And if you’re an overachiever, please leave a glowing review and rate us on Apple Podcasts, Spotify, or wherever you listen to podcasts. It really does help us introduce the show to new listeners.

Shoshana Weissmann:

I’m Shoshana Weissmann.

Kelli Pierce:

I’m Kelli Pierce.

Shoshana Weissmann:

Thanks for listening.

(THEME MUSIC OUT)


Copyright © 2023 Pod People. All rights reserved. 

Pod People transcripts are created on a rush deadline by a Pod People contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of Pod People’s programming is the audio record.