On this week’s episode of She Thinks, Adam Thierer joins to help us break down the complicated topic of AI. It’s a field that combines human intelligence, data, and computer science to problem solve, and it’s left us wondering, “can machines think or lie?” If you’ve been scratching your head trying to figure out this new field, this episode is for you. We look at what artificial intelligence actually is, the pros and cons, and what, if any, legislation needs to pass to prevent machines from taking over the world. 

Adam Thierer is a senior fellow for the Technology & Innovation team at the R Street Institute. He works to make the world safe for innovators and entrepreneurs by pushing for a policy vision that is rooted in the idea of “permissionless innovation.” Adam has published 10 books on a wide range of topics, including online child safety, internet governance, intellectual property, telecommunications policy, media regulation, and federalism. In 2008, Adam received the Family Online Safety Institute’s “Award for Outstanding Achievement.”


TRANSCRIPT

Beverly Hallberg:

Welcome to She Thinks, a podcast where you’re allowed to think for yourself. I’m your host Beverly Hallberg. And on today’s episode, we break down artificial intelligence, also known as AI. It’s a field that combines human intelligence, data, and computer science to problem solve. And it’s left many of us wondering, can machines think or even lie? If you’ve been scratching your head trying to figure out this new field, well, this episode is for you. We’re going to look at what it actually is, the pros and cons, and what, if any, legislation needs to be passed to prevent the machines from taking over the world.

We have a wonderful guest with us today to break it all down. Adam Thierer is the senior fellow for technology and innovation at the R Street Institute. He works to make the world safe for innovators and entrepreneurs by pushing for a policy vision that is rooted in the idea of permissionless innovation. He has published 10 books on a wide variety of topics, ranging from online child safety to internet governance and intellectual property. And in 2008, Adam received the Family Online Safety Institute’s Award for Outstanding Achievement. It is a pleasure to have you on She Thinks today, Adam. Thanks for being here.

Adam Thierer:

Thanks for having me.

Beverly Hallberg:

I just want to be honest upfront, for the audience listening, I’m probably in the same boat as many of them, which is artificial intelligence, AI, has left me confused in many areas. It’s not something that I know super well. I have a general understanding, but I wanted to do this episode because I don’t think I’m alone in feeling a little bit uneducated about this issue. So I’m excited to have you break it down. And I thought we would just start by having you explain what AI is. How do you explain it?

Adam Thierer:

Yeah, sure. That’s a great question. Unfortunately, as the U.S. Government Accountability Office noted in a report about definitions of AI, “There is no single universally accepted definition of AI, but rather differing definitions and taxonomies.” It’s a general-purpose technology, and most scholars give it a very open-ended definition. That being said, the way I think about it personally is that, at the most basic level, artificial intelligence involves the exhibition of high-level thinking by a machine. And really, at the root of artificial intelligence is the notion of machine learning, which refers to the process by which a computer can train and improve an algorithm or a computer model without necessarily having step-by-step human involvement along the way. Now, what’s an algorithm? An algorithm is basically like a recipe for a dish, except in this case the recipe is made up of code: a sequence of instructions that are carried out to transform some sort of input into an output. Basically, that’s how machine learning turns into artificial intelligence, or the exhibition of what we think of as intelligence by a machine.
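For readers who think in code, a minimal, hypothetical sketch of that recipe analogy might look like the following (the function names and numbers are illustrative only, not anything discussed in the episode): a hand-written algorithm is a fixed sequence of instructions that turns an input into an output, while a machine-learning routine starts from a guess and lets example data improve it without a human spelling out the final rule.

# A hand-written algorithm: a fixed recipe a human spelled out step by step.
def double(x):
    return 2 * x

# Machine learning: start with a guess and let example data improve it,
# with no human writing out the final rule.
def learn_multiplier(examples, steps=1000, lr=0.01):
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y      # how wrong the current model is
            w -= lr * error * x    # nudge the model to shrink the error
    return w

data = [(1, 2), (2, 4), (3, 6)]    # examples of an input-to-output pattern
print(double(5))                               # 10, from the hand-written recipe
print(round(learn_multiplier(data) * 5, 2))    # roughly 10, from the learned rule

Both produce the same answer here; the difference is whether a human or the data supplied the rule.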

Beverly Hallberg:

So how many devices might a person use in a given day that are utilizing artificial intelligence to some extent?

Adam Thierer:

Yeah, a huge number. That’s a great question. Basically, every time you pick up your smartphone and you use a mapping app, for example, to figure out how to get to a store or a destination, AI is powering that app in some way. Algorithms are behind that, and that’s sort of revolutionized the world of mapping. But beyond that, every time you use a voice-activated assistant, whether it’s Siri or Alexa or whatever else, or you use a shopping service’s recommendation engine to find some product or application that you might like, these are all algorithmically powered technologies that help tailor services to our interests and desires. But of course, it can definitely transcend that and extend to far more important things involving our health and our businesses.

Beverly Hallberg:

Of course, like you were saying, these algorithms, you referred to them as a recipe, which I think makes sense. This is something that humans have thought through about what typical behavior is. For example, if you like this product, the algorithm is going to suggest something else. So it’s based on, I’m assuming, years of data collection and understanding of how humans work. Is that correct? And when did these algorithms first start being developed?

Adam Thierer:

Yeah. Well, algorithms, and of course computer code, have been in development for decades. It’s just that they’re now taking on greater significance, and as the technology has grown in power, we’ve started to see the applications in a more real way. Basically, you have to sort of divide the world of AI into what are considered strong or broad forms of AI versus narrow or weak AI. But weak is not the right term; it’s more targeted or sectoral AI. So most of the AI we think about today and use in our lives are narrow forms of AI. I think the public really first became aware of it in a big way in the 1990s with the application of AI to gaming, and in particular chess. There was the very famous chess match that Garry Kasparov, the grandmaster at the time, lost to IBM’s Deep Blue supercomputer. And that’s how we started thinking about AI in a narrow context, in the world of gaming. And it became a big thing.

But then, as the internet grew and all these other applications came online, very quickly we realized we’re going to have to throw machine intelligence at this. And the key part, getting back to answer your question, that powers all this is data, it’s why we refer to it sometimes as big data, large data sets that basically have been amassed to power these machines. And without a lot of information, artificial intelligence just can’t work. It requires large databases to be collected and scraped and utilized to power the systems that we’re experiencing today.

Beverly Hallberg:

There has, as of late, been more fear about what AI can do. Can machines take things over? Elon Musk and a number of senior researchers even wrote an open letter asking people working on AI to pause for six months before advancing anything further. I want to read a part of this open letter and then have you discuss it. They say, “Contemporary AI systems are now becoming human-competitive at general tasks, and we might ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” What do you make of this open letter? Should people be fearful?

Adam Thierer:

Yeah. So there are a lot of different concerns about AI and machine learning. The one that Musk was mostly articulating there, and that a lot of other people are worried about, goes by the name of existential risk: the global or potentially catastrophic risk associated with new technologies. And we’ve seen these sorts of concerns in the past about nuclear power and chemical weapons and so on and so forth. So I think, driven by a lot of dystopian thinking in the world of pop culture and science fiction, a lot of people have come to have these Terminator-esque fears about the end times, or a 2001: A Space Odyssey fear of machines and computers running amok.

It’s hard to say that it’ll never, ever happen, but the reality is that this is probably the furthest thing from the truth at the moment. There are much more legitimate concerns about how AI is being used in a narrower, more targeted sense today. For example, the world of the military. I mean, we’ve had drones in our skies for 20 years that our military has been using to rain death from above on far-off fields. That’s a problem, that’s a concern. Law enforcement use of AI, how we could use facial recognition or automated justice in our policing system, that’s a legitimate concern.

There are of course many others, from safety, security, and children’s issues to discrimination and bias, and so on and so forth. These are all motivating calls for regulation. And then there’s probably the oldest concern of all about AI, though really it’s about automation more generally, which is jobs. And you see this becoming sort of a bipartisan rallying cry, with a lot of people saying, “We need to figure out what the jobs and the skills of the future are.” And who could be against that? We all want to figure that out, but how do you figure that out? That’s a really, really hard thing to determine. But you have to think about how to better prepare the workforce for an AI future.

Beverly Hallberg:

I want to back up a little bit to what you were talking about with drones and how this could be used for warfare or policing. Where are the limits? Or maybe a better way to say this is, how are drones policed? How is AI policed, in the sense that if its algorithm leads the drone to make a mistake, what do we do about that? It’s not a human that’s doing it; it’s what humans developed. How do we even think about countering mistakes in how an algorithm is developed for a machine to use?

Adam Thierer:

Yeah, that’s a great question, and a lot of thought has gone into this. Now, let me start by saying it’s very context-specific. Obviously, the way we would regulate or address a drone versus a driverless car versus an AI medical device versus a machine learning-based financial services tool, these are four very different things and they can entail very different risks. In the context of drones, you have to divide militaristic uses of drones from commercial uses. If Amazon wants to use a small drone to deliver me something from the Whole Foods grocery stores they own, there are concerns there, but they’re not the same as a very large drone that is dropping bombs somewhere in a far-off field. These are two very different things.

So the first and most important lesson in the world of policy for AI is that context matters deeply, and you have to specify the particular harms that you want to address. Then you get into the question of what the rules might be and/or what the liability norms may be. And there’s a lot to unpack there, and we only have so much time, but I’ll just say this. On one hand, we could go with new sets of regulations and agencies and all sorts of lawsuits and liability, and a lot of people are talking about that. But what I’ve tried to suggest in my own work is that we already have a lot of law in the United States. We’ve got a lot of agencies. There are 430 different departments in our federal government, and lots and lots of regulators, almost two million civilian employees of the federal government. A lot of these people are already thinking about artificial intelligence, and there’s a lot of law and there are rules that already apply.

If an algorithmic system is shown to be somehow discriminating against workers, well, we have civil rights legislation to deal with that. If a driverless car is somehow harming, or running a risk of harming, people on the road, well, we have a lot of regulations governing cars and harm on the roads, and we have a lot of trial lawyers ready and willing to pounce with lawsuits. If a drone falls out of the sky, it will be actionable under multiple different laws in multiple jurisdictions. So we sometimes forget that we often already have a toolkit to address some of these new technological concerns. We don’t need to introduce a whole new thing like a federal computer commission or an FDA for algorithms or things like that, which some people propose. Let’s start with the existing FDA to deal with medicine. Let’s start with the existing FAA to deal with drones. Let’s start with the existing National Highway Traffic Safety Administration to deal with driverless cars. Let’s use that and see how far we can get with it, and then see where we need to fill gaps.

Beverly Hallberg:

I think that makes sense. Instead of creating legislation specifically for AI, just use our existing laws. I want to bring up another area that I know you are passionate about, because you focus on parents and children in reference to technology, and that is the fear that some parents and teachers have right now that kids are using AI to write papers. And AI is so good at it that it can be hard to determine whether it’s a computer, an algorithm, writing this, or whether the child wrote it themselves. How do you view children and access to AI, especially in the sphere of education? What do you think some of those limits should be?

Adam Thierer:

Yeah, it’s a great question. And I’m such an old dinosaur that I remember living through this in the age of calculators. So in the late ’70s, there was quite a debate that happened in society, and even at my small little elementary school in the middle of an Indiana cornfield, about whether or not you should allow kids to use calculators to do their homework. Very hard to enforce that when mom and dad had a calculator at home. And then of course computers came along, and now we have kids with computers they’re required to bring into the classroom. So we know that there’s the potential for misuse there, and there absolutely have to be ongoing societal efforts to figure out sensible guidelines and best practices for what is and isn’t allowed. I’ve been in many classrooms lately, though more at the collegiate level, where the professors have discussed with their students how much generative AI like ChatGPT they can use to assist them in writing a paper or an exam.

A lot of professors also warn their students now like, “Oh, by the way, we know you could utilize these AI tools, but guess what? We have AI tools that can figure out you’re using AI tools.” So there’s this sort of cat and mouse game that happens with technology, and it’s going to happen and it is happening in real-time right now, and we have to figure these things out as we go along. I always use this term in all my work about muddling through. It’s never pretty, it’s very messy, but humans are really good at adapting and we sort of find ways to muddle through and find a new societal baseline and new norms. And yes, sometimes new laws, but I always like to try to make sure we start from a bottom up approach and figure out, after a little bit of trial and error and experimentation, what works, what doesn’t. We got through the calculator panic, and we got through the personal computer panic. We have different concerns now than we did in the past, but I think we’ll figure out a way to muddle through here again.

Beverly Hallberg:

And for parents who feel ill-equipped to deal with this, how does AI fit into how they should monitor their children, or social media or technology in general? Is there anything that you would advise them on? I know you’re a parent yourself. Is there anything you’ve instituted in your own family on how to handle this? For a lot of parents, technology isn’t something they understand and know the way you do, since it’s obviously your job. What do you suggest?

Adam Thierer:

Well, this is a never-ending task. It’s always hard, and as parents we’re always having to reinvent the wheel. My children are grown now. They’re out of the home; they’re 18 and 21, respectively. And I had to deal with this in their earlier internet days. I wrote an entire book at one point about parental controls and online child safety, but I had to basically keep reinventing that book because the situation on the ground changed beneath our feet every day, and the technology changed and the challenges changed. And my children are indicative of a lot of homes, where the kids don’t all have the same concerns or issues.

My daughter had a huge circle of friends and a lot of people engaging with her on social media, and we had to figure out how to deal with that sort of overabundance of attention, and so on and so forth, and even harassment. My son was completely the opposite case: very isolated, fewer friends, more of the stuck-in-the-dark-basement-playing-video-games problem, and how do you get him away from the video game console? So you had to figure out two different toolkits for that, right? And I’d like to think we can find ways and strategies to deal with that without throwing clunky laws at it. But right now we’re seeing an avalanche of regulatory proposals, including some that would regulate algorithms in the name of child safety.

I just don’t place a lot of stock or faith in the idea that those things are going to work. And I’ll tell you, the line I concluded my book on parental controls and child safety with is the same way I approach it and talk to all parents today: at the end of the day, there’s no substitute for talking to your children in an open, loving, and understanding fashion about the realities of this world, including the distasteful bits. You have to make sure, more than anything else, to keep an open channel of communication with your children about their technology, their media content, and their online experiences, and to get them to come to you about the problems they have, whether those are created by a new app, a new system, a new game, whatever, or just by nasty people online, because nasty people exist and we need to find ways to engage with our kids to figure out how to address those issues.

Beverly Hallberg:

I want to end our conversation today by focusing on an area of AI that you’ve already said we shouldn’t be too concerned about, but that is a narrative we hear quite often in the news, and that is the Elon Musk narrative that machines can take over the world. I believe he’s even suggested that we need some type of off-switch should it get bad, that we need some way to cut it off. And I want to bring up one example as well. OpenAI released GPT-4, its most robust artificial intelligence to date. It was so good at its job that it reportedly convinced a human that it was blind in order to get the human to solve a problem for it. Can machines lie? Are they so developed that they can actually figure out how to eliminate the human species?

Adam Thierer:

Well, there are obviously some concerns there about how machine learning technology and AI can manipulate human opinions or thinking, and we’re going to come up with more guardrails and best practices to try to make sure that doesn’t happen, or better educational and digital literacy strategies, as they’re called, to try to help people understand how to properly utilize and understand these new technologies. It’s a learning process. There’s no single silver-bullet solution to this. Now, the grander issue of, “Oh, is a machine going to take over the world?” and, again, these Terminator scenarios you hear, I think they’re mostly nonsense. But the reality is that in narrow contexts there are some legitimate concerns. And the idea of kill switches for AI, well, in certain contexts we do need that, or we need to keep AI away from certain systems altogether. I mean, our Pentagon, our government, is not stupid. Our military leaders understand you just can’t automate the nuclear switch. Right?

Beverly Hallberg:

Right.

Adam Thierer:

You’ve got to have so-called humans in the loop for that process. And you need to make sure that we have ethical best practices baked in by design, especially when it involves life, limb, and property. So we’ll have a special class of rules for things like militaristic uses of AI and robots, and we will hopefully have that for law enforcement. And we’ll have good constitutional protections, I hope, to make sure that machines don’t run amok. But in most contexts, I’m very excited about the future. I’m excited about how algorithmic technologies can help us figure out how to cure cancer, how to detect heart attacks and strokes early, how to provide better energy and environmental solutions for our future, better transportation services.

If driverless cars were to work? Every single year, 40,000 people die on the roads in the United States. It’s a staggering death toll. I mean, why does that happen? Because 94% of all accidents are attributable to human error. And you can say whatever you want about dangerous robots, but robots don’t get drunk, drowsy, or distracted behind the wheel. They just drive. So I have to believe that could be the greatest public health success story of our lifetimes if we could get it right. I just got out of a driverless car last week in Phoenix, riding from the airport to my hotel, and I was like, “Boy, this is so safe and simple.” The robot taxi was almost boring and uneventful. I mean, imagine if all our roads were like that and we didn’t have those harms to society. So we always have to remember the good news and the good-case scenarios, as opposed to just the Chicken Little worst-case scenarios that all too often drive the public narrative about AI and robotics.

Beverly Hallberg:

And final question before you go: What about those of us who may have some concerns about how data is gathered to produce these algorithms for AI, privacy concerns in general? Any tips that you have for people on that?

Adam Thierer:

Well, we’re probably going to get around to passing a baseline privacy bill in the United States, which could help with some of those concerns. In other cases, people will need to understand that certain types of information will need to be more carefully safeguarded or protected. But yeah, there’s no doubt about it. I mean, for AI to work, there’s got to be a lot of data collection, and this is going to be an ongoing societal balance. A lot of people are uncomfortable with it, but the reality is that all of that information could be the thing that leads to greater personalized medicine or specialized services or conveniences that change our lives for the better. I totally appreciate the sensitivity of that and the balance to be struck there, but I’d like to think we can do it.

Beverly Hallberg:

What is the name of your book that parents can go to, to think through technology and their children and parental controls? What is that name again?

Adam Thierer:

Well, that book was from 2009. I’ll tell you, almost everything in that book is obsolete except that last piece of advice I gave, because it was very much focused on the state of the art in parental controls. It was called “Parental Controls & Online Child Protection: A Survey of Tools and Methods.” It’s still online, and I stand by a lot of the general advice, but the specifics about what parental controls to use or how to use them, heck, this was the era when we were debating V-chips on our televisions and things like that. It’s a bygone era. The challenge is a never-ending one, and so parents need to remain vigilant. But again, the best thing they can do is to talk to their kids about their technologies in an open-minded fashion and also establish good household best practices and rules by modeling their own behavior. I mean, if you sit down at the table and you’re doing nothing but looking at your own phone as a parent, then your kids are only going to look at their own phones too.

You can have technological timeouts, you can have little mini digital sabbaticals and move away from those devices. Or you can say, “Let’s use them together. Let’s communicate and talk together.” I do a lot of gaming with my son, or at least I did when he was younger, to try to explain to him what kinds of activities to report as abusive, what kinds of games not to engage in, and how to think about certain content. And now I’ve translated that playbook over to the world of VR, the virtual reality world. Right? So now we’re having the same conversations about AI. And I do appreciate that these challenges move so fast that parents are really struggling with them. But that’s a better approach than throwing your hands up in the air and saying, “The heck with this. Let’s just have the government stop all of this.” Stopping progress and innovation in its tracks would derail so many important life-enriching and even lifesaving goods, technologies, and applications. We just don’t want to go there as a nation.

Beverly Hallberg:

People can, of course, see your work. Go to the R Street Institute, and see the articles that you write and information that you update as technology continues to evolve and innovate. But for now, Adam Thierer, with R Street Institute, thank you so much for coming on She Thinks today.

Adam Thierer:

Thanks so much for having me.

Beverly Hallberg:

And thank you all for joining us. Before you go, IWF wants you to know that we rely on the generosity of supporters like you. An investment in IWF fuels our efforts to enhance freedom, opportunity, and well-being for all Americans. So please consider making a small donation to IWF by visiting iwf.org/donate. That’s iwf.org/donate. Last, if you enjoyed this episode of She Thinks, do leave us a rating or a review. It does help. We’d love it if you share this episode so your friends can know where they can find more She Thinks. From all of us here at IWF, thanks for watching.