How Should We Govern the Algorithm?

Machine learning is being used in police precincts, schools, courts and elsewhere across the country to help us make decisions. Using data about us, algorithms can do almost instantly what it would take human beings both time and money to do. Cheaper, faster, more efficient and potentially more accurate -- but should we be doing it? How should we be using it? And what about our privacy and our rights?

Aziz Huq, Frank and Bernice J. Greenberg Professor of Law at the University of Chicago Law School, is our guide to the new world order.


Transcript

Hannah McCarthy: [00:00:02] Civics 101. It's Hannah here.

Nick Capodice: [00:00:04] It's Nick here.

Hannah McCarthy: [00:00:05] And for now at least, you are listening to our actual human voices and not the machine that learned how to sound like us. For now.

Nick Capodice: [00:00:15] Hannah, that's too scary.

Hannah McCarthy: [00:00:18] Nick. We are one predictive algorithm from being out of a job, my friend.

Nick Capodice: [00:00:23] Well, I'd like to see a machine engage in the kind of chaos I'm capable of, McCarthy. You think a machine could [00:00:30] sing the score of The Music Man in a Scottish accent?

Hannah McCarthy: [00:00:34] You might just see the day.

Nick Capodice: [00:00:36] Oh, my friends, how can any pool table ever hope to compete with a gold trombone?

Hannah McCarthy: [00:00:40] All right, so today we are asking what the machines are up to and why that matters in America.

[00:00:48]

News Archival: [00:00:52] Artificial intelligence is now everywhere from schools to work. But now the highest court in the land is suggesting it could play a role [00:01:00] even in our criminal justice system.

News Archival: [00:01:01] Board of Police Commissioners is weighing in on the controversial facial recognition software at the center of a recent lawsuit.

News Archival: [00:01:07] A number of software programs used in hospitals across the country are powered by algorithms with racial biases.

News Archival: [00:01:14] England, Wales and Northern Ireland have all announced that A level and GCSE results will now be based on teacher assessments, rather than grades generated by computer modeling.

News Archival: [00:01:24] Employers are then using AI software to analyze candidates' facial expressions in their recorded [00:01:30] answers. A candidate who looks off up into the distance might have a propensity to lie, or somebody who smiles a lot during an interview might be somebody who'd be good in a customer-facing [00:01:40] role.

Aziz Huq: [00:01:49] My interest in AI was sparked by my work as a lawyer pro bono for the ACLU of Illinois.

Hannah McCarthy: [00:01:58] This is Aziz Huq, professor [00:02:00] of law at the University of Chicago Law School. We have had him on the show before. He knows a lot. Aziz was working on a case about stop and frisk in Chicago, and...

Nick Capodice: [00:02:12] It's a term most people know, I think. But just in case, stop and frisk is when a cop stops you for questioning and pats you down. It's super controversial for a lot of reasons.

Hannah McCarthy: [00:02:22] Primarily because it tends to disproportionately target Black and Latino people. And in Chicago, the city was using a machine learning [00:02:30] tool, which, by the way, is a distinction that Aziz makes. It's basically a subset of AI. And this tool was a strategic subjects list that a computer came up with based on data about welfare and criminal behavior. The list essentially predicted who should be stopped.

Aziz Huq: [00:02:49] And digging into that tool, what roughly can be called AI. And it's important to note that that term's [00:03:00] pretty vague and different people mean different things by it. But what roughly could be called AI was starting to be used in criminal justice, and that led me to thinking about how it gets used by government and how it's regulated.

Nick Capodice: [00:03:16] Okay, it's my understanding, Hannah, that the answer is it's not really regulated at all, at least not much. So how is the law meeting the digital road?

Aziz Huq: [00:03:27] I think that it's useful to answer your question in two parts. [00:03:30] What is it that we're seeing being adopted in terms of technologies? And then second to ask, well, how is the technology that's being adopted putting strain on the ways that people, including lawyers and judges, have traditionally understood individual rights?

Hannah McCarthy: [00:03:50] Worth noting, Nick, that when we talk about AI, we are not talking about some supercomputer that is eerily person like and about to become the secret shadow [00:04:00] governor of the United States of AI.

Nick Capodice: [00:04:03] So I know this comes up more than it maybe should. And I know the Precogs aren't machines.

Hannah McCarthy: [00:04:09] Okay, now I'm gonna stop you right there. The answer is that we are not talking about Minority Report. As of right now, the future is not three mega psychics lying in goop shouting premonitions at the government.

Minority Report: [00:04:20] I'm sure you all understand the legalistic drawback to Pre-crime methodology. Here we go again. Look, I'm not with the ACLU on this, Jeff, but let's not kid ourselves. We're arresting individuals who have broken no [00:04:30] law. But they will.

Minority Report: [00:04:32] The commission of the crime itself is absolute metaphysics. The precogs see the future, and they're never wrong.

Minority Report: [00:04:35] But it's not the future if you stop it. Isn't that a fundamental paradox? Yes it is. You're talking about predetermination, which happens all the time.

Nick Capodice: [00:04:44] Has anybody else seen that movie or is it just us?

Hannah McCarthy: [00:04:47] I mean, it's a Spielberg movie, so.

Nick Capodice: [00:04:50] Okay. Fair point. Uh, based on a Philip K. Dick novel, by the way. Did you know that he's the sci-fi guy who was always warning [00:05:00] us about authoritarian government and its threat to autonomy?

Hannah McCarthy: [00:05:03] Talk about a precog.

Aziz Huq: [00:05:14] Um, the technology that's being adopted is not, uh, what's sometimes called general AI. It's not, uh, some multi-purpose, very, very capable program that responds in human-type ways. More broadly, [00:05:30] what we're seeing being adopted are machine learning tools. These are, to be sure, very, very complex algorithms trained upon big pools of data that essentially solve prediction problems. They essentially take one set of data and say, given what we know about the world and triangulating that with what we know about this person, we think X or Y is likely to be the case. So these are prediction tools.

Nick Capodice: [00:05:58] Prediction tools sound kind of out of [00:06:00] place in the context of government and law enforcement because, I mean, that's predicting how people are going to act and making decisions on that, instead of making decisions based on how they are actually acting or have been acting.

Hannah McCarthy: [00:06:14] In machine learning's defense, though, this is a huge part of what the government already does. It says based on current and past events, this is what we think the future will be. And so here are the laws that we're going to pass and the policies we're going to engage in, and [00:06:30] the things we're going to provide or deny in anticipation of that. Also, Nick, most of us already interact with prediction algorithms all the time.

Aziz Huq: [00:06:41] They're encountered by everybody on a day-to-day basis who interacts with online retailers, who offer recommendations, who interacts with platforms online, where there are recommendations of things to read or friends to contact, etc.

Nick Capodice: [00:06:57] Like Instagram, which someone [00:07:00] in this room maybe seems to open a lot.

Hannah McCarthy: [00:07:04] Too bad we can't predict whom.

Nick Capodice: [00:07:07] Um, too bad, so sad.

Hannah McCarthy: [00:07:09] But yeah, like Instagram and the images, content creators and, most importantly, ads it feeds you based on all of the data it collects on you, which is a lot of data.

Nick Capodice: [00:07:21] Yeah, but it's also like how Netflix knows what movies or shows to recommend.

Hannah McCarthy: [00:07:26] Yes. And it's also like Facebook. And it's also [00:07:30] like TikTok. They know where we are, they know what we like and they are selling us something. And by the way, they are also selling us, as in our data, to other people. But anyway, the point is, predictive algorithms are already a part of our lives. It's just different when it's used by the government.

Aziz Huq: [00:07:51] The reason that these tools, when they're used by the state in particular, pose challenges, [00:08:00] is that many important rights that I think most Americans would take for granted have at their bottom a model of human behavior on the side of the state. And when you move from the frontline actor being a human to the frontline actor being a machine, that can introduce a whole cluster of [00:08:30] difficulties in figuring out whether the right has been violated and figuring out what kind of interests are really being protected or not protected by the right.

Hannah McCarthy: [00:08:46] In other words, our rights are about, enforced by, violated by, etc. human behavior. So what happens when you take the quote unquote frontline human and replace it with a frontline machine? [00:09:00]

Aziz Huq: [00:09:00] The adoption of these tools scrambles the ordinary logic of constitutional law. So the first is, uh, in the context of deciding whether to grant or deny people bail, it is increasingly common for state courts to use a prediction tool.

Hannah McCarthy: [00:09:22] This tool predicts whether or not someone is likely to commit an act of violence while they are awaiting trial, based on a set of data [00:09:30] about people who have or have not committed violence while awaiting trial.

Aziz Huq: [00:09:39] The tool is offering a prediction in the sense that the thing itself hasn't happened. That data has many characteristics about each of those people, and the art of the algorithm is building a mathematical model that links traits to outcomes in the historic data. [00:10:00] Once that model, linking traits to outcomes using historical data is built, it is ported over and applied to a new criminal defendant.

Nick Capodice: [00:10:10] I just want to make sure I understand this. Data points that apply to people who have committed violence are applied to people to decide whether they will commit violence in the future, and then the state makes a decision based on that.
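(A rough sketch, for illustration only, of the kind of prediction tool Aziz describes: fit a model on historical records linking traits to outcomes, then apply it to a new case. This is not COMPAS, whose internals are proprietary; the Python below uses invented traits and synthetic data.)

# Hypothetical sketch only: synthetic data and made-up traits, not any real risk tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Historical" records: each row is a past defendant, each column an invented trait.
traits = rng.normal(size=(500, 3))
# Observed outcomes in that history: 1 = committed violence while awaiting trial.
outcomes = (traits @ np.array([0.8, 1.2, -0.5]) + rng.normal(size=500) > 0).astype(int)

# "Building a mathematical model that links traits to outcomes in the historic data."
model = LogisticRegression().fit(traits, outcomes)

# That model is then "ported over and applied to a new criminal defendant."
new_defendant = np.array([[0.3, 1.5, -0.2]])
print(f"Predicted risk: {model.predict_proba(new_defendant)[0, 1]:.2f}")

(The point is just the pattern, fit on history and then score a new person; everything contested about real tools, such as what the traits are, how the data was gathered, and how the errors are distributed, lives outside these few lines.)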

Hannah McCarthy: [00:10:24] Yeah, it's done around the country and was scrutinized in Wisconsin in particular. The claim [00:10:30] against the state was a due process one: essentially, that if a judge relies on this tool, that is inconsistent with due process.

News Archival: [00:10:39] Is this minority report?

News Archival: [00:10:41] That's a great question.

News Archival: [00:10:42] Lynette McNeely is a member of the Elmhurst Chaney Advocacy Board, which worries the state relies on a software program it doesn't fully understand.

News Archival: [00:10:50] We don't know what it's considering.

News Archival: [00:10:52] It's called COMPAS, and it's owned by a private company. So its calculations to assess risk are secret. But the questionnaire it uses [00:11:00] as the basis of that calculus includes questions like, did a parent ever have a drug or alcohol problem? How often do you have trouble paying bills? How often have you moved in the last 12 months? The questionnaire never asks a defendant's race, but McNeely worries it has a racial impact.

News Archival: [00:11:16] Where they live or other people who've lived in that area and what they've done. I mean, is that being considered as part of my risk assessment?

Aziz Huq: [00:11:24] And the Wisconsin Supreme Court said it's fine for the judge to rely on this tool, [00:11:30] provided that the tool, the interface has a warning that says, no, this is just a recommendation. You've got to use your own judgment.

Hannah McCarthy: [00:11:38] The Wisconsin Supreme Court says this tool is fine as long as you then apply your human judgment. But if you have already seen what the computer has to say, what does that do to your human judgment?

Nick Capodice: [00:11:54] Does the algorithm suggestion shift what a judge might decide?

Aziz Huq: [00:11:58] There's a debate about whether [00:12:00] if you give a warning like that, the judge is actually going to reflect and make a decision based upon their impressions as well as the data before them, or whether there is what social scientists call automation bias, where the judge is so heavily prompted by the machine that, in effect, what's happening is defendants are being detained or not detained based upon the machine prediction.

Hannah McCarthy: [00:12:29] Now, there [00:12:30] are a whole bunch of implications when it comes to the fairness of this kind of algorithm as well, especially when it comes to the balance between white and Black defendants and how they're classified. The people who created it assure everyone that it is mathematically fair. But ProPublica looked into it and found that Black defendants were treated more harshly, and that has to do with how many Black versus white defendants are incorrectly predicted to be a risk by the algorithm. [00:13:00]

Nick Capodice: [00:13:00] So I might be making a leap here, but does this mean that AI has the potential to lead to equal protection violations?

Hannah McCarthy: [00:13:08] That's a pretty reasonable concern. The issue there is that equal protection claims are notoriously difficult to prove in court. Humans have been violating that clause for as long as it has existed. And AI is made by humans. It has been shown to have both a race and gender issue, in part because it tends to be made by, fed data by, [00:13:30] and tested on white men, and in part because assessing people based on data alone will result in racial bias, because the world itself is racially biased in terms of access to wealth and health and so many other measures of life. But let me give you another example.

Aziz Huq: [00:13:48] The second example is a lot simpler, but it nicely brings up both a different sense of the word prediction and a different way in which these dynamics unfold in terms [00:14:00] of institutions. One of the things that has happened since the overruling of Roe v Wade is that there is increased activity on the part of states that want to restrict abortion, are attempting to regulate childbearing, and in some instances, impose criminal penalties on the people who are pregnant and who may be seeking to end the pregnancy. And one way [00:14:30] in which that has played out impinges upon what's generally understood as a right to privacy, which is: I have certain information, and it's up to me to decide whether to give that information up or not. I have a certain sphere that involves my body and my house that the state can't search, right? Ordinarily, those rights shield a person who is pregnant from having to reveal that fact to the state.

Nick Capodice: [00:14:58] Wait, is pregnancy not [00:15:00] private information?

Hannah McCarthy: [00:15:02] Well, okay. Your employer, for example, cannot ask if you are pregnant and it would definitely be a legal issue if someone without a search warrant went through your trash, or hacked into your medical records or your email to try and determine if you were pregnant. But the thing is, machines don't need to do that in order to figure out who is probably pregnant.

Aziz Huq: [00:15:29] About [00:15:30] 10 or 15 years ago, the retailer Target got into trouble because they were using a predictive algorithm on their consumer data, their customer data, that identified customers who they predicted were pregnant, and sending them coupons for prenatal vitamins and the like. And they sent this with respect to a person who is the father of a teenager. The father protests loudly that there's nobody pregnant in our family. [00:16:00] Predictably, the next day the daughter turns around and says, well, actually, I'm pregnant.

Nick Capodice: [00:16:05] So the machine got it right somehow, and it had real world repercussions.

Hannah McCarthy: [00:16:10] Yeah, the New York Times looked into it, and it turns out that this dad marched into Target and was like, you're sending my daughter ads and coupons for maternity clothing and nursery furniture and things like that. Are you trying to encourage her to get pregnant? And the manager was like, no, sorry. So sorry about that. And then the manager [00:16:30] called to apologize again, and that dad picks up the phone and goes, actually, my daughter's due in August. I owe you an apology.

Nick Capodice: [00:16:39] Wait, wait. Hang on. Does he owe Target an apology? Though they were predicting that his daughter was pregnant using an algorithm and marketing based on that likelihood, even though she never actually asked for it. And isn't it a violation of privacy to tip someone's family off, [00:17:00] inadvertently or not, when they might not have been planning on revealing that information?

Hannah McCarthy: [00:17:05] Isn't that an interesting question? Because, see, the thing is that Target says they were not breaking any privacy laws, but they do acknowledge they were making people uncomfortable. To fix it, they started advertising wine glasses, you know, like, next to cribs. So it didn't necessarily look like they were targeting a pregnant person, but they were still sending the mailer [00:17:30] to people who were predicted to be pregnant. It turns out those women would use the coupons, as long as it didn't seem like they were being spied on.

Nick Capodice: [00:17:40] But they were essentially being spied on. And that's legal.

Hannah McCarthy: [00:17:47] All right. So I mentioned that your employer cannot ask you if you're pregnant. They also cannot discriminate against you because of a pregnancy. Now, that is because of protections in Title VII of the Civil Rights Act and other [00:18:00] more specific federal and state laws. These privacy-related laws also apply to other protected demographics.

Nick Capodice: [00:18:08] Such as race, sexual orientation, gender identity, religious affiliation, stuff like that. Yeah.

Hannah McCarthy: [00:18:15] Personal data about you, right? There are lots of federal laws that pertain to those data points. HIPAA, the Health Insurance Portability and Accountability Act, for example, is the thing that allows doctors to say and mean that you can tell them [00:18:30] anything about your physical and mental health, and they are not allowed to tell anyone else, and you're not allowed to be discriminated against because of that data. But, and this is a major, major but, Nick, these federal laws mostly do not cover consumer data.

Nick Capodice: [00:18:48] Like if I am buying prenatal vitamins, for example.

Hannah McCarthy: [00:18:52] Or even trickier, whether you're buying unscented lotion and big purses that could potentially double as a diaper bag, which, by the way, were [00:19:00] two of the metrics that Target checked when it came to predicting who was pregnant. And to be clear, algorithms are used to market certain brands or products all the time. It just becomes a clearer issue when that marketing reveals something private about you. Only 12 states in the US have comprehensive data protection laws, and even within those laws, companies are still allowed to collect and sell your data. Now, they can sell that data to other companies. Sure, that's one thing. [00:19:30] It's more ads, basically, right? But they can also sell that data to someone else.

Aziz Huq: [00:19:36] Exactly. That same tool is available to states, but it's available not directly, but through third party firms called data brokers. Indeed, in the wake of the Dobbs opinion, there was a spate of data brokers that started offering lists of people who had [00:20:00] engaged in behavior that made it likely that they were both pregnant and seeking to terminate a pregnancy in states where that was now unlawful.

Nick Capodice: [00:20:10] So I guess the potential here is that those states could use that data to track people who may be attempting to obtain an abortion in a state where that abortion is not legal.

Hannah McCarthy: [00:20:21] Which is something states are already doing. In a way, this would just make it a lot easier to know whom to target. So why does [00:20:30] this example matter when it comes to AI and states generally?

Aziz Huq: [00:20:34] And now that's a useful example for our purposes for, I think, three reasons. The first is that here we have a right of privacy over information that's being end-run through what we might call AI. That's the first thing I think that's interesting.

Hannah McCarthy: [00:20:52] The most recent Pew Research poll on Americans and data privacy, this is from 2019, tells us that nearly two [00:21:00] thirds of Americans polled understand little to nothing about the privacy laws protecting their data. And even though a lot of us agree to privacy notices on apps and websites, not all of us actually read them. And even if we do read them, do we understand them?

Nick Capodice: [00:21:19] Speaking purely anecdotally and just for myself, I'm going to go with no.

Hannah McCarthy: [00:21:25] Same. And by the way, these notices that we agree to, they're basically [00:21:30] us giving permission for companies to share our data. They are not informing us of our rights or anything like that. So the reason I point this out is that our rights extend only as far as they are enforced, and not knowing what your rights are? That is a really good way for them to be violated without repercussion.

Aziz Huq: [00:21:52] The second thing that's interesting is, notice that it's a different kind of prediction. It's not a prediction about what's happening in [00:22:00] the future. It's what a social scientist would call an out-of-sample prediction. I know X and Y about this person. I don't know Z, but given that I know X and Y, I can make a pretty good guess at Z, right? Z is true now. It's not something that happens in the future. That's a kind of prediction, and it might be a really important kind of prediction, as the abortion criminalization context suggests.

Nick Capodice: [00:22:25] So basically, there's a difference between predicting what is true right now and predicting what [00:22:30] might be true in the future. Yeah.

Hannah McCarthy: [00:22:31] So let's say that your state has a law that says if you own a yellow hat, you must wear that yellow hat at all times. You're not allowed to take it off. And let's say your state has access to an algorithm that can basically predict for the state who has a yellow hat right now. Now, you could see, possibly, that the state could use the information about who has a yellow hat to make sure that they are always wearing their yellow [00:23:00] hats, and punish them if they take those hats off. An algorithm that simply predicted who might acquire a yellow hat? That's less efficient, that isn't as useful. It's not telling the state what is going on right now. So there are certain applications for algorithms that predict what's going on right now, and certain applications for algorithms that predict what might happen in the future, like in those bail hearings.
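(To make that distinction concrete, here is a minimal Python sketch of the "out-of-sample" case, using the yellow hat hypothetical; every feature and data point is invented. The model guesses a fact that is true right now from other things already known about a person.)

# Hypothetical sketch only: inferring a present fact, not forecasting future behavior.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# People whose hat status we happen to know, described by two invented 0/1 signals
# (say, "bought hat polish" and "follows hat content").
known_traits = rng.integers(0, 2, size=(300, 2))
owns_yellow_hat = (known_traits.sum(axis=1) + rng.integers(0, 2, size=300) >= 2).astype(int)

model = LogisticRegression().fit(known_traits, owns_yellow_hat)

# For a new person we know the two signals but not the hat; the model guesses the hat
# status as it stands today.
new_person = np.array([[1, 1]])
print("Estimated probability of currently owning a yellow hat:",
      model.predict_proba(new_person)[0, 1])

(Mechanically it is the same fit-then-score pattern as the bail sketch above; what changes is what the label means, a present fact versus a future act, which is exactly the distinction being drawn here.)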

Aziz Huq: [00:23:25] The third way in which I think this is telling is what we see here: the state [00:23:30] relying on, or intertwining itself with, firms, with actors in the private sector, to achieve a goal that we think of as being distinctively something the state does: punish people. So one of the things that I think we're seeing, and I think that's not fully appreciated, is that the advent of AI and its sheer usefulness is leading to [00:24:00] new ways of braiding together public and private behavior.

Nick Capodice: [00:24:05] Which I guess we already do a bit, right? Like, we contract third parties for military and defense stuff all the time.

Hannah McCarthy: [00:24:13] That's true.

Nick Capodice: [00:24:13] We do. So looking to outside tech and services. That's typical for our government. It's just that this AI is more likely to interact with us. You know, you and me and other quote unquote normal people on a daily basis. [00:24:30]

Hannah McCarthy: [00:24:30] I mean, normal is a stretch. But yeah, basic people like we folk.

Nick Capodice: [00:24:34] So how sure are we that this is something we actually need to worry about? Hannah. Is it really imminent?

Aziz Huq: [00:24:42] The first thing is that AI is being adopted in narrow but important sectors of the private economy and is widely understood to have scale-related efficiencies in those areas. Second, either [00:25:00] the companies that are creating AI, or subsidiaries or competitors, are serial contractors with the government and are aggressively selling, uh, AI tools. This is particularly true in the policing and in the military context. And then I think the third factor, which a couple of my examples have pointed to, is that one of the reasons AI [00:25:30] is useful from the perspective of the governmental actor is that it dramatically lowers the cost.

Hannah McCarthy: [00:25:42] For example, if you can get a week's worth of someone's location data from their cell phone provider, that is way cheaper than physically trailing them for a week. The state can do way more with way less. Also, last thing is, he's mentioned other countries [00:26:00] are doing it.

Aziz Huq: [00:26:01] The other thing that I would just flag is our geopolitical moment, which is a moment in which there is perceived and some actual conflict with China in particular, and where the relative military power of the United States and China in part depends upon technologies. So in that world, where AI is dual use, where its adoption is going to be driven [00:26:30] first in the military sector, then we'll see spillovers in other sectors. Again, it's a reason for thinking that it's really unlikely that AI is going to linger on the sidelines.

Nick Capodice: [00:26:42] Okay, so the private sector's already doing it. They're selling it to the government aggressively. It's cheaper and faster and other countries are doing it. And we've got to keep up with the Joneses. So if it's happening, Hannah, how are we dealing with the legal implications? [00:27:00] Has the Supreme Court said, whoa there. We need to adjust our rules here.

Hannah McCarthy: [00:27:05] We're going to talk about that after the break.

Nick Capodice: [00:27:08] But before we do the break, how's this for data collection? In exchange for your email, we'll send you a newsletter every other week so that you can learn that much more about American democracy. Or sometimes it's just Hannah ranting about a movie or a TV show or a long-buried but interesting moment from her childhood. And that's okay too. I kind of like that better, to be honest. We promise [00:27:30] to never, ever sell your data. We'll just send you the fun newsletter. And yes, the occasional fundraising plea, because that is how we keep the lights on. Okay. That's it.

Hannah McCarthy: [00:27:55] We're back. You're listening to Civics 101.

Nick Capodice: [00:27:58] And Hannah, just before the break, you promised we'd [00:28:00] talk about what courts are doing when it comes to Westworld getting a little closer to being reality. So how are the courts dealing with all this new tech?

Hannah McCarthy: [00:28:09] Here's Aziz Huq again, professor at the University of Chicago Law School.

Aziz Huq: [00:28:13] In the United States, we have federal courts, at least, that are historically minded and are generally, but not always, resistant to recognizing and accounting for new technologies. I think it would not be accurate to say that courts [00:28:30] don't ever account for new technologies in the context of privacy. Under the Fourth Amendment, the court has, in piecemeal and small but not inconsequential ways, expanded the notion of what counts as an interference by the state in line with changing technologies.

Hannah McCarthy: [00:28:52] So basically, technologies have been drastically changing the legal landscape in the United States for two centuries. [00:29:00] But the courts themselves have not always, shall we say, kept up with the times. On occasion, however, they'll look at some development and say to themselves, okay, whoa, this actually changes how things work and we need to make a decision about it.

Aziz Huq: [00:29:18] And so I think the best example of this is a case from, I think it was 2018, called Carpenter, in which the court said, well, a person is searched by the government [00:29:30] when the government asks a cell phone provider for a week-long record of their locational data. Now, under the traditional, long-standing Fourth Amendment doctrine, that would not have counted as a search regulated by the Fourth Amendment, and the court, really interestingly for a court that generally styles itself as being small-c conservative and originalist, says, well, but in practice this [00:30:00] is the same as following the person for a week.

Nick Capodice: [00:30:03] I should just jump in here quickly and affirm that, yes, over the course of its long history, the US court system has been predominantly conservative. But anyway, okay, the government says, give us the location data of this person's cell phone for the past week. And the court says, well, otherwise, in the before times, you could really only get that data by following that person. Yeah.

Hannah McCarthy: [00:30:29] And in this case, [00:30:30] Carpenter v. United States, the Supreme Court ruled that police must obtain a search warrant to access these records. Following someone, by the way, does not require that warrant, so long as that person is in plain view, like walking or driving in public.

Aziz Huq: [00:30:46] And we should be more worried about this because it's so much cheaper to acquire the locational data than it is to set a team of agents on a person and to follow them for a week. The fact [00:31:00] you can get efficiencies is marvelous if you're McKinsey. It's arguably deeply worrying if you're a right holder, confronted by a state that's able to leverage the scale effects of AI.

Nick Capodice: [00:31:14] McKinsey?

Hannah McCarthy: [00:31:15] One of the three largest management consultancies in the world.

Nick Capodice: [00:31:18] What does it mean when they say they're a management consultancy?

Hannah McCarthy: [00:31:22] Yeah, they take a good look at their clients, and by clients I mean giant corporations and whole countries, and they tell them how to spend their [00:31:30] money and how to operate. So, like, using AI to look at huge amounts of data about consumers, or maybe the population of a country, that is potentially very useful to McKinsey. But we are not McKinsey, right? We are not trying to figure out how to better help the authoritarian regime get its stuff done.

Nick Capodice: [00:31:52] Wait. Like for real?

Hannah McCarthy: [00:31:53] Oh yeah. For real. The thing we are worried about here is our rights.

Nick Capodice: [00:31:57] And are people saying to the courts, hey, [00:32:00] this AI over here violated my rights.

Hannah McCarthy: [00:32:03] Well, they're trying, but so far it's been pretty piecemeal.

Aziz Huq: [00:32:07] There was a state in which disability benefits were being allocated on the basis of predictions of fraud or not, and that was challenged in Michigan. There was a lawsuit challenging their unemployment insurance allocation system, called MiDAS, which turned out to have an extraordinarily high rate of errors. There was a challenge in Houston [00:32:30] to the use of a machine learning prediction tool for evaluating teachers on the basis of the likelihood that teachers were improving students' standardized test performance. There's a series of cases that are before the Supreme Court now, which are not quite on point, but are in different ways about whether and how the state can regulate the recommender and content moderation tools used by social media platforms. So that's [00:33:00] not a suit challenging what the state can do. It's suits challenging the states' power, here Florida's and Texas's power in particular, to regulate how private actors use machine learning tools in constructing a public sphere.

Hannah McCarthy: [00:33:17] And of course, we already heard about the case with the bail question. Was that algorithm ordered to be better regulated? No. The court said the algorithm was fine as long as the people [00:33:30] using it apply their own judgment.

Nick Capodice: [00:33:32] Yeah, but a person's judgment isn't exactly a slam dunk all the time.

Hannah McCarthy: [00:33:36] Yeah. That brings me to this very interesting example.

Aziz Huq: [00:33:41] There's a well-known case in which police officers in, I think it was New York City, had a piece of footage from a store camera, looked at the footage, and said, hey, we think this guy looks like an actor. And they said, the person looks like X.

Hannah McCarthy: [00:33:57] Woody Harrelson, to be exact.

Aziz Huq: [00:33:59] They got incorrect [00:34:00] matches.

News Archival: [00:34:01] Cops say a suspect stole beer from a CVS in New York City, and when they ran his face through their database, nobody popped up as a match. But a detective noticed the guy kind of looked like actor Woody Harrelson. So they tried running Harrelson's picture through their facial recognition system, and they got a few matches, and it even led to an arrest. Now, while the Georgetown University report shows that facial recognition has helped the NYPD crack about 2900 cases in more than five years, it also points to the possibility for [00:34:30] mistakes, saying using wrong data increases room for error. The NYPD also uses a technique which involves replacing facial images with a lookalike's, like Woody Harrelson's.

Hannah McCarthy: [00:34:38] The cops were using facial recognition software to track someone down who stole some beer. The image in the security footage was too pixelated. No problem, the cops said. We think that this guy looks like Woody Harrelson, so instead of using this hard-to-see image, we will just run the software to [00:35:00] match people to Woody Harrelson.

Nick Capodice: [00:35:04] Was Woody really stealing that beer, Hannah?

Hannah McCarthy: [00:35:06] No. And in fact, the cops did make an arrest using one of the matches they got out of this machine, but they also received other incorrect matches of people who looked like Woody Harrelson.

Aziz Huq: [00:35:19] That is not a problem about facial recognition technology's technical capacity or its specification. [00:35:30] It's a problem about how it's used. And we could multiply the flawed ways in which a technology was used; the catalog of flawed ways is probably limited only by our imagination about how stupid people can be.

Nick Capodice: [00:35:46] So I take it we need really good, broad rules for people using this technology, because people are going to do foolish things, and we need to anticipate that as best we can.

Hannah McCarthy: [00:35:58] Yeah, we're not just not [00:36:00] angels, Nick. We're also really not geniuses.

Nick Capodice: [00:36:04] Hey, Hannah.

Hannah McCarthy: [00:36:05] Yeah, Nick.

Nick Capodice: [00:36:07] Is anything sacred?

Hannah McCarthy: [00:36:10] Tell me about it.

Nick Capodice: [00:36:10] I know, but I do mean that in this case: like, is anything, any decision, something that should not come across machine eyes, or whatever, machine analysis?

Hannah McCarthy: [00:36:23] Well, Aziz did make the point that machine decision making is less variable than human judgment [00:36:30] and probably going to be more accurate. But that doesn't mean that we are going to always want a machine's help.

Aziz Huq: [00:36:37] You can imagine a world in which humans are good at making some subset of judgments, and that subset of judgments is really important. I don't know whether this is true in the world, but you can imagine. Um, you have a violence prediction tool. The violence prediction tool works really well for 90% of the population, but it turns out to work really badly, let's say, for women [00:37:00] who are in situations of domestic violence. Right. And you might say, well, look, you know, because that tool has this blind spot, and the blind spot is really important, and we can't section off the blind spot because we don't know in advance who those people are going to be, we don't use the tool at all. So there might be kind of practical reasons why you wouldn't use a tool because of something about the nature or the distribution of the errors it makes.

Nick Capodice: [00:37:26] Can I just ask one more human version of this question? [00:37:30]

Hannah McCarthy: [00:37:30] Yeah, go for it.

Nick Capodice: [00:37:32] Are there any decisions, regardless of how much more accurately they might be assessed by machines that should only be left up to human minds?

Aziz Huq: [00:37:45] The other thing is that maybe there are some decisions that you just never want to be made by a machine. Judges will often say, well, there are just decisions about what counts as the law and what doesn't count as the law, and that those are necessarily human decisions. [00:38:00]

Nick Capodice: [00:38:00] The Supreme Court would most definitely assert that claim.

Hannah McCarthy: [00:38:03] Yeah, but Aziz actually pushed this question a little. He basically said, you know, that he understands what they mean there when they say that deciding what is law is necessarily human.

Aziz Huq: [00:38:17] I recognize the force of those arguments, but I have a really hard time figuring out what I think of them, and here's why. One of the early conversations I had about [00:38:30] when do you have a right to have a human making decisions was with a colleague who's a woman who's from a non-Western background. And the colleague said to me, you know, while 30 years ago the person I would have married would have been selected by my parents and through matchmakers, today matchmaking happens through an algorithm; it happens through Bumble or whatever. Oh my gosh, I'm [00:39:00] so glad that we've moved from a world of human matchmakers to machine matchmakers, because that gives me a kind of agency that I didn't have before. I can see that. And I understand that there are arguments against, not Bumble in particular, but online dating.

Hannah McCarthy: [00:39:18] Aziz makes the point that picking a life partner is one of the most profoundly intimate decisions that you can make. And the first couple of steps, finding, connecting with, and interacting [00:39:30] with that person, are often left up to a machine. Not the whole process, but the screening process.

Aziz Huq: [00:39:40] And if you're willing to trust the question of who will be your intimate life partner to a predictive machine, what exactly is the core of decision making that you cannot trust to a machine? I give the example because I think it illustrates to me how hard the question is, but I genuinely [00:40:00] do not know the answer. It's a moral question, not an empirical question. It's a question where I continue to think I have some of the resources necessary to think it through, but I don't have all of them, and I don't really know what the answer is.

Nick Capodice: [00:40:20] A moral question about what, and who, we're going to let machines decide. All right, one [00:40:30] last question, Hannah. There are the one-off court cases, the cops and Woody Harrelsons, the administrators who want to do more with less. But is anybody actually taking in the bigger picture? Is the Constitution being interpreted anew for this new world order?

Aziz Huq: [00:40:50] I think we're seeing most of the important legal action occurring, not at the level of constitutions, but at the level of new statutes or similar [00:41:00] regulatory frameworks. The most crisp examples of those are in Europe, which has what's called the AI Act pending. China actually has a really dense and interesting set of regulations that at once aims to shore up Communist Party control, but at the same time is genuinely focused on, and genuinely makes strides on, issues such as the use of deepfakes, which I think is a [00:41:30] serious and gravely harmful phenomenon, and does so better than probably anything that you'll see in the United States in the near term. So you have two models of regulation, neither of which is constitutional, in Europe and in China. I think those are going to be more and more influential around the world. They'll kind of indirectly shape what Americans experience because many products are made for global markets.

Nick Capodice: [00:41:53] This comes back to the whole-world-doing-it thing, right? Like, the whole world will be [00:42:00] trying to access similar or the same technologies. And so the companies making it will probably, I guess, make tech that works with Chinese restrictions or European restrictions, and we'll buy it too. Though I will say the US sure does have a track record of having its own special version of stuff.

Hannah McCarthy: [00:42:21] Yeah, but it also has a track record of states stepping in where the federal government does not.

Aziz Huq: [00:42:27] Maybe states will start to fill in the [00:42:30] gap, maybe in particular California. California has been very aggressive on data privacy, interestingly, since you'd imagine that the presence of big data-hungry firms in California would lead the state to err in the other direction. But California seems to be pretty aggressive as a regulatory state. I expect we'll see more state-level responses to these problems. I expect we'll see, for example, more efforts to push bans on [00:43:00] facial recognition technology. I think that we'll see efforts to introduce due process rights, rights to a human decision in certain contexts. I think that we'll see more and more efforts to allow people to control their own data, particularly biometric data. But we'll see a patchwork in the US, and we'll see these spillovers from Europe and from China, leading to a very complicated and uneven [00:43:30] pattern of legal protections.

Hannah McCarthy: [00:43:35] All right, one last thing I do want to add, because of course, we know here at Civics 101 that there is one super quick route to rule making that evades Congress entirely. In October 2023, President Joe Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. He established eight guiding principles for AI policy. One: [00:44:00] it must be safe and secure. And this provision, by the way, promises to essentially label things as AI-generated so the public knows when they're consuming AI. Two: the government will promote responsible innovation, competition and collaboration because, hello, capitalism, it's not going anywhere. Three: workers will be supported.

Nick Capodice: [00:44:23] Uhhuh. Okay.

Hannah McCarthy: [00:44:23] But Biden even wrote the words collective bargaining in that section, aka unions, which seems [00:44:30] pretty serious. He also says that AI should not undermine rights, worsen job quality, or encourage undue worker surveillance.

Nick Capodice: [00:44:40] Oh, man, that's that's kind of spooky.

Hannah McCarthy: [00:44:42] Uh, it should not lessen market competition, introduce new health and safety risks, or cause harmful labor force disruptions. Countries are still full of people, Nick, and people are constituents, and constituents are political power. So yeah, we've got to think about people. All right. Four: [00:45:00] AI policies have to be consistent with the Biden administration's dedication to equity and civil rights. Basically, AI cannot be used to further denial of equal opportunity and justice. Five: consumers who use and interact with AI need to be protected against fraud, bias, discrimination, and privacy violations.

Nick Capodice: [00:45:19] Yeah, honestly, that one seems like the one that could be the most pervasive. Hannah.

Hannah McCarthy: [00:45:23] Um, uh, six: Biden doubles down. Despite AI, we need to protect privacy and [00:45:30] civil liberties. The government, this order says, will make sure that data gathering is legal, essentially. Seven: this one is so interesting. As the government uses AI, which it will, it will hire and train people the right way to make sure that AI is safe and understood. And eight: USA number one. Go on. Biden said, quote, the federal government should lead the way to global societal, economic and technological progress as [00:46:00] the United States has in previous eras of disruptive innovation and change, unquote. To him, this means being ahead of the curve and promoting AI's regulation around the world.

Nick Capodice: [00:46:12] Just keep on spreading that democratic promise, I suppose.

Hannah McCarthy: [00:46:17] So yep. That's it. Of course, that's an executive order. And what's that thing about executive orders, Nick?

Nick Capodice: [00:46:26] They just go away when a new president doesn't want them.

Hannah McCarthy: [00:46:30] Bingo. So stick around. We'll be here watching and waiting and telling you what the machines are doing and whether we are doing anything about it. This episode was produced by me, Hannah McCarthy, with Nick Capodice. Christina Phillips is our senior producer. Rebecca Lavoie is our executive producer. Music in this episode by Wave Saver, Christoffer Moe Ditlevsen, Yi Natiro, Christian Nanzell, William Benckert, Rolla [00:47:00] Coasta, Oomiee, Lexica, HiP CoLouR, Eight Bits and Quarter Roll. You can find everything at our website, civics101podcast.org. That's transcripts, all of our episodes, how to connect with us, everything. Civics 101 is a production of NHPR, New Hampshire Public Radio.


 
 

Made possible in part by the Corporation for Public Broadcasting.

Follow Civics 101 on Apple Podcasts, Spotify, or wherever you get your podcasts.

This podcast is a production of New Hampshire Public Radio.