The perils and promise of AI

In the last year, programs like ChatGPT, Dall-E and Bard have shown the world just how powerful artificial intelligence can be. AI programs can write hit pop songs, pass the bar exam and even appear to develop meaningful relationships with humans. 

This apparent revolution in AI tech has provoked widespread awe, amazement — and for some, terror. 

But as Brown Professor of Data Science and Computer Science Suresh Venkatasubramanian explains on this episode of Trending Globally, artificial intelligence has been with us for a while, and a serious, nuanced conversation about its role in our society is long overdue. 

Suresh Venkatasubramanian is the Deputy Director of Brown’s Data Science Institute. This past year, he served in the Biden Administration’s Office of Science and Technology Policy, where he helped craft the administration’s blueprint for an “AI Bill of Rights.” 

In this episode of Trending Globally, Dan Richards talks with Suresh about what an AI Bill of Rights should look like and how to build a future where artificial intelligence isn’t just safe and effective, but actively contributes to social justice. 

Read the blueprint for the AI Bill of Rights

Learn more about Brown’s Data Science Institute

Learn more about the Watson Institute’s other podcasts


[MUSIC PLAYING] DAN RICHARDS: From the Watson Institute for International and Public Affairs at Brown University, this is Trending Globally. I'm Dan Richards. In the last year, we've seen just how powerful artificial intelligence has become.

Programs like ChatGPT can write a college essay and pass the bar exam. Programs like DALL-E 2 can create incredible images based on any prompt a user gives it. It's all been enough to make many people ask lots of questions about the future. The future of education, the future of work, the future of humanity.

SUBJECT 1: It has the potential of a civilizational destruction.

SUBJECT 2: You have runaway AI that would become more intelligent than humans and we wouldn't be able to turn the switch off.

DAN RICHARDS: It feels like all of a sudden science fiction has become reality. But as our guest on this episode explains, despite the newness of this moment, artificial intelligence has actually been with us for a while. And a serious, nuanced conversation about its role in our politics and society is long overdue. As he puts it, the current freak-out you might be hearing on the news right now--

SURESH VENKATASUBRAMANIAN: It's the chatter that happens when the tip of an iceberg hits the surface, not when an iceberg is formed. It's been around for a long time.

DAN RICHARDS: Suresh Venkatasubramanian is a Professor of Data Science and Computer Science at Brown University and Deputy Director of Brown's Data Science Institute. This past year he served in the Biden administration as part of the Office of Science and Technology Policy.

He helped the administration craft a blueprint for an AI Bill of Rights, which was published this past October. It's a document that will guide the administration and future lawmakers as they navigate the uncharted waters of policymaking around artificial intelligence.

And on this episode, I talked with him about how to ensure that artificial intelligence works for all of us now and in the future, and about how we should think about AI from someone who actually understands it. Professor Suresh Venkatasubramanian, thank you so much for coming on to Trending Globally.

SURESH VENKATASUBRAMANIAN: Thank you for having me.

DAN RICHARDS: Maybe before we get into more of the recent work you've been doing, if we could define a couple terms for people who aren't super familiar with the details of all this. So how would you define AI?

SURESH VENKATASUBRAMANIAN: Oh, boy. Part of the battle right now is exactly that, how to define it. So maybe what I would say is this: AI is an academic discipline. AI, artificial intelligence, is a cleverly chosen name. As an academic discipline, it is about making computers or computer systems or robots that can fundamentally either do what people do or behave the way people behave, in ways that we think of as human behavior.

So systems that can see the way we do-- computer vision. Systems that can talk the way we do and understand language-- natural language processing. Systems that can plan the way we do-- robots, Roombas. Things that can do what we do is broadly what you think of as AI.

DAN RICHARDS: I have a feeling maybe all of these definitions are going to be big asks to give condensed versions of, but let's go with the next one, algorithms.

SURESH VENKATASUBRAMANIAN: So an algorithm is very simply a kind of recipe. It's a set of instructions to perform a task, and it should be trying to solve some problem. So if your problem is, how do I bake a cake?

A recipe would be, well, here are the inputs, the ingredients. And this is how much of each you need. Step one, mix the flour and the other things together. I don't know how to make cakes and so I'm not going to do-- to even try. But mix this up together, turn the oven on at this temperature, put it in for so much time. It's a recipe. An algorithm is just like that.

And it's important to say it's a recipe because it doesn't depend on which computer you're using or which architecture-- whether you're using Windows or Mac or a phone. It's a recipe. Just like a recipe printed on paper, every cook will use their own instruments to implement it. Same thing: an algorithm is a recipe that's implemented in different settings by different kinds of machines.
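A tiny, invented example can make the recipe analogy concrete-- a "recipe" for finding the largest number in a list, which runs the same way on any machine:

```python
def find_max(numbers):
    """Recipe: start with the first item, compare it with each
    remaining item, and keep whichever is larger. The steps are
    the same no matter which machine follows them."""
    best = numbers[0]
    for n in numbers[1:]:
        if n > best:
            best = n
    return best

print(find_max([3, 7, 2]))  # -> 7
```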

DAN RICHARDS: And before we get to the third term I'd like us to define, how are algorithms and artificial intelligence related?

SURESH VENKATASUBRAMANIAN: I think of AI as the mission: make systems that function like humans in various ways. But the expression of that intent is through an algorithm, because the computer system only understands the language of algorithms. An algorithm is a base object. It's the language these systems speak. Whether they speak AI, whether they speak compilers, whether they speak databases-- they all speak algorithms.

DAN RICHARDS: And the final for now, machine learning.

SURESH VENKATASUBRAMANIAN: Machine learning is a subfield of artificial intelligence. So it's a piece of it that tries to design algorithms that can learn from data. For example, a machine learning algorithm might be asked, hey, here are a bunch of questions on the SAT and here are the correct answers for each of these questions. Build me an algorithm that given a new question on the SAT, would produce the correct answer for it.

Now, that's actually a very hard problem-- much harder-- and there are simpler versions of it. But fundamentally, the learning here is the idea that you can learn from past data to try and predict what's going to happen in the future.
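A minimal sketch of "learning from past data" (a toy illustration, not how any real system works): store labeled examples, then answer a new query with the label of the most similar past example.

```python
def train(examples):
    # The "model" here is just the stored (features, label) pairs.
    return list(examples)

def predict(model, query):
    # Answer a new query with the label of the closest past example.
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(model, key=lambda ex: distance(ex[0], query))
    return label

# Two past examples with invented feature vectors and labels.
model = train([((0, 0), "wrong"), ((10, 10), "correct")])
print(predict(model, (9, 8)))  # -> correct
```

Real machine learning generalizes far beyond this kind of lookup, but the shape is the same: past data in, predictions about new cases out.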

DAN RICHARDS: All right, wonderful. And there won't be a quiz for listeners, but I think that might be helpful as we dive into this whole topic a little bit more. Before we get into the stuff that's really captured people's attention over the last year or so, since programs like ChatGPT became widely available, I want to go back a little to the types of artificial intelligence and algorithms that already underlie our daily life. What are some examples of artificial intelligence, or algorithms used in ways adjacent to artificial intelligence, in our daily lives?

SURESH VENKATASUBRAMANIAN: It would frankly be easier for me to try and find places where they are not being used.

DAN RICHARDS: To illustrate the point, Suresh took me on a tour through a rather typical American morning routine in the year 2023.

SURESH VENKATASUBRAMANIAN: Imagine you get up in the morning and look at your newsfeed on Google or something. There is an algorithm there that is learning from which articles you've clicked on in the past, and it's trying to supply you with articles it thinks you're interested in. Now, I used the word "thinks"-- I'm lying. These systems don't think. But the predictive model has been trained on past data from you to produce a new article that you might be interested in.

DAN RICHARDS: After scrolling on your phone for a little too long, you finally get up. Now getting out of bed, that doesn't involve AI, not yet at least. But after that--

SURESH VENKATASUBRAMANIAN: Maybe listen to some music while you're getting ready for work. You're playing Spotify, you're playing some kind of recommended list. Spotify has an algorithm based on things you've clicked on and it's shown interest in the past.

DAN RICHARDS: Next-- and this is where Suresh's imaginary routine and my daily life sharply differ-- while you're getting your breakfast ready, maybe your Roomba is out there cleaning your room.

SURESH VENKATASUBRAMANIAN: Your Roomba is a robot that's using AI to plan a path through your living room, having built a map of your living room, and then clean the area and then go back to its dock and recharge itself.

DAN RICHARDS: After that you leave your house.

SURESH VENKATASUBRAMANIAN: You get into your car, you pull up Google Maps. That's an algorithm. It's an algorithm that decides how you should get to work based on current traffic patterns.

DAN RICHARDS: You get the idea. Even before computers could write your college essay or pass the bar exam or do whatever they're able to do as of the time you're listening to this, AI technologies and the algorithms that power them have been underpinning our daily life for a while. Suresh did not enter computer science looking to study algorithms and the effects they have on societies and individuals. He certainly didn't get into the field to work on an AI Bill of Rights. As he puts it--

SURESH VENKATASUBRAMANIAN: So I always joke that I was a very unwoke computer scientist. I just did the stuff that I thought was interesting. I'm still very fascinated by the field of computer science as a whole. And, specifically, my training was in theoretical computer science, which is the mathematical basis of computer science-- the area I still find the most fascinating and love the most.

And I worked in a subfield of theoretical computer science called computational geometry, where you look at how to design algorithms for geometric objects. A lot of these algorithms show up in graphics, they show up in visualization, they show up when you're manipulating geometry. There's a lot of very clever geometry underlying a Pixar movie, for example, and those kinds of things.

DAN RICHARDS: The underlying technology Suresh was pioneering, it actually shared a lot with other types of technologies that were made for making predictions. As Suresh put it--

SURESH VENKATASUBRAMANIAN: We think about data as geometry. If you think of the way I would describe you in a computer, I would assign a series of numbers to you. One number might be your age. One number might be your birthday. One number might be the number of books you've read in the last two weeks. Another number might be a 1 if you've listened to this particular album by Taylor Swift, and so on. So I have a whole bunch of numbers that describe you in some database.

Sort of like how we have latitude and longitude as coordinates to describe where we are-- or latitude, longitude, and altitude-- I can describe you by-- I don't know-- 150, 200,000 numbers, let's say, for a particular problem. I can think of those numbers as representing the coordinates of a point in-- this will blow your mind-- a 1,000-dimensional space, and now I can do geometry. And finding patterns is fundamentally a geometry problem.

At some level, all of machine learning is based on that idea: you think of people as points in a very, very high-dimensional space, and what you're trying to do when you find this predictor is find a pattern in that space. So in some sense, theory led me to geometry, geometry led me to data, data led me to machine learning, and that led me to where I am.
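The "people as points" idea can be sketched in a few lines (the people and feature values here are invented): describe each person by a short list of numbers, then measure similarity as geometric distance.

```python
import math

# Each person is a point: [age, books read recently,
# 1 if they've played a particular album].
alice = [34, 5, 1]
bob   = [36, 4, 1]
carol = [19, 0, 0]

def euclidean(p, q):
    """Straight-line distance between two points, in any dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Pattern-finding as geometry: Alice sits much nearer to Bob
# than to Carol in this 3-dimensional space.
print(euclidean(alice, bob) < euclidean(alice, carol))  # -> True
```

The same distance function works unchanged on vectors with thousands of coordinates, which is where the "high-dimensional space" picture comes from.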

DAN RICHARDS: Suresh was working in this field of geometry around the same time that algorithms were finding homes in every aspect of our lives. But Suresh and many other people started to see major issues with how these technologies were being used. To understand what they saw, you first have to remember the fundamental way these systems do what they do.

SURESH VENKATASUBRAMANIAN: The first thing to keep in mind is that since most of these algorithms are based on machine learning-- which means they collect data and find patterns in it-- they will reproduce whatever patterns we train them to find.

DAN RICHARDS: But what if it finds patterns or learns lessons from the data that we don't want it to learn? Here's a theoretical example. Let's say it's the 2010s and people are trying to use algorithms for everything, and you decide to create an algorithm to help banks decide who should qualify for a loan.

SURESH VENKATASUBRAMANIAN: You need enough data-- so you collect it from many, many years.

DAN RICHARDS: Data of who received past loans or who defaulted on previous loans, who paid them back on time.

SURESH VENKATASUBRAMANIAN: This data comes from society. It comes from our world. It comes from a world where there's already lots of biases, for example, built into who gets a loan and who doesn't. An algorithm that's trained on this data will be very good at reproducing and amplifying those very biases.

Those biases could be gender biases. They could be demographic biases. They could be biases towards people who already have a lot of provable wealth guarantees that make them a reliable investment. And these algorithms might be biased against people who are starting out in the world and who don't have a huge track record of credit. So the algorithms merely reflect, amplify, and distort what's in the data.

DAN RICHARDS: In other words, if the data reveals patterns that are racist or sexist or represent any type of bias, then the algorithm will learn to replicate those biases. And just as people started to notice that these biases were built into the technology, they also started to find that they are really hard to remove from the technology.

SURESH VENKATASUBRAMANIAN: One interesting example of this that came up a few years ago, Amazon was trying to build a tool to help them with hiring. And so they did the thing that you would do, you collect a lot of data on people you've hired, look at how well they perform, and you run this through a system that is trained to produce a predictor.

DAN RICHARDS: This tool would then recommend hires based on the data of past successful applicants. But the people who designed it started to notice something.

SURESH VENKATASUBRAMANIAN: They discovered that the system had this weird bias against giving female candidates a high score.

DAN RICHARDS: So what was going on? Well, here's what they realized. First off, the system was scoring candidates based on how well they matched previously successful candidates in Amazon's history. In other words, the algorithm was trained on people Amazon had hired before.

SURESH VENKATASUBRAMANIAN: But it turns out Amazon hadn't done a very good job hiring women in the data set they were collecting. So they didn't have many good examples of women candidates who were actually hired, for whom they had data.

DAN RICHARDS: Which is troubling in itself. But here is the craziest part. The people who built the system had removed gender from the training data that the algorithm used, possibly for this exact reason. But that didn't stop the system from replicating the bias. It's kind of clever in its own sad way.

SURESH VENKATASUBRAMANIAN: It turns out that there was enough data on the CV-- things like, oh, I went to Smith College, which we know is a women's college, even though it's not explicitly labeled as such. And the system started to pick up on little details like that and use them to essentially infer gender.
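Proxy leakage of this kind is easy to sketch (the resume rows below are invented, illustrative only): gender never appears as a feature, yet a score learned from historical hires still splits sharply along a gender-correlated field.

```python
# Invented hiring history. Gender has been removed from the
# features -- but "college" correlates with it.
resumes = [
    {"college": "Smith College", "hired": 0},
    {"college": "State Tech",    "hired": 1},
    {"college": "Smith College", "hired": 0},
    {"college": "State Tech",    "hired": 1},
]

def hire_rate(college):
    # A trivially "learned" rule: score a candidate by the
    # historical hire rate of their college.
    rows = [r for r in resumes if r["college"] == college]
    return sum(r["hired"] for r in rows) / len(rows)

print(hire_rate("Smith College"))  # -> 0.0
print(hire_rate("State Tech"))     # -> 1.0
```

Dropping the sensitive column is not enough; any correlated field can stand in for it.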

DAN RICHARDS: This is just one example of how difficult it can be to keep an algorithm from replicating the biases that exist in our society.

SURESH VENKATASUBRAMANIAN: So this idea that you can just collect some data-- because we have so much data around-- and then build a model from it presupposes that the data is reliable in many, many dimensions in which it tends not to be.

DAN RICHARDS: Thankfully, Amazon caught this problem before they actually implemented the technology. But over the 2010s, more and more companies and even many government organizations started using this type of technology, and not just for hiring.

SURESH VENKATASUBRAMANIAN: There are algorithms deciding what kind of insurance you should get and how much premium you should pay. There are algorithms that decide whether you should be released on bail if you've been arrested for something, or what kind of sentence you should be given, or whether you should be given parole.

There are algorithms that decide what kind of health care you should get and what kind of treatments might be applicable to you. There are algorithms that describe what kind of care hospitals should provide and the diagnostics of whether you have a disease or not. The list goes on and on. And we don't see most of them.

DAN RICHARDS: Like I said, ideas of fairness and transparency in how computer algorithms are used on people and communities-- these were not issues Suresh was particularly passionate about throughout most of his career in computer science. But in 2012, he had a conversation that started to change that.

SURESH VENKATASUBRAMANIAN: It was the beginning of the deep learning revolution. We weren't quite at the point where everyone was talking about AI, so it still seemed like it was a bit in the future. I was talking to a sociologist-- hurray for interdisciplinary conversations-- who mentioned one of the disparate impact cases that the Supreme Court held up to basically formulate the doctrine of disparate impact. The idea being that if a system that looks neutral has a disparate impact, that could be a cause for concern, and a discrimination investigation should look into it.

DAN RICHARDS: Disparate impact is a legal concept that has been used to combat biased lawmaking across a wide range of American society, from housing law to labor law. And this idea that bias needs to be addressed and rooted out even if it's not explicit in the laws we read and write-- even if it's just in the effects of those laws-- resonated with Suresh.

SURESH VENKATASUBRAMANIAN: This is a way to think about algorithmic bias of a kind. That it's not necessarily only intent-- although there is that-- but there's also the impact of using opaque systems to make predictions-- like we have described-- that could have discriminatory outcomes that we need to understand better. And one thing led to another. I've been thinking about this since then.

DAN RICHARDS: Suresh started to work more and more at the intersection of computer science, data, and civil rights. He became a rare and increasingly valuable type of expert in our society: he is fluent in concepts like disparate impact, and he understands how artificial intelligence works. In 2021, he was appointed to a subdivision within the White House Office of Science and Technology Policy called the Science and Society division.

SURESH VENKATASUBRAMANIAN: We had sociologists, we had political scientists, we had two computer scientists as well.

DAN RICHARDS: And in 2022, they released a blueprint for an AI Bill of Rights. Now, this would not be a Bill of Rights for robots. We're not there yet. Instead, it's more of a declaration of the rights that individuals and communities should have in the United States with respect to how AI is used in ways that affect them.

SURESH VENKATASUBRAMANIAN: And so the idea of the AI Bill of Rights, much like the original Bill of Rights, was to say, what are the protections that people need to make sure that the systems that we put out there-- whatever they might be-- do right by us?

DAN RICHARDS: The document is made up primarily of five overarching values or principles.

SURESH VENKATASUBRAMANIAN: First of all, systems should be safe and effective. Which means if we put a system out there in the world that affects people in any material way, it should be safe, it should not hurt us, and it should actually do the thing it claims to do. It's a reasonable thing.

DAN RICHARDS: And yet according to experts like Suresh--

SURESH VENKATASUBRAMANIAN: You'd be surprised how many systems don't have that. They're not safe and they're not effective. They literally make claims about what they can do that are not true. So that's number one.

Number two, we shouldn't be discriminated against by algorithms. We should not have algorithms that propagate and amplify biases of various kinds. Number three, machine learning is fueled by data. Data is being used willy-nilly, without any kind of controls or checks. That needs to stop. Our data is ours. If at all it should be used, it should be used minimally, sparingly, and only for specific purposes-- and not just sold to the highest bidder.

Number four, if we don't even know algorithms are being used, we can't talk about them, we can't talk about our concerns. So we should know. And if we don't understand how they're working, we can't do anything. We need to know that they exist and that there are explanations for them. Number five, systems will fail. My phone will crash. Things happen, that's OK. We're not claiming we need 100% certainty. But we need backups.

DAN RICHARDS: And part of that is that when systems do fail--

SURESH VENKATASUBRAMANIAN: We need a way for people to appeal a decision. We need to backup systems. If you're going to check in at the TSA and they only have an automated system to scan your face, and the lighting isn't great, your skin is too dark, and the system doesn't work, you can't be told you can't fly.

You need a backup system with someone else who can inspect and make sure you have the right documentation. So this idea of human alternatives, of recourse, and consideration to make sure that we are still being treated as people and not as high dimensional vectors is an important part. And these are the five things.

DAN RICHARDS: How do you envision these rights manifesting? Does it mean laws, regulatory agencies? Like, what are ways to ensure that these rights are maintained even in private companies?

SURESH VENKATASUBRAMANIAN: All of the above. You need laws. You need regulatory agencies to have the capability, the resources, and the willingness to enforce and go after entities who are not complying. But, again, a lot of that depends on how their scope of activities is circumscribed by Congress and by the Supreme Court, which, again, is a topic of discussion right now.

So laws are important. Regulatory authorities are important. State laws are important. We all know that if California and Texas and the big states make some laws, everyone else is essentially forced to comply, because they're the biggest clients, the biggest customer bases. So having large states do something, and having a lot of states do something, is also very effective.

DAN RICHARDS: But Suresh thinks we also need to push for change at a deeper level.

SURESH VENKATASUBRAMANIAN: We don't just need laws, we don't just need market incentives, we don't just need voluntary action. We need a culture shift in expectations. If we relied on laws alone to prevent people from breaking into buildings, we would need way more police than we have. But there's a general sense that we don't go breaking into buildings.

DAN RICHARDS: I wonder if we could put these principles in this Bill of Rights onto that example you were discussing about hiring at Amazon. What are some ways that this Bill of Rights, if enacted, could correct for those issues you described?

SURESH VENKATASUBRAMANIAN: So let's take the first one, safe and effective. In this case, Amazon did what we'd call pre-deployment testing. They tested it and found it was failing in some way, and at that point they didn't deploy it. That's a success.

More systems should be under that kind of guidance. That they should be tested before they're deployed. We are not the guinea pigs for these systems. We don't let people put out drugs in the market without testing, we shouldn't let systems go. So that was good. They get a point for that.

DAN RICHARDS: Point for Amazon.

SURESH VENKATASUBRAMANIAN: Yeah. Number two, data. Where did they get the data from? Whose data was it? Was it bought from a broker? How was it collected? Maybe it was internal data. If it's internal data, and it's data from their people, did their people give consent? Were they OK with it being used? And again, because they only collected data from their own people-- going back to safety and effectiveness-- was the system really safe and effective if it was built on skewed data?

DAN RICHARDS: The other parts of the Bill of Rights don't really apply in this Amazon case since the system Amazon created wasn't actually deployed for wider use.

SURESH VENKATASUBRAMANIAN: But if it had been deployed, and if I had applied for a job and hadn't gotten it, I would want to know why. Is there an explanation? Did I even know that an algorithm was being used to screen me? The right to notice.

And if I was denied because of some score the system evaluated me on, do I have a chance to ask for recourse? Do I have a chance to ask, well, can I understand how you came up with the score? Is there a way for me to verify that the score wasn't computed incorrectly, based on incorrect data? Is there a process for getting an alternative, or having some recourse? And so on.

In this particular example, one might say, well, they're a private company, they can do what they want. Up to a point. There are laws about hiring and how you conduct hiring as well. So I would say the fact that it didn't get deployed means we kind of succeeded early on. And I think that's the point of the first principle: don't deploy things without testing them, because you never know what you'll discover once you test them.

DAN RICHARDS: One of the challenges of crafting the AI Bill of Rights was that, as we've seen in this episode, artificial intelligence is used in so many different ways in our society and affects so many different parts of our daily lives. Which actually led to an interesting decision by the people who wrote the blueprint.

SURESH VENKATASUBRAMANIAN: One of the little secrets about the AI Bill of Rights is that we don't talk about AI in it. What we say in it is that we are concerned with automated systems that have a material impact on people's civil rights and civil liberties, opportunities for advancement or access to services. If you use a spreadsheet to do that, the Bill of Rights wants to have a word with you.

DAN RICHARDS: The blueprint was released by the White House in October of 2022. In some ways, it could not have been better timed. The month before, DALL-E 2, the image-generating AI program, was made public. The month after, in November of 2022, ChatGPT was released. In March of 2023, Google's Bard was released. In other words--

SURESH VENKATASUBRAMANIAN: We are in the middle of an AI arms race.

DAN RICHARDS: Which according to some people, including many experts, is at risk of spinning out of control.

SUBJECT 3: Hundreds of executives, researchers, and engineers from top AI companies are saying in a statement, quote, "Mitigating the risk of extinction from AI should be a global priority, alongside other societal scale risks such as pandemics and nuclear war."

DAN RICHARDS: How did the adoption of these new technologies and the sort of panic surrounding them change the issues Suresh has been working on for years? To understand that, let's take a quick step back and look at how things like ChatGPT or DALL-E 2 differ from the types of algorithms and technologies that we've been talking about so far in this episode.

SURESH VENKATASUBRAMANIAN: So systems like ChatGPT and DALL-E and Midjourney are systems that are distinct from prediction algorithms because their goal is to generate. In other words, with these kinds of systems, you're not saying here is a person's profile. Tell me if they will do something. You're saying, give me a picture of a frog playing with a cat. Or I'm going to ask you a question, give me an answer that sounds plausible.

ChatGPT is designed to be a conversational engine that responds using language-- English-language responses-- in a way that is compelling and plausible. Which it is. I once asked ChatGPT to write me some AI regulation legislation, and it did a pretty good job of it.
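The generate-versus-predict distinction can be sketched with a toy generator-- a two-word Markov chain that samples new text resembling its training text. This is a vast oversimplification of how systems like ChatGPT actually work, but it captures the "produce something plausible" goal (the corpus is invented):

```python
import random

corpus = "the cat sat on the mat and the cat ran".split()

# "Train": record which word has followed which.
chain = {}
for prev, nxt in zip(corpus, corpus[1:]):
    chain.setdefault(prev, []).append(nxt)

# "Generate": repeatedly sample a plausible next word, falling
# back to the whole corpus if a word has no recorded successor.
random.seed(0)
word, out = "the", ["the"]
for _ in range(5):
    word = random.choice(chain.get(word, corpus))
    out.append(word)
print(" ".join(out))
```

A predictor maps an input to a label; this instead samples fresh output that merely resembles what it was trained on-- which is why "accuracy" stops being the right yardstick.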

DAN RICHARDS: And with these generative systems as opposed to the predictive models we've been talking about--

SURESH VENKATASUBRAMANIAN: You're not talking now about accuracy or truth or reliability. You're talking about generative authenticity, which itself is a loaded term, because these systems essentially copy and absorb content from others and then reproduce it. So authenticity is a loaded term there.

DAN RICHARDS: So how does the AI Bill of Rights then speak to these types of technologies?

SURESH VENKATASUBRAMANIAN: So to the extent that these systems are used to have a material impact on people's civil rights and civil liberties, opportunities for advancement, or access to services, the Bill of Rights would consider them within scope. So, for example, if these systems are used to control or infiltrate in any way how you post things on social media-- if they're impinging on your free speech in some way-- then this could be an issue.

If a system like ChatGPT was used, for example, to process health records and help a doctor make decisions about your care-- OK, now we're getting into territory where this could be relevant. If ChatGPT is used as a conversational tool for you to have fun conversations with, then that's fine, go ahead and do it. I don't think the Bill of Rights or any system would be concerned about it.

DAN RICHARDS: And as for the concern that these types of technologies are going to leave us all unemployed.

SURESH VENKATASUBRAMANIAN: The concerns around things taking away people's jobs, I think that's always been a concern with technology. I don't think that's particularly new. It just seems more pressing right now because everything seems more pressing right now, but it's not a new concern. It's always been a concern.

DAN RICHARDS: And I think it's especially one of these things where it's maybe the first time people who write for a living are feeling concerned. And they're the ones who write everything we read about technology, so.

SURESH VENKATASUBRAMANIAN: And that's fine. That's reasonable, I think, for people-- I mean, and, yes, if you have the pen, you can talk about whether the pen is going away. But it's a real concern. Don't get me wrong.

And I think like with any disruptive new technologies, whether it's ChatGPT or AI generally, what people don't know, I think, is that all the years that AI slowly crept into the world, behind the curtain, making decisions about us, the arguments there have always been, oh, these systems are more accurate. Oh, they can work at scale.

There's always been a concern about, essentially, taking away people's jobs and replacing them with an automated system. It's just become much more salient now because everyone is seeing this. Everyone can play with ChatGPT. Everyone can imagine.

DAN RICHARDS: What concerns you most when you think into the near future about AI's role in society? Whether it's this type of ChatGPT learning or algorithms, what keeps you up at night?

SURESH VENKATASUBRAMANIAN: Two opposing sentiments worry me at the same time. Either we won't do anything, or we'll do something that will be totally wrong. So I think both these things worry me. The first worries me for obvious reasons. The second worries me right now especially, because there's so much of an overcorrection-- as you mentioned-- to the new shiny object, the tip of the iceberg, that we're not paying attention to the iceberg.

And I worry that too much overindexing on policies around ChatGPT will miss the mark in two years when the next iteration of AI appears, while having failed to capture this moment we're in right now where there's potentially a political will to do something about AI more broadly and the effects it's having in society.

DAN RICHARDS: So the worry is that we might focus, but think too small, or too specifically, about AI.

SURESH VENKATASUBRAMANIAN: Too specific, and then the moment will pass, because these things only happen at the right moments. For years many of us have been struggling in the trenches trying to get traction on these issues. We're in a moment right now where people are paying attention. And these moments don't come very often.

DAN RICHARDS: And the flip side, what are you most excited about with the future of this technology?

SURESH VENKATASUBRAMANIAN: I mean, ChatGPT is cool. It's very cool. The things you can do with DALL-E are just cool. Just as a computer scientist, it's like, whoa, this is amazing stuff. How does it do it?

DAN RICHARDS: Suresh also thinks that while this current moment might be intimidating, there's also tremendous potential.

SURESH VENKATASUBRAMANIAN: We have a chance here to change and expand our imagination of what computer science can be. It's hard because to do that we have to connect as computer scientists with people, something that computer scientists are traditionally very bad at.

It's a joke in the field: if I wanted to talk to people, I wouldn't have gone into computer science. But, in fact, I often argue that by understanding the different needs that people have, you get all kinds of new problems to study and solve, which is fun.

DAN RICHARDS: A lot of what we've been talking about today and the principles in the AI Bill of Rights seem to be designed around making sure AI isn't used in ways that harm people or communities. And I guess I wonder, for my final question, do you see a role for AI in actively making a more just society and in advocating for social justice?

SURESH VENKATASUBRAMANIAN: There absolutely is. And coming back to Brown as a computer scientist, that is one of the questions I want to think about. Not only to identify concerns with the deployments that are out there-- which is important to do, and which we've been doing for a long time-- but to provide an alternative way forward. And this is where I think computer science holds the most potential excitement, but we have to be willing to imagine a different way of doing our work.

You asked this question about advocating for social justice. There was a group that put together-- so there are all these crime heat maps and so on. This group, almost as a joke, put together a white-collar crime map. Like, across the country, where is the highest prevalence of white-collar crime? You'd have this big red blotch in Manhattan, for example, near Wall Street.

It was a joke. But it was also an illustration that we don't have to only ask where the drug crime is, or where the burglaries and murders are. We can ask all kinds of different questions. We build risk assessments to judge whether people should be released on bail.

One group said, can we build a risk assessment to decide which judges are likely to put more people in jail versus release them on bail? We can do all of these. The technology in itself is not limiting. We have to expand our imagination, our willingness to ask different kinds of questions. We have to make the world we want. We can't just wait for it to happen.

DAN RICHARDS: Professor Suresh Venkatasubramanian, thank you so much for coming on to Trending Globally.

SURESH VENKATASUBRAMANIAN: It was my pleasure. Thank you very much.

DAN RICHARDS: This episode of Trending Globally was produced by me, Dan Richards. Our theme music is by Henry Bloomfield. Additional music by the Blue Dot Sessions. If you want to read the AI Bill of Rights, we'll put a link in the show notes.

If you like Trending Globally, make sure you subscribe to us wherever you listen to podcasts. And if you already are subscribed, please leave a rating and review. It really helps others find us. And better yet, tell a friend about us.

If you have any questions, comments, or ideas for guests or topics, send us an email at trendingglobally@brown.edu. Again, that's all one word, trendingglobally@brown.edu. We're releasing episodes a little less frequently over the summer, but we will be back in August with a new episode of Trending Globally. Thanks for listening.


About the Podcast

Trending Globally: Politics and Policy
The Watson Institute for International and Public Affairs

About your host


Dan Richards

Host and Senior Producer, Trending Globally