AI Horizons: Ethics, Risks, and the Road Ahead

We’re at a historic inflection point in AI regulation, but from algorithmic bias to privacy issues, the ethical concerns grow. In this panel, moderated by WIRED’s Khari Johnson, we explore how tech companies and lawmakers are examining future existential risks and building safety measures in a rapidly evolving technological landscape.

Released on 12/07/2023

Transcript

Hey, there's two people. [chuckling]

Well, it's my great honor to be here today

with two women whose advocacy

and research have contributed heavily

to my work over the course

of the better part of the past decade

covering how artificial intelligence can harm humanity.

Their efforts continue to inform how policymakers

and regulators in the U.S. and abroad,

you know, are taking steps to ensure

that AI does not discriminate or violate human rights,

and to avoid the reification of power.

Dr. Joy Buolamwini is founder

of the Algorithmic Justice League

and author of the new book,

got a copy here, signed copy,

thanks to my partner, Kaylee, Unmasking AI.

She's also co-author of Gender Shades.

Gender Shades was a research project which concluded

that commercially available face recognition algorithms

misidentified women with dark skin more often

than other people.

That research and Dr. Joy's advocacy

led to bans on face recognition

in several major U.S. cities, including San Francisco.

Margaret Mitchell is Chief Ethics Scientist at Hugging Face.

Previously, she worked at Microsoft Research,

and was the founder and co-lead

of the Ethical AI team at Google.

Her work includes the stochastic parrots research paper,

which is critical of the negative impact of large language models

on individuals, the environment, and society.

Time Magazine named her, earlier this year,

one of The Most Influential People of 2023.

Please join me in welcoming our panel.

[audience applauding]

Margaret, or excuse me, Meredith Whittaker,

the President of Signal Foundation,

was scheduled to join us today,

but she was feeling under the weather and unable to attend.

Margaret, something that stood out to me

in getting to know you better and other members

of the ethical and responsible AI research community

is hearing stories

about how there was sort of a seminal event

that led people to feel motivated to enter this field.

What was that moment, or series of seminal moments, for you

that woke you up to the urgent need

to address how AI can harm people?

Yeah, that's interesting.

I hadn't realized that most people had a seminal moment.

But for me, I was working at Microsoft Research

on a computer vision-to-language generation system,

so translating images to language

in order to describe them or tell stories about them.

And I gave it a sequence of images about a massive explosion

that had happened at an oil factory,

where 30 workers were wounded, actually.

So I gave it the sequence

and said sort of, you know, Tell me about this.

I'm anthropomorphizing a bit.

And it took a look at the sequence.

It saw the view

that the photographer had taken in the image,

where there were purples and pinks in the sky

with this massive explosion, all this fire,

all this, you know, harm, all this intensity.

[Khari] Yeah.

And it said, This is a beautiful view.

This is awesome!

And I realized at that moment

that it was completely disconnected

from the realities of what an explosion would mean.

And that's because of the data.

Because the data had lots of, like, pretty sunsets, right?

Everyone takes pictures of beautiful sunsets

or awesome sunsets.

And so it had learned

that if you see these purples and pinks,

and these, like, really intense colors in the sky,

it's something beautiful and awesome.

And it made me realize,

people weren't thinking about the connection

between what was in the data

and what the system would learn and say.

And the system was very disconnected

from the realities of human life.

And no one was thinking about it at the time.

Everybody was just trying

to make this technology get better, right,

like, not paying attention to the data.

So I was sort of like,

Well, if no one's thinking about this, I guess I have to.

And then, I turned a page

to really examining the kinds of biases

that are in data and how they affect people.

[Khari] And Dr. Joy, what was that moment like for you?

Well, that is the moment

that is on the cover of the book, as you see.

I was at MIT as a graduate student.

I had no intention

of creating anything like the Algorithmic Justice League.

I wanted to make a fun art installation.

Long story short, the face tracking software

I got to make that project work didn't really work so well

on my face until I, literally, put on a white mask.

And even before the mask was fully on my face,

it was being detected.

But when I took the mask off,

my dark-skinned face wasn't being detected.

And so that was the moment,

this moment of coding in white face

at this epicenter of innovation.

MIT was my dream school, after Georgia Tech.

Go Yellow Jackets, right? [Khari chuckling]

And so I was fascinated, amused,

annoyed, many mixed feelings.

But I thought, You know, I can't make a conclusion

just based on this one experience.

But it was the motivation

to really start testing out AI systems, some open source,

some from tech companies that all of you have heard of,

to see if my experience was unique

or indicative of a much larger pattern.

And it was what I call my first knowing encounter

with the coded gaze.

Yeah. Yeah.

It's been a real busy year for AI.

I think we could say that. I think that's safe to say.

What existential risk do you see on the horizon,

and how can we get ahead today?

So when I think of x-risk, I think more of the excoded.

Those who stand to be exploited,

extorted, even exterminated.

You now have AI systems that are being used

to select military targets with known collateral damage,

meaning precious human lives being knowingly decimated

with algorithmic precision.

And so, we're already in a world where AI is dangerous

when we're thinking about that context.

And I also think about the ways

in which AI can kill people slowly,

so when you think of this notion of structural violence.

So we know the bombs. We know the guns, right?

We know that type of violence.

But there's also a certain type of violence that happens

when you don't have access to adequate healthcare.

There's a type of violence

when you don't have the economic opportunity

to actually gain the resources that would better your life.

And so, this also impacts people's lives,

and impacts people's livelihoods,

and those types of examples.

Let's say,

somebody not getting the medical insurance coverage

they need.

So, they get kicked out

of the hospital sooner than is necessary.

We're seeing that happen

with the lawsuit against UnitedHealthcare,

where they're saying the algorithm's, you know, inaccurate

on about 90% of the claims.

[Khari] Right.

And so that's what I think about, the excoded,

the people being harmed right now.

Yeah, that's the nH Predict AI tool, I believe. Yeah.

Margaret, any thoughts?

Yeah, I mean, Joy always says what I'm thinking,

and says it much better than I could ever say it.

[Khari chuckling]

So, I guess I'll just try and add onto that to say

that, you know, you said on the horizon.

And I think this is a little bit speaking

to this misunderstanding

that there's gonna be some future point

where suddenly there's an existential risk.

You know, suddenly everyone in the world gets blown up

or something like that.

But what that misses

is that there's paths leading up to that point.

It's not like something is just gonna suddenly,

fundamentally change and a lot of people die.

People are dying now, today, due to deployed AI drones

that are making the decision to kill people, right?

That's happening now.

And so it might be on a smaller scale than the entire world.

And actually, you know, disproportionately,

it's, you know, people of color,

people who are in lower-income areas.

But we need to just pay attention

to the relationship between how we might be at a point

where everything is exploding,

or, you know, everyone's being bombed,

or whatever it is, and work backwards,

something that I like to call back sight,

to what are the steps that lead to that?

What's the path?

And it turns out we're on that path,

once you work through it.

So, you know, there's existential risk now,

and unless we, you know, take it more seriously,

it's just gonna get worse.

Sorry.

Yeah, I know there was a lot of schism, rift,

whatever words you want to use,

between existential risk and existing risk this year,

a lot of debate around that.

Do you feel like for members of the ethical

and responsible AI community

that the word existential has become like a loaded term?

Well, I think it might be.

I'm pretty sad about that,

'cause my dad was an existentialist, actually.

So I'm like, I want to. [chuckles]

[Khari chuckling]

I don't want to lose this term.

It's an important term for me.

But I do think that, you know, like everything in language,

it evolves with respect to different populations

and how they sort of claim it.

And I feel like that terminology is being claimed

by, predominantly, privileged, white men

to mean a certain kind of view of technology

where they ignore, you know, the work

of sort of everyone else.

And so, that's really problematic.

But I, actually, just put out a piece today

where I tried to sort of explain

if we think about this as a right to existence,

then we can actually unify, I think, a lot of the views.

And maybe, we don't have to lose the term existential

to mean something, [chuckles]

something that only a privileged class can kind of speak to.

So I hope to not completely lose it.

Yeah, can you talk a little bit more about the,

I know that we had talked previously

about, you know, this piece that you released today.

Can you talk a little bit more

about a rights-based approach to AI regulation?

What that looks like?

Yeah, well, there was a moment where I realized

that a lot of large tech corporations were advocating

for a risk-based approach to AI.

[Khari] Mm-hmm.

And as soon as you have a lot of corporations

aligning and agreeing on something,

it should sort of set up red flags.

Like, What am I missing? Right?

'Cause usually, they're motivated

by somewhat different goals than helping humanity.

So I was sort of thinking through it a bit more

and the sort of disconnect

between what regulators are aware of in terms of being able

to, like, anticipate risks of a given technology,

even if they're not working on that technology,

and how that like fuels narratives

from larger tech corporations

that regulators can't understand.

So if it's in the realm of just risks,

then it might be a little bit easier

to make the argument that like,

Oh, well you can't actually say what the risks are,

because you're not in the weeds of the technology.

But governments are set up to protect rights, right?

Human rights, civil rights, cultural rights.

[Khari] Yep.

And so I started connecting the descriptions of risks

to the kinds of rights

that governments are set up to protect,

and realizing that they mapped together relatively well.

And then working through it further,

it just sort of became clear to me that a lot of the harms

that are coming from AI, that will come from AI,

are directly tied to risks.

Like, you know, or sorry, directly tied to rights,

like a right to non-discrimination,

a right to equality of opportunity,

a right to existence, these kinds of things.

The UN recently put out a piece

talking about the different rights

specifically relevant to AI.

So it just sort of, everything fell out from there.

It became really clear

that regulation might be more straightforward

from a rights-based approach.

Awesome.

Dr. Joy, I saw you in conversation

with OpenAI CEO, Sam Altman, last month

here in San Francisco.

Did any of his responses surprise you?

I don't think I was necessarily surprised

by his responses.

He reminds me of so many people

that I meet in the tech world,

and kind of this privileged optimism

about what AI can do and how AI can be beneficial.

And in that conversation,

I was really looking for more specificity.

And I think we got there in some areas.

One area, ironically enough, in that conversation

we were talking about was the future of work.

You know, and that was 10 days

before all of the interesting changes

[Khari] that happened. Yes, yes.

And a piece that we started to explore a bit

was something I brought up called the apprentice gap.

And so, bear with me.

I'd recently picked up playing guitar again.

[Khari] I heard. [chuckling]

I was in Rolling Stone, and I was feeling good.

So I went and I got a little yellow guitar

to match my dress [Khari chuckling]

in the Rolling Stone theme.

And as I was playing it,

I realized I still had my calluses.

Right? You know?

And I realized the work that I had put in years prior

was still helping me out.

So it made me start thinking of this notion

of professional calluses and what we gain

when we're doing, maybe, the scaled

or the mundane essentials within any profession,

and then what happens when we have AI systems

automating some of this work, the entry level work.

Do we risk, potentially, being in the late age

of the last masters or the last experts?

And also understanding our own human development

as not just being output, particularly as an artist.

What does it mean to go through that process of creation,

finding the right word, figuring out how to express yourself

and learning something in the struggle to do it?

So that was part of the conversation that we were having.

And I was also happy to hear, and we've been hearing this

from many [chuckles] tech organizations,

saying they want to be,

they don't want to be self-regulated.

They want to have regulations.

And so we have to, of course, think

about corporate capture there,

but we know self-regulation isn't going to work.

We saw it with social media.

And so it's helpful to, at least, not have that resistance

that has been a signature of the past.

Yeah, I know something that stood out to me,

I don't recall if it came up in that conversation,

but certainly in the book, in learning more about you,

is that your dad is a scientist and your mom's an artist.

And that makes so much sense.

You're a poet and a computer scientist.

You know, I'm interested in,

I think we've talked a bit about this,

but you know, how should tech companies be held accountable

for the impact AI systems have on society?

So I can speak to sort of the rights-based approach,

where you want to be able to demonstrate

that you're respecting people's rights,

so showing that your systems don't discriminate,

showing that your systems give rise

to an equality of opportunity for different people

from different subpopulations.

And I think that if you don't show that,

you have to be fined enough

for it to really matter for your bottom line.

'Cause currently, when there's any sort of wins

against, you know, the tech machine,

they're fined like peanuts.

And this is the kind of amounts

that they likely budget for already, right?

So if they're budgeting for $4 million in some fine

for, you know, Q1 or whatever,

and it's only $2 million,

then it's like they've gained $2 million.

So you really need to make it matter

in terms of their bottom line,

because that fundamentally drives

what tech companies will do.

Yeah.

And you know, so often, I think the question,

and I think this is a great, you know, panel to ask this,

but so often the question in tech, I think,

is what we should build.

But I'm interested in asking both of you

sort of what form of AI would you like to see go away?

[chuckling]

What needs to get eradicated?

Yeah, probably no surprise coming from me,

facial recognition being used for face surveillance,

lethal autonomous weapons where we're giving machines

and automated systems the kill decision.

I do write about the campaign to stop killer robots,

because it's really about valuing human life.

And the more abstracted it becomes,

the less proximate we become

to actually understanding what it is to take a life,

the easier it becomes to do.

And so I would continue to urge

that we absolutely put a ban on lethal autonomous weapons.

And I remember when I first started getting into issues

of AI and society, you know,

there'd been a thousand researchers who'd signed a letter

saying we should ban lethal autonomous weapons,

well before, you know, I started this kind of research.

And so the escalating conflicts

around the world put this on the map even more.

Because the future of peace, as some would call it,

or the future of war,

means that we have AIs as angels of death at the moment.

And so, I think we should do everything to stop those.

Well said. Margaret, any thoughts?

Yeah, I mean, I can plus one that,

and add on that there's a branch of AI

called affective computing

that deals with things like predicting people's emotions.

[Khari] Mm-hmm.

And then, there's related work

on predicting people's personality,

trying to predict things like criminality.

[Khari] Right.

And any AI work that is trying

to predict people's internal characteristics

shouldn't be advanced.

Not only because it probably doesn't work, [chuckles]

and it's really just reflecting the biases in the data

based on surface level characteristics,

but also because it only opens up more avenues

for discrimination.

Yeah, yeah.

You know, each of you have spent, you know,

the better part of the past decade

working on critical analysis of AI.

And, you know, this is the last question.

But how can people work together

to protect their communities, their families, and society?

What gives you hope?

And do you have any favorite stories of resistance?

There's a lot of things to do,

and it, like a lot of things in AI,

[Khari] I think it depends on context. Mm-hmm.

So I would say that one thing

that actually really changes what happens in AI

is headlines, and this is something

that I've learned from Joy really well.

I'd be fighting for something at Google and nobody cared.

And then, Joy would like make a headline about it

in a way that was much more powerful and much more clear,

and suddenly everyone at Google cared.

So they value the input from PR and headlines,

sometimes a lot more than their experts internally.

So to the extent that you can support the free press,

and if you're in tech, talk to the free press

to the extent that you can.

That makes a really big difference

in terms of what technology companies will focus on.

And it's part of why I appreciate your work so much,

[Khari] as well. Thank you.

Because you really like to dig

into, you know, the sort of serious problems

where we want to reshape how technology works.

And so I think that's a critical piece of it.

I don't know about hope.

Can you come back to me for hope, maybe?

[Khari and Margaret laughing]

[Khari] Think on that for a sec. Oh no.

[group laughing]

I'm happy to jump in here

[Khari] while you buffer on hope. Okay. [chuckling]

So hope buffering over here. [Margaret chuckling]

I am very much thinking about the right to refusal.

Because oftentimes,

it can feel like technology's inevitable,

or we just have to use it because it's been released,

that kind of thing.

One area people can exercise the right to refusal

is at airports.

You have TSA rolling out facial recognition.

It's supposed to be opt in.

That's what it says on the website.

We at the Algorithmic Justice League

have people submit reports.

You know, http://report.ajl.org,

or in this case, http://fly.ajl.org,

where people are saying

that they didn't even know they could opt out.

And so sometimes, you have these situations

that in practice become coercive consent.

So I do think the right to refusal is important to exercise,

and it's important for people to know.

I also think it's worth exploring ways

in which AI tools can be beneficial.

Today, I was reading about the PaidLeave.ai initiative

from Moms First U.S., headed by Reshma Saujani.

And it is talking about the ways

in which many people fear retaliation

or just feel overwhelmed

when it comes to actually getting benefits

or seeking ways to have paid leave,

whether it's for bereavement or other reasons.

And so now they've released a tool, right,

where you don't have to go through the PDFs

or try to hunt down that policy.

But you can ask, through this interface,

questions you might otherwise feel afraid to,

particularly, if you're an immigrant

or you have some other type of status

that might make reaching out a bit more precarious.

So when I see those sorts of examples, I'm inspired.

I know I look youthful. [Khari chuckling]

There are those who are younger, right?

So when I see organizations like End Code Bias

with high schoolers, you know,

talking about algorithmic justice,

pushing back on school surveillance,

asking, Do we even need these systems in the first place?

That very much inspires me.

When I see the work

of, honestly, people like Meg Mitchell, right?

So, we have Dr. Mitchell here.

When I think of the work

of, you know, our colleague Dr. Timnit Gebru

and so many others who have put their careers on the line

to speak truth to power, to give so many of these warnings

that people lost their jobs for.

Now, it's taken for granted as known risk. [chuckles]

You know, that wasn't always the case.

I'm hopeful that I was sitting, you know, one seat away

from President Biden talking about the story

of Robert Williams being falsely arrested

in front of his two young daughters,

and that we have governments around the world

that are actually attending to how we prevent AI harms.

I can assure you, when I started this work,

this was not getting that level of attention.

And oftentimes, I'd be like,

Oh, now you tell me a computer is racist?

Oh, come on. [Khari chuckling]

You know, that kind of thing. [chuckling]

So now everyone has to at least say,

And we know there's bias,

and we're dealing with the discrimination.

Now, we have to make that mean something.

But those are incredible narrative wins in the space

that ought to be celebrated

while knowing there's so much more work to be done.

Thank you for reminding us

of ways that we can empower instead of exploit,

and for ending on a hopeful note.

Join me please in thanking our panel, Dr. Joy Buolamwini

and Dr. Margaret Mitchell. [audience applauding]