WIRED Videos

WIRED25: Ethical AI: Intel's Genevieve Bell On Living with Artificial Intelligence

Intel Vice President & Senior Fellow Genevieve Bell spoke at WIRED25, WIRED’s 25th anniversary celebration in San Francisco.

Released on 10/15/2018

Transcript

00:00
(soft music)
00:03
We're gonna change gears just a little bit.
00:07
Just this much.
00:08
And I thought what we'll do is talk a little bit
00:10
about a different kind of future and a different way
00:13
of thinking about that future.
00:14
But I realized, as I've come from Australia, I actually
00:17
wanted to start with the thing we do
00:18
at all Australian events,
00:19
which is acknowledge that we meet on
00:21
the traditional lands of the Ohlone people,
00:23
the indigenous people of this area,
00:24
and I wanna pay my respects to their elders,
00:27
past and present and their leaders emerging.
00:29
And the reason we do that in Australia
00:30
and I wish we did it here,
00:32
is it's a way of saying we have conversations about
00:34
the present and the future in places
00:36
that have been continuously occupied,
00:38
in this instance for more than 10,000 years.
00:40
There's something really powerful about acknowledging
00:42
that our conversations about the future have a past.
00:45
And so, I wanna pay my respects that way.
00:48
(loud applause)
00:53
Which should immediately give something else about me away.
00:56
I'm not a technologist, I'm an anthropologist!
00:59
I'm also the child of an anthropologist,
01:01
I grew up on my mum's field sites in central
01:02
and northern Australia in the 1970s and 1980s.
01:05
I spent my time living with Aboriginal people
01:07
who remembered their first sight of Europeans,
01:09
their first sight of cattle and their first sight
01:11
of fences, not always in that order.
01:13
And I spent most of my childhood not wearing shoes,
01:15
speaking Warlpiri and getting to kill things.
01:18
Basically best childhood ever!
01:19
(crowd laughs)
01:21
'Cause I also ate those things
01:22
'cause I know people worry about what that would mean.
01:25
So I grew up in Ali Curung in Central Australia
01:27
and it's an incredibly long way
01:29
from Ali Curung to Silicon Valley.
01:31
And a long way still, it turns out,
01:33
from having a PhD in cultural anthropology at Stanford
01:36
to being at Intel.
01:37
Although they're not very far apart physically,
01:39
they're very far apart in terms of the imagination.
01:42
It's a long story to tell about how I ended up at Intel.
01:44
The short version of it is I met a man in a bar.
01:47
(crowd laughs)
01:51
For those of you under the age
01:52
of 30 this is not career advice.
01:54
(crowd laughs)
01:56
I met a man in a bar, he introduced me to Intel
01:59
and I've spent the last 20 years there.
02:01
My job there has been to bring stories about people
02:04
and what people care about,
02:05
what frustrates them, what they're interested in,
02:07
and what they're passionate about,
02:09
into the ways in which we make new technology.
02:12
So, companies like Intel have always built futures.
02:13
We built them because of what we can do technically.
02:16
My job was to make sure we also built them because
02:18
they were the things that people would care about.
02:21
So for 20 years I was the little voice going,
02:23
wait, what about people?
02:26
And then about two years ago,
02:27
I moved to Australia to take up a new role
02:29
at the Australian National University
02:30
where I find myself inexplicably
02:32
as a Distinguished Professor
02:33
of Engineering and Computer Science. (crowd laughs)
02:36
Don't tell my father because
02:37
while my mother was an anthropologist, my father
02:39
was an engineer and he is horrified
02:41
that anyone thinks I might be.
02:43
(crowd laughs)
02:44
But that means I get to spend my time thinking
02:46
about the future and thinking about how we
02:48
would constantly put people in it.
02:50
And I know over the course of today
02:51
and the last three days and the rest
02:53
of the afternoon you're gonna hear people talking
02:55
about artificial intelligence and robotics
02:57
and all the technical pieces.
02:59
I want to talk a little bit about the social pieces
03:01
and about the questions we should ask
03:02
and how we might prepare for that future.
03:05
Two years ago the World Economic Forum published this graph.
03:08
It's great, it tidies up the last 250 years of history.
03:11
Like for those of you who weren't paying attention
03:12
in high school, you need to know. (crowd laughs)
03:14
There were steam engines, then there was electricity,
03:17
and there were computers, well done!
03:19
Now of course you should immediately know I
03:21
have some issues with this chart,
03:23
like it doesn't have any people in it.
03:25
(crowd laughs)
03:26
Problem number one. Problem number two, it kind
03:27
of suggests this is linear, and most of us know
03:30
that it really wasn't quite like that.
03:32
And of course the third thing it never says, right, is
03:34
that in indexing on each one
03:36
of those technologies it didn't talk
03:38
about the socio-technical systems we've built.
03:41
It talks about the steam engine but it doesn't talk
03:43
about the railway.
03:45
It talks about computers but it doesn't talk
03:47
about digitization and the ecosystem that we all sit in.
03:51
So what would it be if you were to unpack those
03:53
and say what's really going on in there?
03:55
Well the reality is each one of those technologies
03:58
required more than just technologists.
04:00
It required all these other people to bring
04:02
those technologies safely to scale.
04:05
And for that scale to be one that we could manage
04:07
that we could live with, and that we found ways to be safe with.
04:11
And it's that last wave that interests me.
04:14
What the World Economic Forum calls cyber-physical systems.
04:18
What you should think those are is
04:19
AI inside stuff that isn't computers.
04:22
So every time you hear someone talk about a robot
04:24
or a drone or an autonomous vehicle or a smart building
04:26
or a smart lift,
04:27
they're talking about cyber-physical systems.
04:30
So what's the challenge there?
04:32
Well the challenge and the opportunity frankly,
04:34
is about how we get to that moment,
04:36
how do we go from AI as the steam engine
04:41
to the metaphoric railway, to the cyber-physical system?
04:44
What will it take to do that safely and at scale?
04:48
Well I think it actually means there are five questions
04:50
we should be thinking about and I wanna rehearse
04:52
them for you really briefly.
04:54
Good news is three of them start with A,
04:55
so you should be able to remember them.
04:57
The first question we need to ask is,
05:00
will those systems actually be autonomous?
05:02
We talk about it a lot.
05:03
We talk about autonomous vehicles, autonomous software,
05:06
turns out if you ask anyone who is building
05:07
those systems they all have a different definition
05:09
of what they mean by autonomous.
05:12
And the problem for us who aren't engineers,
05:14
the humans amongst us,
05:15
every time I say autonomous, in your head
05:18
semantic slippage happens.
05:20
I say autonomous you think sentient,
05:24
conscious, self-aware.
05:27
And if you grew up with science fiction you know
05:29
what happens next.
05:30
(crowd laughs)
05:34
Frankenstein is lurking somewhere over there.
05:36
Now the reality of course is systems can
05:38
be autonomous without being sentient or conscious.
05:41
Autonomy merely means operating without reference
05:44
to a prearranged set of rules in this instance, right?
05:47
But how do we architect it? How do we build it?
05:50
Who gets to decide what's autonomous and what isn't?
05:53
Who gets to decide how that's regulated?
05:56
How it's secured? What its network implications are?
05:59
Those are not just questions for computer scientists
06:01
and engineers right?
06:02
Those are questions for philosophers
06:05
and people in the humanities and people who make laws.
06:08
And frankly for all of us in the room.
06:10
What does it mean to have systems that will be autonomous?
06:12
How will we feel about that?
06:14
What will it mean?
06:15
How are we even gonna signal it?
06:16
How will you know it's an autonomous system?
06:19
Should it have a little red A on it?
06:22
Does it have to announce every time it turns up,
06:24
hi I'm autonomous!
06:25
'Cause that's gonna get really irritating really quickly.
06:27
But trust me we need to know because I'm willing
06:29
to bet I'm not the only person in the room
06:31
who has gotten into a smart elevator recently
06:34
and watched people around me freeze in blank horror
06:37
when they realized there were no buttons.
06:39
(crowd laughs)
06:40
And you're now in a lift over which you have no control,
06:42
and no one warned you in advance.
06:45
So how are we going to create a grammar
06:47
for autonomous systems?
06:48
First set of questions.
06:50
Second set of questions about how we put humans back
06:52
into this story is to ask:
06:53
who's gonna set the limits and controls on these systems?
06:57
The A here is agency.
06:58
How do we determine how far the systems can go without us?
07:02
Who gets to determine those rules?
07:04
Are they built into the object or outside the object?
07:07
If we think about autonomous vehicles we know
07:10
they have rules sitting inside them.
07:11
But are we going to want to have an ability
07:13
to override those things?
07:15
If you were the emergency services,
07:16
would you want to be able to say, take all cars off the road
07:18
so we can get a firetruck through?
07:20
Of course you do.
07:21
But who gets to decide when you use that?
07:23
How are those rules going to work across boundaries,
07:26
across countries, across cultures?
07:31
How are they gonna get updated?
07:32
How are they gonna get made visible?
07:34
Again, they're technical questions but
07:36
they're also social and human and cultural questions.
07:40
Third set of questions
07:42
are what I call the insurance questions, because all
07:44
the words are complicated.
07:45
Risk, liability, trust, privacy, ethics,
07:49
manageability, explicability, ease of use.
07:52
It's easy to blow past all of
07:53
that but we're talking about systems
07:55
that have some degree of autonomy and some degree of agency.
07:58
How are we gonna decide who is responsible for them?
08:01
How do we decide how much risk we can tolerate?
08:03
And who's gonna decide that?
08:05
What does it mean to think about the systems being safe?
08:08
Are they safe for the occupants inside the systems,
08:10
for the people outside of the systems,
08:11
for the cities in which they operate?
08:14
Who gets to decide what the ethics are?
08:17
Who gets to litigate the ethical dilemmas
08:19
and indeed the rules?
08:21
How do we think about explicability?
08:23
We have legislation that's unfolding in Europe,
08:25
the GDPR, which asks for the capacity for any algorithm
08:30
to explain itself, or at least for the companies that build them to do so.
08:33
That's a technical question of how we create back-tracing,
08:35
how we create explicability,
08:37
how we unbox algorithms.
08:40
They're technical questions
08:41
but they're also social and cultural questions.
08:43
What's it gonna take to feel safe?
08:46
And will that change over time?
08:49
Will there be a moment when you don't worry
08:50
when you get into the lift that there are no buttons?
08:53
'Cause you know what to do?
08:54
And how long will that take?
08:56
And who will be the people making us safe in the meantime?
09:01
Fourth set of questions,
09:03
about how we measure these systems.
09:05
So when AI goes safely to scale,
09:08
how do we decide if it's a good system or a bad system?
09:10
And I know that sounds banal,
09:12
but for 250 years the Industrial Revolution proceeded
09:15
by us saying, was that system efficient and productive?
09:19
Did it save time or did it save money?
09:21
Did it use less labor?
09:23
What are the metrics we wanna use here?
09:26
For autonomous vehicles, we're told that it's about safety.
09:29
For the lifts that so preoccupied me,
09:31
it's 'cause I read a lot of Douglas Adams as a child,
09:33
I'm convinced those lifts are prescient.
09:35
Those lifts, we're told, those were about saving energy
09:38
and electricity at the expense
09:39
of where humans sit in the ecosystem, right?
09:41
But what are the metrics gonna be?
09:43
And how do we wanna think about that?
09:45
We know that all of this computation,
09:49
whether it is algorithms, whether it is machine learning,
09:51
whether it's augmented decision making,
09:54
it all requires processing power
09:56
and it all requires electricity.
09:59
So how are we gonna decide how those things are unfolded
10:02
in a manner that is sustainable,
10:04
in a manner that is manageable?
10:08
And by the way, I know this because
10:10
I've spent 20 years at Intel.
10:12
You will measure the things that matter,
10:14
and conversely, what you measure is what you make.
10:18
So we have an opportunity here to think about
10:20
what the metrics are in advance rather than after the fact.
10:23
I'm always willing to bet that if
10:25
Watt or Newcomen had been asked 250 years ago about how
10:28
to build a better steam engine
10:30
and had then realized that the one they had built
10:31
was gonna chop down every tree in Britain,
10:33
they might have thought about it differently.
10:34
So what are the metrics we want here?
10:36
And last but by no means least,
10:38
I think this open question,
10:39
the fifth question for me, about
10:41
how are we gonna be human in that world?
10:44
What are gonna be the ways we interface with these systems?
10:47
I grew up in Silicon Valley.
10:49
I've spent 25 years here and it's been an exquisite
10:51
and an extraordinary privilege.
10:53
But I also know in that period of time the way we
10:55
interacted with computing was pretty narrow,
10:58
keyboard, screen, a little bit of voice, some gesture later.
11:02
I'm not sure we should be interacting
11:03
with these systems that way
11:05
and I'm pretty certain most of the UXs that my teams
11:07
and I have been building for the last 20 years
11:09
aren't what we want to drag into the future with us.
11:12
You do not want to get into an autonomous vehicle
11:13
and have to remember whether
11:15
it's a 10 to 12 character password with an uppercase
11:17
and a lowercase and alphanumeric,
11:19
(crowd laughs)
11:20
and which system you are using.
11:21
And you probably don't wanna
11:22
be constantly using your biometric systems either
11:24
and you may not wanna talk to everything
11:26
because everyone else will be talking to it too.
11:28
So, the metaphors may not work,
11:30
how are we gonna think about all that stuff differently,
11:32
and what's it going to feel like to be human
11:34
when systems are making decisions that we used to make?
11:37
When they're doing things we're used to doing?
11:39
When we can't always see what's happening?
11:41
And by the way when some of those systems are talking
11:43
to each other not even about us?
11:47
Great human fear: irrelevance.
11:49
Machines don't wanna kill us,
11:50
they're just not interested in what we're thinking.
11:52
Different problem.
11:54
So how are we going to imagine,
11:56
what the interaction is going to be here?
11:58
What will that look like?
12:00
So five big questions.
12:02
Scaling AI and doing it safely means we
12:04
have to answer those questions.
12:06
What does it mean to think about whether
12:07
these systems will be autonomous?
12:09
And if so which version?
12:10
What will agency look like?
12:12
How do we think about safety and assurance?
12:14
What are the metrics we want to measure it by?
12:16
And by the way, how are we gonna engage with
12:18
these things or not engage with them?
12:21
Here's the problem: I can ask you five questions
12:23
and I don't have answers to any of them,
12:25
which is the great disappointment
12:26
of being an anthropologist.
12:27
We know how to ask questions.
12:29
We don't always know how to answer them.
12:31
But two years ago, I stepped out of my job at Intel
12:33
and went back to Australia with the explicit purpose
12:36
of trying to work out how to answer those questions
12:38
'cause I think answering them gets us somewhere important.
12:41
I started with that chart from
12:42
the World Economic Forum and I said each one
12:44
of those previous waves generated things.
12:46
But part of what they generated
12:47
was new bodies of knowledge.
12:50
Machines, mechanization brought us engineers,
12:54
electricity and mass production
12:55
brought us electrical engineers,
12:57
computers brought us computer scientists.
13:00
What are cyber-physical systems gonna bring us?
13:03
Well, that's what I said I was gonna build.
13:05
And the joy of being right here right now on the stage
13:08
is it's one of the few places in the world I can say that
13:10
and you all aren't gonna laugh at me.
13:12
'Cause when you stand up and say,
13:13
I thought I'd build a new branch of engineering?
13:15
There are very few places except
13:17
Silicon Valley you can say that. (giggles)
13:19
So my plan is to build a new branch
13:21
of engineering or a new applied science.
13:23
I think we actually need to answer those questions
13:25
and we need to find a new way of approaching the problem.
13:28
So a year ago we launched a new institute
13:30
at the Australian National University in Canberra.
13:32
We recruited a small team.
13:34
And six weeks ago we put out a call
13:36
for our first cohort of students
13:39
'cause the university wanted me
13:40
to wait three more years and I went.
13:42
(stamps feet)
13:43
Hell no.
13:44
It's like I think we can go faster.
13:45
And so we decided in the grand tradition
13:48
of Silicon Valley to build a curriculum in real time
13:50
and iterate it,
13:51
like it was a prototype: build it, invite
13:53
the people who want to learn with us,
13:55
and iterate in real time with those people.
13:58
So we put out a call, on Twitter mostly I have
14:00
to say, six weeks ago,
14:02
and said, you've got five weeks, go.
14:05
And I didn't think we'd get that many applicants
14:07
and I usually can think pretty big.
14:10
We closed it out a week ago.
14:12
We had 173 people from around the world who were willing
14:15
to put their hands up to take a year out
14:16
of their life to come to a degree program
14:18
that has no name, in a town that if you're
14:20
from Australia it's not very compelling,
14:23
(crowd laughs)
14:23
to build something new.
14:24
And I thought that was a pretty good sign
14:26
that we're on to something interesting.
14:28
So here's my ask.
14:30
It's daunting to say you wanna build
14:31
a new applied science or a new branch
14:33
of engineering, it's frankly crazy.
14:35
What I know from my time in the Valley is
14:37
that you never do these things once
14:38
and you never do them alone.
14:40
So if any of you in the room at any moment in time
14:42
thought, okay, she sounds crazy but I kind
14:45
of like what she's saying,
14:47
will you come find me.
14:48
Track me down.
14:49
Send me an email, find my team 'cause we're gonna
14:53
need all the help we can get, to get to scale.
14:56
So with that, I want to stop and say thank you.
14:58
(loud applause)