The Future of AI: Our Bias & Our Challenges

Bill Welser: Bias and Artificial Intelligence

Partner, ReD Associates
July 24, 2018 34:55
"Most of the data that we have is really opportunistic in nature."
Bill Welser is a long-time expert in artificial intelligence and has spent years thinking about how our humanity, and our bias, creeps into it.

The Future of Artificial Intelligence: Data, Bias, and the Challenges We Face

Welcome to the Masters of Data podcast, the podcast where we talk to the people on the front lines of the data revolution about how data affects our businesses and our lives.

Our guest for this episode has had an amazing journey. He was a captain in the Air Force, had a decade-long career at the Rand Corporation (a world-renowned nonprofit research organization), and is now a partner at ReD Associates, a strategy consulting company grounded in the human sciences.

Bill Welser (@WilliamWelserIV) has done groundbreaking work around privacy, artificial intelligence, industrial ecosystems, commercial drones, and cryptography. We had a lot of fun talking together, and I hope you enjoy it as well.

Bill Welser’s Journey Into Tech

I love to get an understanding of how our guests got into technology—why you ended up where you were, in particular coming from the Air Force into Rand. Tell us a little bit more about your story. Where did you come from? How did you end up where you are?

Sure. So, I’ve always been a little ambitious, and when I was graduating from high school, I knew I wanted to be some sort of engineer. I went and ran my own little set of interviews and found that chemical engineering always seemed to rise to the top of the list as the hardest or most challenging engineering discipline. So I decided that was a good idea and tackled chemical engineering at the University of Virginia.

At the same time, I really wanted to pursue an Air Force career as an officer, so I participated in ROTC, finished my undergrad degree, and went to Los Angeles Air Force Base.

As a chemical engineer, falling into Los Angeles Air Force Base—where they built satellites, and missiles, and whatnot—I was kind of like a kid in a candy store. There were lots of things that you could look at in terms of how to use different fuels for propulsion, how to use different sensors, and how to manufacture those. But what I got into right away was building high-power lasers: chemical lasers.

That sounds like fun.

It was neat to knock rockets and missiles out of midair. And we were going to do this from space and then also from the front of a 747. So right away, I just fell in love with tinkering and building large systems that were really pretty ambitious in nature.

My career in the Air Force went from lasers to actually building a bunch of different types of sensors for satellites—some of which are flying today—and then into cyber systems. There was so much overlap between running these space systems and needing to understand the cyber environment around them, how to keep them secure, etc.

That makes sense.

It was really fun. While working on these cyber systems, though, I decided that I was a little bored with engineering.

Everybody at the time was getting MBAs, so I went and got an MBA at Boston College. I decided that wasn’t specific enough, so I went back and got a Masters of Finance with the intention of going and working on Wall Street. That was in 2007, so that intention—while well-placed—wasn’t well-timed. I ultimately decided that maybe finance wasn’t in my future, at least not near term, and wanted to find a way I could contribute to my community at large. I went and found the Rand Corporation.

I spent ten years at Rand. I did really detailed technical analysis in the beginning and then became the director of engineering and applied sciences. After running that department for about six years, and kind of getting into all the different things that I’ve gotten into personally from a research standpoint, I realized that there was a challenge that kept popping up: how to engage the commercial sector in a realistic way. And that’s how I found ReD Associates.

ReD Associates is a social sciences firm of just under a hundred people, and they charge themselves with finding the unknown unknowns in the human system. What are those things that we do all the time, that we don’t realize we’re doing, but that dictate our actions and decisions? And how can we understand those better so that we can make better decisions?

“What are those things that we do all the time, that we don’t realize we’re doing, but that dictate our actions and decisions? And how can we understand those better so that we can make better decisions?”

They do this for Fortune 100 companies, and I saw that as a really exciting space to jump into and to build on. They needed more detailed knowledge of technology, so a colleague and I from Rand decided to join them as partners. We’re building them a technology practice to go alongside their world class social sciences practice.

That sounds really exciting.

The Human Context of Big Data

We got connected through Christian Madsbjerg, author of Sensemaking, and that connection between the human context of data and the cultural and sociological context of the technology is amazingly interesting.

Looking back on some of the things that you’ve done before, that’s been kind of a focus for you, hasn’t it? Looking at how technology affects humans and how those kinds of spaces intersect, right?

Yeah, so the way that I would describe my research focus for the past decade, since I left the Air Force, has been looking at emerging technologies and trying to understand how they affect and impact the human condition or the human system.

“My research focus for the past decade has been looking at emerging technologies and trying to understand how they affect and impact the human condition or the human system.”

A good example of this is when, about four or five years ago, I walked into the office of one of my colleagues at Rand. He’s a world class machine learning expert. He’s from Nigeria. He is one of the most brilliant people that you’ll ever meet. And I said,

“How is it that all of this wonderful AI capability that people discuss is coming out of one place in the world right now? (Silicon Valley) And most of those software engineers and machine learning experts came from a very similar socioeconomic background. There’s got to be something there. Because they’re baking in their implicit biases. They’re baking into these automated systems their assumptions about the world.”

It started as a simple question about, “Is there something there?” and it led to us digging deeply into, “What is the potential for bias in AI?”

It turns out that it’s actually a lot deeper than just thinking about the fact that a lot of it’s coming out of Silicon Valley. It’s, instead, this idea that, as humans, we’re developing an intelligence that is kind of modeled after our own. With that comes the fact that we’re capturing the good things about our own intelligence, but we’re also capturing some of our maybe not-so-good things. We’re developing an intelligence in our own image, and that’s really not a compliment.

So that’s an example of taking an emerging technology space and really thinking about how it’s affecting the human condition or the human system.
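As a rough illustration of that “baking in” (a synthetic sketch, assuming scikit-learn; the features, data, and model below are all hypothetical): a model trained on biased historical decisions can reproduce the bias through a correlated proxy feature, even when the sensitive attribute itself is withheld.

```python
# A minimal sketch of bias inheritance: a model trained on biased historical
# decisions picks the bias back up via a correlated proxy feature, even when
# the sensitive attribute itself is withheld. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
group = rng.integers(0, 2, n)          # sensitive attribute (0 or 1)
skill = rng.normal(0, 1, n)            # the thing we actually care about
proxy = group + rng.normal(0, 0.3, n)  # e.g., a zip-code-like feature

# Hypothetical biased history: past decision-makers favored group 1.
hired = (skill + 1.5 * group + rng.normal(0, 0.5, n) > 1).astype(int)

# Train WITHOUT the sensitive attribute; only skill and the proxy remain.
model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

# Score equally skilled candidates that differ only in the proxy feature.
rate_0 = model.predict_proba(np.column_stack([skill, np.zeros(n)]))[:, 1].mean()
rate_1 = model.predict_proba(np.column_stack([skill, np.ones(n)]))[:, 1].mean()
print(f"mean hire probability -- proxy=0: {rate_0:.2f}, proxy=1: {rate_1:.2f}")
# The gap shows the model re-learned the historical bias through the proxy.
```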

Bias in Artificial Intelligence Development

As artificial intelligence—and computer science and technology in general—is developing, how these explicit or implicit biases shape that development is so important. What are some of the things that you uncovered as you were going through that? What concerns you about bias? What are some of the implications you saw?

Well, for starters, the term ‘artificial intelligence’ is kind of problematic. It’s problematic because … well I don’t want to get into the use of the word ‘artificial’ because I have issues with that.

But the term as it stands means something different to most everyone. To some people, it means automating some widget in their house. Right? Their fridge knowing that it’s time for them to buy more milk. For other people, it’s automating a total system that’s running at a chemical refinery, or something like that, that’s moving lots of different widgets. Obviously those are two different types of systems.

And there’s the discussion of AGI, artificial general intelligence: the idea of emulating or recreating everything that we do in a very organic manner.

That spectrum of things—from the simple algorithm that would need to be written to tell me to buy more milk and maybe order it for me, all the way to whole brain emulation—that’s a huge space. And yet we lump them into one term: artificial intelligence. That’s a problematic thing. I like to just raise it in the beginning of conversations like this just to get it out there.

In my space, AI even gets conflated with what really amounts to statistics. People are effectively doing statistics, or basic machine learning, and it gets branded as AI because that’s the popular term. That then waters down the actual conceptual idea of AI.

I like that the public perception of AI is killer sentient robots.

But the commercial impression of AI is not that. It’s automating your credit score evaluation, or automating how I interact with certain aspects of my smartphone, things that aren’t killer sentient robots. But because they don’t rise to that level of risk and sexiness, we kind of forget about these other things.

But if we’re automating whether or not I can get a home loan, that’s a big deal. And if we’re taking humans out of the loop for something like that, it’s an even bigger deal.

What it means—and this actually is a real thing today—is that for some of these systems, not even the developer can go back in and tell you why it made a particular decision. They can posit a guess, but they can’t tell you exactly why. So, if I get denied a home loan, and I go and ask the local teller or agent, “Why did that happen?” do they really know? And was it really fair? All these sorts of questions come up. That starts hitting you where it hurts.

So that’s why this bias-in-AI thing matters: because we have a lot of these implementations of AI already in our communities and we’re not really clear on it. They don’t look like killer sentient robots, but they do impact some of our day to day activities and decisions.

“That’s why this bias-in-AI thing matters: because we have a lot of these implementations of AI already in our communities and we’re not really clear on it.”
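To make that opacity concrete, here is a minimal synthetic sketch (the loan data, feature names, and model choice are all hypothetical, and scikit-learn is assumed): an ensemble of hundreds of trees votes on an application, so there is no single human-readable rule behind any one decision, and post-hoc tools can only approximate an explanation.

```python
# A minimal sketch of the "why was I denied?" problem. All data, feature
# names, and thresholds here are hypothetical; scikit-learn is assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical applicant features.
X = np.column_stack([
    rng.normal(60, 15, n),    # income (thousands)
    rng.uniform(0, 1, n),     # debt-to-income ratio
    rng.integers(0, 30, n),   # years of credit history
])
# Hypothetical historical decisions the model learns to imitate.
y = ((X[:, 0] > 50) & (X[:, 1] < 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# One applicant, one verdict -- but 200 trees voted, so there is no single
# human-readable rule the "local teller" could point to.
applicant = np.array([[48.0, 0.55, 4]])
print("approved" if model.predict(applicant)[0] else "denied")

# Post-hoc tools like permutation importance only posit a global guess at
# what mattered overall, not an exact reason for this one decision.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(["income", "debt_ratio", "history_years"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```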

AI, Big Data, and Consumer Responsibility

I did run into this a little bit recently when our chief security officer was telling me a story about Alexa recording things and sending them to a third party. He unplugged his Alexa, and now the one piece of technology like that in my house, I’ve unplugged too.

And a lot of people could jump to the conclusion that large corporations are nefarious actors. “Amazon is just trying to collect every piece of information on me,” or, “Google: look at what they’re doing.” Or this recent Facebook and Cambridge Analytica discussion.

But the market forces support those companies developing capabilities like this. That’s the interesting rub: that uninformed consumers are actually feeding the market space that’s building a lot of these capabilities.

Then, by the time they realize what they’ve fed and built, consumers are like, “Whoa, whoa, wait a minute. Step back.” And that’s a little bit unfair.

I don’t like reading about how the Facebooks of the world—or the Amazons, or Googles, or fill in the blanks—how they’re vilified, because they didn’t just wake up one day and say, “Wouldn’t it be great if we had all this information?” They said, “Well, clearly consumers want their product delivered in X amount of time, or to be able to talk with their friends across distance, or whatever, so we’re going to build a system that helps them do that in a seamless way.”

When the robot overlords take over, we have no one but ourselves to blame.

A lot of these things, they creep up on you, and we, as consumers, are demanding a certain set of experiences. We’re demanding a certain level of convenience. We want the technology to do all this for us. So, where do we go from there?

The Realistic Future of AI and Big Data

You’re really looking into this, and actually talking to companies that are dealing with this and how they work through the implications. What do we, as a society and as an industry, need to do to work through these kinds of thorny issues?

One of the first things is to recognize the environment that we’re in today. What I mean by that is that for the past decade or more, there has been a huge push to collect as much data as possible. You had this big data movement: collect the data, organize the data, store the data. And when that was all done, it was like, “Now what do we do with the data?”

I’m sure there were some very messy boardroom discussions where people were challenged as to, “Well, we just made this investment. We’ve got terabytes of data. What are we going to do with them?”

You have to justify it.

Which has now led to this resurgence in AI, because what does AI need? It needs a data diet. It needs data to eat, to learn from, and then to move forward and adjust.

So part of the current environment is that most of the data that we have is really opportunistic in nature. It was the fact that we had a system that was already collecting this set of data, so we just grabbed it. And we said, “Well, it’s data, so it must have some amount of value.” We’ve jumped to a conclusion, and this conclusion is actually a bias of ours.

The bias is toward what might be considered sunk cost. We don’t want to admit that we collected something, or that we have something available to us, that might not be worth very much, so we try to make it into something.

“We don’t want to admit that we collected something or that we have something available to us that might not be worth very much, so we try to make it into something.”

Right now, this is where a lot of the hazard lies. We take this opportunistic approach towards the implementation and employment of AI, and that might not actually give you the outcomes that you want.

What might be better is to say, “Okay, now we see everything that we have, data wise, and we see what we can do with it. But it would be wonderful if, instead of these ten fields of data, we actually had five more. Let’s go build those sensors to collect those five more, and now let’s add value via an AI system.”

In talking with organizations that have access to large amounts of data—and that are looking to automate the evaluation, or assessment, or treatment of that data—we’re talking about how to take a fresh look, let a sunk cost be a sunk cost, and get toward an instantiation of AI that actually delivers value, versus just something you can point to and say, “Look at us, we’ve got AI.”

Don’t just throw some AI in the mix with data. Think, beforehand, about what you’re actually trying to accomplish and the questions you’re trying to answer, and design for that.

Absolutely. You have to take a rigorous problem solving approach. You have to follow the scientific method, and not just hop to the assessment phase because you have a bunch of data available.

This is where I really want to see the community push. I don’t think all is lost by any means. I think people are pushing in this direction, but it is very hard, when you’re trying to deliver value to shareholders, to admit that maybe some of the effort you spent collecting data in the past wasn’t as wonderfully productive as you might have hoped.
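One way to operationalize “let a sunk cost be a sunk cost” is to check, before building anything, whether the fields you happen to have actually beat a trivial baseline on the question you want answered. Here is a minimal synthetic sketch (scikit-learn assumed; the data and field count are hypothetical):

```python
# A minimal sketch: do the opportunistically collected fields carry any signal
# for the question at hand? Data is synthetic; scikit-learn is assumed.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 2000
opportunistic = rng.normal(size=(n, 10))  # the ten fields we happened to have
target = rng.integers(0, 2, n)            # the outcome we actually care about

baseline = cross_val_score(DummyClassifier(strategy="most_frequent"),
                           opportunistic, target, cv=5).mean()
model = cross_val_score(GradientBoostingClassifier(random_state=0),
                        opportunistic, target, cv=5).mean()
print(f"baseline accuracy: {baseline:.2f}, model accuracy: {model:.2f}")
# If the model barely beats the baseline, the existing fields don't answer the
# question -- that's the cue to design and collect the "five more" fields
# rather than force an AI system onto data that was never meant for it.
```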

AI in Public vs Commercial Organizations

Is there a gap—in a good or not-so-good way—between how public organizations view this compared to commercial organizations? Or are commercial organizations maybe less aware of some of the implications of what they’re doing and are coming to it now? What have you seen by way of comparison?

The gaps that I’m seeing are in terms of the knowledge set that leaders within the commercial space have about these technologies, versus what the leaders in the public sector know about these technologies. And that discrepancy is worrisome to me.

I’m not saying this is happening, but I think the conditions are ripe for this: If I can sell something to someone in government that they don’t quite understand but that sounds really good—and that they see referenced in every newspaper stand that they go to in an airport, because AI is everywhere—that’s a problem. I see a huge differential in terms of just understanding what these technologies do, so that’s one piece I think should be reconciled a bit.

I also see a huge push toward near-term instantiations of these technologies in the corporate world. It makes total sense, but we’re maybe not thinking through what the long-term implications might be.

Go back to the example of the bank loan. If I automate the bank loan process—with the decision being made by an AI and just delivered by a human—what happens when GDPR takes hold in Europe?

Now there’s a whole set of rules and regulations that are not supported by the existing technologies. To say that you have to start from scratch is probably an overstatement, but in some cases, you really do have to go back to the drawing board with some of these systems. That’s what I’m talking about: the short term versus the long term.

There can be a wonderful, complementary relationship between the public sector and the private sector if they talk and if they work to share their perspectives equally with one another. The public sector explains exactly why it needs something right now, what it wants done, and the context of the longer-term considerations. The private sector shares the expertise that the public sector just doesn’t have.

“[There can be] a wonderful, complementary relationship between the public sector and the private sector if they talk and if they work to share their perspectives equally with one another.”

Where do you see the leadership coming from? Because in some sense, there have to be a few organizations and people that are stepping up. Where do you see the innovation in this area coming from? Does it come from the public sector? Does it come from some of these companies? Is it here in the US? Is it in Europe, particularly with GDPR? What do you think?

I’m biased [but] I really believe that the leadership has to come from organizations like ReD, and also like Rand, that take a multidisciplinary approach to problem solving—that get the technologists in the room with the behavioral scientists, and the social scientists, and the political scientists, and the economists [to] think through all of the various aspects of the problem and come up with a solution that’s robust in all those spaces.

I don’t think that we can continue, as a society, to say, “These are the technical experts, and they’re going to solve the technical problems. And then we’re going to have these economists over here that solve the economic aspects. And these behavioral scientists are going to be over here, and they’re going to think about how it all affects us.” Right?

I don’t think they can be standalone anymore, and unfortunately, our entire university system is built for them to be standalone. I know that some universities have worked to create multidisciplinary centers, but the incentive structure within the universities is not built to support that system.

I’m not meaning to rail on the university system, but it just is a very good example. We need to think about how to bring various experts together to tackle very hard problems and deliver wide-spanning solutions that incorporate all these different perspectives. That’s where the leadership has to come from.

“We need to think about how to bring various experts together to tackle very hard problems and deliver wide-spanning solutions that incorporate all these different perspectives. That’s where the leadership has to come from.”

Beyond Artificial Intelligence: The Challenge of Multidisciplinary Problem Solving

It seems that what you’re talking about is not just a problem within the field of artificial intelligence. This is something that’s developed over the last several decades: this tendency to separate technology and science from their sociological and cultural implications.

We need to encourage the grassroots, cross-functional, cross-disciplinary collaboration because that’s how we’re going to come up with the best solutions. And that’s really where a lot of innovation is going to happen in the future, not just in artificial intelligence but overall. How are we going to bridge those gaps and connect these multiple different disciplines that maybe haven’t collaborated as much in the past?

One of the big challenges—and AI is, again, a great example—is what happens when the development in each of those disciplinary spaces is happening at different speeds.

If we take the AI case, there was a lot of available data. Many of the algorithms that have been employed to date are decades old in terms of the theories behind them. There just wasn’t enough data to feed them. And the computing power was pretty limited. But now that we’ve caught up in computing power and available data, we’re kind of running rampant with building new AI-focused or AI-enabled solutions.

Behavioral science can’t move that fast. It’s not meant to move that fast, because you’re supposed to actually see things over a period of time. You’re supposed to observe, because in observing, you gain so much context and perspective to then inform the technological side of things.

When one side races ahead and leaves the other side behind, or leaves their other partners behind, that’s a problem. So part of the challenge to multidisciplinary problem solving is just that speed aspect. And in some cases, one discipline may just have to wait and say, “Okay, let’s hold on. Let’s let these guys catch up. And let’s see what they have to say.” Because we’re going to all be better off.

This also seems to be a bit of an international phenomenon. Let’s say that the US or Europe does a better job of this, but then you see some other countries across the world that maybe don’t take quite the same care. They don’t have the same data-ethics framework.

It seems like this is going to be a real challenge, because maybe we take the time to do that collaboration over here, but then there’s another group somewhere else that doesn’t take that same careful approach, right?

This is one of the challenges of the global network that we live in today, and one thing that’s been bothersome to me.

I’m not going to jump into a political discussion right now, but just observe the fact that the EU, from a rules and regulatory standpoint, is diverging from the US (not to say that the EU and the US are the only two entities out there, but just to take them as a case study).

What does that mean for technology development? What does it mean that there are going to be, in some cases, radically different standards? I’m not making a value judgment as to which standards are better, or worse, or anything like that. I’m saying it is hard to ignore the fact that this divergence is happening, and in a global community, that makes a huge difference.

“It is hard to ignore the fact that this divergence is happening, and in a global community, that makes a huge difference.”

Maybe the entity that’s ahead will just have enough influence that they will pull everyone else along with them, but we haven’t seen that happen yet. There are a lot of case studies we could bring on similar things, but I do think that’s an important thing to track.

It’s a rapidly developing area. And the reality is that you can’t always spread out innovation. A lot of times, it tends to concentrate in a few geographical areas, and if those areas aren’t in lockstep, and they’re not on the same page about important ideas like ethics and privacy, it will create friction down the line.

Driving Innovation

What’s next for you? What are you focusing on next?

My personal goal is always about finding the thing that others aren’t paying attention to and bringing light to it, helping bring it into the conversation. I’m looking for opportunities to raise awareness about some of these things—privacy is a great example—that are pretty abstract to most people. Because if they stay abstract, we’re never going to get anything done.

How do we push toward creating solutions and testing them out in the market to see how people react and then to iterate? That’s what is so exciting about being at a place like ReD; they move swiftly.

That increased pace and attention is what excites me around topics like, “How do you instantiate a machine learning system to support the future of mobility?” or something like that. There’s a lot of rich opportunity out there to make the world a better place. So, that’s what drives me right now.


The Guy Behind the Mic

Ben Newton
Director, Product Marketing

Ben is a veteran of the IT Operations market, with a two-decade career across large and small companies like Loudcloud, BladeLogic, Northrop Grumman, EDS, and BMC. Ben got to do DevOps before DevOps was cool, working with government agencies and major commercial brands to help them be more agile and move faster. More recently, Ben spent five years in product management at Sumo Logic and is now running product marketing for Operations Analytics at Sumo Logic. His latest project, Masters of Data, has let him combine his love of podcasts and music with his love of good conversations.
