The top qualification for an AI governance officer? Courage


Artificial intelligence is starting to permeate corporate America. Company executives responding to a McKinsey Global Survey reported a 25% year-over-year increase in use of AI in standard business processes, and most CEOs believe AI will significantly change the way they do business in the next five years.

Cathy Bessant [Photo: courtesy of Bank of America]

But Cathy Bessant, chief operations and technology officer at Bank of America, worries that the computer science underlying AI is advancing faster than companies’ ability to build processes and rules for deploying it. She spoke with Fast Company editor-in-chief Stephanie Mehta about BofA’s role as a founding donor of the Council on the Responsible Use of Artificial Intelligence at Harvard Kennedy School, how she is encouraging her employees and colleagues to take a holistic approach to applying AI, and her effort to hire a senior director to help lead AI governance for the bank. Edited excerpts follow:

Fast Company: What does the term “responsible AI” mean to you?

Cathy Bessant: When people hear it, or hear me talk about the risks, there’s a misperception that I think AI is something to be avoided. It’s actually the contrary. AI is going to drive a huge amount of growth. That said, the legal, social, and ethical framework around AI really doesn’t exist. If your model gets something wrong, who do you blame? The person who created the model, the company that sold it to you . . . [or] the person who built it inside your own organization?

In the realm of both data and artificial intelligence, the technical tools and data sharing, whether intentional or unintentional, are way ahead of the ethical, legal, and responsible framework. We’ve all given away data far in advance of knowing how or when it was going to be used. I think we’re playing catchup in terms of responsible AI. One of the reasons we wanted to be the founding donor of the Harvard Council [on the Responsible Use of Artificial Intelligence] is that it can’t just be one sector talking. It can’t just be Big Tech. It can’t just be fintech. [We need] multiple constituents around the table so those perspectives are balanced.

FC: As you deploy AI inside Bank of America, who needs to be around the table?

CB: Well, the businesses, obviously, because the importance of AI is to make something better, to do something for a customer or a client that’s better, faster, stronger, or cheaper for them. We don’t have some separate AI group in the firm that’s doing “black box” innovation; we embed the demand for AI tools into the businesses. [BofA includes] risk management, because independent oversight is extremely important. And then our technology teams, because we’ve got data to protect.

FC: What, if any, is the role of human resources or the chief people officer as a company like Bank of America starts to undergo pretty dramatic changes in the way work is done as a result of these tools?

CB: The whole issue of workforce transformation is what got me focused on responsible AI in the first place. Three years ago my team of people at the bank—and I manage almost 100,000 people between our own people and contractors—they started asking questions that the company didn’t have perfect answers for: How is my job going to change? Is it going to be eliminated? And if it isn’t eliminated, how are you going to equip me to adapt as my job changes? That was one call to action. The other call to action? Pick any area of technology, there are jobs that society, at large, doesn’t have the capacity to fill. So there’s a potential skill gap as AI becomes so important to growth. The imperative became very clear.

FC: Do you think that ultimately over time there will become an accepted set of global best practices for the deployment of AI?

CB: You’re starting to see standards emerge. People are setting standards, and companies are setting them themselves. Industry groups of all kinds are coming together. You’re starting to see principles emerge. I think the challenge is that, in order to get everybody to agree, standards are sometimes set or stated at such a high level that they almost lose their ability to drive action.

I worry that we’re going to set standards in order to get agreement at a level that makes them very difficult to operate around, or toothless.

FC: I would imagine that there is a system within a place like Bank of America whereby a really seasoned loan officer could override AI that denies a loan to a person or business that’s deserving.

CB: Absolutely.

FC: But I’m not sure I would want my teenager to have the ability to override the intelligence in a driverless car!

CB: One of the things we talk about at the council is where the power of the tool ends, and where judgment has to happen. You can have very powerful artificial intelligence that locates the position of a target that you may want to consider destroying, but that is very different than the AI making the decision on whether or not to launch the missile.

FC: Does “responsible AI” become a subset of your job? Does that fall under the legal department or HR? Or is there a new role for someone who has eyes on AI and is obsessively thinking about it?

CB: We do have a job right now that we’re trying to fill in AI governance [called the Enterprise Data Governance executive]. It’s a new role. The important thing about AI governance is that it exists, and it will sit in a different place for every organization. In our organization that person will report in my shop, but we’ll work closely with our chief risk officer on model governance. Fortunately, in financial services we’ve been in the modeling business for a long time. Everything we think of as AI involves a model, and we’ll take it through our model risk management process and our AI process.

FC: Who is the ideal candidate for this job?

CB: The most important attribute has to be courage. Not everyone believes AI governance is a necessary thing. Say a small fintech company approaches someone somewhere in our firm and shows him this great model: the person who wants to buy that artificial intelligence oftentimes doesn’t want it to be governed. All companies and people like shiny objects, and there’s nothing wrong with that. I like them, too. But given the importance of the decisions or the uses of AI, taking that momentary step to say, “A) is it effective? and B) should we do it?” is really critical.

So it takes courage. Obviously [it takes] someone super smart who knows data, knows modeling, but who has the courage to stand with highly motivated executives, salespeople, and relationship managers who want to take care of their customers and clients in the very best possible way, and say: “Wait a minute, we’re going to pause and look at this through a very strict governance lens.” Because whether they’re requiring it today or not, regulators will really start to push for this. Another reason to be thinking about AI governance now is to keep it contemporary with the way we use the tools instead of behind them, but also to make sure that we’re ahead of our regulators, who expect us to be well run and disciplined.

FC: As a consumer, how does AI touch your life?

CB: There’s a randomness in your mind, but the machine thinks it is a pattern. My Mount Kilimanjaro playlist (Bessant climbed the mountain in Tanzania in 2018) is called Kilimanjaro Energy. It was the playlist I put together to get me up the mountain, and it has a lot of country on it. [I included] a huge number of songs about boots or dusty boots as a “keep moving!” sort of thing. Now I’ll get [recommendations for] some random song and it’ll be about shoes and boots because of the conclusions the software drew that are very different than my motivation for putting the boots songs on the list. And it creeps me out when I scan a site and then 20 minutes later there’s an . . . ad for it on my feeds. I understand how it happens, but yeah, that creeps me out.
