“Human-centric innovation is about responsibility,” Harris says. “If you think about where we are with AI and ML - and we believe the market is continuing to grow at a significant rate - we are empowering our customers to make models on behalf of communities and people.”

“So the question is, what is the responsibility chain between us?” he asks. “Our brand stands for a responsible approach to AI and ML, and how we can responsibly innovate.”

“You can have a good, unbiased AI model - but feed it the wrong data and you have a biased outcome,” Harris says.
This is a fundamental problem. As the old joke goes, a human can only make so many errors a minute - a computer can magnify that into many millions of errors a minute. Nobody dies if, say, Amazon’s AI recommends the wrong book. But in a world where data science and everything it contains - AI, ML, analytics, predictions - is used to make decisions about health, housing, finances, even the objects in front of an autonomous vehicle, the consequences of getting it wrong can be dire, even fatal.
Harris takes this seriously, and over the course of an interview with iTWire he says variously, “maths doesn’t know our goals,” “analytics don’t understand our societal goals,” and, channelling Spider-Man’s Uncle Ben, “with great power comes great responsibility.”
These aren’t throwaway lines or sound bites; it’s clear from the passion and intensity with which Harris speaks that this is personal. He has written previously on who is responsible when AI acts irresponsibly, and he has formed a cross-functional SAS data ethics practice. SAS sits on the board of EqualAI and is a member of the US President’s National AI Advisory Council. The company is at the highest levels where policy is being formed, while the data ethics practice aims to identify where AI and analytics are delivering biased outcomes, work backwards to understand how the bias crept in, and deal with it at its root to remove disparities.
iTWire has spoken with Harris previously, not long after he moved from the role of SAS senior vice president of engineering to the chief technology officer chair. Originally, a young Bryan Harris saw his future in music, but being as much a scientist as he was a virtuoso, he became fascinated by the relationship between analogue and digital, ended up studying electrical engineering, and took up a career in the intelligence community. It was in these roles that Harris took on big data challenges such as natural language processing, signals analysis, and streaming analytics. Back then it wasn’t called “big data”, nor even machine learning - it was simply working on “a whole lot of data” - but this solid foundation prepared him to lead SAS’ DevOps function, then engineering, and now its entire technology focus as CTO. Almost 18 months into the role, Harris has clearly made it his own, laser-focused on what he sees as the sober mission and responsibility of the firm: building better outcomes for society through better data and better models.
For example, Harris explains, the United States has a concept called “redlining”, where zip codes are used as customer segmentation for insurance rates or loan rates. “Inside that zip code is people making good and maybe not so good decisions. Making a decision at the zip code level can penalise marginalised communities and others,” he says. Here Harris wants to see improved decision-making that includes other factors, even proxy data - where the data might not hold all the fields or categories required, items highly correlated with the missing information can be used instead.
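The zip-code problem Harris describes can be reduced to a minimal sketch. In this hypothetical example (the data, scores, and rate thresholds are invented for illustration and have nothing to do with SAS software), two equally strong borrowers receive different loan rates purely because one lives in a zip code with a lower average credit score:

```python
# Hypothetical illustration of zip-code-level "redlining" vs
# individual-level decisioning. All data and thresholds are made up.
from collections import defaultdict

# Toy applicant records: (zip_code, individual_credit_score)
applicants = [
    ("10001", 790),  # strong borrower in a high-average-score area
    ("10002", 790),  # equally strong borrower in a low-average-score area
    ("10002", 500),
    ("10001", 600),
]

# Compute the average score per zip code - the segmentation shortcut.
scores_by_zip = defaultdict(list)
for zip_code, score in applicants:
    scores_by_zip[zip_code].append(score)
zip_avg = {z: sum(s) / len(s) for z, s in scores_by_zip.items()}

def rate_by_zip(zip_code):
    """Price the loan on the zip code's average score (the biased shortcut)."""
    return 4.0 if zip_avg[zip_code] >= 650 else 7.5

def rate_by_individual(score):
    """Price the loan on the applicant's own score."""
    return 4.0 if score >= 650 else 7.5

# The first two applicants have identical 790 scores, different zip codes.
zip_rates = [rate_by_zip(z) for z, _ in applicants[:2]]        # [4.0, 7.5]
ind_rates = [rate_by_individual(s) for _, s in applicants[:2]]  # [4.0, 4.0]
```

Under zip-level segmentation the second borrower is penalised for their neighbours' scores; pricing on individual-level factors (or well-correlated proxy data where those factors are missing) removes that disparity - which is the re-engineering Harris describes.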
Or, when an autonomous vehicle misclassifies an object and kills someone, who is responsible? “We can’t just progress tech and assume we’ll have casualties along the way. That’s not acceptable,” Harris says.
Or, he says, “the death rate with black women giving birth in a hospital is a real issue”, with research showing three times the death rate of white women. Here there is a real risk that models trained on data from the past will perpetuate the past. The challenge, Harris says, is to optimise for new states of society and equitable outcomes - to close disparities like this one by identifying the indicators behind them and catering for them in the product.
“There is no magical button,” Harris says. Removing bias starts with figuring out what we are trying to achieve, then working backwards to identify disparity and re-engineering to remove it.
In another example, Harris refers to a hackathon held in Milwaukee in conjunction with Citigroup to explore New York City housing, zip code analysis, and lending rates. “A home is a big step to multi-generational wealth,” Harris says, asking again, “What do we want to get out of society?” before answering his own question: “We want communities to grow, investments to come in, banking systems that aren’t tied to medical systems, we want access to food, and to create an impact on the world.”
This is the importance Harris sees in fighting bias and discrimination in data. He mentions a company currently in the news for making AI-driven loans. “There are a lot of upsides,” he says, “but what about the unintended consequences?”
It’s a grave issue that cannot be overstated: loans based on biased data will drive biased outcomes, and while it might be easy to downplay one person being rejected for one loan at one point in time, entire swathes of the community can be negatively impacted for years, even generations.
It's that serious. "People are creating the models, so people have to ultimately be responsible - or we are beholden to models that do not understand the world around us,” he says. “Maths doesn’t know our goals. Analytics don’t understand our societal goals.”
“Most people are not deliberately working on bad outcomes,” Harris notes. Yet, at the same time, Gartner’s research indicates synthetic data is a big growth area, and “if synthetic data is used to train models then that model is biased,” he says - the concern being that synthetic data generated from biased source data carries that bias into any model trained on it.
Thus the problem is a people issue, and it’s a policy issue. And it’s one SAS has positioned itself as a leader on, with Harris’ focus and with the data ethics practice he has established. “We drink our own champagne,” he says - a more polite twist on eating one’s own dog food. “SAS is externally-focused and internally-focused. We look at how to take away bias ourselves, and we receive a lot of requests, including RFPs from governments, on our strategy for responsible AI and machine learning.”
One doesn’t have to look far to find stories - myriad stories, even on iTWire - spruiking the power of cloud computing, and how scalable, elastic, on-demand computing power has enabled rapid decision-making and analytics. Yet we’ve progressed so far that today “so much analytics is happening throughout society where the stakes are high. We have to hold the accountability chain on this. We can’t have people making models and letting them out in the wild,” Harris says.
This is the impetus for the SAS data ethics practice: to raise awareness. The practice is collecting examples from around the world of inequitable outcomes in AI, and of where other improvements can be made, with the goal of “knowing all the problems and providing strategies to overcome them,” Harris says.
“People study engineering failures, and we should have similar stories,” he says.