Sunday, 15 May 2022 21:53

Biased data is anathema to society, says the SAS CTO who has made it his mission to stamp bias out


SAS chief technology officer Bryan Harris is on a personal mission. He wants humans at the centre of the innovation process, and he wants ethics in data. AI applied to healthcare and surgery may need to know someone’s gender and ethnicity, but AI applied to home loans should not use those as factors at all. Harris makes it clear that irresponsible AI is squarely in his crosshairs.

"Human-centric innovation is about responsibility," Harris says. "If you think about where we are with AI and ML - and we believe the market is continuing to grow at a significant rate - we are empowering our customers to make models on behalf of communities and people.”

"So the question is, what is the responsibility chain between us," he asks. “Our brand stands for a responsible approach to AI and ML, and how we can responsibly innovate.”

"You can have a good unbiased AI model - but feed it the wrong data and you have a biased outcome,” Harris says.

This is a fundamental problem. As the old joke has it, to err is human, but to really foul things up you need a computer: a person can make only so many errors a minute, while a machine can magnify that into millions. Nobody dies if, say, Amazon’s AI recommends the wrong book to you. But in a world where data science and all it contains - AI, ML, analytics, predictions - is being used to make decisions on health, housing, finances, even the objects in front of an autonomous vehicle, the consequences of getting it wrong can be dire, even fatal.

Harris takes this seriously, and during the course of an interview with iTWire he says variously, “maths doesn’t know our goals,” “analytics don't understand our societal goals,” and, reflecting Uncle Ben, “with great power comes great responsibility.”

These aren't throwaway lines or sound-bites; it's clear from the passion and intensity with which Harris speaks that this is personal. He’s written previously on the topic of who is responsible when AI acts irresponsibly. He’s formed a cross-functional SAS data ethics practice. SAS sits on the board of Equal AI, and is a member of the US President’s National AI Advisory Council. The company is at the highest levels where policy is being formed, while the data ethics practice aims to identify where AI and analytics are delivering biased outcomes and work backwards to understand how the bias crept in and deal with it at its root to remove disparities.

iTWire has spoken with Harris previously, not long after he moved from SAS senior vice president of engineering to the chief technology officer chair. Originally, a young Bryan Harris saw his future in music, but being as much a scientist as he was a virtuoso, he became fascinated by the relationship between analogue and digital, ended up studying electrical engineering, and took up a career in the intelligence community. It was in these roles that Harris took on big data challenges like natural language processing, signals analysis, and streaming analytics. Back then it wasn’t called “big data”, nor even machine learning - it was simply working on “a whole lot of data” - but this solid foundation prepared him to lead SAS’ DevOps function, then engineering, and now its entire technology focus as CTO. Almost 18 months into the role, Harris has clearly made it his own and is laser-focused on what he sees as the sober mission and responsibility of the firm: building better outcomes for society through better data and better models.

For example, Harris explains, the United States has a concept of “redlining”, where zip codes are used to segment customers for insurance rates or loan rates. “Inside that zip code are people making good and maybe not so good decisions. Making a decision at the zip code level can penalise marginalised communities and others,” he says. In this scenario, Harris wants to see improved decision-making that includes other factors, even using proxy data: where the data lacks some of the required fields or categories, items highly correlated with those missing pieces of information can be used instead.
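To make the zip-code problem concrete, here is a minimal sketch (not SAS code; the zip codes and application records are hypothetical) of auditing decisions for disparate impact, using the “four-fifths” ratio that US regulators apply as a rule of thumb:

```python
# Hedged sketch: auditing zip-code-level loan decisions for disparate
# impact. All data below is hypothetical and purely illustrative.
from collections import defaultdict

# (zip_code, approved) records for individual loan applications
applications = [
    ("27601", True), ("27601", True), ("27601", False), ("27601", True),
    ("27610", False), ("27610", False), ("27610", True), ("27610", False),
]

def approval_rates(records):
    """Approval rate per zip code."""
    counts = defaultdict(lambda: [0, 0])  # zip -> [approved, total]
    for zip_code, approved in records:
        counts[zip_code][0] += int(approved)
        counts[zip_code][1] += 1
    return {z: approved / total for z, (approved, total) in counts.items()}

def disparate_impact_ratio(rates):
    # Lowest group rate divided by highest; values under 0.8 fail the
    # "four-fifths" rule of thumb used in US regulatory guidance.
    return min(rates.values()) / max(rates.values())

rates = approval_rates(applications)
print(rates)  # {'27601': 0.75, '27610': 0.25}
print(disparate_impact_ratio(rates))
```

A ratio well below 0.8, as in this toy data, would flag the decision process for investigation and re-engineering rather than proving intent; it is a starting point for the kind of audit Harris describes.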

Or: when an autonomous vehicle misclassifies an object and kills someone, who is responsible? “We can’t just progress tech and assume we’ll have casualties along the way. That’s not acceptable,” Harris says.

Or, he says, “the death rate with black women giving birth in a hospital is a real issue”, with research showing three times the death rate of white women. Here there is a real risk that models trained on data from the past will perpetuate the past. The challenge, Harris says, is to optimise for new states of society and equitable outcomes: to close the disparity by identifying the indicators that cause it and catering for them in the product.

“There is no magical button,” Harris says. Removing bias starts with figuring out what we are trying to achieve, then essentially working backwards to identify disparity and re-engineering to remove it.

In another example, Harris refers to a hackathon held in Milwaukee in conjunction with Citigroup to explore New York City housing, zip code analysis, and lending rates. “A home is a big step to multi-generational wealth,” Harris says, asking again, “What do we want to get out of society?” before answering his own question: “We want communities to grow, investments to come in, banking systems that aren’t tied to medical systems, we want access to food, and to create an impact on the world.”

This is the importance Harris sees in fighting bias and discrimination in data. He mentions a company that is in the news making AI-driven loans. “There are a lot of upsides,” he says - “but what about the unintended consequences?”

It's a grave issue that cannot be overstated. Loans based on biased data will drive biased outcomes, and while it could be easy to downplay one person being rejected for a loan at one point in time, the fact is entire swathes of the community can be negatively impacted for years, even generations.

It's that serious. "People are creating the models, so people have to ultimately be responsible - or we are beholden to models that do not understand the world around us,” he says. “Maths doesn’t know our goals. Analytics don’t understand our societal goals.”

"Most people are not deliberately working on bad outcomes," Harris notes. Yet, at the same time Gartner's research indicates synthetic data is a big growth area, and “if synthetic data is used to train models then that model is biased,” he says.
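The concern about synthetic data is easy to demonstrate. In this minimal sketch (the groups, rates, and generator are hypothetical, not drawn from Gartner or SAS), a naive synthetic-data generator fitted to a biased historical distribution faithfully reproduces that bias in the training data it emits:

```python
# Hedged sketch with hypothetical numbers: synthetic records sampled
# from a biased historical distribution inherit that bias.
import random

random.seed(0)

# Historical approval rates per group, reflecting past bias.
historical_rates = {"group_a": 0.70, "group_b": 0.35}

def synthesize(n_per_group):
    """Naively sample synthetic (group, approved) records that match
    the historical approval rates."""
    records = []
    for group, rate in historical_rates.items():
        records += [(group, random.random() < rate) for _ in range(n_per_group)]
    return records

synthetic = synthesize(10_000)
for group in historical_rates:
    rate = sum(ok for g, ok in synthetic if g == group) / 10_000
    print(group, round(rate, 2))  # close to the historical 0.70 and 0.35
```

Any model trained on these synthetic records inherits roughly the same 35-point approval gap as the history they were generated from; generating more data does not generate more fairness.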

Thus the problem is a people issue, and a policy issue. And it’s one SAS has positioned itself as a leader in, with Harris’ attention on the issue and the data ethics practice he has established. “We drink our own champagne,” he says - a more polite twist on eating one’s own dog food. “SAS is externally-focused and internally-focused. We look at how to take away bias ourselves, and we receive a lot of requests, including RFPs from governments, on our strategy for responsible AI and machine learning.”

One doesn't have to look far to find stories - myriad stories, even on iTWire - spruiking the power of cloud computing and how scalable, elastic, on-demand computing power has enabled rapid decision-making and analytics. Yet we’ve progressed so far that today “so much analytics is happening throughout society where the stakes are high. We have to hold the accountability chain on this. We can’t have people making models and letting them out in the wild,” Harris says.

This is the impetus for the SAS data ethics practice to raise awareness. The practice is collecting stories from around the world of inequitable outcomes in AI, or of other improvements that can be made, with the goal of “knowing all the problems and providing strategies to overcome them,” Harris says.

“People study engineering failures and we should have similar stories,” he said.




David M Williams

David has been computing since 1984 where he instantly gravitated to the family Commodore 64. He completed a Bachelor of Computer Science degree from 1990 to 1992, commencing full-time employment as a systems analyst at the end of that year. David subsequently worked as a UNIX Systems Manager, Asia-Pacific technical specialist for an international software company, Business Analyst, IT Manager, and other roles. David has been the Chief Information Officer for national public companies since 2007, delivering IT knowledge and business acumen, seeking to transform the industries within which he works. David is also involved in the user group community, the Australian Computer Society technical advisory boards, and education.




