There is growing concern that AI models can incorporate existing biases such as those relating to age, ethnicity, or gender.
Pega's ethical bias check detects unwanted discrimination by using predictive analytics to simulate the likely outcomes of a given strategy.
It generates alerts when the bias risk exceeds user-specified thresholds, for example if the audience for a particular offer skews toward or away from specific demographics.
This allows organisations to identify the relevant algorithm and make adjustments to help ensure fair and more balanced outcomes.
The check operates across all channels, such as web, email and contact centre.
Bias thresholds are set by the user to allow for situations where an organisation is trying to reach a particular segment. For instance, there is no point asking younger people whether they are interested in taking advantage of benefits offered to Seniors Card holders.
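The threshold mechanism can be pictured with a short sketch. This is purely illustrative and not Pega's implementation: it compares the demographic mix of a simulated offer audience against a baseline population and flags any group whose share deviates by more than a user-set threshold, mirroring the idea of an audience skewing toward or away from specific demographics.

```python
# Illustrative sketch only (not Pega's actual algorithm): flag demographic
# groups whose share of an offer audience deviates from the baseline
# population share by more than a user-set threshold.
from collections import Counter

def bias_alerts(baseline, audience, threshold):
    """Return groups whose audience share deviates from the baseline
    share by more than `threshold` (absolute proportion)."""
    base_counts = Counter(baseline)
    aud_counts = Counter(audience)
    alerts = {}
    for group in base_counts:
        base_share = base_counts[group] / len(baseline)
        aud_share = aud_counts.get(group, 0) / len(audience)
        deviation = aud_share - base_share
        if abs(deviation) > threshold:
            alerts[group] = round(deviation, 3)
    return alerts

# Example: a simulated audience that skews toward the 18-34 age band.
baseline = ["18-34"] * 40 + ["35-54"] * 35 + ["55+"] * 25
audience = ["18-34"] * 70 + ["35-54"] * 20 + ["55+"] * 10
print(bias_alerts(baseline, audience, threshold=0.10))
```

Raising the threshold for a given group models the legitimate-targeting case described above, where an organisation deliberately focuses on one segment and does not want an alert for that skew.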
Ethical bias checking can be included as a matter of course when simulating strategy results.
"As AI is being embedded in almost every aspect of customer engagement, certain high-profile incidents have made businesses increasingly aware of the risk of unintentional bias and its painful effect on customers," said Rob Walker, vice president of decisioning and analytics at Pegasystems.
"With the ethical bias check, we're empowering businesses with tools that help reduce AI bias to improve how businesses interact with customers and increase customer lifetime value."