The cloud is no longer "the next big thing"; companies are now looking to machine learning and artificial intelligence to make their products smarter and more efficient, and to free human operators from manual tasks that can be intelligently automated. AI is becoming more accessible, breaking new ground, permitting new ways of working and increasing productivity.
However, Adam-Gedge says organisations need to establish an ethics framework to guide their journey into an AI-first world.
Just as we think about the benefits, we must also consider the impacts these new technologies can have, and how removing human operators from domains such as weaponry and healthcare can compromise outcomes. AI introduces uncharted moral and ethical dilemmas, and risks at a new scale.
Adam-Gedge has been running Avanade Australia for three years. The firm was formed 17 years ago and is jointly owned by Accenture and Microsoft.
"Our heartland is working with clients across all industries in the digital and cloud spaces, largely around the Microsoft ecosystem, in 23 countries and a variety of client experiences," she says.
"We do a lot of work with technology, but increasingly we're in the consulting space. We provide advice to clients on digital transformation, how to move to the cloud, how to make best use of data, and what data they need to inform business decisions. We help clients on their journey to digital maturation."
This increasing move toward transformation consulting led Adam-Gedge and her team to reflect on intelligent automation: what its capabilities are for the organisation, and what it means for leadership, people and customers.
"We looked closely at what the implications are as our world becomes digital," she said. "As we introduce more and more technology into the world in which we live as consumers, students, and workers, the ramifications of what technology and digital technologies can do are far-reaching."
"We have a view that technology can do almost anything you wish. So if we look at things like facial recognition, driverless cars, or any number of other technologies, we need to think about the ethical elements or implications. As businesses and organisations, we need to think about our responsibilities in how our developments will impact the future."
Adam-Gedge says there is no "one size fits all" answer to the challenge of digital ethics, but that you must always have the end user in mind. She cites three building blocks necessary to establish a digital ethics framework for any provider:
- Start with the customer. This ensures the standards and ethics accompanying your approach are shaped and informed by what your business can do best for its customers.
- Data transparency. Organisations must be transparent with their customers and employees about how much data is gathered, how it is used and treated, and what is and is not acceptable.
- Ethics at all levels. The standards, guidelines and processes around ethics should not be left to software developers or engineers to figure out. A regular ethics review cycle, built into business processes at all levels, is critical.
Having the end user in mind is vital, Adam-Gedge states. "Whether helping employees collaborate, or implementing a medical system, you have to think about the consumer, employee or customer need."
"Then, think of the ethical implications of what is being designed," she says. "We can often think of the intended consequences, but with AI and machine learning the challenge is contemplating how the algorithm may act based on how it adapts to the information it processes. There isn't one set of rules, so we have to really think about the end user."
"As businesses, we often think about ethics in terms of compliance and look at technology as the art of the possible. In this case we need to consider the morally responsible choices for our design," she says. "What are the ramifications of a self-driving car hitting a pedestrian? Who is responsible? Should the car drive into a wall to avoid this?"
"Let’s consider the insurance industry. What are the consequences of the car accident above? Should an AI-enabled car automatically alert the insurance company? What about ambulance, police and other aid services? Driverless cars and the algorithms associated with them are just one of the areas we need to consider on this topic.
"Ethics is a very big social topic – it's nationwide and worldwide, across business, government and consumers. It's very pervasive with different inputs and views on the right approach."
For an organisation, Adam-Gedge says, "we think it's really important that digital ethics are thought of from the board down to senior management and every person in an organisation. You can't have one part being responsible, it needs to be part of the entire organisation’s approach to design.
"ASIC recently made a comment that financial services organisations need to have accountability and responsibility points for what is being designed around AI and advanced analytics.
"We can’t wait for an issue and then claim 'the algorithm ate my homework' - we're in a whole different sphere of what can happen, with intended and unintended consequences. We can't just design something and put it in place without being responsible for what happens."