The journey to ethically-aligned artificial intelligence in healthcare policy

30 April 2020

[This article was originally published in the OTT Annual Review 2019-2020: think tanks and technology in March 2020.]

The idea of intelligent machines performing intelligent tasks seems exciting. However, it opens up a can of worms around ethics, governance, and cultural implications. Some of these concerns include privacy issues around where the data comes from to train these algorithms. Others include the potential for bias in how these algorithms are designed or trained by the people creating them. And in sectors like healthcare, the stakes are high. So, where do you start in creating ethically-aligned artificial intelligence (AI)?

Many of the big technology and consulting companies are churning out ethical frameworks left, right and centre. These well-articulated (and sometimes beautifully designed) frameworks serve as great starting points.

However, they are often missing the next step. How, exactly, do you take these ethical principles and turn them into practical steps in the process of creating AI? How do you operationalise and implement ethical standards into technology creation?

Governments are trying to understand these questions too. In an effort to keep up with the rate at which technology is being developed, they are looking to find a governance balance that keeps people safe while still allowing for innovation.

A really interesting example of this is Singapore’s Model Artificial Intelligence Governance Framework, released in January 2019 as a guide for organisations to create explainable, transparent, and fair AI technologies. A second edition followed in January 2020, adding insights and lessons learned from working closely with companies.

They describe their approach as human-centric, offering four areas to consider: (1) having internal organisational governance with clear roles and responsibilities; (2) determining the right level of human involvement in AI-augmented decision making; (3) managing operations for explainability of the algorithm; and (4) communicating to stakeholders.

This offers concrete steps for companies to create supporting processes for ethically-aligned technologies. It is also hugely valuable for the Singaporean Government, which can learn from and iterate on its governance and policy as challenges and issues arise.

In some ways it seems to be a move for policy to act more like technology itself: starting with an MVP (minimum viable product, or in this case policy), launching, testing, and iterating based on validated data.

However, some industries – like healthcare – need more scrutiny. In healthcare, the risk of a medical error harming a patient raises the bar for the quality of any technology being used. If AI systems are to help diagnose an illness or disease, they will need to be classified as medical devices, which requires high rigour and potentially a clinical trial.

In the UK, many AI systems intended for use in healthcare still have several rounds of clinical research ahead of them before they can be used.

In an effort to safely speed up this process, the UK National Health Service (NHS) formed NHSX in 2019. NHSX is leading the way in developing policy and setting standards for the modern technology supported and used by the NHS.

In the same year that it was created, NHSX published a report highlighting the many opportunities for utilising AI, but also laying out the risks. The report also outlines a code of conduct with considerations around data sharing, privacy, transparency, and the efficacy of the algorithms.

Yet all of this still leaves many open questions around broader cultural implications. For example, the possibility that AI systems become so advanced that they eventually de-skill healthcare practitioners or other parts of the workforce. Or considerations around health inequalities and systemic racism embedded in the data used to build algorithms.

Many of these algorithms need more and more data, but we have to ask ourselves: where does this data come from? Is it diverse enough? Does it account for things we never considered or valued before?
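To make the question "is it diverse enough?" a little more concrete, here is a minimal, purely illustrative sketch of how a team might start auditing a dataset before building a diagnostic model: checking how well each group is represented, and whether a model's accuracy holds up equally across groups. The column names, example values, and choice of metric are assumptions for the sake of illustration, not a prescribed method.

```
# Hypothetical sketch: auditing representation and per-group performance.
# Column names and example records are illustrative assumptions.

import pandas as pd

# In practice this would be the real training or evaluation dataset.
data = pd.DataFrame({
    "ethnicity":  ["A", "A", "A", "A", "B", "B", "C"],
    "diagnosis":  [1, 0, 1, 0, 1, 0, 1],   # ground-truth label
    "prediction": [1, 0, 1, 0, 0, 0, 0],   # model output being audited
})

# 1. Representation: what share of the records does each group contribute?
representation = data["ethnicity"].value_counts(normalize=True)
print("Share of records per group:\n", representation, "\n")

# 2. Performance: is the model equally accurate for every group?
per_group_accuracy = (
    data.assign(correct=data["diagnosis"] == data["prediction"])
        .groupby("ethnicity")["correct"]
        .mean()
)
print("Accuracy per group:\n", per_group_accuracy)
```

Large gaps in either table are not a verdict in themselves, but they are exactly the kind of prompt for the harder questions above: who is missing from the data, and for whom does the system work less well?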

There may be no immediate solutions, but a way to understand the context and impact of these implications is through user research.

While the policy world may not be familiar with this type of research, the tech world uses it as its secret weapon. For companies like Amazon and IBM, user research enables them to design better services, find opportunities for innovation, and understand the context and impact their products and services have on people.

IBM defines the role of user research as helping to ‘understand how people go about performing tasks and achieving goals that are important to us. It gives us context and perspective and puts us in a position to respond with useful, simplified, and productive design solutions.’

Connected to policy, and to the governance and regulation of technology itself, this could look more like ethnographic research used to inform better policy decisions.

The UK Government website shares how user research can be used to design government services. And the UK Policy Lab has been using ethnographic research for many years. They describe the role of ethnographic research in policymaking as a way to ‘help reframe government’s understanding of its purposes and how the world in which it exists and which it shapes is changing.’

Ultimately, many of these new technologies are forcing us to ask tougher questions of ourselves. They are requiring us to look through a new lens, to have more open conversations, and to develop our emotional intelligence as we create artificial intelligence. Ethnographic research is a way to keep a finger on the pulse of the ethical and cultural implications of technology on society, and to better understand the role of policy in keeping these technologies safe and fair.