How to build a more ethical, unbiased AI system

An interview with Giles Lane about building unbiased AI systems

“In the construction industry, cutting corners in materials and safety inspections often leads to dangerous vulnerabilities… Such dangers exist too in failing to adequately address bias in source data and training sets… A systemic approach to ethical assessment across an entire organisation would help reduce both potential incidences and future liabilities”

Giles Lane, author of the UnBias AI For Decision Makers (AI4DM) framework, a practical toolkit for assessing the ethics and governance of Artificial Intelligence / Machine Learning systems and automated decision-making systems

Key take-aways

  1. Systems based on machine learning and AI are proliferating and, in the rush to deploy them, the fundamental place of ethics in R&D is being disrupted. This is generating systems that exacerbate our problems and entrench inequalities and injustices.
  2. Making better AI systems is not out of reach. But it does require a change in mindset from secrecy to transparency, one that can help us, as a society, to generate AI systems that are, as much as possible, free from inherent biases.
  3. A whole systems approach is crucial to building the cross-discipline, -field and -sector understandings of collective responsibility to deliver safe, reliable and trustworthy products and services.

INTERVIEW

Omar Valdez-de-Leon (OV): First, a little bit about yourself and your work

Giles Lane (GL): I founded and lead a small non-profit creative studio in London, UK called Proboscis. I am also co-founder of the Manifest Data Lab at Central Saint Martins, University of the Arts London and is research associate of the Human Centred Computing group at the University of Oxford. Over the past 26 years we have used artistic practices to explore inventive and innovative ways for people to identify what they value and to share those values with others. I have collaborated with grassroots communities across the world, as well as with government departments, industry, academia, and civil society NGOs. I have co-created experimental technologies and platforms, applying them in social and cultural contexts that have been both ground-breaking and pioneering. Much of our focus is on combining traditional tools and techniques with emerging technologies to form hybrids that are inclusive of difference and diversity as well as capacity and capability. In so doing, we have developed a degree of expertise in engagement and strategic design that fosters collaboration between partners who would not usually come together.

“Ethics in AI is hardly new or newly important, but what has changed in recent years is the massive growth in practical applications of machine learning and “AI” systems in the everyday world”

OV: Why is ethics in AI suddenly so important?

GL: Ethics in AI is hardly new or newly important, but what has changed in recent years is the massive growth in practical applications of machine learning and “AI” systems in the everyday world – from relatively simplistic ‘recommendation engines’ to much more powerful, and consequential, automated decision-making systems in government, public services and business. Their deployments have, regrettably, often paid insufficient regard to laws, regulations or ethics; and the deficiencies and harms which have subsequently ensued have been readily and comprehensively identified by many researchers and activists.

It is not that ethics in AI is suddenly important, so much as the rush to develop and deploy such systems has recently disrupted the fundamental place in the chain of research and development that ethics historically occupied. Restoring ethics’ key role in the processes of tech development has become urgent because we are facing multiple, existential challenges to the future of human societies and civilisations. Poorly considered solutions will only exacerbate our problems and entrench inequalities and injustices.

OV: AI is becoming ubiquitous. It is not only sorting spam emails but becoming a big part of business, government, and consumer products (from recruitment systems to face recognition, insurance pricing, and medical devices). What are, in your view, the biggest challenges for leaders in these organisations in trying to understand and remove bias from such systems?

GL: One of the biggest challenges is transparency. All systems contain biases – the challenge is to make the system as transparent as possible so that when harms are identified in, or indeed amplified by, prejudicial data sets or algorithmic processes, they can be remediated as soon as possible. But this will require a systemic change in embracing openness at every level, and not just in businesses but in public sector organisations too. When we can perceive how a bias arises, we can work around it and remedy its outcomes. But when it is hidden beneath layers of “commercial sensitivity” or “secrecy” – often there to protect against liability – harms are unjustly perpetrated and persist in the system.

Such secrecy is often used to hide problems even after they have been identified, to frustrate the processes of remediation, or simply to cover up failures in process and mask responsibility. Transparency – at all levels – is critical to enable the tracking back of bias where it causes harm and to facilitate its correction.
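
To make the idea of “tracking back” bias concrete – this is an editorial illustration, not part of Lane’s AI4DM framework – a minimal first step is to publish the selection rates an automated decision system produces for different groups. The sketch below, in Python with hypothetical data and column names, computes per-group approval rates and a disparate impact ratio, and flags large disparities for human review.

```python
# Illustrative sketch: auditing automated decisions for group disparity.
# The data, column names and 0.8 threshold are hypothetical assumptions.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Rate of positive outcomes (e.g. approvals) for each group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group selection rate divided by the highest."""
    return rates.min() / rates.max()

# Hypothetical decision log: one row per automated decision.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

rates = selection_rates(decisions, "group", "approved")
ratio = disparate_impact_ratio(rates)
print(rates)                                   # per-group approval rates
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.60 for this toy data
if ratio < 0.8:                                # "four-fifths" rule of thumb
    print("Disparity flagged for human review and remediation.")
```

The 0.8 threshold follows the common “four-fifths” rule of thumb from disparate impact analysis; any such metric is only a starting point for the remediation work described above.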

Those at the top of organisational hierarchies need to promote transparency internally and externally, to foster a collective sense of responsibility for the outcomes of their products and services, and to enable experts to speak truth to power when problems are identified – without fear of reprisal. Corporate leaders need to acknowledge responsibilities and obligations not just to shareholders but to all of the societies that their products and services influence. Transparency is a fundamental step towards mutual respect.

“The challenge is to make the system as transparent as possible so that when harms are identified in, or indeed amplified by, prejudicial data sets or algorithmic processes, they can be remediated as soon as possible”

OV: In a very practical sense, how can companies make sure they embed ethical considerations in their innovation process when speed and other factors may take priority?

GL: The problem, of course, is speed itself – the deployment of poorly tested or untested products and services without due diligence in anticipating harmful consequences or unintended outcomes. Only systemic and sustained change can wean organisations from their obsession with speed, and embed key processes like the precautionary principle and duty of care at the heart of their business practices. A whole systems approach is crucial to building the cross-discipline, -field and -sector understandings of collective responsibility to deliver safe, reliable and trustworthy products and services.

In the construction industry, cutting corners in materials and safety inspections often leads to dangerous vulnerabilities and structural weaknesses in buildings, causing catastrophic outcomes. This has led to criminal prosecutions for failing to meet responsibilities for standards and duties of care. Such dangers exist too in failing to adequately address bias in source data and training sets, or in assessing past harms and injustices to stop them from being amplified by the scale and speed of algorithms and automated decision making. A systemic approach to ethical assessment across an entire organisation would help reduce both potential incidences and future liabilities.
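
By way of illustration only – again, a hypothetical sketch rather than anything prescribed by the AI4DM framework – one basic “safety inspection” for source data is auditing a training set for under-represented groups before any model is built. The column name and threshold below are assumptions.

```python
# Illustrative sketch: checking a training set for group imbalance
# before training. Column name and threshold are hypothetical assumptions.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str,
                          min_share: float = 0.10) -> None:
    """Print each group's share of the data, flagging under-represented ones."""
    shares = df[group_col].value_counts(normalize=True)
    for group, share in shares.items():
        flag = "UNDER-REPRESENTED" if share < min_share else "ok"
        print(f"{group}: {share:.1%} ({flag})")

# Hypothetical training set: group B makes up only 5% of the records.
training_data = pd.DataFrame({"group": ["A"] * 95 + ["B"] * 5})
representation_report(training_data, "group")
```

A check like this catches only the crudest imbalance; the point Lane makes is that such inspections need to be systemic and organisation-wide, not one-off scripts.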

OV: Do you have any examples of companies that have got it right that can offer some valuable pointers?

GL: At this moment in time I feel at a loss for such an example; others who are better versed in the corporate world may well be able to offer one. However, the past decade in particular has been marred by a steady stream of high-profile cases of egregious harm caused by indifference to the consequences of algorithmic bias – to individuals and groups as well as to democracy itself – with plenty of “ethics-washing” tactics offered up as a sop to social responsibility while the business of extraction and profiteering continues apace.

About Giles Lane

Giles Lane is an artist, designer and the founder of creative studio Proboscis. His work focuses on story making, social engagement and co-creative participation. Giles is co-founder of the Manifest Data Lab at Central Saint Martins, University of the Arts London and is a research associate of the Human Centred Computing group at the University of Oxford.

http://gileslane.net | http://proboscis.org.uk
