FEATURE — 2 December 2019

Sandra Wachter in seven



Sandra Wachter is associate professor and senior research fellow in law and ethics of AI, big data and robotics at Oxford University, a fellow at the Alan Turing Institute, and a member of the World Economic Forum’s global futures council on values, ethics and innovation.


1. How can artificial intelligence most benefit society?

It can be extremely beneficial in sectors such as health, where it can augment medical decision-making by helping to prevent or spot diseases and to develop good treatment plans. Similar things can be seen with climate change, but we must keep in mind the risks of machine learning and artificial intelligence when we create these systems. It’s about making sure the technology brings us closer together, rather than sets us apart.

2. Does the legal profession understand AI well enough?

It is crucial that disciplines talk to each other; there’s not much interdisciplinary research going on. It’s very important that we get people from different backgrounds – I work with an ethicist and a computer scientist. We have a research programme on the governance of emerging technologies, and we think about the legal and ethical challenges that arise with those systems.

3. How can big tech behave more ethically?

With big data comes big responsibility, because you could potentially harm people. If you’re trusted with sensitive information, you have a responsibility to be ethical and transparent about it. Organisations must anticipate possible risks. Companies and regulators want proactive decision- and policy-making, and to think about the ethical consequences. They see it as a pathway to responsible innovation rather than something that hinders progress.

4. How has your work on assumed affinity developed?

It was born of a research project I’m working on – a right to reasonable inferences – looking at whether we, as citizens, have a right over how we’re assessed by algorithms making decisions about us. You can think about me however you want; I have no way of controlling your mind – but it’s different when algorithms make those decisions. They use data to infer so many sensitive things about you that you’re not aware of – your sexual orientation, your ethnicity, your religious background.

We should have a right to be reasonably assessed by algorithms, and affinity profiling is a crucial part of that. When you access an app, search on Google Maps, or buy something online, all that metadata is being collected. It’s usually being used to infer interests and assumed personality traits. Do I have a right over how I’m being seen, and do I have enough protection against discrimination – because privacy and discrimination are sister disciplines?

5. You have talked about anti-discrimination laws only working for groups we already know have been discriminated against – so how do we know or predict who else might be?

We haven’t found solutions yet. What we can’t do is just add another category into non-discrimination law, because the list would be endless. We should be moving toward output and bias testing. It’s about trying to test before you release the product into a critical market.

6. How can organisations ensure ethical principles are embedded in the way they do business?

Discussion around AI principles is a good starting point. The next step is how we implement this in practice. You can say that privacy is a principle, fairness is a principle – but what does it actually mean? You have to fill a term like fairness with meaning, because you have to be able to go to the engineer and say: this is what you do today that’s different from last week. And that’s the hard work.

7. Can the global nature of technology companies versus the local nature of regulation be overcome?

It can work in the same way human rights developed. Sometimes it’s on a local level, a national level, an international level; you have the European Convention on Human Rights – it started somewhere. It would be ideal to start with an international framework, but often that’s not possible.