OPINION
4 September 2020

We are responsible for the algorithm


This summer’s exam results brought the question of algorithmic bias to the fore. But algorithms do not exist in a vacuum. 


I’ve been thinking a lot about my A-level results of late. I had one of the ‘old-school’ style re-mark situations, back in the time before The Machines got involved. I dramatically under-performed on two of my best subjects. In my re-mark, one of my papers went up by 32 marks, getting me the grades I needed for my first choice university. The other didn’t change. Against all expectations and predictions, I really had just mucked it up. 

It’s hard, then, not to think about what would have happened were I to be 18 again this year. Would my predicted grades have spared me the heartache of that week? Would the silence of my teachers, so keenly felt at the time, translate into the actual betrayal of a lower grade? The answer to both of these questions is that it doesn’t matter. I went to one of the top schools in the country, I performed well, I got on with my teachers. I would have been fine.

I tell this story because this is what I see being pitted against each other in the media and political cycle around the Exam Algorithm Scandal – the cold, unfeeling machines against the tragedy of personal anecdote. Time and again, a teenager is wheeled out onto the news, mother by their side: ‘I feel betrayed, this shouldn’t have happened, it’s not fair’. So we look for someone to blame – and there is the dreaded Algorithm.

I’m very conflicted about this algorithm. On one hand, the results it produced were clearly untenable – the large number of students downgraded, the patterns of who was downgraded where, in what schools, from what backgrounds.

However, if my work in AI has taught me anything, it’s that algorithms do not exist in a vacuum. They are built by people at one end and should be interpreted by people at the other end. If the former isn’t scrutinised, and the latter doesn’t happen, the result can so often be something everyone is unhappy with. This is what claims of a ‘mutant’ algorithm are missing. The algorithm was not given a life of its own. We built the algorithm and we are responsible for it. 

Claims of a ‘mutant’ algorithm tap very easily into more primal fears about technology. Some of these are dramatised – the hyper-intelligent AI that is going to turn evil and take over the world, the robots that are going to steal our jobs.  

But some of these fears are more complex. About five years ago, our company’s machine learning partner built an algorithm to mark A-level exams, which was about as accurate as a human marker, but much faster. However, there was a deeply negative response. There was a sense that whoever was in the small percentage mis-marked by a machine would be so much more unhappy about it than those mis-marked by a human, so it was not deemed worth taking forward.

This raises an interesting question. Had we gone with teacher recommendations, but the results had been exactly the same as the machine produced, would we have accepted it more readily? Does the very fact of the word ‘algorithm’ incite fear and stress? 

Using an algorithm to mark a paper is different from using it to predict a grade. But when so many algorithms are, essentially, prediction models, something quite telling is revealed. A set of parameters was created to model, as accurately as possible, the educational world we’ve created – and the results that came out showed a world rife with inequality. It held a mirror up to us and we did not like what we saw.
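To make that concrete, here is a minimal, hypothetical sketch – not the actual Ofqual model, just the general shape of a rule that leans on a school’s past results – showing how such a prediction inherits whatever inequality is already in the data:

```python
# Hypothetical illustration only: a grade-prediction rule that hands this
# year's cohort the school's historical grade distribution, in teacher-
# assessed rank order. Identical students in different schools end up with
# different grades, purely because of where they sat the exam.

GRADE_ORDER = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

def predict_grades(ranked_students, historical_grades):
    """Give the school's historical grades to this year's students, best
    historical grade to the top-ranked student, and so on down the list.
    Assumes, for simplicity, the cohort matches the historical cohort size."""
    best_first = sorted(historical_grades, key=GRADE_ORDER.index)
    return dict(zip(ranked_students, best_first))

cohort = ["student_1", "student_2", "student_3", "student_4", "student_5"]

# Same five students, same teacher ranking; only the school's past differs.
school_a_history = ["A*", "A", "A", "B", "B"]   # historically high-attaining
school_b_history = ["B", "C", "C", "D", "E"]    # historically lower-attaining

print(predict_grades(cohort, school_a_history))
print(predict_grades(cohort, school_b_history))
```

Judged against the past, a rule like this looks perfectly ‘accurate’ – which is exactly what the results exposed.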

To call this algorithm a ‘mutant’ is to absolve not only the government but society from responsibility – not just to the students of this year but to all the students whose potential is failed by the education system, year after year. It is taking advantage of the emotional fears that technology is able to incite, to hide what is truly mutant in our society. 

The people who built the algorithm were not briefed with the task of solving educational inequality. But imagine if they had been. Imagine if a group of diverse minds from a set of diverse backgrounds were given the budget and resources to say: ‘let’s harness technology to make things different this time’.

Imagine if this group of people put a different set of biases into the building blocks: a set of biases that we could all get behind. Where being aware of bias didn’t mean ‘try not to mimic patterns of structural and societal racism when building your algorithm’, but rather ‘we know you already think about how the underrepresented in society are affected by systems – keep doing that’.

Then, at the other end, imagine if there were another group of humans to look at what the machine puts out, to spot the errors that will come with a machine, to put yet another lens on it. To say: what this machine has produced is a prediction – what can humans do to make it a reality we want to live in?

The algorithm did what we told it to. Maybe next year, we can ask a different algorithm to do something else. 

Hannah Marcus is a strategist at Discover.AI
