Algorithmic grading is not an answer to the challenges of the pandemic
Graeme Tiffany is a philosopher of education. He argues that replacing exams with algorithmic grading, as was done in Great Britain, exacerbates inequalities and fails to assess students' abilities.
There is an extraordinary irony in the evolving story of predicted grades used in Great Britain in lieu of exams, which were cancelled due to the Covid-19 pandemic. The debacle that ensued was entirely predictable.
Indeed, no less than the Royal Statistical Society warned months ago that there were problems with the proposed mechanism for awarding grades to a cohort already labelled the ‘Covid generation’, given the impact of school closures and the abandonment of exams.
Many young people have been unable to engage with online learning because of ‘digital poverty’ or because they have had limited or no support. According to Alan Smithers, a professor at the University of Buckingham, upwards of 700,000 pupils have been unable to take advantage of remote lessons because they lack access to a laptop or a functioning internet connection. Inevitably, this disadvantage has been unevenly spread. In some schools, as many as seven in ten pupils have no access to laptops or an efficient internet connection. Conversely, it has been ‘business as usual’ in private schools (known as the ‘independent sector’ in Great Britain), where resource issues are rarely a concern.
No attempt was made to factor these inequalities into the statistical modelling used to inform grade predictions. But this seems par for the course. Inequalities have been an entrenched feature of almost every education system in the world, and little if anything ever seems to be done about them.
Exacerbated inequalities
And yet, what is happening this year appears to have done more than reproduce existing inequalities. It has patently exacerbated them. A key driver is the governments’ obsession with ‘moderation’: the desire to ensure that no one year is more or less advantaged compared with the year before or, indeed, the year after. One set of commentators argues this is essential to ensuring the ‘integrity of the system’, while others argue that it constitutes little more than the maintenance of the status quo. Whichever it is, such statistical moderation is premised on an assumption that the ability of students in one year is broadly in line with that of the year before, and, likewise, will be in line with that of the year after.
Put another way, the concept of ‘ability’ takes on a fixity without philosophical foundation. What is revealed is the discriminatory reality of a system that labels children ‘low’, ‘average’, or ‘high’ ability even as infants, on the basis of broader historical and sociological population data. This data also has a discrete geography. Low-, average-, or high-performing schools tend to be defined as much by postcode as by league tables. While we know postcode is a reasonable proxy for poverty, especially in countries with high levels of inequality (like the UK), to use it also as a proxy for ability is something altogether different.
This is why the inclusion of past school performance data as a predictive metric has become so controversial. As revealed in the outcomes of the Scottish system last week, the overwhelming majority of students marked down compared with their teachers’ predictions were from deprived communities.
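The workings of the actual models have not been made public, but the mechanism critics describe can be illustrated with a minimal sketch. The toy moderation step below is a hypothetical illustration, not the SQA or Ofqual algorithm: the grade scale, function names, and numbers are all invented. It shows how fitting a cohort to its school’s historical grade distribution can override teacher predictions for individual pupils.

```python
# Illustrative sketch only: a toy moderation step, NOT the actual
# SQA/Ofqual model, whose workings have not been made public.
GRADE_ORDER = ["A", "B", "C", "D"]  # hypothetical scale, best first

def moderate(teacher_predictions, historical_shares):
    """Fit a cohort's grades to its school's historical distribution,
    keeping only the rank order of the teachers' predictions."""
    n = len(teacher_predictions)
    # Turn the school's past grade shares into a fixed quota of grades
    # for this year's cohort, regardless of what teachers predicted.
    quota = []
    for grade in GRADE_ORDER:
        quota.extend([grade] * round(historical_shares.get(grade, 0) * n))
    quota = (quota + [GRADE_ORDER[-1]] * n)[:n]  # pad/trim rounding error
    # Rank pupils best-first by their teacher's prediction, then hand
    # out the historical quota in that order.
    order = sorted(range(n),
                   key=lambda i: GRADE_ORDER.index(teacher_predictions[i]))
    moderated = [None] * n
    for rank, pupil in enumerate(order):
        moderated[pupil] = quota[rank]
    return moderated

# A school that historically awarded no A grades pulls down even its
# strongest pupils, whatever their teachers predicted:
predictions = ["A", "A", "B", "C", "C"]
history = {"A": 0.0, "B": 0.2, "C": 0.6, "D": 0.2}
print(moderate(predictions, history))  # ['B', 'C', 'C', 'C', 'D']
```

Note that in this sketch a pupil’s own attainment never enters the calculation except through rank order within the school: the school’s past results do the deciding, which is the crux of the ‘postcode’ complaint.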
Student protests
This catalysed student protests in which young people carried banners proclaiming “Stop the postcode lottery” and “Students not stats”, slogans that pricked consciences about the dehumanising capacity of this algorithm. What followed was a speech by First Minister Nicola Sturgeon, who said: “Despite our best intentions, I do acknowledge we did not get this right and I’m sorry for that.” Arguably, this was also one of the first occasions on which politicians have acknowledged the discriminatory potential of algorithmically-informed policy regimes.
Notwithstanding the contradictions implicit in policy agendas that demand continuous improvement in school performance and a ‘narrowing of the gap’ between high and low achievers, the architects of moderation systems appear to fear significant change. Teacher bias is judged a serious threat, even though long-term data shows that predicted grades are broadly accurate, albeit with some important caveats concerning high-achieving students from Black, Asian and Minority Ethnic (BAME) backgrounds.
On this, the Scottish Qualifications Authority (SQA), an independent organisation sponsored by the Scottish Government and responsible for issuing grades, has pointed out that had adjustments not been made, teachers’ recommended grades would have resulted in an uplift of between 10% and 14% (depending on the level of qualification) compared with the previous year. As stated, this is inevitably viewed as a threat to the legitimacy of ‘the system’. Albeit fanciful to many, the idea that this uplift might have been some compensation for months of disrupted formal education was never entertained. Nor was the idea that we might throw some significant research capacity at studying these unique circumstances, thereby testing long-held views that a ‘less bright’ student will inevitably struggle in higher education.
The irony is that the First Minister’s apology will likely result in amendments being made to the grades awarded by the SQA, bringing them more in line with the predictions teachers had made and for which they had been vilified. The (now altered) rationale is that teachers know their students better than an algorithm ever will, and that ‘individual-level data’ should be ascribed greater value. Does this signal an appreciation that there may have been an over-reliance on algorithms, a detachment from the human dimensions of policy formulation and the analysis of its effects?
In this context, the decision made in 2015 to reform GCSEs, the certificates marking the completion of the secondary education curriculum, and to assess them almost entirely through exams (ostensibly to reduce cheating) will likely undermine any humanising intent: the swathes of nuanced data about students’ week-on-week performance once collected through continual assessment are no longer available. A further irony.
Exams are not more objective
With a view to preventing these issues from reappearing next year, and in anticipation that Covid-19 may still be with us, calls have been made to ensure that exams go ahead regardless of the situation we find ourselves in. But this may constitute a further dereliction of duty. As the Royal Statistical Society argued, “actual exam results are an imperfect measure of what students know. There is variability by which exam questions are set and answered, there is marker variability, and individual student performance can vary day to day.” So, any suggestion that exams are the be-all and end-all of objectivity is patently nonsense.
Second, we might be failing here to respond to something extraordinarily positive that this novel coronavirus has given us – the stimulus to think differently. Here it becomes at least imaginable that it is not the absence of exams that is the problem but the very way we conceptualise education, and, practically, the way learning should be assessed.
Might we abandon exams altogether?
Such provocations have the capacity to breathe life into fundamental questions about the philosophy of education: what, actually, is education? Can learning be reduced to something examinable? If what has happened becomes a starting point, one that reveals the inequities ‘baked in’ to existing education systems and their role in simply comparing, ranking, and stratifying young people, and that ignites dialogue about responses and a more equitable future, then we might come to value these strange and difficult times.
Democratizing algorithms
Potentially, human values might get more consideration when it comes to thinking about what education is and what happens in its name. One value above all will hopefully get an airing: democracy. The workings of the algorithms that informed the grades handed out in Scotland last week, and of those that will be handed out across the remainder of the United Kingdom this week, have not been accessible to us. But the proponents of algorithmic ethics look set to have their day, at least for now, as current politics demands that these workings be revealed. On this, the philosopher Onora O’Neill speaks of ‘Intelligent Transparency’[1], whereby data systems should be accessible, comprehensible, usable, and assessable. Democracy seems to demand all four.
Algorithmic grading in education sits within a much wider realm of problematic algorithmic cultures. These include the role of algorithms in propagating racial and gender biases[2] and the unforeseen (or deliberately hidden) consequences of algorithmic decision-making in social work[3].
What is needed is not an attack on algorithms per se, but rather a more nuanced appreciation that data exists in multiple forms, that data is qualitative as well as quantitative, and that it can be interpreted in both discriminatory and liberatory ways. Social practices such as education are revealed to have subjective and objective dimensions; they are as much art as science. What democracy demands is an uncertainty-appreciative outlook, both in the practice of education and in the assessment of that thorny concept, ‘learning’.
The very concept of ‘algorithm’ needs democratising.
[1] O’Neill, O. (2009) Ethics for Communication?, European Journal of Philosophy 17:2, Blackwell: Oxford.
[2] Benjamin, R. (2019) Race after Technology, Polity: Cambridge; Noble, S.U. (2018) Algorithms of Oppression, New York University Press: New York.
[3] Tiffany, G. (2020) Thinking, and acting, philosophically in relation to risk: social education in an era of Big Data, Quaderns d'Educació Social: Barcelona.