Algorithms And Education: Not So Fast

One sector where the pandemic has accelerated the pace of digitalization is education, although, as with so many other sectors, the transition has not always been as smooth as might be hoped. Almost overnight, millions of students around the world were confined to their homes and forced to continue their education online, regardless of whether they had the appropriate infrastructure, with no training, and without teaching methodologies even being adapted to the new channel.

And without doubt, the weakest link in the chain has been evaluation: last month, in England, hundreds of thousands of students were shocked by the unexpectedly low grades awarded by an algorithm in their General Certificate of Secondary Education (GCSE) after the summer exams were cancelled. The algorithm, created by Ofqual, the exams regulator for England, standardized grades to avoid inflation, prompting mass protests by school students and their parents that forced the government to make a U-turn, cancel the results and order a review of the entire process.
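The core mechanism of such standardization, reduced to its essentials, is simple and is where the unfairness creeps in: students are ranked and then mapped onto a grade distribution derived from their school's past results. The sketch below is a toy illustration of that idea under my own simplifying assumptions, not Ofqual's actual model; all names and grades are invented.

```python
# Toy sketch (NOT Ofqual's real model): force this year's grades to
# match the school's historical grade distribution. Every identifier
# and value here is an illustrative assumption.

def standardize(teacher_ranking, historical_grades):
    """Map students, listed best-first by their teacher's ranking,
    onto the school's historical grades, sorted best-first."""
    n = len(teacher_ranking)
    # Letter grades sort best-first lexicographically (A < B < C ...).
    hist = sorted(historical_grades)
    return {
        # Scale position i in the class onto the historical distribution.
        student: hist[min(i * len(hist) // n, len(hist) - 1)]
        for i, student in enumerate(teacher_ranking)
    }

# A teacher ranks four students; the school historically awarded B, C, C, D.
ranking = ["Ana", "Ben", "Cara", "Dan"]
print(standardize(ranking, ["B", "C", "C", "D"]))
# Each student inherits a slot from past cohorts: however able Dan is,
# someone in his class must receive the school's historical worst grade.
```

The point of the sketch is the failure mode the protests were about: individual ability is capped by the institution's past performance.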

Something similar had happened earlier in July with the International Baccalaureate: another algorithm replaced the demanding exams that traditionally take place at the end of the academic year and downgraded students who were expecting higher marks, in many cases denying them a place at the university of their choice at a time when life-changing decisions have to be made quickly. Again, the organization performed a U-turn and cancelled the results, opting instead for an assessment based on students' mock exams and teacher assessments, or allowing them to sit their exams later in the year.

My favorite example of the problem of using algorithms to assess students' performance comes from the United States, where a number of students in lockdown using an online platform, Edgenuity, worked out how to get top scores in their exams: they realized that the grading algorithm for essay-type questions simply looked for certain keywords in their answers, and so began to append, as if they were hashtags, a series of generic words related to the topic. The algorithm duly detected them, assumed the answer covered what the syllabus required, and awarded full marks.
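To see why this gaming works, consider what a keyword-matching grader reduces to. The sketch below is my own minimal reconstruction of the general technique the story describes, not Edgenuity's actual implementation; the function, keywords and sample answers are all assumptions for illustration.

```python
# Toy sketch of a keyword-matching essay grader, as the Edgenuity story
# suggests: the score depends only on whether expected keywords appear.
# This is an illustrative assumption, not the platform's real code.
import re

def grade_essay(answer, keywords):
    """Return the fraction of expected keywords found in the answer."""
    words = set(re.findall(r"[a-z]+", answer.lower()))
    return sum(1 for kw in keywords if kw in words) / len(keywords)

KEYWORDS = ["photosynthesis", "chlorophyll", "sunlight"]

honest = "Plants use photosynthesis: chlorophyll captures sunlight to make sugar."
gamed = "The answer is plants. photosynthesis chlorophyll sunlight"

print(grade_essay(honest, KEYWORDS))  # 1.0
print(grade_essay(gamed, KEYWORDS))   # 1.0 -- keyword stuffing earns full marks
```

The grader cannot distinguish a coherent explanation from a bag of hashtag-style keywords, which is exactly the loophole the students exploited; conversely, a correct answer phrased in other words would score zero.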

Once again, the way that we assess students’ academic performance has been found wanting: an algorithm that standardizes or assigns grades fails to take into account many important aspects of a student’s abilities, while exams themselves merely measure factors such as how well young people are able to memorize “facts”, even though they may not have understood or assimilated what they’ve been taught. I’ve been a teacher for thirty years, and there’s nothing I hate more than the way we evaluate student work, and I’m sure I’m not alone.

Every year, millions of students end up making forced decisions about their future, conditioned by grades that, in many cases, do not reflect their true abilities or skills; if we are going to use grades as a diagnostic tool, the way we assign them needs to improve. The shift to online education forced by the pandemic has starkly exposed these deficiencies. At the same time, blaming algorithms, which are nothing more than mathematical tools, would be a mistake. In this case, the culprit is to be found elsewhere.
