In defence of paper-based exams for GCSE computer science

Much comment has been made following a recent GCSE computer science exam. Whilst I have my own views on the situation, the purpose of this post is not to comment on that specific paper: I don't teach that particular course, so I don't have access to the paper or any students of my own who've sat it. Rather, I intend to argue in defence of paper-based written exams for GCSE computer science more generally, since one argument I've seen raised in the aftermath of that exam is the case for moving towards computer-based exams. Specifically, points such as this made in the Schools Week article linked above:

Miles Berry, professor of computer education at the University of Roehampton, said the subject was difficult to assess in general because written exams offered “a very unrepresentative experience of problem solving using programming”.

“I think it would be lovely to get back to something which gives a more realistic experience of programming,” he added.

I'm not picking on Miles specifically; I've seen this type of argument made numerous times elsewhere. He just happens to be the person making it who was quoted in the article.

An "unrepresentative experience" 

The most common argument in favour of ditching paper-based exams for GCSE computer science is that they provide an "unrepresentative experience" of problem solving using programming, or something to that effect. What I think those making this argument are most often referring to is the idea that, in a professional context, software engineers write program code on computers and not on paper. Whilst this is true some of the time, it is not universally so.

Whilst some companies have moved towards digital coding interviews for hiring, a great many still favour the whiteboard method. When I was hired for my last software engineering role before entering teaching, it was a whiteboard coding interview I had to pass, and that was not in the distant past: it was the latter half of the 2010s. Then, once you've got your job, unless it's at the smallest of startups where you're the only engineer, you'll need to work as part of a team. In my prior professional experience, that teamwork would most often happen by drawing diagrams, making notes and writing code on whiteboards as a group. That's where a vast amount of the problem solving happened in a professional setting. I therefore suggest that solving problems by programming by hand is not an unrepresentative experience of working in industry; in actuality, it is a highly representative one which should not be lost.

There are also next steps in education to consider. When I studied for my computer science degree I did need to complete coursework producing working software using a computer, but equal weighting in many of the programming modules I took was placed on being able to demonstrate this knowledge in a written programming exam. For those students looking to take the subject further in higher education, a written exam is highly representative of what you'll be expected to do, even on a degree course considered more practical, as mine was.

In reality, a handwritten programming exam is no more unrepresentative an experience than a mathematics exam. Whilst professionals using mathematics in their jobs may often do so using a computer, handwriting the work is equally valid. In the computer science context, I'm going to use the rest of this post to argue that the handwritten exam is better than the currently existing alternatives.

The on-screen exam

The most realistic alternative is the on-screen exam (I'm not even entertaining the idea of NEAs at GCSE, for hopefully obvious reasons), and it is a poor one in my view. Firstly, let's consider students' cognitive load. For some subjects, such as Art or Food, it is not possible to assess without a different format of exam. However, the more distinct exam formats a student must contend with, each requiring them to do things differently, the greater their cognitive load in the exam period. Ideally, when a student is being assessed, we want as much of their working memory as possible to be available for holding the information in the questions they're being asked. When you introduce an unfamiliar exam format, a student is going to need some of that memory to remember how they're supposed to approach it. I've tried to mitigate this for my own students taking the AQA A-level on-screen exams this year by running additional mock exams, but not only did this add to my workload and that of the students, it also did not completely alleviate the issue.

Another point that has been raised is uptake of GCSE computer science. An on-screen exam is not going to make it easier for a school such as the one I teach in to boost uptake, for practical reasons. Currently I have 33 students in year 10 studying the subject. We have two computer rooms in which on-screen exams could be taken, with a combined capacity of about 60 students. However, in a public exam students would need to be spaced out, and therefore I'm not convinced I could even fit my current year 10 cohort of 33 in for an on-screen exam without building extra computer room space elsewhere. A subject lead asking SLT to get an extra computer room built because it'll be needed for less than half a day per year is going to be a tough sell in today's financial climate.

The student experience when things inevitably go wrong should also be considered. When I was at school I studied engineering, initially as a BTEC, and I vividly remember the on-screen exam failing, which led to us switching course to the GCSE, which had a written exam. Even if those kinds of failures are less likely with a more modern on-screen exam, it will still carry a greater chance of having to be abandoned in an emergency such as a power cut or cyber attack. With a written exam, even in the case of a power cut, most exam venues have windows and/or backup lighting, and the exam would be able to continue. With an on-screen exam there is far more potential for things to go wrong, and the stress that this can cause students should not be underestimated.

One might argue that some of the practical points I've raised in this section aren't relevant, because such considerations should not have a bearing on what is the best assessment method for a given subject. I suggest, though, that whilst we campaign for the education system we'd like to have, we must be pragmatic about working within the system as it is, and anything that puts additional barriers in the way of schools offering the subject to more students cannot be a good thing.

Although written exams are by no means perfect, and I do in principle support efforts to explore and develop better assessment methods, they're the best method we currently have of assessing computer science, and any moves to instigate change must be cautious, ensuring we don't end up with something worse.