If universities start to set more of their own tests in order to tell candidates apart, the burden will fall on sixth forms, writes Emma White.
After this year’s A-level grade inflation dramas, there have been rumblings that universities will start to introduce a wider range of admissions tests, in order to tell one triple A* student from the next.
A good idea, you might say. Why not let universities devise their own ways to select the pick of the crop and get the right “fit” of student for their unique courses?
Some may argue that a more detailed approach to selecting students is overdue and will give those most suitable a stronger chance of winning a place where courses are oversubscribed.
They may well be right, but creating a new layer of challenge will ultimately fall onto the shoulders of sixth form staff, and it will not be good news for A-level students either.
Year 13 is a pressured year at best. For those applying to Oxbridge, medicine or dentistry, the year starts early, almost as soon as Y12 draws to a close. There is a summer of drawing up a personal statement and working towards the pre- or at-interview assessments set by the chosen course or college.
“Creating a new layer of challenge will ultimately fall onto the shoulders of sixth form staff.”
Anyone familiar with this process will know how much work goes into producing students for those early assessments. Indeed, my own son has spent his summer preparing for his Cambridge NSAA for natural sciences.
His focus is understanding the thought patterns of those who set the test, quite a different approach from the A-level material he has faced so far. The responsibility for making sure he arrives on 3 November, ready to take his chance, is mine; the workload is his.
The range of university assessments designed to test applicants is wide. Beyond the standardised BMAT and LNAT, a host of other exams exist, but it is the intricacies that pose the problem. Take the history or modern languages assessments for Oxbridge: not only do they differ between the two universities, but some colleges add an extra dimension to the challenge with further content of their own.
“From a mental health perspective it would be a big ask for youngsters who already feel they are tested too much and too rigorously.”
Whilst Cambridge sets a challenge of evaluation and analysis, Oxford is more grammar-centric. If you are preparing two students, then you have two very different sets of expectations to manage.
So imagine this scenario rolled out across all universities. All students applying through Ucas would face increased study as they prepared for pre-A-level tests. They would, of course, have the A-level work, too, and you may believe that this would focus minds and keep brains match-fit.
However, from a mental health perspective, it would be a big ask for youngsters who already feel they are tested too much and too rigorously, and for their teachers, too.
Would teachers really want such an increased workload? I doubt they would. Imagine every student applying to five different universities, each with an idiosyncratic challenge that had to be met before a place could be offered.
The current system of A-levels tests students on what they have studied, and it recognises the differences in content from one exam board to another. This matters for a subject such as history, where the broad range of topics is addressed fairly, but it would not be possible with a university exam.
“Students may soon begin to sift through courses to choose the admission test that best suits their skills rather than the course that lies behind it.”
The tendency for history at-interview assessments is simply to choose a source or interpretation from a theme or period, which may be familiar to some students but certainly not to all. Admittedly, the objective is to measure how a student approaches the task, but for those who are not familiar with the topic or period and have no broader understanding of the context, it becomes quite daunting.
Students may soon begin to sift through courses to choose the admission test that best suits their skills rather than the course that lies behind it.
There is the further problem that schools don’t necessarily teach content in the same order. Come A-level season, there is an understanding that every student has crossed the finish line and all knowledge will have been gleaned along the way. But test students at sporadic intervals en route and there will be some who have covered topic areas that others haven’t started. Judging them at any point other than when all is complete would be unfair.
“Some schools will have the resources to be better at this than others.”
Let’s remember how we got here: universities no longer trust the results our schools yield. Whilst we congratulate students on how well they coped with Covid-induced uncertainty, we are now potentially throwing them to the lions by making them face individual, unstandardised university tests.
At best, we second-guess what might be asked of them and learn to bend and react to what they may face. Some schools will have the resources to be better at this than others, and extenuating circumstances and factors of social context will no doubt come into play.
But this does not need to happen. We may soon find ourselves bemoaning a new layer of complexity and wishing we had been collectively more accurate in our appraisal of A-level ability. If university tests become part of the Y13 calendar for all, we will only have ourselves to blame.
Honest, accurate predicted grades would be a fair compromise, one that should prevent universities from raising the bar to our collective detriment.