Robin Jackson, Director of Assessment
As educators, we all want to see our learners (including apprentices) progress and move on into successful careers. However, their journey may not be smooth. For example, they may fail an assessment and need to resit. It's therefore natural that training providers will look to give their learners the best possible opportunity to achieve. One factor they frequently consider is the pass rate for a qualification, where it is offered by multiple awarding organisations.
We have recently seen a flurry of communications across social media suggesting providers are likely to achieve a higher pass rate by switching awarding organisations. It is certainly attractive to select the awarding organisation which appears to offer the highest pass rate, particularly in a market where achievement is a contributing consideration in any external judgement about the quality of education. However, there is a real danger in using pass rates as the sole consideration when choosing a specific awarding organisation or qualification.
Pass rates are open to misinterpretation because they depend on the ability of the cohort of learners taking the assessments. For example, assume two awarding organisations use the exact same question paper with the same pass mark. Awarding organisation 'A', whose learners are all of high ability, will likely have a high pass rate; awarding organisation 'B', whose learners are of lower ability, will likely have a low one.
If both awarding organisations shared their pass rates, providers might choose an awarding organisation based on an inference that isn’t accurate. In this example, it won’t matter which of the two awarding organisations a provider chooses. Their learners will still have to meet the same threshold to pass, so that provider’s pass rate will be the same irrespective of which awarding organisation they choose to use.
As learner cohort ability changes, so might the pass rate. Any change in an awarding organisation's pass rate over time (say, between academic years) therefore probably doesn't reflect variation in the difficulty of its assessments, even though these change over time as older versions are withdrawn and newer ones released, because that variation should be taken into account through the awarding process. It is most likely that the average ability of the cohort of learners who undertook the assessment has changed. Indeed, in a recent report (January 2024) on the assessment of reformed Functional Skills qualifications, Ofqual stated that "comparing pass rates over time is challenging … because the make-up of the cohort of students taking the qualifications typically varies over time".
In the case of Functional Skills, the notional/minimum level of performance a learner must meet to pass was set through the Ofqual accreditation process. As Ofqual noted recently (January 2024), "Ofqual worked with awarding organisations to facilitate alignment in standards, including the development of common pass grade descriptors, to support consistency in grading". Therefore, all awarding organisations should be working to the same understanding of what is required for a learner to pass. No awarding organisation should be able to claim their qualifications are easier to pass than their competition.
How the pass rate is presented is also not straightforward.
- Is it the overall number of assessments passed as a percentage of the total number of assessments taken?
- Is it the percentage of assessments passed on the first attempt?
- Is it the volume of passes as a percentage of the total number of learners?
In an extreme scenario, if one learner takes four attempts and succeeds on the fourth, an awarding organisation would report a first-attempt pass rate of 0% and an overall pass rate of 25%, while the provider would report a pass rate of 100%.
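The three definitions above can be sketched as simple calculations. This is an illustrative sketch only: the data, learner identifier, and function names are hypothetical, and it uses the single-learner scenario described in the text (three fails followed by a pass).

```python
# Illustrative sketch: three ways of expressing a "pass rate" from the
# same attempt data. Data and function names are hypothetical.

attempts = [
    # (learner_id, passed) in the order the attempts were taken
    ("learner-1", False),
    ("learner-1", False),
    ("learner-1", False),
    ("learner-1", True),
]

def overall_pass_rate(attempts):
    """Passes as a percentage of all assessments taken."""
    return 100 * sum(passed for _, passed in attempts) / len(attempts)

def first_attempt_pass_rate(attempts):
    """Percentage of learners who passed on their first attempt."""
    first = {}
    for learner, passed in attempts:
        first.setdefault(learner, passed)  # keeps only each learner's first result
    return 100 * sum(first.values()) / len(first)

def learner_pass_rate(attempts):
    """Percentage of learners who passed on any attempt."""
    ever = {}
    for learner, passed in attempts:
        ever[learner] = ever.get(learner, False) or passed
    return 100 * sum(ever.values()) / len(ever)

print(overall_pass_rate(attempts))        # 25.0
print(first_attempt_pass_rate(attempts))  # 0.0
print(learner_pass_rate(attempts))        # 100.0
```

The same underlying results produce a 0%, 25%, or 100% headline figure depending solely on which definition is reported, which is why comparing published pass rates without knowing the definition used is so hazardous.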
Providers know they need to use initial and diagnostic assessment to plan learning based on their learners' prior attainment and support them to achieve. A provider's pass rate therefore does not accurately indicate how much learning has taken place, or how much teaching work has been done, because the rate doesn't reflect the starting points of individual learners or groups of learners.
We also know that the pass rate may vary by mode of assessment, so there are some questions providers should reflect on. Does the pass rate differentiate between onscreen and paper-based assessments? Has the pass rate been influenced by learner preference for a particular mode, and if so, has the learner been entered into the most appropriate mode? Has the support given to prepare them been appropriate for that mode of assessment?
Ultimately, it would be naïve to suggest we should all ignore pass rates. At Open Awards, we monitor our pass rates regularly as part of our overall approach to standard setting. They are a useful metric to support decision making, but they should not be the only metric or consideration used. Providers should undertake due diligence on their available options and, where appropriate, apply some healthy scepticism to any marketing claims.