To the test. Evaluation of the tenth grade mathematics examination in the spring of 2017
English summary of Fafo Report 2017:36
Silje Andresen, Aina Fossum, Jon Rogstad and Bjørn Smestad
11 May 2018
This report describes the tenth grade mathematics examination given in the spring of 2017. It is the first in a series of three; the two to follow will address the mathematics examinations in 2018 and 2019 respectively. The evaluation examines whether the examination questions were comprehensibly formulated and whether they were congruent with the teaching the students had received. In addition, this year’s report investigates the examiners’ experiences of the guidance provided and of the assessment of the examination papers. In broad terms, we thus ask whether this year’s examination was fair and was perceived to be fair. This presupposes that the questions are accessible in form and content, so that it is the students’ mathematical skills that are tested. Our description is structured around five main questions:
- Is the mathematics examination designed in a comprehensible manner, so that it tests the students’ mathematical skills?
- How good is the consistency between the curriculum, the examination and the teaching provided to the students?
- Does the examination include questions with a varying degree of difficulty that can measure all skill levels?
- How do the students assess the examination workload in relation to the time available?
- To what degree is there concurrence between the assessments made by different examiners?
The data collection for this investigation was complex. We sent electronic questionnaires to the tenth grade mathematics teachers whose students sat the 2017 examination, undertook case studies at four schools, were present at one school immediately after the examination, requested assessment forms from examiners, and analysed the examinations in light of textbooks and of theories on the language and layout of mathematics exams. Moreover, we added questions to the Directorate for Education and Training’s survey of examiners, and we participated in examiners’ meetings.
Based on the data that we have collected, the tenth grade mathematics examination in the spring of 2017 appears to have been fair and proper. This opinion prevailed among students, teachers and examiners alike, and it is accordingly our main conclusion. A number of other, more detailed findings corroborate this conclusion, but they also add nuance to the main picture.
A key topic in the report is whether the students understand the questions in the question paper; if they fail to understand the questions, it should come as no surprise that they give wrong answers. In other words, their Norwegian language skills may end up being tested more than their mathematical skills. This is why we made the importance of language for the students’ examination performance a topic in its own right. Understanding, or the lack of it, is also associated with the use of illustrations. In some cases they are highly important and illuminating, whereas in others they may cause confusion. In this year’s exam the illustrations were mainly illuminating, but not without exception: the illustration for one of the questions contained an error that may have caused some students to give up answering it.
A second topic in this report concerns how this year’s examination was perceived as good because the questions were considered to touch upon key elements of the competence objectives. Most teachers also stated that there was concurrence between the skills requirements and what the examination tested. The qualitative interviews showed that, over time, no topics have been systematically excluded from the examination questions. According to the information provided to us, the students were highly satisfied with the teaching they had received when seen in relation to the questions in the examination. They appear to have been well prepared, meaning that differences in performance were mainly caused by variations in mathematical skill level at the time of the examination. The use of different textbooks could be one source of such differences, but we found no systematic bias in favour of students who had used a specific textbook. On the other hand, the data indicate that there are challenges in giving students equal opportunities to solve examination questions that require digital tools.
A third topic concerns the degree of difficulty of this year’s examination, in terms of its content as well as whether the allotted time was sufficient for the candidates to demonstrate their full range of skills. The analyses show that the degree of difficulty varied considerably, but also that all candidates were able to demonstrate at least some of their skills. It was emphasised, however, that the lowest-performing candidates were barely able to cope with any of the questions in the second part of the examination. Time constraints may explain the variability in performance in the second part: the proportion of unanswered questions increases towards the end of the set, which may indicate that the candidates ran out of time. Most teachers, on the other hand, believed that the workload was appropriate, even though approximately one-third of them had students report that the allotted time was insufficient.
The fourth topic in this report is language. The language used in this year’s exam was mainly appropriate, although some words were not universally understood, such as ‘jerrykanne’ (jerry can) and ‘klaffebro’ (bascule bridge), even though both were illustrated. In this context it is relevant to note that, according to several mathematics teachers, text-heavy questions prevent some students from demonstrating their mathematical skills. This applies to minority-language students in particular.
The fifth and final topic is the examiners’ assessments. The guidance for external examiners appears to have been highly successful: all of the 129 examiners who volunteered an opinion took a positive view. It should also be emphasised that there was a high degree of concurrence between the examiners in their grading proposals before the joint examiners’ meeting. The differences in grading mainly appear in the second part of the examination. Some examiners called for better guidance on the grading of certain questions, especially clearer guidelines for grading questions that require the use of digital tools.