As part of my coursework requirements for my Master’s degree at Truman State University, I conducted an action research study in my Algebra II classrooms this semester. Alongside the study, I was required to write a summary paper introducing the topic and methodologies, reviewing the existing literature and previous research, and summarizing the data, its impact in my classroom, and the implications for future research. Because I am still finalizing some of my data and conclusions, the action research summary paper I am making available here on the PD blog is not yet a final draft. I intend to update this post with the final draft once I have submitted my paper for review at Truman.
While my rationale for choosing this particular study is described in detail in the paper, I will provide a brief overview here. The purpose of my study was twofold: to determine whether certain measures could increase the likelihood that more students would attempt to complete an entire assessment, and to determine whether those same measures would motivate students to answer higher-level questions while also raising students’ achievement levels. To that end, I created two forms of a summative assessment. One form contained the test items in the usual order, with 2.0-level questions appearing first, followed by 3.0- and 4.0-level questions. The other form included the same questions, simply rearranged: 2.0-, 3.0-, and 4.0-level questions were interspersed and grouped by content in the order the material was taught.
While reviewing the existing literature on test-item arrangement and student achievement, I learned quite a bit. Creating multiple forms of a test was not a new idea to me, but it had never occurred to me that, even when tests contain the exact same questions, the order in which the items appear might affect students’ scores, sometimes giving one group an unfair advantage over another. After reading more than ten articles spanning decades of research, it appears there is no conclusive answer to the question of whether the ordering of test questions has a bearing on student performance. In fact, many of the earlier studies produced inconclusive and sometimes conflicting results. Even the most recent study I found, a 2013 investigation of the effects of changing the order of mathematics test items, concluded that the ordering did not impact performance. While I had hoped students would earn better summative scores as a result of varying the test-item order, this new knowledge still benefits my students and my classroom procedures in several ways. Most importantly, it shows me that I can continue to distribute multiple forms of the same test to decrease academic dishonesty without fear that one group of students has an advantage over the other in terms of overall success on the assessment.
As for the results of my study specifically, my findings were similar to the previous research I had read. The overall average scores were about 0.2 points higher, on a 4.0 scale, for the form in which the questions appeared in the usual easy-to-hard order. The most significant piece of data, in my opinion, came from comparing the percentage of students who attempted the 4.0 questions on the two forms. On the first form, where the 4.0 questions were the last questions on the test, only 45% of those questions were even attempted. On the second form, where the same questions were interspersed throughout the test, 60% were attempted. This finding certainly has implications for the future assessments I (and other teachers of standards-based classes) create. Knowing that students are more likely to be “tricked” into answering higher-order questions when those questions are interspersed throughout a test goes a long way toward my goal of having more students complete an entire assessment without leaving major portions of it unanswered.
Post by | Allison Rettke | High School Mathematics