All tests, including those produced by the Examinations Institute, are instruments designed to measure student knowledge. Because they make a measurement, they are susceptible to error, and the Institute is keenly interested in understanding and minimizing sources of measurement error resulting from the use of ACS Exams.
As is evident from the fact that people can be trained to be good “test takers,” every test contains features that influence student responses. These influences may or may not be related to the content knowledge being tested. One feature where this is particularly true is the complexity of the items on an exam. Because of the nature of human cognition, complexity provides one lens for analyzing how an item affects the measurement of content understanding.
This project is derived from work on the concept of cognitive load: the amount of information that must be held and processed to accomplish a cognitive task is referred to as the load. Everyone has limits on the amount of information that can be held in active, working memory. This idea was first articulated by George Miller in the 1950s and, in chemistry education, was significantly advanced by Alex Johnstone (winner of the ACS Award for Achievement in Research for the Teaching and Learning of Chemistry) and his co-workers.
At the Examinations Institute, we have devised a rubric for rating the complexity of the items on any multiple-choice exam, and we have applied this rubric to items on ACS Exams. Current research focuses on establishing the validity of this method of assessing item complexity and on determining whether the construct is measurable through statistical analysis. To date, results are encouraging, and we expect to be able to use complexity analysis as the key cognitive dimension in the analysis of ACS Exams, allowing us to align our exams in terms of both cognition and content knowledge.