Our chemical education research group focuses on methods of content delivery, student cognition in problem-solving, and assessment in preparatory and introductory college chemistry courses. More specifically, these efforts center on three main projects:
Measuring and enhancing students’ scale literacy
Development of a new assessment strategy (rapid knowledge assessment) for understanding students’ problem-solving strategies
Examining multiple-choice assessments for differential item functioning
Grasping scale outside the visual realm can be difficult, particularly with regard to the very small. Undergraduate students in preparatory and introductory chemistry courses, for example, are required to begin thinking about certain chemical concepts at the particulate level, which is orders of magnitude smaller than the resolving ability of the human eye. The development of a student’s scale literacy outside the concepts of chemistry has been identified by the AAAS (American Association for the Advancement of Science) as an important component of a student’s overall science literacy. Research has shown that students need to continue cultivating their understanding of scale, particularly down to the nanometer range, beyond their elementary and secondary education years. Additionally, it has been found that students who use instrumentation at these very small scales have a better concept of scale than those who do not.
This project measures changes in both a student’s scale perception and unitizing at the atomic level. Unitizing is the development and use of a convenient or familiar unit. For instance, although the meter is our common unit of length, it is on the order of human size (we unitize to what is most familiar), and it is often only through necessity that we unitize to other units (for example, the light-year). Students in preparatory or introductory college chemistry are expected to “think conceptually” of atoms and molecules interacting; the precursor to this is unitizing at the atomic level (with the atom as the unit). Once students unitize at the atomic level, the transfer of both enhanced scale perception and atomic unitizing to other specific content areas is measured.
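The scale gap students must bridge when unitizing to the atom can be made concrete with a quick order-of-magnitude calculation. The reference lengths below are rough, round-number assumptions chosen for illustration, not values from our studies:

```python
import math

# Rough, round-number reference lengths in meters (assumed for illustration)
lengths_m = {
    "meter stick (human scale)": 1.0,
    "resolving limit of the human eye": 1e-4,  # ~0.1 mm
    "typical atomic diameter": 1e-10,          # ~0.1 nm
}

# Express each length as a power of ten
for name, length in lengths_m.items():
    print(f"{name}: 10^{math.log10(length):.0f} m")

# The gap between what the eye can resolve and the atomic unit:
gap = math.log10(lengths_m["resolving limit of the human eye"]
                 / lengths_m["typical atomic diameter"])
print(f"eye resolution to atom: about {gap:.0f} orders of magnitude")
```

With these assumed values, the atom sits roughly six orders of magnitude below anything a student can see unaided, which is the jump in scale perception the project targets.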
This project builds on the previous studies by presenting students in preparatory and introductory chemistry courses with images generated by a real-time, remote-access scanning electron microscope or a portable, in-class scanning tunneling microscope for inquiry-based exercises during lecture. Both informal and formal assessments measure the efficacy of the instrumentation used during the lecture demonstrations in enhancing students’ scale perception, atomic unitizing, and transfer to other specific content areas.
We have developed a means to measure two key components of scale literacy (scaling skills and scale conceptual understanding) that have proven to be better predictors of success in general chemistry I than more traditional measures of math or chemistry content knowledge. We are in the process of developing individual supplemental activities designed to target specific proficiency levels and to enhance performance on two aspects of scale literacy: measurement and magnification, and the particulate nature of matter.
Rapid Knowledge Assessment
Improving students’ learning and understanding in the science, technology, engineering, and mathematics (STEM) disciplines goes hand in hand with improving methods of assessing that learning and understanding. Cognitive load theory, a theory from cognitive psychology, has been applied in mathematics, where a student’s expertise was estimated with a rapid-measurement scheme that assessed which immediate first step the student took in the problem-solving process. This measurement, coupled with a measure of the student’s reported mental workload, provided greater insight into the problem-solving strategies employed than performance alone. Additionally, once expertise can be approximated, instructional materials can be tailored to further enhance performance on subsequent assessments: research has shown that lower-expertise students performed better after instruction with worked examples, while higher-expertise students performed better with problem-solving instruction.
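One established way in the cognitive load literature to combine performance with reported mental effort is the relative efficiency score of Paas and van Merriënboer, E = (z_performance − z_effort) / √2, where both measures are standardized across students. The sketch below is a minimal illustration of that formula; the scores and effort ratings are hypothetical, not data from our studies:

```python
import statistics

def z_scores(values):
    """Standardize values to mean 0 and (sample) standard deviation 1."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def relative_efficiency(performance, effort):
    """Paas & van Merrienboer relative efficiency:
    E = (z_performance - z_effort) / sqrt(2).
    Positive values: relatively high performance at relatively low
    reported effort; negative values: the reverse."""
    return [(zp - ze) / 2 ** 0.5
            for zp, ze in zip(z_scores(performance), z_scores(effort))]

# Hypothetical data: percent correct and 1-9 mental-effort ratings
performance = [90, 75, 60, 40]
effort = [2, 4, 6, 8]
efficiency = relative_efficiency(performance, effort)
print([f"{e:+.2f}" for e in efficiency])
```

Because both inputs are standardized, the efficiency scores are relative to the group: a student who scores above the group mean while reporting below-average effort lands on the positive side.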
Building on this work, a rapid knowledge assessment (RKA) for preparatory and introductory college chemistry, modeled after the rapid-measurement scheme used in mathematics, has been developed. Efficiency in students’ problem-solving strategies is assessed through the reported immediate first step, the correct or incorrect response to the exercise overall, and student-reported mental effort. Validation of this instrument included expert analysis of student responses, student problem-solving strategies as reported through think-aloud protocols, and longitudinal analysis of similar items from preparatory to introductory college chemistry; the instrument was also validated internally and externally against other measures within the instrument and two different standardized, high-stakes exams. Validation of the responses and the subjective reporting of mental workload includes examining time on task, scan-path maps, fixation patterns, and task-evoked pupillary response (TEPR) using a desk-mounted eye tracker for both novices and experts. Instructional materials drawing on expert-performance research and the foundations of deliberate-practice exercises have been developed and field-tested. Further refinement, development, and field-testing in conjunction with instrument use are ongoing.
Differential Item Functioning
Differential item functioning (DIF) is an item-level characteristic of test items in which an item is found to be statistically easier for members of one demographic comparison group than another. DIF analyses typically involve matching examinees from different subgroups (such as gender, race/ethnicity, socioeconomic status, or language ability) on a proficiency variable, carrying out item analysis for each group, and evaluating the results for statistical significance. Where DIF is present, the item is said to “favor” one group over another, a result suggesting that examinees at equal skill levels from different subgroups do not have an equivalent chance of answering a question correctly, owing to subgroup membership. Statistical techniques for detecting DIF include item response theory (IRT), the simultaneous item bias test (SIBTEST), and the Mantel-Haenszel statistic, and these can be applied to both multiple-choice and constructed-response items.
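As a concrete illustration of the Mantel-Haenszel approach, the sketch below computes the common odds ratio for one item across matched-proficiency strata, along with the ETS MH D-DIF transformation (−2.35 ln α). The counts are hypothetical, made up purely for illustration:

```python
from math import log

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio for one item.
    Each stratum is a matched-proficiency level with counts
    (ref_correct, ref_incorrect, focal_correct, focal_incorrect)."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

def mh_d_dif(alpha):
    """ETS MH D-DIF scale: -2.35 * ln(alpha).
    Negative values suggest the item is harder for the focal group."""
    return -2.35 * log(alpha)

# Hypothetical counts at three matched score levels (not real data)
strata = [
    (40, 10, 30, 20),  # low scorers
    (60, 5, 45, 15),   # middle scorers
    (80, 2, 70, 10),   # high scorers
]
alpha = mh_odds_ratio(strata)
print(f"alpha_MH = {alpha:.2f}, MH D-DIF = {mh_d_dif(alpha):.2f}")
```

An odds ratio near 1 (MH D-DIF near 0) indicates no DIF; in common ETS practice, items with |MH D-DIF| below 1 are treated as negligible and those at or above 1.5 (when statistically significant) as large DIF, though cutoff conventions vary.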
In this project, multiple-choice exam items in general chemistry were investigated using trial-tested preliminary exams prepared for a standardized first-term general chemistry exam. These items were then retested in general chemistry I with both the original items (which were not included in the final released version of the test) and clones of the original items. They were tested under high-, medium-, and low-stakes conditions, with proficiency matched internally (using the score on the assessment as the measure of proficiency) and externally (using placement exam scores, ACT scores and subscores, and standardized final exam scores). These items have been tested over three semesters, and the subset of persistent DIF items has been coded into an eye tracker. Examination of differences between the subgroups solving the tasks with the eye tracker includes performance, time on task, scan-path maps, fixation patterns, and TEPR. Additionally, these subgroups will be examined collectively for processing differences correlated with DIF persistence.
In addition to differential performance on tasks, subgroups have been found to differ in their persistence in STEM majors. Using social cognitive theory and social cognitive career theory, four indicators are used to predict persistence in STEM majors: self-efficacy, outcome expectations, interest, and goals. An instrument to measure these in chemistry is in development, and the correlation between persistent DIF and persistence in STEM will be examined.