Evidence of Learning: Direct and Indirect Measures

The program assessment process involves gathering evidence of student learning. All programs are required to collect direct evidence of student learning for each program learning outcome. However, the evidence gathered varies across programs, depending on each program’s learning outcomes and the opportunities each curriculum map provides for collecting data.

Types of Assessment Evidence:

Direct Evidence

Direct evidence measures student learning by examining student work or performance directly. Evaluating exams, papers, performances, observations, or other artifacts of student work offers insight into what students have learned and to what degree. Direct evidence is compelling because it allows faculty to make judgments about the relative degree of learning or mastery students have achieved. Assessment plans are designed to gather direct evidence wherever possible. Examples of direct evidence:

  • Evaluation of capstone projects (scored with a rubric)
  • Evaluation of student portfolios (scored with a rubric)
  • Performance evaluations
  • Comprehensive examinations
  • Performance in proficiency exams (e.g., language proficiency exams)
  • Performance in licensure exams
  • Other faculty evaluation of student work in assignments, projects, performances, presentations, quizzes, exams, or theses
  • Evaluation of a random sample of student writing (scored with a rubric)
  • Pre-post assessments (measuring student change over the course or program)
  • National or standardized exam scores
  • Evaluation by internship supervisor

Things to consider:

  • It’s important that direct measures validly measure the specific learning outcomes in question and provide faculty with usable information. For example, an exam that is broken into sections that each align with a particular outcome provides more usable information than one that lumps them together.
  • Even valid and reliable direct measures may only capture a student’s performance or learning at one particular moment and may be better at capturing learning for some students than others. It’s helpful to provide students with multiple opportunities and different methods of demonstrating their learning on the same outcomes over the course of their study.
  • When using a scoring rubric across multiple courses and instructors in a program, it is important that faculty are normed or calibrated to interpret and use the rubric consistently.
  • There are some goals or outcomes a program may have, such as fostering particular dispositions or habits of mind, that are difficult to capture with direct evidence.

Indirect Evidence

Indirect evidence suggests that learning has taken place and can often provide important insight about or context for interpreting direct evidence. It may also be the only kind of evidence available for program goals aimed at cultivating dispositions, habits of mind, or attitudes necessary for students to succeed. As such, most assessment plans try to include some kind of indirect evidence in conjunction with direct evidence. Examples of indirect evidence:

  • Exit interviews
  • Focus groups
  • Student surveys
  • Alumni surveys
  • Student self-evaluations

Supporting Evidence

Supporting evidence is not evidence of learning per se, but it has a role in the assessment of student learning. It can help illustrate a program’s successes or needs, provide context for interpreting assessment data, or aid in designing assessment plans that address pressing questions of interest to the program. Supporting evidence also plays a role in the assessment of academic and non-academic strategic goals that a program may have.

Some examples of supporting evidence include:

  • GPA data
  • Job placement rates
  • Graduation rates
  • Student publications and presentations
  • Course pass rates

Grades: Grades are a kind of supporting evidence; they are rarely designed to reflect specific learning outcomes, but rather combinations of outcomes and student behaviors. Grades assess the overall performance of individual students, but as evidence of the effectiveness of a program they are messy and complicated indicators at best. Assessment is more effective when evidence specific to each outcome can be gathered and reported separately. For this reason, GPA data or course grades by themselves are generally not helpful evidence for learning outcomes assessment, although they can be useful supporting evidence when examined in relation to other direct and indirect evidence of student learning.

Evidence and Equity

UWM’s commitments to diversity and the success of all students mean that individual programs have a special impetus to consider questions of equity. This means designing assessments and gathering evidence that can help programs support the success of all students. The single most important way to do that is to disaggregate data by relevant groupings wherever possible, and to include addressing and closing any equity gaps as part of the assessment action plan. Refer to Departmental Data and Equity: Disaggregating Data for more information about campus resources for disaggregating assessment data.