Qualitative Analysis of Summit Discussion Notes
By: Lindsey Harness | March 1, 2015 | National DETA Research Center
Key Questions for Part 1:
1. The importance of a national standard for key definitions:
- Distance education (e.g., blended learning versus technology-enhanced)
- The DE student
- Critical Thinking
- Media Literacy
- Student preparedness
- Student success
- Online Learning
2. What is student success from the student’s perspective (not the institution or the program)?
3. Who is conducting the research? What data are you compiling around student success parameters and who is compiling them? And who is looking at this data?
4. What student support interventions are needed for success and learner outcomes? What support is available?
Specifics of questions:
Who is the DE student?
- Distance-only students have different characteristics than students in blended or partially in-person formats. Are there common factors?
- Are there certain behaviors or patterns in online learning? Where, and on what, do students spend time in learning environments?
- What is the small set of definable, measurable characteristics that would make research on student success in distance education rich?
- Definitional issues with the terminology – “distance education” as the umbrella term for online learning, virtual learning, digital learning, blended learning, etc.
- What does competency mean in education?
- How can we update the traditional system of education to increase competency?
- How can we define and measure student success beyond traditional outcomes (e.g., academic learning, competency-based)?
- The credit-hour
- People are social learners. Can some competency-based programs get to that social learning?
- Mastery – demonstration of competencies
- Learning how to think like a scientist, a historian, etc. on a disciplinary level
- How can disciplinary thinking be measured?
- What strategies support student success at the individual learner, class, program and institutional levels?
- What strategies are generalizable across institutions?
- Is there a common understanding of “student success” that extends across institutions?
- What does it mean for students, what does it mean for the institution, what does it mean for our key stakeholders?
- How do students measure success?
- What are intervention strategies for helping students be successful?
- What interventions are particularly successful?
- How can technology-based assessment truly assess learning and skills?
- How can we develop student-driven, individualized metrics of success?
- When we frame success narrowly, we shoot ourselves in the foot, because student success is closely tied to the non-traditional student: students who need a flexible modality often have to stay in that modality.
- How does student performance differ in online, blended, and F2F classes? Synchronous/asynchronous? (Normalizing the students with something like propensity score matching so that we are doing “apples to apples” comparisons)
- What methods/measurements are institutions using to measure student success now (e.g., PAR, NSSE, institutional research)?
- How do we measure and assess these students on the same scale?
- To what extent are campuses meeting the needs of all learners? How are they meeting the needs of differently-abled learners? What are the benefits to all students when the materials are fully accessible?
- What are the barriers to access for various populations?
- Is there ubiquitous, device-agnostic access to the learning platform? What training and support options are available to students?
- The issue is not that teachers do not want to make things accessible; it is that the technologies needed to design and create accessible materials are either not available or not made known.
- We need to make the criteria sheet accessible to students and faculty.
- We need to track student preferences and motivations differently
- Change in motivation to achieve
- Who has accessed the student support services?
- What type of support is available and who has access to this support?
- Who is conducting the research?
- Is the research led by institutional goals/outcomes, student goals/outcomes, faculty?
- What data are you compiling around student success parameters and who is compiling them? And who is looking at this data?
- What data do you have access to at your institution? Is your data open? Why or why not?
- Are there any relationships between any data in our systems and any definition of success?
- I think it will be critical to review http://www.nosignificantdifference.org/ and Thomas Russell’s work. I suggest the goal of this project should be to ensure that none of the common mistakes are made. (From Larry Johnson)
- What is the relationship between easily-obtainable/quantitative/scalable data and more in-depth/harder to obtain/qualitative data?
- Do we need qualitative projects to better surface appropriate variables related to student success (e.g., student goals for enrolling)?
- Not all data needs to be gathered centrally; there may be benefit in collecting data locally (small groups) where variable definitions are clear and measurements are specific to question(s) about identifying the active ingredient(s) in the intervention.
- To truly understand WHY our interventions work, we must show change in the mediator!
- Who are the constituencies interested in data and research?
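One method raised above for “apples to apples” comparisons of student performance across modalities is propensity score matching. The following is a minimal sketch, not part of the summit notes: the data are synthetic, and the covariates (age, prior GPA) and the selection effect are purely hypothetical assumptions chosen to illustrate the idea.

```python
# Minimal sketch of propensity score matching on synthetic data.
# All covariates and effects here are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical covariates: age and prior GPA.
age = rng.normal(25, 5, n)
gpa = rng.normal(3.0, 0.4, n)
# Assumed selection effect: older students are more likely to enroll online,
# so a naive online-vs-F2F comparison would confound modality with age.
p_online = 1 / (1 + np.exp(-(age - 25) / 5))
online = rng.random(n) < p_online

# Estimate propensity scores with a simple hand-rolled logistic regression.
X = np.column_stack([np.ones(n),
                     (age - age.mean()) / age.std(),
                     (gpa - gpa.mean()) / gpa.std()])
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (online - p) / n          # gradient ascent step
scores = 1 / (1 + np.exp(-X @ w))

# Match each online student to the F2F student with the nearest
# propensity score (nearest-neighbor matching with replacement).
online_idx = np.where(online)[0]
f2f_idx = np.where(~online)[0]
matches = [f2f_idx[np.argmin(np.abs(scores[f2f_idx] - scores[i]))]
           for i in online_idx]

# After matching, the groups should be balanced on the covariates
# that drove selection (here: age).
print(f"age gap before matching: {age[online_idx].mean() - age[f2f_idx].mean():+.2f}")
print(f"age gap after matching:  {age[online_idx].mean() - age[matches].mean():+.2f}")
```

In practice, outcome differences (grades, completion, etc.) would then be compared between the online students and their matched F2F counterparts rather than between the raw groups.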
Key Variables for Part 2:
- Learner Characteristics
- Student preparedness
- Student goals
- Student success – from their definition
- Student satisfaction
- What are the behavioral indicators that suggest satisfaction (e.g., repeated enrollment in the online modality)?
- Social presence/access
- Patterns of behavior leading to increased student learning for different populations
- Student outcomes beyond grades
- Definition of digital mode (fully online versus blended)
- Faculty characteristics
- Faculty preparedness
- Social presence/access
- Institutional support
- Motivation from faculty, from students, from institution
- Design components
- What are the different design components (e.g., content, interactivity, assessments, etc.) that impact student learning?
- Definition of different teaching methodologies (what we mean by: collaborative, cooperative, active, student-centered, etc.)
Citation: National DETA Research Center, March 1, 2015, “Key Themes from Summit Discussion Notes”, Retrieved from: https://uwm.edu/deta/?p=181