MED-AUDIT: The Medical Equipment Device Accessibility and Universal Design Information Tool

Background

The R3 Project was one of several research and design activities within the Rehabilitation Engineering Research Center on Accessible Medical Instrumentation (RERC-AMI), based at Marquette University in Milwaukee from 2003 to 2008. The R3 Project, entitled "Accessibility Measurement," aimed to determine whether the accessibility of medical instrumentation can be measured scientifically. The project team, led by Roger O. Smith, Ph.D., and Rochelle Mendonca, Ph.D., included members from academia and industry. The team met regularly from 2003 to 2008 and created many resources, including several versions of a tool called the MED-AUDIT (Medical Equipment Device Accessibility and Universal Design Information Tool). Development and testing of the MED-AUDIT continues as we refine the tool, conduct user testing, and speak with stakeholders.

This site hosts the R3 Project's resources and documents; a general description of the R3 Project can be found on the archived RERC-AMI website and the R3 Project website.

We appreciate your interest in the MED-AUDIT Project and welcome your comments on any of our resources or work.

About the MED-AUDIT Accessibility Measurement Tool

Several versions of the MED-AUDIT have been developed. Below we document the initial implementation and testing of the MED-AUDIT, which aims to provide a medical equipment and device evaluation and information system that promotes equal access to healthcare for all individuals, including people with disabilities and older adults.

Conceptual Overview

The MED-AUDIT was designed to evaluate and quantify accessibility for people with disabilities and thereby reduce healthcare disparities for medical device users with disabilities. Five specific design objectives were stated for the RERC-AMI R3 Project (2003-2008):

  • MED-AUDIT needed to assess medical devices and provide informative reports for designers and members of the public who might not know about the special needs of people with disabilities.
  • MED-AUDIT needed to be efficient: although many hundreds of questions could be asked about the accessibility of a product, the questioning process had to remain manageable.
  • MED-AUDIT scores needed to inform product designers, who needed to see how universally designed a product was for all potential users, yet be specific enough for an individual with a disability to see how accessible a product would be for their unique needs.
  • The assessment output needed to be quantitative so device designs could be compared.
  • MED-AUDIT scores needed to be reliable and valid from a psychometric standpoint.

Overall, the integrated MED-AUDIT scores draw on two major data sources. The first source of data is elicited from designers or other product assessors as they tally which tasks a device requires and which features a device includes. The second data source is imported from a knowledge base of two matrices previously populated by experts. These matrices predict relationships between a) product features and user impairments and b) product features and tasks. These data are integrated using an algorithm that weights the assessor's responses to produce MED-AUDIT scores indicating the degree of accessibility of the evaluated medical device on a 0-100% accessibility scale.

See also: Measuring Accessible Medical Instrumentation: Annotated Bibliography.

Question Domains

The MED-AUDIT was conceptualized with two major question sections: (I) Procedures-Task Analysis and (II) Device Features. The MED-AUDIT team postulated that determining the accessibility of a medical device requires knowing both the tasks a user must perform to use the device and the accessibility features present in the device design. Relevant tasks matter for measuring the accessibility of a device because, for example, if a device does not require users to position themselves on it (as with an auditory alarm), there is no concern about users transferring onto the device. Thus, certain tasks become irrelevant for scoring accessibility while others are critical. The second domain of questions focuses on the accessibility features of the device being rated; which accessibility features a product design integrates directly affects the accessibility scores generated. The two domains relate, as tasks a device requires for use often call for corresponding accessible features. In the example of an auditory alarm, an essential task is recognizing the sound; to rate highest on accessibility, the device would also need visual and tactile alarm outputs. Table 1 shows excerpts from the two core MED-AUDIT scoring domains (see also "Black Box System (BBS) MED-AUDIT Taxonomy"). The current question taxonomy draft includes 1,158 distinct questions: 177 task requirements and 981 device features. The questions are arranged in a hierarchical outline with 33 major categories: 10 for task requirements and 23 for device features.

Table 1. Procedures-Task Analysis (I) and Device Features (II) Questions for MED-AUDIT

PROCEDURES-TASK ANALYSIS

  • Prepare for device use
      • Select appropriate device
      • Familiarize self with device
      • Familiarize self with person
      • Match device to situation
  • Understand device use
      • Understand general procedure
      • Understand component procedure
      • Understand controls
      • Understand display info
      • Receive training to use the device
  • Position device-prep for use
      • Locate device
      • Detect orientation of the device
      • Approach - move to device

DEVICE FEATURES

  • Overall Device Features
      • Parts that require assembly and disassembly
          • Easy assembly
          • Infrequent assembly
          • Few steps required
          • Easy disassembly
          • Infrequent disassembly
  • Displays
      • Monitors/screen displays
          • Enhanced contrast
          • Screen Size
          • Brightness contrast
          • Contrast adjustment
          • Brightness adjustment
          • Brightness coding
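
As a concrete illustration of how such a hierarchical question taxonomy might be represented in software, the Python sketch below encodes a small excerpt of the Device Features branch from Table 1. The Question class and its exact fields are hypothetical; the actual MED-AUDIT taxonomy holds 1,158 questions in 33 major categories.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Question:
    """One node in a hierarchical question taxonomy (hypothetical structure)."""
    text: str
    domain: str                              # "task" or "feature"
    children: list[Question] = field(default_factory=list)

# A small excerpt of the Device Features branch from Table 1.
displays = Question("Displays", "feature", children=[
    Question("Monitors/screen displays", "feature", children=[
        Question("Enhanced contrast", "feature"),
        Question("Screen Size", "feature"),
        Question("Contrast adjustment", "feature"),
    ]),
])
```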

Software and Question Branching

The initial MED-AUDIT software was built on a previously developed software platform called OT-FACT, running on a "Spinnaker Plus" platform. It was later converted to a software shell called xFACT running on LiveCode. The xFACT interface was optimized to generate a taxonomy of branching questions that is both comprehensive and efficient. Figure 1 below shows a representative screenshot of the prototype software.

Figure 1: Screenshot of the MED-AUDIT software running in the OTFACT 2.0 interface

The question domains (procedures-task analysis and device features) are imported into the software, and the MED-AUDIT question taxonomy uses the trichotomous tailored sub-branching scoring structure (TTSS) as an efficient question-branching method that eliminates irrelevant questions. Fundamentally, TTSS uses a trichotomous response for each question, where 2 corresponds to not problematic, 0 to totally problematic, and 1 to partially problematic. When a rater responds to a MED-AUDIT question with a 2 or a 0, the TTSS software moves to the next major category of questions, skipping the detailed sub-level questions in between. When a rater responds with a 1, the TTSS breaks the category down into more detailed subcategories to request more information from the rater. Thus, the trichotomous scoring is (1) cognitively simpler, which increases reliability and response scoring speed; (2) more efficient, because detailed questions are asked only when needed, so the taxonomy can include more detailed questions while irrelevant ones are omitted; and (3) more flexible, because the verbal anchors that accompany the response sets can be adjusted as necessary and can intentionally vary in construct (e.g., requires task, somewhat requires task, does not require task; includes feature, somewhat includes feature, does not include feature) (Smith, 1993, 1994, 1995, 2002).
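
A minimal sketch of the TTSS branching logic appears below, assuming the Question structure sketched earlier. The ask callback and the returned answer dictionary are illustrative assumptions, not the actual xFACT implementation.

```python
def administer(question, ask):
    """Administer a TTSS question and, when warranted, its sub-questions.

    `ask` is a callback returning the rater's trichotomous response:
    2 = not problematic, 1 = partially problematic, 0 = totally problematic.
    """
    response = ask(question)
    answers = {question.text: response}
    if response == 1:
        # Partial response: break the category into detailed subcategories.
        for child in question.children:
            answers.update(administer(child, ask))
    # A response of 2 or 0 resolves the whole branch, so the detailed
    # sub-level questions beneath it are skipped entirely.
    return answers
```

For example, administer(displays, lambda q: int(input(q.text + " (0/1/2): "))) would walk the Displays branch interactively, descending only where the rater answers 1.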

Impairment Categories

A comprehensive survey of the literature (see: MED-AUDIT Impairment Categories: Working Towards Mapping AMI Usability) was conducted to identify optimal impairment-related categorization schemes for consideration as the basis for generating MED-AUDIT scores (Barbotte, Guillemin, Chau, & the Lorhandicap Group, 2001; Center for Rehabilitation Technology, 2001; Pizur-Barnekow, Lemke, Smith, Winter, & Mendonca, 2005; United States Census Bureau, 2004; United States Department of Health and Human Services, 2004; Vanderheiden & Vanderheiden, 1991; World Health Organization, 2002). From this review, a set of thirteen impairments with definitions was developed for the MED-AUDIT, with mutually exclusive and exhaustive impairment domains. The impairment categories are used to generate scores for device accessibility. The 13 impairments are: (1) hard of hearing, (2) deaf, (3) vision limitation, (4) blind, (5) expressive communication, (6) comprehension disorders, (7) other cognitive disorders, (8) mental and behavioral impairment, (9) sensitivity impairment, (10) lower limb impairment, (11) upper limb impairment, (12) head, neck, and trunk impairment, and (13) systemic body impairment.
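
Because every device score is generated per impairment category, the categories behave like a fixed enumeration in software. The sketch below is one hypothetical way to encode them; the identifier names are abbreviated forms of the categories listed above.

```python
from enum import Enum

class Impairment(Enum):
    """The 13 MED-AUDIT impairment categories (identifiers abbreviated)."""
    HARD_OF_HEARING = 1
    DEAF = 2
    VISION_LIMITATION = 3
    BLIND = 4
    EXPRESSIVE_COMMUNICATION = 5
    COMPREHENSION_DISORDERS = 6
    OTHER_COGNITIVE_DISORDERS = 7
    MENTAL_AND_BEHAVIORAL = 8
    SENSITIVITY = 9
    LOWER_LIMB = 10
    UPPER_LIMB = 11
    HEAD_NECK_AND_TRUNK = 12
    SYSTEMIC_BODY = 13
```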

Accessibility Expert Knowledge Matrices

The expert-mapped matrices for MED-AUDIT work in the background of the software algorithm to provide prior likelihoods for the simple Bayes model (Birnbaum, 1999; Gustafson, Cats-Baril, & Alemi, 1992; Malakoff, 1999). They provide relative weightings for the question categories in order to generate overall accessibility scores for medical devices. Two distinct matrices correlate: (1) the tasks involved in using a medical device with the accessibility features related to completing those tasks, and (2) the device features that make devices more accessible with specific user impairment groups. The correlation between device features and user impairments provides the critical connection for generating overall device accessibility scores. The data contained in the expert knowledge matrices, combined with the data entered by the rater for a specific device, enable the MED-AUDIT to generate accessibility scores for different user impairment types using the algorithm described in the next section. Tables 2 and 3 below show excerpts of the two matrices.

Table 2. Excerpt of the Expert Knowledge Impairment-Feature Matrix of MED-AUDIT

| DEVICE FEATURE | DEAF | LOW VISION | BLIND | COMPREHENSION DISORDERS | BEHAVIORAL IMPAIRMENT | SENSITIVITY IMPAIRMENT | LOWER LIMB IMPAIRMENT | UPPER LIMB IMPAIRMENT |
| ADEQUATE APPROACH/TURNING SPACE | 1 | 2 | 2 | 1 | 1 | 2 | 2 | 1 |
| ADEQUATE TABLE/COUNTER HEIGHT | 1 | 1 | 2 | 1 | 1 | 2 | 2 | 1 |
| ADEQUATE LOWER LIMB CLEARANCE | 1 | 1 | 2 | 1 | 1 | 2 | 2 | 1 |
| ADEQUATE PRIVACY | 2 | 1 | 2 | 1 | 2 | 1 | 1 | 1 |
| CLEAR APPROACH/TRANSFER PATH | 1 | 2 | 2 | 1 | 1 | 2 | 2 | 1 |
| ADEQUATE OVERHEAD CLEARANCE | 1 | 2 | 2 | 1 | 1 | 2 | 1 | 1 |
Table 3. Excerpt of the Expert Knowledge Task-Feature Matrix of MED-AUDIT

| DEVICE FEATURE | ACCESS IN-PROGRAM HELP | USE PRINTED MANUAL | USE TUTORIAL | USE PERSONAL HELP | APPROACH THE DEVICE | MOVE AWAY FROM THE DEVICE | TRANSFER ON TO THE DEVICE | TRANSFER OFF THE DEVICE |
| ADEQUATE APPROACH/TURNING SPACE | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| ADEQUATE TABLE/COUNTER HEIGHT | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 |
| ADEQUATE LOWER LIMB CLEARANCE | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 |
| ADEQUATE PRIVACY | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 |
| CLEAR APPROACH/TRANSFER PATH | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| ADEQUATE OVERHEAD CLEARANCE | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 |
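
One plausible in-memory representation of these expert matrices is a pair of nested mappings keyed by device feature, as sketched below. The values are copied from the excerpts in Tables 2 and 3 (only a few columns shown); reading the 0/1/2 entries as expert-rated relationship strengths is an assumption, and the variable names are illustrative.

```python
# Expert Impairment-Feature matrix (excerpt of Table 2): how strongly each
# device feature relates to accessibility for each impairment (0, 1, or 2).
impairment_feature = {
    "ADEQUATE APPROACH/TURNING SPACE": {
        "DEAF": 1, "LOW VISION": 2, "BLIND": 2, "LOWER LIMB IMPAIRMENT": 2,
    },
    "ADEQUATE PRIVACY": {
        "DEAF": 2, "LOW VISION": 1, "BLIND": 2, "LOWER LIMB IMPAIRMENT": 1,
    },
}

# Expert Task-Feature matrix (excerpt of Table 3): how strongly each device
# feature is needed to complete each task (0, 1, or 2).
task_feature = {
    "ADEQUATE APPROACH/TURNING SPACE": {
        "APPROACH THE DEVICE": 2, "USE PERSONAL HELP": 2,
    },
    "ADEQUATE PRIVACY": {
        "APPROACH THE DEVICE": 0, "USE PERSONAL HELP": 2,
    },
}
```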

MED-AUDIT Accessibility Scoring Algorithm

Initial development of the MED-AUDIT scoring was completed through conceptualization of the logic and scoring requirements for the algorithm, as shown in Figure 2 below.

Figure 2: Flowchart showing the scoring algorithm used in MED-AUDIT

The domains of the tool were established (including the need for device feature and task requirement sections), the expert matrices were conceptualized (including features required for tasks and features required for different impairments), and the basic operations of the scoring modality were established (including the need to increase or decrease the overall score depending on the information provided for each particular device). Figure 2 shows the different scoring cases, including a maximum mathematical relationship of +8 and a minimum of -8, middle cases of +4, +2, -2, and -4, and a case of 0. Equation 1 below was used for the pilot scoring algorithm during early implementation of the MED-AUDIT (initially developed in Fortran). To generate overall accessibility and usability scores, sums of products of four terms are computed: (1) expert-scored device feature requirement for a task [xe-dT], (2) expert-scored device feature requirement for a user impairment [xe-iD], (3) rater-scored device feature presence on the device [xr-d], and (4) rater-scored task requirement for device use [xr-T], giving the general form

$$S_i = \sum_{T} \sum_{d} x_{e\text{-}dT} \cdot x_{e\text{-}iD} \cdot x_{r\text{-}d} \cdot x_{r\text{-}T} \qquad (1)$$

where $i$ indexes the impairment category, $T$ the tasks, and $d$ the device features.

When raters score a device feature presence as 2, the scoring element is positive (if the feature is needed, the score increases because the needed feature is present); when raters score device feature presence as 1, the scoring element is treated as zero (the score is affected neither positively nor negatively because the needed feature may or may not be present); and when raters score device feature presence as 0, the scoring element is negative (if the feature is needed, the score decreases because the needed feature is not present). Pilot testing of this approach was conducted with an improved MED-AUDIT interface that used case-specific logic to generate accessibility and usability scores for different medical technologies (subsequently developed in C++).
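
Putting the pieces together, the sketch below is a minimal, assumed implementation of one scoring element and the 0-100% normalization. The sign convention for feature presence follows the paragraph above, and multiplying the remaining three {0, 1, 2} terms reproduces Figure 2's +8/-8 extremes and the +4, +2, 0, -2, -4 middle cases; the linear normalization is a hypothetical choice, not the project's published algorithm.

```python
# Rater feature-presence responses map to a sign: present -> +1,
# uncertain -> 0, absent -> -1 (assumed reading of the text above).
PRESENCE_SIGN = {2: +1, 1: 0, 0: -1}

def scoring_element(x_e_dT, x_e_iD, x_r_d, x_r_T):
    """One four-term product of Equation 1 for a (task, feature) pair.

    x_e_dT: expert rating, feature required for the task (0, 1, or 2)
    x_e_iD: expert rating, feature required for the impairment (0, 1, or 2)
    x_r_d:  rater rating, feature presence on the device (0, 1, or 2)
    x_r_T:  rater rating, task required for device use (0, 1, or 2)
    """
    return PRESENCE_SIGN[x_r_d] * x_e_dT * x_e_iD * x_r_T  # ranges -8..+8

def accessibility_score(elements):
    """Map summed elements linearly onto the 0-100% accessibility scale."""
    if not elements:
        return 100.0                     # no requirements: assumed fully accessible
    max_total = 8 * len(elements)        # every element at its +8 maximum
    return 100.0 * (sum(elements) + max_total) / (2 * max_total)
```

For example, a feature rated 2 in both expert matrices for a task rated 2 as required, but absent from the device (presence 0), contributes scoring_element(2, 2, 0, 2) == -8, pulling the impairment-specific score toward 0%.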

Team

Current Team

  • Roger O. Smith, Ph.D., University of Wisconsin-Milwaukee
  • Rochelle Mendonca, Ph.D., Columbia University
  • Maysam M. Ardehali, University of Wisconsin-Milwaukee

Past Team

  • Roger O. Smith, Ph.D., University of Wisconsin-Milwaukee
  • Rochelle Mendonca, University of Wisconsin-Milwaukee
  • Melissa Lemke, Marquette University
  • Todd Schwanke, MSE, ATP, University of Wisconsin-Milwaukee
  • Jack Winters, Ph.D., Marquette University
  • Laryn O’Donnell, OTD, University of Wisconsin-Milwaukee
  • Megan Sullivan, University of Wisconsin-Milwaukee
  • Sheldon Pearson, University of Wisconsin-Milwaukee
  • Brooke Hartman, Columbia University
  • Susan Xing, Columbia University
  • Amanda O’Connor, Columbia University
  • Jamie Tan, Columbia University
  • Emily Thomas, OTR/L, Volunteer Research Assistant, Columbia University
  • Charlie Raphael, Professional Development, Education and Training Specialist

References

1. Smith, R. O. (1993). Sensitivity analysis of traditional and trichotomous tailored sub-branching scoring (TTSS) scales. University of Wisconsin-Madison, Madison, Wisconsin. https://books.google.com/books/about/Sensitivity_Analysis_of_Traditional_and.html?id=CT_TAAAAMAAJ

2. Smith, R. O. (1994, 1995). OT FACT, version 2.0 [Computer software]. American Occupational Therapy Association, Rockville, MD.

3. Smith, R. O. (2002). OTFACT: Multi-level performance-oriented software with an assistive technology outcomes assessment protocol. Technology and Disability, 14, 133-139. https://www.researchgate.net/publication/294683143_OTFACT_Multi-level_performance-oriented_software_with_an_assistive_technology_outcomes_assessment_protocol

4. Barbotte, E., Guillemin, F., Chau, N., & the Lorhandicap Group (2001). Prevalence of impairments, disabilities, handicaps and quality of life in the general population: A review of recent literature. Bulletin of the World Health Organization, 79, 1047-1055. Retrieved January 1, 2005 from https://pubmed.ncbi.nlm.nih.gov/11731812/

5. Center for Rehabilitation Technology (2001). Barrier Free Education Concepts – Disability Definitions. Retrieved November 4, 2004 from https://web.archive.org/web/20041028161154/http://barrier-free.arch.gatech.edu/Research/concepts.html (Archived Version)

6. Pizur-Barnekow, K., Lemke, M., Smith, R. O., Winter, M., & Mendonca, R. (2005). MED-AUDIT impairment categories: Working towards mapping AMI usability. Retrieved January 5, 2009 from med-auditimpairments.pdf

7. U.S. Census Bureau (2004). Disability status: 2000 – Census 2000 Brief. Retrieved November 6, 2004 from https://web.archive.org/web/20041106151812/http://www.census.gov/hhes/www/disable/disabstat2k/table1.html (Archived Version)

8. United States Department of Health and Human Services (July 2004). Vital and Health Statistics, Summary Health Statistics for U.S. Adults: National Health Interview Survey, 2002, Series 10, Number 222, Table 18. Retrieved October 6, 2004 from http://www.cdc.gov/nchs/data/series/sr_10/sr10_222.pdf

9. Vanderheiden, G., & Vanderheiden, K. (1991). A brief introduction to disabilities. Trace Center. Retrieved December 20, 2004 from http://hcim.di.fc.ul.pt/hcimwiki/images/6/60/Vanderheiden1991-IntroductionToDisabilities.pdf

10. World Health Organization (2002). Body function – ICF categories. Retrieved November 4, 2004 from https://www.who.int/classifications/icf/icfbeginnersguide.pdf

11. Birnbaum, M. H. (1999). Bayesian Calculator. Retrieved September 5, 2009 from http://psych.fullerton.edu/mbirnbaum/bayes/BayesCalc.htm

12. Malakoff, D. (1999). Bayes offers a new way to make sense of numbers. Science, 286, 1460-1464. https://science.sciencemag.org/content/286/5444/1460

13. Gustafson, D. H., Cats-Baril, W., & Alemi, F. (1992). Forecasting without real data: Bayesian probability models. In Systems to Support Health Policy Analysis (pp. 176-201). Ann Arbor, MI: Health Administration Press. https://www.researchgate.net/publication/14676231_Forecasting_without_historical_data_Bayesian_probability_models_utilizing_expert_opinions