
EDSS Scoring Accuracy for Clinical Monitoring

(Neurostatus & Expanded Disability Status Scale)

Improved EDSS scoring accuracy by redesigning clinical monitor training into a performance-focused, on-demand learning system.


Project Details

Project Name

EDSS Scoring Accuracy for Clinical Monitoring

Tools Used

Articulate Rise 360 and Storyline 360 · LMS (SCORM 2004 3rd Edition) · SharePoint · Snagit · PowerPoint

Skills Applied

Learning Needs Analysis · Performance Gap Analysis · Curriculum Redesign · Performance Support Design · SME Collaboration · Tool-based Learning · Stakeholder Consulting

Project Background

Clinical trials investigating Multiple Sclerosis therapies rely on standardized neurological examinations to assess disease progression. A key outcome measure is the Expanded Disability Status Scale (EDSS), derived from Kurtzke’s Functional Systems scoring and documented using Neurostatus tools.

While principal investigators (neurologists) perform and document these assessments, clinical monitors are responsible for independently verifying the accuracy of Functional System scores and EDSS calculations during site visits. Errors or inconsistencies can compromise data integrity and must be addressed in real time.

This project focused on equipping clinical monitors with the skills, tools, and confidence needed to accurately review EDSS source documentation, identify scoring discrepancies, and appropriately challenge investigator assessments when required.
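
For context, the EDSS runs from 0 to 10 in 0.5-point steps, and its lowest steps are defined directly by the pattern of Functional System (FS) grades. The TypeScript sketch below illustrates that relationship using the published Kurtzke definitions for steps 0–2.5 only. It is a simplified illustration, not the Neurostatus conversion algorithm, which covers the full scale, adds ambulation criteria, and handles nuances (such as the treatment of the Cerebral grade) omitted here.

// Illustration only: simplified mapping from Functional System (FS)
// grades to the lowest EDSS steps, per the published Kurtzke definitions.
// The real Neurostatus rules cover the full 0-10 scale and are NOT
// reproduced here.
type FunctionalSystems = {
  pyramidal: number;   // each FS grade is documented on the Neurostatus form
  cerebellar: number;
  brainstem: number;
  sensory: number;
  bowelBladder: number;
  visual: number;
  cerebral: number;
};

function edssLowSteps(fs: FunctionalSystems): number | null {
  const grades = Object.values(fs);
  const count = (g: number) => grades.filter((v) => v === g).length;
  const max = Math.max(...grades);

  if (max === 0) return 0.0;                         // normal neurological exam
  if (max === 1) return count(1) === 1 ? 1.0 : 1.5;  // minimal signs in one / more than one FS
  if (max === 2 && count(2) === 1) return 2.0;       // minimal disability in one FS
  if (max === 2 && count(2) === 2) return 2.5;       // minimal disability in two FS
  return null; // higher steps require the full Neurostatus rules
}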

The Problem

Leadership initially believed clinical monitors lacked confidence when questioning investigators and requested a short “confidence booster” course to supplement an existing 2-hour instructor-led training (ILT).

However, early signals suggested the issue was not confidence alone, but inconsistent mastery, limited access to tools at the moment of need, and an overreliance on a single facilitator for ongoing support.

Analysis and Key Insights

To validate the problem and identify appropriate solutions, I conducted a focused learning needs analysis that included:

  • Observation of the live ILT session

  • Interviews with the course facilitator

  • Interviews with experienced clinical monitors

  • Live demonstrations of EDSS scoring and calculation by both groups


Two critical issues emerged:

  1. Instructional bottleneck
    The facilitator was balancing trial responsibilities, training delivery, and ongoing field support, leaving little capacity for targeted follow-up or content updates.

  2. Performance gaps at the point of use
    Clinical monitors were often unsure of their own calculations and struggled to quickly locate the correct Neurostatus tools during site visits—precisely when accuracy mattered most.


These findings indicated that adding more instruction would not resolve the underlying performance problem.

Learning Strategy

Rather than layering additional content onto an already dense ILT, I recommended retiring the proposed “confidence booster” entirely and redesigning the program around performance support and just-in-time access.


The revised strategy focused on:

  • Shifting foundational instruction out of the classroom

  • Preserving facilitator time for targeted coaching

  • Embedding tools directly into the monitor workflow

Solution Design

The redesigned learning ecosystem included four coordinated components:

  • On-demand prerequisite training
    The original ILT content was converted into a structured, self-paced Rise 360 course, allowing monitors to build foundational knowledge before live engagement.

  • Targeted 1:1 follow-up
    Facilitator time was repurposed for individualized coaching sessions, informed by real monitoring challenges.

  • Centralized resource hub
    A Neuroscience SharePoint site was created to house Neurostatus forms, FAQs, scoring guidance, and monitoring tips in one searchable location.

  • Mobile EDSS calculator
    A mobile-friendly EDSS calculation tool was developed to support accurate scoring during site visits—an innovation for these studies (a simplified sketch of the underlying check follows this list).
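
To make the monitor-facing check concrete, here is a minimal sketch of the kind of discrepancy flag such a calculator could surface. It reuses the FunctionalSystems type and edssLowSteps helper from the earlier sketch; reviewEdss and its result shape are likewise illustrative names, not the actual tool's API.

type ReviewResult =
  | { status: "match"; edss: number }
  | { status: "discrepancy"; recorded: number; recomputed: number }
  | { status: "manual-review" }; // case falls outside the simplified rules

// Recompute the step from the documented FS grades and compare it with
// the investigator's recorded EDSS, flagging mismatches for real-time
// follow-up during the site visit.
function reviewEdss(fs: FunctionalSystems, recorded: number): ReviewResult {
  const recomputed = edssLowSteps(fs);
  if (recomputed === null) return { status: "manual-review" };
  return recomputed === recorded
    ? { status: "match", edss: recomputed }
    : { status: "discrepancy", recorded, recomputed };
}

// Example: minimal signs in exactly one FS maps to EDSS 1.0, so a
// recorded 1.5 would be flagged for discussion with the investigator.
console.log(reviewEdss(
  { pyramidal: 1, cerebellar: 0, brainstem: 0, sensory: 0,
    bowelBladder: 0, visual: 0, cerebral: 0 },
  1.5,
)); // -> { status: "discrepancy", recorded: 1.5, recomputed: 1 }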

Instructional Design & Development

The core course was built in Rise 360 to ensure full responsiveness for field use across devices. Interactive elements requiring procedural accuracy were developed in Storyline 360 and embedded directly into the Rise module.


Design decisions were intentional:

  • No audio, to accommodate mobile use and variable field conditions

  • Visual and interaction patterns optimized for quick reference

  • Legacy graphics recreated using Snagit and PowerPoint to ensure consistency and clarity


Two non-proprietary Storyline interactions—EDSS Scoring Table and EDSS Score Calculation—were designed to reinforce correct application without exposing protected materials.

Course Interactions

The buttons below launch two core interactions from the course.


EDSS Scoring Table walks through how Functional System scores are applied, while EDSS Score Calculation allows learners to practice determining the correct EDSS Step across multiple scenarios.
Each interaction opens in a new window.

My Role
  • Conducted learning needs analysis and stakeholder interviews

  • Observed live training and assessed performance gaps

  • Advised leadership on retiring misaligned content requests

  • Designed and developed the Rise 360 course structure

  • Built Storyline 360 interactions for scoring and calculation practice

  • Recreated and standardized visual assets for responsive delivery

  • Partnered with SMEs and leadership to implement performance supports

Constraints and Considerations
  • Content had to be mobile-friendly for on-site monitoring

  • Audio was intentionally excluded due to field conditions

  • Proprietary Neurostatus materials could not be redistributed

  • Tools needed to support real-time decision making, not recall-based learning

Outcomes and Impact

The redesigned program replaced a static, instructor-dependent model with a scalable, performance-centered learning system.


Clinical monitors reported:

  • Increased confidence in EDSS scoring review

  • Improved accuracy when identifying discrepancies

  • Greater readiness to engage investigators when errors occurred


Facilitators became advocates for the new approach, as learner feedback from 1:1 sessions directly informed ongoing updates—creating a continuous improvement loop grounded in real field experience.
