The Evidence-Bound Learning Design (EBLD) Manifesto
Why I Wrote This Manifesto
I didn’t set out to create a new learning theory.
I wrote this manifesto because, over years of building training programs in real organizations, I kept encountering the same pattern: learning was expected to solve problems it did not cause, and was judged using measures that could not prove whether it helped.
Courses were launched because someone requested them. Programs were defended because people liked them. Success was declared based on completion, sentiment, or tradition.
Meanwhile, the actual problems remained.
I’ve worked in environments where performance mattered immediately, where mistakes had real consequences, and where leaders needed to know whether an investment in learning produced measurable return. In those contexts, good intentions and engaging content were not enough. Learning had to earn its place.
Again and again, I found that the most responsible decision was sometimes not to train at all, but to fix a system, redesign a workflow, or remove friction that made success harder than failure. And when learning was the right solution, it had to be designed in a way that allowed its impact to be observed, measured, and defended.
Over time, this forced a shift in how I approached learning design.
I stopped treating training as the default response to performance problems.
I stopped accepting anecdote as evidence.
I stopped separating learning from accountability.
What emerged was not a new methodology, but a discipline: one that binds learning to evidence, constrains it by real-world performance, and treats return on investment not as a reporting exercise, but as a design responsibility.
This manifesto exists to make that stance explicit.
It defines how I decide whether learning should exist, how I design it when it should, and how I evaluate whether it deserves to continue. It reflects a commitment to building learning systems that can withstand scrutiny, survive scale, and produce outcomes that matter beyond the classroom.
What follows is not a set of best practices.
It is the line I do not cross.
Evidence-Bound Learning Design
Learning is often treated as an act of faith. Courses are built, programs are launched, and success is inferred from completion, satisfaction, or hope. Evidence-Bound Learning Design exists because that is not enough. While evidence-informed approaches use data to improve learning quality, Evidence-Bound Learning Design treats evidence as an obligation: learning is designed only when it can be tied to observable behavior change and defensible return, not simply because training seems appropriate.
Evidence-Bound Learning Design at a Glance
This flowchart represents my Evidence-Bound Learning Design (EBLD) process at a high level.
It is a gated, performance-first system that begins with real work, diagnoses the true cause of performance breakdowns, and permits learning only when capability is the actual constraint. The process can intentionally exit the learning path in favor of non-learning solutions such as tool, workflow, operational, or policy changes.
The flow is organized around two parallel tracks:
- Performance: what must change in real work
- Evidence: how that change will be observed, measured, and defended
Progression requires both tracks to remain intact. Learning that cannot be justified, measured, or governed does not proceed. What scales must remain provable.
This diagram is not a production checklist. It is a decision framework for responsible, measurable learning design.
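To make the gating explicit, here is a minimal sketch of the decision logic in Python. The predicate names and the decision function are my own illustrative assumptions, not part of any formal EBLD tooling; each boolean stands in for a judgment that must be backed by observation and data.

```python
# Illustrative sketch of the EBLD gating logic described above.
# The predicate names are assumptions for illustration; each gate stands
# in for a judgment backed by direct observation and operational data.

from dataclasses import dataclass

@dataclass
class PerformanceProblem:
    capability_is_constraint: bool   # Is skill or knowledge actually the bottleneck?
    change_is_observable: bool       # Can behavior change be seen in real work?
    change_is_measurable: bool       # Can impact be measured and attributed?

def ebld_decision(problem: PerformanceProblem) -> str:
    """Return the intervention path the gated process permits."""
    if not problem.capability_is_constraint:
        # Exit the learning path: fix the tool, workflow, or policy instead.
        return "non-learning solution (system, workflow, or policy change)"
    if not (problem.change_is_observable and problem.change_is_measurable):
        # Learning that cannot be justified, measured, or governed does not proceed.
        return "do not proceed: redesign until impact can be evidenced"
    return "design learning, bound to evidence from the start"

# Example: a problem where agents are compensating for a broken system
# fails the first gate, so no training is designed.
print(ebld_decision(PerformanceProblem(False, True, True)))
```

The point of the sketch is the ordering: the capability gate is checked first, so non-learning solutions are always considered before any course is designed.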

Case Study
Rebuilding Frontline Performance in a Healthcare Patient Access Organization
This case study reflects real work I performed within a healthcare patient access organization. The company name has been withheld, but the constraints, decisions, and outcomes are represented accurately.
Context
The organization was a healthcare patient access company responsible for handling high volumes of inbound patient calls on behalf of multiple healthcare clients. Frontline call agents were responsible for verifying patient information, navigating complex call systems, and scheduling appointments accurately across multiple electronic health record (EHR) environments.
At the time, the organization did not have a formal learning function. Training was owned by supervisors and experienced agents and delivered almost entirely through informal, on-the-job instruction.
The Performance Breakdown
Two performance issues were immediately apparent in the operational data:
- Extremely high first 90-day attrition among new agents. New hires were leaving at unsustainable rates, driving up hiring and onboarding costs and creating constant operational instability.
- Low job function accuracy and inefficient call handling. Agents frequently selected incorrect appointment types, required repeated supervisor intervention, and had longer call handling times due to proficiency gaps.
These issues carried clear organizational costs:
- Repeated hiring and onboarding expenses
- Lost productivity during ramp-up
- Fewer paid patient interactions due to errors
- Supervisor time diverted from higher-value work
This was not a perception problem. It was a measurable performance failure.
Diagnosing the True Constraints
Rather than immediately designing training, the work began by observing agents in their real workflow.
Two different types of constraints emerged.
Constraint Type 1: Capability Gaps (Training-Appropriate)
New agents were being overwhelmed early by:
- Program-specific complexity introduced too soon
- Inconsistent foundational instruction depending on who trained them
- Learning by following along while someone else demonstrated, without independent execution
These issues plausibly justified learning intervention.
Constraint Type 2: System and Workflow Design Failures (Not a Training Problem)
One program, in particular, showed excessive call handling times and error rates. Supervisors believed additional training was required, though they could not specify what should be trained.
Direct observation revealed the real issue:
- The call system manager (CSM) fields did not align with the client’s patient intake form
- Agents were forced to:
  - Ask patients for information they had already submitted
  - Write details into a separate notes application
  - Manually re-enter or copy and paste that data into the CSM
- This created:
  - Longer calls
  - Patient frustration
  - Increased risk of data entry errors
  - Agent fatigue and disengagement
Critically, agents were not confused. They were compensating for a broken system.
Training would not have solved this problem. It would have normalized inefficiency and shifted blame onto agents.
Intervention Decisions
I. Where Learning Was Rejected
For the CSM workflow issue, the recommended solution was not training.
Instead, the solution was to:
- Modify CSM fields to align with the client intake form
- Implement an API connection to auto-populate those fields
- Communicate the workflow change to agents
No training intervention was required beyond awareness of the updated process.
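As a rough illustration of what the field alignment and auto-population could look like, the sketch below assumes a hypothetical intake-form endpoint, CSM endpoint, and field names; none of these reflect the actual systems involved.

```python
# Hypothetical sketch: auto-populating CSM fields from a client intake form.
# All endpoints, field names, and the mapping itself are illustrative
# assumptions, not the actual client or CSM APIs.

import requests

# Mapping from intake-form field names to the realigned CSM field names.
INTAKE_TO_CSM = {
    "patient_name": "patient_full_name",
    "date_of_birth": "patient_dob",
    "insurance_id": "insurance_member_id",
    "reason_for_visit": "appointment_reason",
}

def fetch_intake_form(intake_api_url: str, patient_id: str) -> dict:
    """Pull the patient's already-submitted intake form from the client system."""
    response = requests.get(f"{intake_api_url}/intake-forms/{patient_id}", timeout=10)
    response.raise_for_status()
    return response.json()

def populate_csm_fields(csm_api_url: str, call_id: str, intake_data: dict) -> None:
    """Write the mapped intake values into the CSM record for the active call."""
    csm_fields = {
        csm_field: intake_data[intake_field]
        for intake_field, csm_field in INTAKE_TO_CSM.items()
        if intake_field in intake_data
    }
    response = requests.post(
        f"{csm_api_url}/calls/{call_id}/fields", json=csm_fields, timeout=10
    )
    response.raise_for_status()

# Usage (illustrative): run when a call connects, before the agent asks
# the patient to repeat anything.
# intake = fetch_intake_form("https://client.example.com/api", "12345")
# populate_csm_fields("https://csm.example.com/api", "abc-001", intake)
```

The design point is that the data travels system to system; the agent never becomes the integration layer.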
II. Where Learning Was Designed
For genuine capability constraints, a new learning system was designed from the ground up.
Training was re-architected into:
- A unified core program for all new agents covering customer service fundamentals, call systems, and shared expectations
- A program-specific split in the second phase focused on individual client workflows and EHR environments
This also reduced total training time from four weeks to two weeks without sacrificing performance.
III. Format Change
The legacy “watch and follow along” model was replaced with a blended, evidence-bound approach:
- Standardized computer-based trainings (CBTs)
- Simulated EHR and CSM environments specific to each program
- Dedicated trainers acting as guides, not lecturers
In the simulations, agents were required to:
- Perform tasks independently
- Make decisions
- Execute workflows step by step
- Recover from errors in a safe environment
This ensured consistency across learners and eliminated dependence on individual trainers’ preferences or habits.
Measurement Strategy
Measurement was designed into the system from the beginning.
Tracked indicators included:
- First 90-day attrition rates
- Job function accuracy
- Call handling time
- Supervisor intervention frequency
- Learner behavior within the LMS
- Job satisfaction survey results
Because learning was standardized and instrumented, variance was reduced and attribution strengthened.
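To make the first indicator concrete, here is a minimal sketch of how first 90-day attrition might be computed per training cohort. The record shape (cohort, hire_date, term_date) is an assumed structure for illustration, not the organization’s actual HR schema.

```python
# Minimal sketch: first 90-day attrition per training cohort.
# The record structure is an assumed shape for illustration only.

from datetime import date
from collections import defaultdict

def first_90_day_attrition(hires: list[dict]) -> dict[str, float]:
    """Return the share of each cohort that left within 90 days of hire."""
    started = defaultdict(int)
    left_early = defaultdict(int)
    for hire in hires:
        started[hire["cohort"]] += 1
        term = hire.get("term_date")
        if term is not None and (term - hire["hire_date"]).days <= 90:
            left_early[hire["cohort"]] += 1
    return {cohort: left_early[cohort] / count for cohort, count in started.items()}

# Example: one legacy cohort versus one redesigned cohort.
hires = [
    {"cohort": "legacy", "hire_date": date(2023, 1, 9), "term_date": date(2023, 2, 20)},
    {"cohort": "legacy", "hire_date": date(2023, 1, 9), "term_date": None},
    {"cohort": "redesign", "hire_date": date(2023, 6, 5), "term_date": None},
    {"cohort": "redesign", "hire_date": date(2023, 6, 5), "term_date": None},
]
print(first_90_day_attrition(hires))  # {'legacy': 0.5, 'redesign': 0.0}
```

Computing each indicator per cohort, rather than in aggregate, is what allows the redesigned program to be compared against the legacy baseline and strengthens attribution.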
Outcomes
The results from the first cohorts through the redesigned system were unambiguous:
- First 90-day attrition dropped from 45% to 7%
- Job function error rates decreased by approximately 60%
- Call handling times decreased, increasing paid patient interactions
- Supervisor interventions declined significantly
- Job satisfaction survey scores improved
These outcomes translated directly into:
- Reduced hiring and onboarding costs
- Improved operational efficiency
- Increased revenue opportunity
- Greater workforce stability
Resistance and Governance
One long-tenured supervisor objected to the new program, stating they “didn’t like the new way of training” and believed it was insufficiently specific to their program.
However:
- Their new hires had completed the same program-specific training as others
- Performance data showed new agents outperforming or matching seasoned agents in accuracy and efficiency within the first 90 days post-training
Rather than debating preference, the response was to present the evidence.
The objection was resolved not through persuasion, but through outcomes.
Why This Case Matters
This work demonstrates the core principles of Evidence-Bound Learning Design in practice:
- Learning was not assumed to be the solution in every situation
- Systems were fixed before people were trained
- Training was standardized to protect evidence
- Practice replaced observation
- Measurement governed decisions
- Preference did not outweigh outcomes
The results were not accidental. They were the predictable consequence of designing learning only where it was justified, and binding it to evidence from the start.

