Taming Complexity: Introducing the Training Evaluation Framework and Tools (TEFT)

October 22, 2012

This month, I-TECH is pleased to introduce the TEFT, a new, pilot collection of free resources designed to help evaluators, implementers, and program managers plan successful outcome evaluations of in-service training programs. Visit the TEFT online resource at www.go2itech.org/resources/TEFT.

“How do we draw a clear link between a nurse in a classroom, and a healthy baby born under her care?”

I-TECH’s Dr. Gabrielle O’Malley, a University of Washington professor and evaluation specialist, leans forward emphatically. “It sounds easy, right?”

It isn’t. In fact, O’Malley explains, the question is among the simplest, and the most difficult, faced by the people who fund, design, and implement health care worker training programs worldwide. Because health care workers practice in complex settings, measuring the impact of the instruction they receive can be challenging.

Now, however, evaluators and stakeholders won’t have to meet this challenge alone. At the request of the US Health Resources and Services Administration, O’Malley, along with an I-TECH/University of Washington team, has spent the last year focusing on the unique needs and challenges of outcome evaluation for in-service training programs. This month, I-TECH is pleased to announce one result of this effort: a pilot release of the Training Evaluation Framework and Tools, or “TEFT,” a collection of resources to help evaluators and stakeholders plan efficient, well-organized, and appropriate evaluations. The TEFT is available to any user online at www.go2itech.org/resources/TEFT.

A Clear Goal

In almost any setting, health leaders, global funders, and patients alike can easily point to a top priority: a local health care workforce with updated training and expertise. As a result, in-service training programs have become a pillar of health systems strengthening efforts. To date, the US President’s Emergency Plan for AIDS Relief (PEPFAR) alone has supported over 3.7 million training and re-training encounters worldwide.

There is little doubt that these programs have had a broad impact on the quality of care available. But how can evaluators formally identify and quantify these outcomes? The I-TECH TEFT team began with a clear goal: to create a framework that would address this question.

Dr. Frances Petracca, a senior quality improvement advisor with the TEFT team, explains that the group began the project by conducting an extensive literature review to see what kinds of frameworks and ideas might already be circulating. The team also interviewed professionals with expertise in the area of in-service training.

“Many of our interviewees expressed a common theme,” says Petracca, “which is that determining a causal link between training and its measurable outcomes is challenging.” The impact of an in-service training program can be affected by a seemingly endless number of factors, from seasonal flooding to the availability of key medications at a facility. Many interviewees embraced the idea of a resource to help them deal with these factors and seek outcomes—reinforcing the team’s mandate.

Outputs and Outcomes

“Some of this increased emphasis on outcome evaluation is [also] related to changes in the priorities of global funders,” adds O’Malley. In the past, the focus of in-service training program evaluation was on outputs—that is, evaluators could report such basic information as how many workers were trained or the topics they were taught. Now, however, many funders and implementers are interested in outcomes: results that sit closer to overarching goals, such as measurable improvements in patient health. This distinction between outputs and outcomes is one that every public health professional holds close, notes O’Malley, but the renewed emphasis on outcomes has created an additional push for more subtle, well-planned program evaluation.

The Framework

The Training Evaluation Framework visually represents causal relationships between a training and outcomes at several levels.

The Training Evaluation Framework, which forms the basis of the TEFT, responds to these concerns by helping evaluators consider the types of outcomes that may result from their training intervention and at what level they might occur. It also provides a graphic that allows them to visually trace causal relationships.

“As we talked to people, it became clear that a big part of successful evaluation relies on understanding that training can have an impact in several ways, and at several levels of a health system,” says Petracca.

For example, imagine that an evaluator is planning to examine the outcomes of a training that prepares health care workers to prescribe antiretroviral treatment (ART) to patients living with HIV. At the individual level, patients treated by these workers may have improved health, demonstrated by higher CD4 counts (a test that shows whether ART is working successfully against HIV). At the organizational level, the health care facilities where the trained workers practice may show an increase in the number of patients on ART—and so on. Where the evaluator looks for outcomes may depend on the scope of the training, her funding and timeframe, and other factors. Thinking clearly about different types of outcomes, and the arenas in which they might be found, can help her work more effectively.

Taming Complexity: Using the TEFT

In addition to the Framework, the TEFT includes a collection of six tools and an online resource.

“The Framework, and the six tools, break the evaluation planning process down into structured pieces,” says O’Malley. She adds that the tools may also help evaluators and stakeholders talk about the scope and goals of their evaluation, which is especially useful when a group has limited resources to dedicate to the task. After users explore the Framework and begin to determine the type and level of outcomes to pursue, the remaining tools can help them identify factors that might affect their evaluation, synthesize ideas, and choose appropriate evaluation design and methods.

In addition, the website provides several detailed case studies, which demonstrate, using examples, how the tools can be used together, step-by-step, to plan an evaluation. Other sections outline the TEFT Team’s comprehensive literature review, provide links to additional evaluation resources, and offer a wide range of examples and discussion.

Of course, the TEFT is only a planning tool; it can’t tell users exactly which approach to take or which outcomes to focus on. Instead, says Petracca, “We hope the TEFT will guide users as they move through the process of making those decisions, so that they can plan and implement insightful, efficient, and successful evaluations.”

Visit the new TEFT online resource: www.go2itech.org/resources/TEFT
