Contact:
Patricia Harris
(319) 335-8037
(319) 337-2399 Home

IOWA CITY, Iowa -- The guidelines used to evaluate on-the-job injury prevention programs often rely on the weakest available measurement criteria, according to published research led or co-led by Dr. Craig Zwerling, University of Iowa associate professor of preventive medicine and environmental health.

Additionally, the use of inaccurate measurements may lead employers to waste money on unproven and costly injury prevention programs.

Two of Zwerling's recent publications suggest that U.S. companies, rushing to improve on-the-job injury rates, may not be evaluating injury data correctly or may push their employees into untested injury education and prevention programs.

In an article in the most recent American Journal of Industrial Medicine, Zwerling and colleagues evaluated the design, conduct and evaluation of numerous occupational injury prevention and education programs. They found that such programs often use measurement criteria that can be affected by factors separate from the workplace and the possibility of injury there, among other problems. For example, low back injuries account for 30 to 40 percent of the payments made from workers' compensation programs. However, such injuries can often be aggravated by non-work activities. Zwerling and his colleagues wrote that this can make it difficult for employers to accurately connect a set of symptoms with a certain workplace function.

Comparing one year's injury count with another's may not be enough, but it is often the only test some employers use to evaluate the performance of a safety program, Zwerling says.

"Companies are often naive about historical bias -- there may be something else going on," Zwerling says. "It may not be the (program) that's having an effect."

Zwerling says, however, that such comparisons are a good beginning for research into whether a workplace program lessens occupational injuries. But such comparisons need to be followed with more systematic testing of the program's effects, or lack thereof. Large-scale randomized trials of the most promising programs can be used in some cases, though they are quite expensive, and only larger companies or organizations can afford them.

One such study in which Zwerling participated found that an expensive back injury education program had no concrete positive effect on employee safety. The results of this study were published in the July 31 edition of the New England Journal of Medicine.

Postal workers in Boston, Mass., went through a so-called "back school" taught by experienced physical therapists. Injury and re-injury rates of these 4,000 employees were then examined by Zwerling and colleagues during more than five years of follow-up. A comparison of employees who went through "back school" with those who did not found that the program did not reduce the rate of low back injury, the median cost per injury, the time off work per injury, the rate of related musculoskeletal injury, or the rate of repeated injury after returning to work.

"There are hundreds of companies across the country wasting millions (of dollars) on many back schools that don't work," Zwerling says. "These kinds of programs need to be evaluated to make sure they work. It's just like testing a new drug -- you don't prescribe it until it has been tested and you're sure it works."

-30- 8/21/97