Scand J Work Environ Health 2016;42(5):355-358


Organizational stress management interventions: Is it the singer not the song?

by Kompier M, Aust B

Good reasons exist for combating stress at work. It is a burden for individual employees and their families and costly to companies and society. Moreover, preventing stress at work is a sign of good corporate citizenship, as it respects modern legislation that promotes a good quality of working life.

In order to recommend good interventions, we need to know which interventions work. Outcome evaluation research is, therefore, important but not enough. We not only need to know “what works”, but also “when, how, and why” this may be the case. This means that we need to combine classical effect evaluation research with process evaluation. Combining these two approaches fits within a young scientific discipline: implementation research. Implementation refers to “the way a program is put into practice and delivered to participants” (1). Implementation research crosses traditional disciplinary boundaries, with contributions from the occupational and public health domains as well as from psychological theories on motivation, behavioral change, and social influence.

In a recent commentary, Durlak (1) has highlighted major findings in implementation research. He concludes: “We now know that it is not evidence-based programs that are effective, but it is well-implemented evidence-based programs that are effective” (p1124). He also explains: “If we do not assess implementation, we do not know if a program has been put to an adequate test. It may fail not because the intervention lacks value, but because the intervention was not implemented at a sufficiently high level to produce its effects” (p1124).

Effect-only evaluation data may thus mask intervention effects that are sensitive to variations in intervention processes (2). This means that checking effects alone is not enough to determine whether a program works. To answer this question properly, we need to open up the black box, because we must also know how the program was implemented.

Ten years earlier – in an occupational health intervention context – Tage Kristensen made comparable arguments (3). Kristensen explained the difference between implementation failure and theory failure by describing the case of a patient (employee) and a pill (stress intervention): “It does not help that the pill has effect if the patient does not take it, and it does not help that the patient takes the pill if it has no effect” (p207). In both cases, process evaluation is needed to find out what actually happened and to make the right decisions on how to proceed. In the first case (pill not taken), the implementation failed, and it is therefore not possible to determine whether the program works. The underlying theory may still be right. Had the implementation not been studied, the erroneous conclusion might have been that “the theory was wrong”. In this first case, the next step should hence be to work on better implementation. In the second case (the patient took the pill), there were no problems with implementation, but there was no effect. This means that the theory behind the intervention seems to be wrong and needs to be reconsidered.

Process evaluation is thus needed to evaluate the generalizability of an intervention (to answer questions such as “under which circumstances will an intervention work?” and “which factors hindered or stimulated the change?”), so that it can be implemented successfully in a variety of settings (2, 4). Studying implementation and process factors is both complicated and challenging. One of the problems is that many factors may codetermine whether a certain intervention does or does not bring about the desired change in health or health behavior. To overcome the problem of long ad hoc lists and enumerations, more systematic taxonomies are needed. Such conceptual frameworks have already been suggested, for example by Steckler and Linnan (5), Saunders and colleagues (6), and Nielsen & Randall (2). The first two frameworks focus mainly on intervention delivery and participation. They distinguish between fidelity, dose delivered, dose received (exposure and satisfaction), reach, recruitment procedures, and context aspects. The broader framework of Nielsen & Randall distinguishes between (i) the intervention design and implementation, (ii) the context, and (iii) participants’ mental models, such as readiness for change. These latter authors have developed a checklist to systematically assess these three clusters. Checkpoints in the first cluster include: who initiated the intervention and for what purpose; whether the intervention activities targeted the problems of the workplace; whether the intervention reached the target group; who the drivers of change were; and what kind of information was provided to participants during the study. The main checkpoint in the context cluster is “which hindering and facilitating factors in the context influenced intervention outcomes?”, whereas the third cluster checks the role of participants’ mental models, such as resistance.
In an earlier, comparable attempt, Kompier (7) argued that high-quality intervention research needs to address three clusters of questions, ie, with respect to the content of the interventions, the context of the study and its interventions, and the design of the study at hand. In the organizational health intervention domain, another promising “context, process and outcome evaluation model” has also recently been published (8).

When it comes to interventions to reduce occupational stress, there certainly is a need for the development, refinement, and utilization of such taxonomies. Various reviews of organizational-level intervention studies have been published since 2000 (for example 9–13). The focus of such reviews is on the methodological quality of the included studies, which of course is a good thing, and on the effects of interventions (outcome evaluation), but not so much on the intervention itself and how it is implemented. This means that the link between process evaluation and outcome has not yet been systematically addressed (14). It is therefore hard to understand what actually happened in these intervention studies and, as a consequence, hard to answer questions such as “how, when, and why do interventions have or lack effects?” (see also 15). In the search for better answers to these questions, process evaluation is a conditio sine qua non.

In this issue of the Scandinavian Journal of Work, Environment and Health, Havermans et al (16) present an interesting overview of process variables in organizational stress management interventions. In a thorough search process, they selected 44 articles that each report on process evaluations. Process variables were defined as “any measure included in the evaluation study that is hypothesized to be associated with the process of implementation of the stress management intervention” (p2). Each article was assessed with a clear template that, as proposed by Nielsen & Randall (2), classified process variables into three major categories: context, intervention, and mental models. Each of these three clusters was itself divided into a number of sub-clusters. The context cluster comprised context, and barriers and facilitators. The intervention cluster comprised initiation, intervention activities, implementation, and implementation strategy. The mental models cluster comprised readiness for change, perceptions, and changes in mental models. All in all, 47 process factors were identified across the 44 articles. In the majority of the articles, process factors were collected at one point in time, mostly measured at the level of the individual employee and post intervention. Together, these authors provide a broad overview of the wealth of factors that may influence the outcomes of an intervention.

Interestingly, half of the 44 articles did not refer to the process evaluation literature. This suggests that, until now, process factors have often been investigated in a merely exploratory manner. Furthermore, as Havermans et al point out, many studies do not seem to include a program theory for the intervention under study, ie, an account of how the authors assume the intervention to work and which proximate, intermediate, and distal changes (8) they expected. Without predictions stated in a program theory, however, it remains unclear when a certain implementation process is good enough or whether intermediate goals were reached. This makes it more difficult to integrate process and effect evaluation. Also, more than half of the studies had a participatory format, meaning cooperation of different stakeholders in the assessment, targeting, and prevention of work stress. This is an interesting observation that raises the question of why employee participation may be essential. We believe that such an approach has several assets. First, it may support accurate problem identification and analysis, and may optimize the fit of the intervention to the organizational culture and context. This simply acknowledges the fact that employees “have agency” and are experts on their own work situation. Employee participation may also mean improved communication and less resistance to change. It may increase responsibility for dealing with the problems identified and increase commitment to change strategies. And, finally, it may constitute an intervention in its own right, as it reflects the concrete enactment of job control (17). The formulation of a program theory and process evaluation, for example through the repeated measurement of commitment and quality of communication, may contribute to the further testing of such assumptions.

So far, organizational stress interventions have revealed mixed results, and this makes it difficult to find the best ways to reduce stress at work and to convince businesses to engage in these activities. Outcome research is essential, but measuring effects without taking the implementation into account may lead to erroneous conclusions. We thereby risk discarding potentially powerful intervention concepts without having tested them properly. The “does it work?” question is a simplification. More informative questions are “under which circumstances will this intervention work?” and “what are the hindering and stimulating factors?”. In organizational stress intervention research, process evaluation is thus needed as an add-on to – and not a substitute for – effect evaluation. The Havermans et al paper can be interpreted as a plea for the integration of process evaluation in all studies of organizational stress interventions, in a way that enables process aspects to be related to outcomes. We believe that future research should aim at further standardization of a taxonomy that is rich but not overly detailed. The Nielsen & Randall classification scheme seems to provide a good starting point.

We also believe that such organizational stress management intervention research will falsify an “implementation research hypothesis” formulated by Mick Jagger and Keith Richards in 1965. They assumed that “It’s the singer, not the song”, whereas we argue that it is both the intervention (the song) and the implementation (the singer) that make a difference when it comes to organizational stress management interventions.

This article refers to the following text of the Journal: 2016;42(5):371-381
The following article refers to this text: 2017;43(4):337-349