Editorial

Scand J Work Environ Health 2016;42(5):355-358

https://doi.org/10.5271/sjweh.3578 | Published online: 07 Jun 2016, Issue date: 01 Sep 2016

Organizational stress management interventions: Is it the singer not the song?

by Kompier M, Aust B

Good reasons exist for combating stress at work. It is a burden for individual employees and their families and costly to companies and society. Moreover, preventing stress at work is a sign of good corporate citizenship, as it respects modern legislation that encourages the provision of a good quality of working life.

In order to recommend good interventions, we need to know which interventions work. Outcome evaluation research is, therefore, important but not enough. We not only need to know “what works” but also “when, how, and why” this may be the case. This means that we need to combine classical effect evaluation research with process evaluation. Combining these two approaches is the hallmark of a young scientific discipline: implementation research. Implementation refers to “the way a program is put into practice and delivered to participants” (1). Implementation research crosses traditional disciplinary boundaries, with contributions from the occupational and public health domains as well as from psychological theories of motivation, behavioral change, and social influence.

In a recent commentary, Durlak (1) has highlighted major findings in implementation research. He concludes: “We now know that it is not evidence-based programs that are effective, but it is well-implemented evidence-based programs that are effective” (p1124). He also explains: “If we do not assess implementation, we do not know if a program has been put to an adequate test. It may fail not because the intervention lacks value, but because the intervention was not implemented at a sufficiently high level to produce its effects” (p1124).

Effect-only evaluation data may thus mask intervention effects that are sensitive to variations in intervention processes (2). This means that checking effects alone is not enough to determine whether a program works. To answer this question properly, we need to open up the black box: we must also know how the program was implemented.

Ten years earlier – in an occupational health intervention context – Tage Kristensen made comparable arguments (3). Kristensen explained the difference between implementation failure and theory failure by describing the case of a patient (employee) and a pill (stress intervention): “It does not help that the pill has effect if the patient does not take it, and it does not help that the patient takes the pill if it has no effect” (p207). In both cases, process evaluation is needed to find out what actually happened and to make the right decision on how to proceed. In the first case (pill not taken), the implementation failed, and it is therefore not possible to determine whether the program works. The underlying theory may still be right. Had the implementation not been studied, the erroneous conclusion might have been that “the theory was wrong”. In this first case, the next step should hence be to work on better implementation. In the second case (the patient took the pill), there were no problems with implementation, but there was no effect. This means that the theory behind the intervention seems to be wrong and needs to be reconsidered.

Process evaluation is thus needed to evaluate the generalizability of an intervention (to answer questions such as “under which circumstances will an intervention work?” and “which factors hindered or stimulated the change?”), so that it can be implemented successfully in a variety of settings (2, 4). Studying implementation and process factors is both complicated and challenging. One of the problems is that many factors may codetermine whether a certain intervention does or does not bring about the desired change in health or health behavior. To overcome the problem of long ad hoc lists and enumerations, more systematic taxonomies are needed. Such conceptual frameworks have already been suggested, for example by Steckler & Linnan (5), Saunders and colleagues (6), and Nielsen & Randall (2). The first two frameworks focus mainly on intervention delivery and participation. They distinguish between fidelity, dose delivered, dose received (exposure and satisfaction), reach, recruitment procedures, and context aspects. The broader framework of Nielsen & Randall distinguishes between (i) the intervention design and implementation, (ii) the context, and (iii) participants’ mental models, such as readiness for change. The latter authors have developed a checklist to systematically examine these three clusters. Checkpoints in the first cluster are: Who initiated the intervention and for what purpose? Did the intervention activities target the problems of the workplace? Did they reach the target group? Who were the drivers of change? What kind of information was provided to participants during the study? The main checkpoint in the context cluster is “which hindering and facilitating factors in the context influenced intervention outcomes?”, whereas the third cluster examines the role of participants’ mental models, such as resistance. Kompier made an earlier, comparable attempt (7), arguing that high-quality intervention research needs to address three clusters of questions: the content of the interventions, the context of the study and its interventions, and the design of the study at hand. Also in the organizational health intervention domain, another promising “context, process and outcome evaluation model” has recently been published (8).

When it comes to interventions to reduce occupational stress, there certainly is a need for the development, refinement, and utilization of such taxonomies. Various reviews of organizational-level intervention studies have been published since 2000 (for example 9–13). These reviews focus on the methodological quality of the included studies, which of course is a good thing, and on the effects of interventions (outcome evaluation), but much less on the intervention itself and how it is implemented. This means that the link between process evaluation and outcome has not yet been systematically addressed (14). It is therefore hard to understand what actually happened in these intervention studies and, as a consequence, hard to answer questions such as how, when, and why interventions do or do not have effects (see also 15). In the search for better answers to these questions, process evaluation is a conditio sine qua non.

In this issue of the Scandinavian Journal of Work, Environment and Health, Havermans et al (16) present an interesting overview of process variables in organizational stress management interventions. In a thorough search process, they selected 44 articles that each report on a process evaluation. Process variables were defined as “any measure included in the evaluation study that is hypothesized to be associated with the process of implementation of the stress management intervention” (p2). Each article was assessed with a clear template that, as proposed by Nielsen & Randall (2), classified process variables into three major categories: context, intervention, and mental models. Each of these three clusters was itself divided into a number of sub-clusters. The context cluster was made up of context, and barriers and facilitators. The intervention cluster comprised initiation, intervention activities, implementation, and implementation strategy. The mental models cluster comprised readiness for change, perceptions, and changes in mental models. All in all, 47 process factors were identified in the 44 articles. In the majority of these articles, process factors were collected at one point in time; they were mostly measured at the level of the individual employee and post-intervention. Together, the authors provide a broad overview of the wealth of factors that may influence the outcomes of an intervention.

Interestingly, half of the 44 articles did not refer to the process evaluation literature. This suggests that, until now, process factors have often been investigated in a merely exploratory manner. Furthermore, as Havermans et al point out, many studies do not seem to include a program theory for the intervention under study, ie, an explanation of how the authors assume the intervention works and which proximate, intermediate, and distal changes (8) they expect. Without predictions stated in a program theory, however, it remains unclear when a certain implementation process is good enough or whether intermediate goals were reached. This makes it more difficult to integrate process and effect evaluation. Also, more than half of the studies had a participatory format, meaning that different stakeholders cooperated in the assessment, targeting, and prevention of work stress. This is an interesting observation that raises the question of why employee participation may be essential. We believe that such an approach has several assets. First, it may support accurate problem identification and analysis and may optimize the fit of the intervention to the organizational culture and context. This simply acknowledges the fact that employees “have agency” and are experts on their own work situation. Employee participation may also mean improved communication and less resistance to change. It may increase responsibility for dealing with the problems identified and increase commitment to change strategies. And, finally, it may constitute an intervention in its own right, as it reflects the concrete enactment of job control (17). The formulation of a program theory and process evaluation, for example through repeated measurement of commitment and quality of communication, may contribute to the further testing of such assumptions.

So far, organizational stress interventions have shown mixed results, and this makes it difficult to find the best ways of reducing stress at work and to convince businesses to engage in these activities. Outcome research is essential, but measuring only effects without taking the implementation into account may lead to erroneous conclusions. We thereby risk discarding potentially powerful intervention concepts without having tested them properly. The “does it work?” question is a simplification. More informative questions are “under which circumstances will this intervention work?” and “what are the hindering and stimulating factors?”. In organizational stress intervention research, process evaluation is thus needed as “an add-on” to – and not a substitute for – effect evaluation. The Havermans et al paper can be interpreted as a plea for the integration of process evaluation into all studies of organizational stress interventions, in a way that enables process aspects to be related to outcomes. We believe that future research may aim at further standardization of a taxonomy that is rich but not overly detailed. The Nielsen & Randall classification scheme seems to provide a good starting point.

We also believe that such organizational stress management intervention research will falsify an “implementation research hypothesis” formulated by Mick Jagger and Keith Richards in 1965. They assumed that “It’s the singer, not the song”, whereas we argue that it is both the intervention (the song) and the implementation (the singer) that make a difference when it comes to organizational stress management interventions.

References

1. Durlak, JA. (2015). Studying program implementation is not easy but it is essential. Prev Sci, 16, 1123-1127, http://dx.doi.org/10.1007/s11121-015-0606-3.

2. Nielsen, K, & Randall, R. (2013). Opening the black box: presenting a model for evaluating organizational level interventions. Eur J Work Organ Psy, 22, 601-617, http://dx.doi.org/10.1080/1359432X.2012.690556.

3. Kristensen, TS. (2005). Intervention studies in occupational epidemiology. Occup Environ Med, 62, 205-210, http://dx.doi.org/10.1136/oem.2004.016097.

4. Cooper, C, Dewe, P, & O’Driscoll, M. (2001). Organizational interventions. In C Cooper, P Dewe, & M O’Driscoll (Eds.), Organizational stress: A review and critique of theory, research and applications (pp. 187-232). Thousand Oaks, CA: Sage.

5. Steckler, AB, & Linnan, L. (2002). Process evaluation for public health interventions and research. San Francisco: Jossey Bass.

6. Saunders, RP, Evans, MH, & Joshi, P. (2005). Developing a process-evaluation plan for assessing health promotion program implementation: a how-to guide. Health Promot Pract, 6, 134-147, http://dx.doi.org/10.1177/1524839904273387.

7. Kompier, M. (2004). Work organization interventions. Soc Prev Med, 49(2), 77-78, http://dx.doi.org/10.1007/s00038-004-3147-2.

8. Fridrich, A, Jenny, GJ, & Bauer, GF. (2015). The context, process, and outcome evaluation model for organizational health interventions. Biomed Res Int, Article ID 414832, http://dx.doi.org/10.1155/2015/414832.

9. LaMontagne, AD, Keegel, T, Louie, A, Ostry, A, & Landsbergis, PA. (2007). A systematic review of the job-stress intervention evaluation literature 1990-2005. Int J Occup Environ Health, 13, 268-280, http://dx.doi.org/10.1179/oeh.2007.13.3.268.

10. Bambra, C, Egan, M, Thomas, S, Petticrew, M, & Whitehead, M. (2007). The psychosocial and health effects of workplace reorganization. 2. A systematic review of task restructuring interventions. J Epidemiol Community Health, 61, 1028-1037, http://dx.doi.org/10.1136/jech.2006.054999.

11. Bambra, C, Gibson, M, Sowden, AJ, Wright, K, Whitehead, M, & Petticrew, M. (2009). Working for health? Evidence from systematic reviews on the effects on health and health inequalities of organizational changes to the psychosocial work environment. Prev Med, 48, 454-461, http://dx.doi.org/10.1016/j.ypmed.2008.12.018.

12. Richardson, KM, & Rothstein, HR. (2008). Effects of occupational stress management intervention programs: a meta-analysis. J Occup Health Psychol, 13, 69-93, http://dx.doi.org/10.1037/1076-8998.13.1.69.

13. Montano, D, Hoven, H, & Siegrist, J. (2014). Effects of organisational-level interventions at work on employees’ health: a systematic review. BMC Public Health, 14(1), 135, http://dx.doi.org/10.1186/1471-2458-14-135.

14. Murta, SG, Sanderson, K, & Oldenburg, B. (2007). Process evaluation in occupational stress management programs: a systematic review. Am J Health Promot, 21(4), 248-254, http://dx.doi.org/10.4278/0890-1171-21.4.248.

15. Biron, C, & Karanika-Murray, M. (2014). Process evaluation for organizational stress and well-being interventions: Implications for theory, method and practice. Int J Stress Manage, 21, 85-111, http://dx.doi.org/10.1037/a0033227.

16. Havermans, BM, Schelvis, RMC, Boot, CRL, Brouwers, EPM, Anema, JR, & Van der Beek, AJ. (2016). Process variables in organizational stress management intervention (SMI) evaluation research: A systematic review. Scand J Work Environ Health, 42(5), 371-381, http://dx.doi.org/10.5271/sjweh.3570.

17. LaMontagne, AD, Noblet, AJ, & Landsbergis, PA. (2012). Intervention development and implementation: Understanding and addressing barriers to organizational-level interventions. In C Biron, M Karanika-Murray, & CL Cooper (Eds.), Improving organizational interventions for stress and well-being: Addressing process and context. Hove, East Sussex, UK: Routledge.