Process variables in organizational stress management intervention evaluation research: a systematic review

OBJECTIVES
This systematic review aimed to explore which process variables are used in stress management intervention (SMI) evaluation research.


METHODS
A systematic review was conducted using seven electronic databases. Studies were included if they reported on an SMI aimed at primary or secondary stress prevention, were directed at paid employees, and reported process data. Two independent researchers checked all records and selected the articles for inclusion. Nielsen and Randall's model for process evaluation was used to cluster the process variables. The three main clusters were context, intervention, and mental models.


RESULTS
In the 44 articles included, 47 process variables were found, clustered into three main categories: context (two variables), intervention (31 variables), and mental models (14 variables). Half of the articles contained no reference to process evaluation literature. The collection of process evaluation data mostly took place after the intervention and at the level of the employee.


CONCLUSIONS
The findings suggest that there is great heterogeneity in methods and process variables used in process evaluations of SMI. This, together with the lack of use of a standardized framework for evaluation, hinders the advancement of process evaluation theory development.


Introduction
Work stress is a problem for individuals, organizations, and society at large. It poses a threat to workers' well-being by increasing mental and physical health risks [1,2]. Work stress also contributes substantially to sickness absence [3]. This is a costly problem for organizations and society in general. The annual price tag of work stress to society amounts to €20 billion in the European Union alone [4]. In order to combat this problem, organizations deploy stress management interventions (SMI). Scientific evaluations of these SMI can support organizations in making an informed choice about the most effective and appropriate intervention and may also help to test theories upon which the interventions are based.
The most widely used approach to SMI evaluation is characterized by a (quasi-)experimental research design that focuses on outcomes at the level of the worker (eg, stress, burnout) [5-7]. According to Kristensen [8], whether or not the intervention has had the desired effect on the targeted outcome is only one of three important questions to ask when evaluating an intervention. To interpret the effect, one should first assess whether the intervention was carried out as intended and then assess whether the intervention brought about the intended (change in) exposure or behavior. This way, a distinction can be made between program versus theory success in effect interpretation.
A way to gather information about the success or failure of an intervention program is to look at intervention implementation. This can be done by studying process variables [9-12]. There are different ways of investigating the implementation process. Steckler and Linnan [9], for instance, propose a focus on intervention delivery and participation. Fleuren et al [13] assert that components such as the sociopolitical context and characteristics of the organization, the participant (skills, knowledge, and perceived support), and the intervention itself (complexity, relative advantage) are also important for implementation. Finally, Nielsen and Randall [11] suggest that mental models (pertaining to constructs such as readiness for change) should be added to existing process evaluation frameworks.
Despite increasing support for the incorporation of process factors into the evaluation of SMI over the last 10-15 years, there is still limited consensus on which process variables should be assessed. In addition to frameworks that offer suggestions for the use of certain process variables, insight into current practice could also support future process evaluations of SMI. Overviews that stress the importance of process measures in organizational-level intervention evaluations do exist. Egan and colleagues [14], for example, provide a review of implementation appraisal of complex social interventions, concluding that implementation and context are crucial for impact assessment of these interventions. To the authors' best knowledge, Murta et al [15], a decade ago, provided the only review describing which process variables are used in SMI evaluation research. In accordance with the aforementioned different perspectives on process evaluation, they observed great heterogeneity in the variables and designs researchers use for SMI process evaluation. Murta and colleagues made this observation using a restricted selection of process variables [15]. A limitation of this restricted selection is that publications reporting other process variables could have been neglected. Building on their research, a broader approach leaves room for more current frameworks to be recognized in process evaluation practice. The aim of this review was to explore which process variables are used in SMI evaluation research.

Methods
A systematic literature review was performed to investigate which process variables are reported in SMI evaluation research. Components from the PRISMA statement [16] were used in reporting this systematic review.

Search and study selection
Studies were eligible for inclusion if they (i) reported on an SMI directed at paid workers aged ≥18 years, (ii) reported a process evaluation of the intervention (at least one process variable assessed), (iii) were published in a peer-reviewed journal (conference abstracts, books and design protocols were excluded), and (iv) were written in English or Dutch.An SMI was defined as an organizational intervention focusing on individual or organizational changes, targeted to prevent or reduce stress in employees at the primary or secondary prevention level.A process variable was defined as any measure included in the evaluation study that is hypothesized to be associated with the process of SMI implementation.
Together with a library search specialist, the following databases were searched from inception to October-December 2014: PubMed, PsycINFO (October 8, 2014), ISI/Web of Science, Embase (October 24, 2014), Proquest (December 3, 2014), EconLit (December 5, 2014), and Ebsco/Cinahl (December 11, 2014). For every database, the search was adapted to the appropriate terminology specific to that database, using synonyms and closely related words (for the complete search, see the Appendix, www.sjweh.fi/data_repository.php). If a process evaluation was mentioned in a design protocol, the first author searched the electronic literature databases and contacted the authors to identify additional studies.
The first author removed the duplicates from the records identified. Then, the first and second author independently screened the titles and abstracts of all remaining records, selecting articles for fulltext inspection using the aforementioned eligibility criteria. If at least one of the two authors had selected a record for fulltext inspection, it was retrieved. Subsequently, both authors independently assessed the remaining selection of articles for inclusion. The two authors independently agreed on inclusion or exclusion for 72% of the fulltext articles. Remaining discrepancies were resolved with face-to-face deliberation. When this did not lead to consensus, one of the co-authors was consulted to make a final decision.

Data extraction
A template was constructed, containing a list of data to be extracted from the included articles. The first and second author independently applied this template to two randomly selected articles. Then, they compared the data they had extracted and modified and further specified the template until consensus was reached. Random selection and coding of studies by both the first and second author was repeated for 20% of all articles, after which a clear coding format was obtained.
The template used for data extraction contained three main component categories: (i) study and intervention characteristics, (ii) process evaluation methods, and (iii) process variables. Intervention characteristics were adapted from Murta and colleagues [15]. The process evaluation methods components were adapted from Wierenga and colleagues [17]. The specific components are listed in table 1. Process variables were coded using a list of concepts derived from process evaluation literature [9, 11, 13, 15, 17]. During coding, the researchers used this list of concepts as a frame of reference and starting point. When necessary, the researchers diverged from this list so as not to exclude variables that were not on the list but were used as process variables. Data were collected at the level of the employee (micro level), the level of the supervisor, manager, or department (meso level), and the level of the CEO, organization, or sector (macro level). For every process variable, it was assessed how many articles reported data collected at the micro, meso, or macro level.

Analyses
The Nielsen and Randall model for process evaluation [11] was used to cluster the process variables retrieved, because the model was developed especially for organizational-level interventions and provides the opportunity to take a broad perspective on process evaluation. Using this model, mediating and moderating factors of implementation can be detected. It was deliberated under which cluster each process variable should go, first with two of the co-authors and then with all authors of this review. After each deliberation, the arrangement was adjusted according to the feedback received. The main clusters were context, intervention, and mental models. The content of the three clusters is in accordance with the three central themes of the Nielsen and Randall model for process evaluation [11]. Context pertains to situational aspects that affect organizational behavior and functional relationships between variables, and contains hindering and facilitating factors. The intervention cluster refers to aspects of intervention design and implementation that determine the maximum levels of intervention exposure that can be reached, and contains the sub-clusters initiation, intervention activities, implementation, and implementation strategies. The mental models cluster refers to underlying psychological aspects that may help explain stakeholders' behavior in and reaction to the intervention. The mental models cluster contains the sub-clusters readiness for change, perceptions, and changes in mental models.

Study selection
The database search identified 4668 records. After duplicates were removed, an initial screening of the titles and abstracts of the 3613 remaining records produced 100 potentially relevant publications. After screening of the 100 retrieved fulltext articles, 59 were excluded. The three main reasons for exclusion were: no process evaluation, no SMI, and no results presented (design or protocol paper). Additionally, three eligible articles were identified through design papers. Finally, 44 articles met the selection criteria and were included in this review (figure 1). Two studies were reported in more than one of the included articles, so 42 studies are represented in this review.

Study and intervention characteristics
Table 1 presents the most important characteristics of the studies in the context of this review. Of the 42 reported studies, 27 were conducted in Europe (64%), 10 in North America (24%), 4 in Australia (10%), and 1 in Asia (2%). Most studies were conducted in the healthcare sector (45%), followed by education (13%). Of the 42 studies, 9 (21%) were conducted in a mixed set of organizations in more than one sector. More than half of the interventions (55%) had a participatory format. A participatory approach is characterized by cooperation of different stakeholders (eg, employees, managers, intervention providers) in the assessment, targeting, and prevention of work stress. Intervention duration ranged from 1 to 312 weeks, with most intervention durations (64%) not exceeding one year. Half (50%) of the 44 articles did not contain any reference to process evaluation literature in the introduction or methods section. In 20 articles, process evaluation data were collected at more than one moment. Collection of process evaluation data mostly took place after (84%) or during (55%) the intervention. In five cases (11%), process evaluation data were collected pre-intervention. In most articles (93%), process evaluation data were collected at the micro level. In 22 and 8 articles, process evaluation data were collected at the meso and macro level, respectively. All articles that reported only quantitative data used a questionnaire for the process evaluation. In the articles that reported a qualitative or a combined approach, (group) interviews were mostly used for process evaluation.

Process variables
Table 2 shows all 47 process variables that were retrieved. Some of the most striking findings are discussed below. The context cluster contained 2 process variables, the intervention cluster 31, and the mental models cluster 14. The intervention sub-cluster initiation contained 3 process variables, intervention activities 8, implementation 8, and implementation strategy 12. The mental models sub-cluster readiness for change contained 4 process variables, perceptions 7, and changes in mental models 3. For every process variable, the general level of data collection is reported.

Clusters
Context
Both a cluster and a process variable, context was the third most-reported variable. Coffeng and colleagues [18], for instance, reported a reorganization at the beginning of the intervention period as an example of context. Another contextual factor they reported was that, three months before the intervention project started, another intervention to improve the work environment had been piloted. Of the 19 articles reporting context, 18 reported data that were collected at least at the micro level. The other process variable in this cluster was barriers and facilitators, which was reported six times. Ipsen et al [19] gave an example of both: making the wrong changes slows the process (barrier) and the intervention constitutes a collective process (facilitator).

Intervention
In the first sub-cluster, initiation, the most-reported process variable refers to the extent to which different stakeholders are accountable for carrying out intervention actions. Hasson and colleagues [20] found that senior management differed with human resource professionals about who was responsible for involving line managers in the intervention. In the third sub-cluster, implementation, dose received was the most-reported process variable; examples included self-reported participation in intervention modules [21] and quiz completion by participants across several quizzes during the intervention [22]. Data collection at the micro level was dominant in this sub-cluster. For the sub-cluster implementation strategy, support was the most-reported process variable; examples include support from management for employees to attend intervention sessions [23] and the visibility of senior management's involvement in the intervention [24]. In this sub-cluster, 11 articles also reported the process variable involvement, which was, for example, described as the extent to which stakeholders took part in the development of a plan of action [25].

Mental models
In mental models, the first sub-cluster was readiness for change, of which awareness of the problem/intervention was the most-reported process variable. Readiness for change was reported in 6 articles, all measured at the meso level. The second sub-cluster, perceptions, contained the most-reported process variable across all 44 articles: attitudes and perceptions of intervention users, examples of which included criticism of employees towards intervention consultants [26] and the belief of employees that management did not take their needs into account [27]. In the 30 articles reporting attitudes and perceptions of intervention users, almost all data were collected at least at the micro level. Engagement was found in articles that primarily reported data collection at the meso level. For engagement, Sorensen and Holman [28] reported differences in the extent to which working groups were able to include employees in the implementation process. The third and last sub-cluster comprised process variables that represent changes in mental models. In this sub-cluster, effectiveness beliefs were reported most and were most often investigated at the micro level.

Discussion
The aim of this systematic review was to explore which process variables have been used in SMI evaluation research. In the 44 articles, we found 47 process variables, which were divided into three clusters: (i) context contained 2 variables, (ii) intervention contained 31 variables, and (iii) mental models contained 14 variables. There was great variety in the process variables assessed, but the three most-reported were attitudes and perceptions of intervention users (mental models cluster), support (intervention cluster), and context (context cluster). Many process variables differed from those reported by Murta and colleagues [15]. This systematic review revealed that relatively few studies used theoretical frameworks to guide process evaluations.
Half of the articles did not contain any reference to process evaluation literature in the introduction or the methods section.
Different frameworks for process evaluation are available. Two in particular were present in the studies included in this systematic review, and each provides a different perspective on process evaluation. The first framework, proposed by Steckler and Linnan [9], focuses on implementation. In the findings of the present review, this framework was represented by process variables such as dose delivered (the extent to which the intervention was made available by its providers), dose received (the extent to which the target population actively uses or engages in intervention facilities and activities), and fidelity (the extent to which the intervention was delivered as planned). Evaluating implementation answers the question "Was the intervention carried out as intended?". However, this is only one of three important questions for intervention evaluation [8]. Focusing solely on implementation in the process evaluation leaves unanswered the question "Did the intervention bring about the intended (change in) exposure/behavior?". The second framework, a model proposed by Nielsen and Randall [11], takes a broader view. It does not focus on implementation alone but also incorporates concepts such as initiation, implementation strategy, and mental models. By taking this broader view of process evaluation, information can also be gathered about the (change in) exposure or behavior. By adding mental models, for example, an explanation could be found for participants' motivation to take part in intervention activities or make use of intervention facilities. This was illustrated by Biron et al [29], who reported that managers failed to use a stress risk assessment tool (ie, dose received) because they did not feel that stress was a problem (ie, attitudes and perceptions of intervention users).
A problem with this broader approach is that it might blur the lines between process variable and effect outcome. An attitude or perception that seems to influence intervention participation (and implementation) can be regarded as a process variable. Alternatively, maintaining or changing an attitude or perception can be an intermediate effect of an intervention, in which case it may be more accurately described as an effect outcome. An example of a process variable that could also be an intermediate effect is communication. Communication about the intervention may be important for implementation (the process thereof), but an intervention can also change the way different stakeholders interact, leading to improved communication (intermediate effect). The dilemma that arises is whether this information should be gathered and reported as part of the process evaluation or the effect evaluation. A way to make this decision is to establish beforehand whether the variable is part of the underlying theory or working mechanism behind the intervention [8]. If this is the case, the variable should be regarded as an intermediate effect and measured as part of the effect evaluation. A systematic way to take intermediate effects into account is to formulate a program theory [10]. A program theory states under which conditions researchers expect proximal changes to occur [30] but seems to be missing in many of the included studies. Program theory evaluation can provide quantitative outcomes, which can be related to intervention effect outcomes. Quantitative variables can give insight into the extent to which the intervention was used (eg, dose received), whereas qualitative data can provide more in-depth information (eg, barriers and facilitators). Sometimes, researchers might not yet be aware of certain intermediate effects. In that case, a qualitative process evaluation offers room for exploration, catering more to the practical nature of the applied research setting of interventions, in which fewer factors can be controlled than in a laboratory setting. This may explain why half of the articles contained reports of process variables but did not mention the use of any theoretical framework for their measurement.

Strengths and limitations
A strength of this review is our elaborate and thorough selection of studies; two independent researchers searched seven databases and systematically inspected 3613 titles and abstracts. Second, the background information on interventions and methods provided unique insight into the specific circumstances in which process variables were assessed. Finally, careful deliberation resulted in a clustering structure tailored to the findings. Some limitations should be considered when interpreting the results. First, as this is an explorative review, we chose to use broad definitions of process variables and process evaluation. Consequently, more generally defined process variables were reported more often than specifically defined ones. One could argue that a study reporting only one broadly defined process variable can hardly be called a process evaluation. However, using broad definitions served the exploratory goal of this review, in which we aimed at inclusion rather than exclusion. As a result, many of the process variables found were not part of the preliminary design of the study (ie, they were not part of the theoretical framework used for the evaluation of the intervention). A second limitation is that, during data extraction, interpretation was sometimes necessary to tease out the process variables. This meant that not every variable could be extracted literally. For example, employee readiness [31] was coded as readiness for change. To enhance the coding format and curb possible observer effects, the first and second author coded 20% of the included articles independently. Coding was completed only after consensus was reached on this first 20%. Despite the relatively large number of process variables found, it is possible that some variables were missed, especially because there was great heterogeneity in the naming of process variables.

Implications for research and practice
Both the heterogeneity of the process variables used and the lack of a (standard) framework in process evaluation limit the possibility to compare results and build on previous experiences. This hinders the advancement of process evaluation theory development and limits the possibility to advise organizations about what is important for successful implementation of SMI. Future process evaluations of SMI should be guided by a standardized, comprehensive framework that goes beyond assessing implementation only. The Nielsen and Randall [11] model of process evaluation provides a good starting point. Standardization would also be supported by the systematic use of a program theory, which would obligate researchers to measure whether the conditions for changes in behavior or exposure were in place and to assess whether intermediate stages were reached [30].
In most cases, process evaluation data were collected after intervention implementation and at the micro level (ie, at the level of the employee). As argued by Nielsen and Randall [11], retrospective evaluation may not capture changes in the process, and (in non-randomized controlled trial settings) does not provide the opportunity to take corrective action during intervention implementation should gaps emerge. Failing to collect information from stakeholders other than employees (micro level) also means that differences in perspectives among stakeholders might be overlooked. Many studies show, however, that support from other stakeholders is important for implementation success [19, 29, 32-34]. In future process evaluations, researchers could place more emphasis on the collection of process data at different levels.
It should be noted that, even though there have been substantial developments in the research field of process evaluations (for instance, the inception of the journal Implementation Science in 2006), advancements should still be made in relating available process data to effect outcomes. This way, it could be assessed which process variables are central to successful implementation and predictive of intervention success.

Concluding remarks
This review complements the process evaluation literature by giving insight into the use of process variables in SMI evaluation research. It revealed that there still is great heterogeneity in the methods and process variables used. It also found that many process variables used in SMI evaluations differed from those reported earlier and that, in many cases, no theoretical framework or program theory was used to guide the measurement of process variables. In most cases, process variables were measured at the level of the employee and post intervention. Future process evaluations of SMI could benefit from data collection from different stakeholders (eg, employees, management, CEO) and at different times (before, during, and after the intervention). Also, the use of a theoretical framework could support a broader approach to process evaluation and may lead to a more standardized way of assessing intervention implementation.

Figure 1 - Flow chart of the study selection process.

Table 1
a Cooperation of different stakeholders in the assessment, targeting, and prevention of work stress. b Micro=employee; Meso=supervisor/manager/department; Macro=CEO/organization/sector. c Quantitative=questionnaires/company records (CR); Qualitative=conference/focus groups/group meetings/interviews/observation notes (ON)/reports/workshop; Combined=combination of quantitative and qualitative methods. d Not specified.

Table 2 - Process variables reported in the included articles