Editorial

Scand J Work Environ Health, online-first article

https://doi.org/10.5271/sjweh.4270 | Published online: 07 Dec 2025

Algorithmic management and psychosocial risks at work: An emerging occupational safety and health challenge

by Bowdler M, Lahti H, Jelenko M, Buresti G, Valtonen T

In recent years, algorithmic management (ALMA) systems have spread rapidly from platform work to more traditional sectors such as logistics, retail, and healthcare (1, 2). Defined as “the use of complex digital systems or AI [artificial intelligence] to manage workers” (3) and encompassing the semi- and fully automated execution of managerial functions such as scheduling, goal setting, and performance evaluation (4), ALMA is profoundly reshaping how work is organized and monitored. Building on recent discussions in this journal on digitalization and occupational health (eg, 5), this editorial examines how ALMA is transforming working conditions and power dynamics and considers its implications for workers’ health and well-being in changing work environments.

A review of the literature up to 2022 (6) synthesized emerging evidence on how ALMA affects job quality and worker health in platform work, highlighting psychosocial risks such as intensified workload, irregular and unpredictable working hours with unpaid waiting times, reduced decision authority, and social isolation. In other words, ALMA can recalibrate the balance between job demands and resources in ways that heighten psychosocial risks. Examined in the light of established models of psychosocial risk (7) – such as the Job Demands–Resources model (8, 9), the Job Demand–Control–Support model (10, 11), and the Effort–Reward Imbalance model (12, 13) – these prior findings suggest that ALMA can create conditions that trigger classic risk pathways in which job demands exceed available resources, leading to stress, burnout, and related health outcomes.
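
As a brief illustration of how one of these models is operationalized (a general example, not drawn from the reviewed studies), the Effort–Reward Imbalance model summarizes the mismatch in a single ratio:

ERI = e / (r × c),

where e is the summed score of the effort items, r the summed score of the reward items, and c a correction factor for the unequal number of effort and reward items. Values above 1 indicate that perceived efforts outweigh the rewards received, the condition linked to adverse health outcomes (12, 13).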

However, ALMA is not inherently detrimental. A growing body of research emphasizes that ALMA is not just a technical system but a socio-technical process shaped by organizational contexts and human decisions (1, 4, 14). Thus, ALMA’s implications for work and workers depend not only on technical features but also on how, and for what purpose, the systems are used. When systems are designed with transparency, fairness, and opportunities for human influence (4), the harmful effects of ALMA might be mitigated. Some researchers have even underscored the dual nature of ALMA, noting that algorithmic systems can both constrain and enable worker autonomy and value creation, depending on their design (15). However, evidence for positive outcomes remains limited and largely theoretical (4). While some studies have pointed to potential benefits for workers, such as increased flexibility in deciding working hours, these are often accompanied by new forms of control (16).

This editorial builds on a structured (non-systematic) literature review conducted within the PEROSH network (3), which followed a systematic search process documented and visualized according to PRISMA 2020 reporting principles (17). The review synthesized findings from 39 studies (published 2022–2024) on the occupational safety and health (OSH) implications of ALMA, covering both traditional and platform work. While the resulting report itself was not peer-reviewed, it draws exclusively on high-quality empirical sources (33 peer-reviewed scientific studies and 6 grey literature reports) and was developed by a team of researchers across multiple European institutions (3). Based on the review, we argue that ALMA is not merely a digital enhancement of existing managerial practices. Rather, it constitutes a novel form of work organization that shifts managerial decision-making from humans to algorithms, thereby reshaping the psychosocial work environment and redistributing power, control, and responsibility. The following sections summarize the key findings of our review, discuss their implications for policy and practice, and outline research priorities to ensure that ALMA supports rather than undermines workers’ health and well-being.

Key findings of the review
The ALMA-AI project’s review indicated that the use of ALMA systems frequently leads to excessive job demands while simultaneously reducing key job resources needed to manage those demands. This imbalance shapes working conditions in ways that heighten psychosocial pressures and increase the risk of adverse OSH outcomes. In the ALMA-AI report, these patterns were identified in both quantitative and qualitative studies, spanning platform work and traditional workplaces.

Intensification of job demands
Quantitative analyses conducted in platform work settings consistently show that ALMA systems generate psychosocial pressures – particularly time pressure – that significantly elevate work-related stress levels (eg, 18–21). Time pressure as a job demand was also reported in qualitative studies, which identified excessive workloads as a recurrent feature of ALMA across various platform sectors (eg, 22, 23). In some accounts, workers even described how these heightened demands made them feel “exploited” (24). Similar findings were highlighted in an ILO report, where the intensification of work appeared to be directly linked to the use of monitoring systems (25).

Depletion of job resources
The review showed that the negative impact of ALMA on OSH is often exacerbated when these systems undermine key job resources, a pattern observed particularly when ALMA is used as a control mechanism. Such effects include reduced autonomy (26) and diminished social support, manifested for instance as limited time to interact with co-workers (25). ALMA often imposes standardized workflows, reducing opportunities for worker discretion (27). The loss of autonomy is especially pronounced when algorithms are perceived as opaque (26) or used to impose strictly timed or closely monitored tasks (28). Some of the reviewed studies also reported that workers frequently feel excluded from decision-making processes surrounding the introduction and use of ALMA systems (eg, 25).

Empirical evidence from large-scale surveys
In addition to peer-reviewed studies, the review integrated findings from major institutional reports based on large, representative samples. The EU-OSHA report (29) drew on OSH Pulse survey data covering 27 250 workers across the EU. The results suggested that each one-unit increase in ALMA intensity was associated with a 21% rise in psychosocial risks and a 16.5% increase in health issues. Similarly, a 2023 Foundation for European Progressive Studies survey of 5141 workers in the Nordic countries (30) found that intensive use of ALMA nearly doubled stress levels compared with workplaces without ALMA.

Worker involvement and transparency as mitigation strategies
Another key finding of the review was the importance of potential “moderators” that can buffer the negative effects of ALMA. Two such strategies emerged: worker involvement and transparency. In terms of worker involvement, collective worker representation has been effective in negotiating limits on algorithmic control and in protecting worker privacy and discretion (31). Participatory approaches, such as co-design and collective bargaining, can help ensure worker influence in the implementation of ALMA, supporting autonomy and trust (30).

While not as impactful as direct involvement, transparency also plays a key role in mitigating the negative effects of ALMA. Clearly communicating how algorithms function and how decisions are made can help maintain job satisfaction, motivation, and trust (30). Transparency further enhances perceptions of fairness, particularly in platform work settings (21). These two strategies are well-established in OSH practice and remain vital as algorithmic systems evolve.

Research and methodological implications
The findings of the review conducted in the ALMA-AI project underscore that ALMA systems often create an imbalance between job demands and available job resources, contributing significantly to psychosocial risks and negative OSH outcomes. While worker participation and transparency can help mitigate these effects, the novelty and complexity of ALMA call for continuous research, adaptive regulation, and collaboration across stakeholders.

Future research should focus on effective strategies to protect OSH under ALMA, with particular attention to moderating factors such as worker participation, transparency, and the broader socio-technical context. Longitudinal studies are particularly needed to assess the long-term effects of ALMA and capture adaptation processes over time.

A key methodological challenge concerns the lack of standardized and validated tools for assessing ALMA intensity, functions, and impacts, particularly in traditional workplaces. Existing instruments, such as the Algorithmic Management Questionnaire (AMQ) (32), provide a valuable foundation for measuring ALMA exposure and its OSH implications. However, a universally accepted methodology for internal risk assessment is still lacking. The AMQ requires further validation, translation, and adaptation across countries, sectors, and employment types. Some items developed for platform work, such as those related to compensation or job termination, may be less relevant in traditional organizations, whereas new dimensions, like algorithmic task allocation, may have significant psychosocial relevance.
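
As a minimal sketch of the kind of psychometric groundwork such validation involves (the items, sample, and data below are hypothetical, not drawn from the AMQ), the internal consistency of a candidate subscale can be screened with Cronbach’s alpha before moving on to translation and cross-country testing:

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses from 200 workers to four monitoring-related
# items -- illustrative only, not actual AMQ items or data.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(200, 1))
responses = np.clip(base + rng.integers(-1, 2, size=(200, 4)), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")

Internal consistency alone is not sufficient: adaptation across countries, sectors, and employment types would additionally require tests of measurement invariance, which this sketch does not cover.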

Developing robust and context-sensitive assessment tools capable of capturing the intensity, functions, and uses of ALMA systems and related practices is a fundamental research priority. Such tools would support both scientific understanding and policy development by providing a clearer basis for practice-oriented regulation and workplace risk assessment. Importantly, future studies should account for differences between platform and traditional workplaces, where ALMA systems are embedded within existing managerial structures and practices (1).

Continued research is essential to refine and expand measures that safeguard workers’ rights and well-being in increasingly digital workplaces, ensuring that ALMA systems align with principles of health, safety, and fundamental rights such as privacy, equality, and non-discrimination.

Policy and practice implications
The findings of the ALMA-AI review (3), together with prior research (6), demonstrate that ALMA fundamentally reshapes power relations at work, with control shifting increasingly from workers to employers and platform operators. Of particular concern is the growing risk of psychosocial hazards. These risks often remain obscured, as ALMA systems may be perceived as objective or neutral due to their technical logic. In practice, however, these systems are designed for specific organizational purposes and reflect the values and assumptions of their developers and implementers. Software developers, therefore, represent an additional stakeholder group in OSH, as their design choices may consciously or unconsciously embed management logics into these systems.

The impact of ALMA on workers ultimately depends on managerial choices and the broader organizational context. In many cases, employers and platform operators prioritize efficiency and profit, leading to intensified control over workers’ tasks and conditions (33, 34). This underscores the need for robust regulatory and organizational safeguards that ensure compliance with OSH standards and the protection of workers’ health and rights.

A significant step in this direction has been taken with the adoption and forthcoming implementation of the EU Artificial Intelligence Act. The classification of AI systems used in worker management as “high risk” requires employers to conduct comprehensive assessments and mitigate potential impacts on OSH and fundamental rights. However, legal safeguards alone cannot address the complex organizational and psychosocial dynamics documented in empirical studies (eg, 4). OSH research therefore remains crucial in informing both regulation and workplace practice.

From a policy and practice perspective, regulations should mandate OSH risk assessments for ALMA systems, whether or not they include AI. These assessments must safeguard privacy, equality, and non-discrimination, and ensure workers’ right to information and transparency in how such systems operate. Evidence from the ALMA-AI review shows that worker participation and transparency in algorithmic processes are decisive in mitigating negative impacts. Therefore, employers should adopt participatory design and consultation processes when introducing ALMA systems and ensure accountability in algorithmic decision-making. ALMA-related risks should also be incorporated into occupational risk management systems and complemented by training on psychosocial risk prevention.

Finally, a coordinated European effort involving OSH institutions, researchers, policymakers, and social partners is essential to monitor and manage the psychosocial risks associated with ALMA. Such cooperation should ensure meaningful human oversight, strengthen transparency, and enhance workers’ capacity to engage in dialogue over system design and implementation.

Acknowledgements
The ALMA-AI project is a Partnership for European Research in Occupational Safety and Health (PEROSH) project (perosh.eu/project/alma-ai-project-exploring-the-occupational-health-and-safety-impact-of-algorithmic-management-ai-systems). The project’s membership extends beyond the authors of this editorial and includes colleagues from multiple research organizations: Jorge Martín González (INSST, Spain); Marie Jelenko and Thomas Strobach (AUVA, Austria); Joanna Kamińska, Karolina Pawłowska-Cyprysiak and Katarzyna Hildt-Ciupińska (CIOP-PIB, Poland); Teppo Valtonen and Heidi Lahti (FIOH, Finland); Giuliana Buresti and Fabio Boccuni (INAIL, Italy); Benjamin Paty and Virginie Govaere (INRS, France); Jon Zubizarreta, Paula Lara and Denis Losada (INSST, Spain); Therese Kristine Dalsbø (STAMI, Norway); Elsbeth de Korte and Mairi Bowdler (TNO, The Netherlands).

References
1. Baiocco S, Fernández-Macías E, Rani U, Pesole A. The algorithmic management of work and its implications in different contexts. Seville: European Commission, Joint Research Centre (JRC); 2022. JRC Working Papers Series on Labour, Education and Technology, No. 2022/02. Available from https://publications.jrc.ec.europa.eu/repository/handle/JRC129749
2. European Agency for Safety and Health at Work (EU-OSHA). Artificial intelligence for worker management: An overview. EU-OSHA; 2022. Available from https://healthy-workplaces.osha.europa.eu/sites/hwc/files/hwc/publication/summary-artificial-inteligence-worker-management_en.pdf
3. Martín González J, Jelenko M, Strobach T, Kamińska J, Pawłowska K, Hildt K, et al. Algorithmic management and AI-based systems as a new form of work organisation: Psychosocial factors and implications for occupational safety and health. Partnership for European Research in Occupational Safety and Health (PEROSH); 2025. https://doi.org/10.23775/20250616
4. Parent-Rocheleau X, Parker SK. Algorithms as work designers: How algorithmic management influences the design of jobs. Hum Resour Manag Rev. 2022;32:100838. https://doi.org/10.1016/j.hrmr.2021.100838
5. De Moortel D, Turner MC, Arensman E, Nguyen ABVD, Gonzalez V. Improving health-promoting workplaces through interdisciplinary approaches. The example of WISEWORK-C, a cluster of five work and health projects within Horizon-Europe [editorial]. Scand J Work Environ Health. 2025;51:259–264. https://doi.org/10.5271/sjweh.4238
6. Vignola EF, Baron S, Abreu Plasencia E, Hussein M, Cohen N. Workers’ health under algorithmic management: Emerging findings and urgent research questions. Int J Environ Res Public Health. 2023;20:1239. https://doi.org/10.3390/ijerph20021239
7. Boot CR, LaMontagne AD, Madsen IE. Fifty years of research on psychosocial working conditions and health: From promise to practice. Scand J Work Environ Health. 2024;50:395–405. https://doi.org/10.5271/sjweh.4180
8. Bakker AB, Demerouti E. The job demands-resources model: State of the art. J Manag Psychol. 2007;22:309–328. https://doi.org/10.1108/02683940710733115
9. Bakker AB, Demerouti E. Job demands–resources theory: Taking stock and looking forward. J Occup Health Psychol. 2017;22:273–285. https://doi.org/10.1037/ocp0000056
10. Karasek RA. Job demands, job decision latitude, and mental strain: Implications for job redesign. Adm Sci Q. 1979;24:285–308. https://doi.org/10.2307/2392498
11. Theorell T. The Demand Control Support Work Stress Model. In: Theorell T, editor. Handbook of Socioeconomic Determinants of Occupational Health. Cham: Springer; 2022. p 339–353. Handbook Series in Occupational Health Sciences. https://doi.org/10.1007/978-3-030-31438-5_13
12. Siegrist J. Adverse health effects of high-effort/low-reward conditions. J Occup Health Psychol. 1996;1:27–41. https://doi.org/10.1037/1076-8998.1.1.27
13. Siegrist J. The effort–reward imbalance model. In: Cooper CL, Quick JC, editors. The handbook of stress and health: A guide to research and practice. John Wiley & Sons; 2017. p 24–35. https://doi.org/10.1002/9781118993811.ch2
14. Cameron L, Lamers L, Leicht-Deobald U, Lutz C, Meijerink J, Möhlmann M. Algorithmic management: Its implications for information systems research. Commun Assoc Inf Syst. 2023;52:556–574. https://doi.org/10.17705/1CAIS.05221
15. Meijerink J, Bondarouk T. The duality of algorithmic management: Toward a research agenda on HRM algorithms, autonomy and value creation. Hum Resour Manag Rev. 2023;33:100876. https://doi.org/10.1016/j.hrmr.2021.100876
16. Rosenblat A, Stark L. Algorithmic labor and information asymmetries: A case study of Uber’s drivers. Int J Commun. 2016;10:3758–3784.
17. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. https://doi.org/10.1136/bmj.n71
18. Lu Y, Yang MM, Zhu J, Wang Y. Dark side of algorithmic management on platform worker behaviors: A mixed-method study. Hum Resour Manag. 2024;63:477–498. https://doi.org/10.1002/hrm.22211
19. van Zoonen W, ter Hoeven C, Morgan R. Finding meaning in crowdwork: An analysis of algorithmic management, work characteristics, and meaningfulness. Inf Soc. 2023;39:322–336. https://doi.org/10.1080/01972243.2023.2243262
20. van Zoonen W, Sivunen AE, Treem JW. Algorithmic management of crowdworkers: Implications for workers’ identity, belonging, and meaningfulness of work. Comput Hum Behav. 2024;152:108089. https://doi.org/10.1016/j.chb.2023.108089
21. Semujanga B, Parent-Rocheleau X. Time-based stress and procedural justice: Can transparency mitigate the effects of algorithmic compensation in gig work? Int J Environ Res Public Health. 2024;21:86. https://doi.org/10.3390/ijerph21010086
22. Cañedo M, Allen-Perkins D. Andamiajes y derivas: La mediación algorítmica en la práctica de los riders. EMPIRIA. Revista de Metodología de las Ciencias Sociales. 2023;59:103–130. https://doi.org/10.5944/empiria.59.2023.37940
23. Riesgo V. Entre el control y el consentimiento: De Braverman a Burawoy en el capitalismo de plataforma. Trabajar para Uber en España. RES. Revista Española de Sociología. 2023;32:a175. https://doi.org/10.22325/fes/res.2023.175
24. Zhang A, Boltz A, Wang CW, Lee MK. Algorithmic management reimagined for workers and by workers: Centering worker well-being in gig work. In: Barbosa S, Lampe C, Appert C, et al., editors. Proceedings of the 2022 CHI conference on human factors in computing systems. 2022. p 1–20, no.14. https://doi.org/10.1145/3491102.3501866
25. Rani U, Pesole A, González I. Algorithmic Management practices in regular workplaces: Case studies in logistics and healthcare. Luxembourg: Publications Office of the European Union. JRC136063; 2024. Available from: https://publications.jrc.ec.europa.eu/repository/handle/JRC136063
26. Möhlmann M, Alves de Lima Salge C, Marabelli M. Algorithm sensemaking: How platform workers make sense of algorithmic management. J Assoc Inf Syst. 2023;24:35–64.
27. Tuomi A, Jianu B, Roelofsen M, Ascenção MP. Riding against the algorithm: algorithmic management in on-demand food delivery. In: Ferrer-Rosell B, Massimo D, Berezina K, editors. Information and Communication Technologies in Tourism 2023, ENTER 2023 eTourism Conference. Cham: Springer; 2023. p 28–39. https://doi.org/10.1007/978-3-031-25752-0_3
28. Gillis D. Risks and opportunities of AI-based worker management systems in an automotive manufacturing plant in Belgium. Case Study. Bilbao: European Agency for Safety and Health at Work; 2024.
29. Pesole A. Surveillance and monitoring of remote workers: Implications for Occupational Safety and Health. Bilbao: European Agency for Safety and Health at Work; 2023.
30. Jensen M, Oosterwijk G, Nørgaard A. Computer in command: Consequences of algorithmic management for workers. Policy study. Foundation for European Progressive Studies; 2024.
31. Doellgast V, Wagner I, O’Brady S. Negotiating limits on algorithmic management in digitalised services: Cases from Germany and Norway. Transfer. 2023;29:105–120. https://doi.org/10.1177/1024258922114304
32. Parent-Rocheleau X, Parker SK, Bujold A, Gaudet M-C. Creation of the algorithmic management questionnaire: A six-phase scale development process. Hum Resour Manag. 2024;63:25–44. https://doi.org/10.1002/hrm.22185
33. Kellogg KC, Valentine MA, Christin A. Algorithms at work: The new contested terrain of control. Acad Manag Ann. 2020;14:366–410. https://doi.org/10.5465/annals.2018.0174
34. Rosenblat A, Stark L. Algorithmic labor and information asymmetries: A case study of Uber’s drivers. Int J Commun. 2016;10:3758–3784.
