
How AI is changing the way we work and how we’re governed

9 April 2025

The article at a glance

We should get used to the fact that our workplaces and interactions with government will change dramatically with the emergence of AI – but should we be concerned? Job losses, privacy infringements and rights erosion are growing themes in the media, but in this article we look beneath the hype and use the latest research from Cambridge Judge to explore the growing influence of AI and what it means for the future of human agency, potential and inclusion.

Progress around the world on these issues has been uneven as organisations and governments experiment with this era-defining technology.  

But AI’s impact isn’t just technical: it’s also emotional, economic and deeply human. In this article, we explore the topic from those different angles, reflecting on recent studies by Cambridge Judge Business School academics.

Will AI replace my job? 

Professor Jochen Menges

Conversations about AI’s impact on work often centre on automation and job losses. But that view implies we have no control over the future, and it can obscure the bigger picture. The more important question now is how we as humans choose to deploy the technology and, crucially, how humans and AI will work together. Seen through this lens, we can ask how the technology can enhance rather than replace human capabilities, helping us to learn and be more creative, and building a better future of work.

We should remember that humans have always had the ability to “forge their own destiny at work rather than having it dictated by machines”, advises Jochen Menges, Professor of Leadership at Cambridge Judge and at the University of Zurich, whose work traces how the concept of the future of work has evolved over time, moving away from a focus on efficiency and automation and toward unlocking human potential.  

In a special issue of the journal Academy of Management Discoveries on ‘The Human Side of the Future of Work’, Jochen and co-authors look back to the Industrial Revolution, when concerns emerged about machinery enriching the privileged while leaving the poor behind.

The authors also cite a remark from President John F. Kennedy in 1962: if people have the talent to invent new machines that put people out of work, they also have the talent to put those people back to work. 

“The future of work is subjective and can be shaped by people,” the essay argues. “Technologies affect the qualities we think of as part of humanity itself.”

As the study suggests, the future is not something we inherit from machines. “As for the future, your task is not to foresee it, but to enable it,” said French writer Antoine de Saint-Exupéry, author of ‘The Little Prince’, who is cited in Jochen’s study. 

So when we ask if AI will replace our job, the answer may be: “if we ask it to”.   

Harnessing AI to support human feedback 

Professor Thomas Roulet

The belief in human agency and AI’s potential to support human strengths is echoed in a different study, this time by Thomas Roulet, Professor of Organisational Sociology and Leadership at Cambridge Judge. Thomas has studied how technology – specifically machine feedback – can improve individual learning and amplify the effect of human feedback.

Machine-generated feedback was shown not only to improve performance directly, but also to enhance the impact of related feedback given by humans. The effect is even more pronounced when it comes to learning from failure: people were seen to pay closer attention to their mistakes, reflect more deeply and become more motivated to act on feedback – especially when both machine and human feedback pointed to similar issues. It’s a clear example of how collaboration between humans and machines can enhance learning and support personal growth.

In these cases, trust in the machine programming was key: if people felt the algorithm was reliable and fair, it could improve learning in ways that humans alone possibly couldn’t achieve, because of the biases that can affect human-to-human learning.

As AI systems become more embedded in workplaces, this partnership between humans and AI to support employee development will only grow stronger, so organisations must work hard to apply the technology in ways that are transparent, fair and representative of the diversity of humanity.

Where AI and capitalism collide 

Dr Philip Stiles

If human agency is key to shaping the future of AI, then it applies not just to how we design software and algorithms, but also how we define the economic systems determining how AI is developed, used and experienced. 

In a recent study on AI and capitalism, Philip Stiles, Associate Professor in Corporate Governance at Cambridge Judge, argues that in a system prioritising efficiency, growth and shareholder returns, AI will be focused on pursuing those same rewards.  

With those priorities comes a continuation of the stresses on humans – ‘techno-stress’, as the study terms it. Most of us have felt overloaded by having to work faster, adapt to new technology and be constantly on and reachable, or have worried that our skills won’t keep up with the pace of change.

But the study also points out that many workers – especially in the gig economy – are choosing to use AI-enabled platforms to work flexibly or boost their income.  

“Rather than simply critiquing the technology, we must reflect on the social and economic systems in which it operates,” says the study. 

So while we have choices in how to shape our AI-driven workplace in future, changing the wider economic systems we operate in is a much more difficult and potentially transformative undertaking. 

How is AI impacting government work globally?

Professor Jaideep Prabhu

We now zoom out from the workplace to government and ask: how quickly and successfully are politicians around the world utilising the technology, and to what effect?

Most governments are expected to do more and do it better, but often with less money and rising debt burdens. Used properly, AI offers new hope of meeting that challenge without resorting to unpopular measures such as tax rises or spending cuts.

“Becoming more efficient and effective is an often-overlooked option for fiscally squeezed governments,” says Jaideep Prabhu, Professor of Marketing at Cambridge Judge, “and artificial intelligence coupled with frugal innovation practices can play a big part in this. It’s not an immediate fix, but could make a significant difference in the mid to long term.”

1. Estonia’s example: can e-systems save money?

Jaideep highlights Estonia as a stand-out example of using digital transformation to do more with less. In his recent book ‘How Should a Government Be?’ he discusses how the decision to let citizens carry out a whole host of administration online, such as voting, obtaining identities and managing pensions, allows for better financial tracking and reduces errors, amongst other benefits.

Jaideep adds: “Estonia is at the forefront of integrating AI into government services, with its flagship project Bürokratt, a virtual assistant network launched in 2022 to streamline citizen-government interactions. The country is developing an ‘AI Gov Stack’ of reusable open-source components and implementing AI across various sectors including healthcare and transportation.”

Alongside this commitment to digital services, the Estonian government is investing in the infrastructure to support it properly, including a programme to raise awareness among its population and to maintain ethical and responsible AI standards.

We should look beyond the obvious candidates when searching for countries leading in AI innovation.

2. India’s example: inclusive AI

Another stand-out example is India, which is defying the myth that widescale AI initiatives must be led by Big Tech multinationals with billion-dollar budgets. For India, the principles of inclusion, affordability, open source and interoperable protocols – as reflected in the so-called ‘India Stack’ in financial services – are now being applied to AI projects. These projects aim to be affordable and available to all on the demand side, while ensuring a level playing field on the supply side and mitigating the dangers of monopoly power.

In India, digitalisation of government benefits – tied to a Unique ID project that digitises identity data for more than 1 billion Indians – has saved $4.8 billion annually since 2016.

3. France’s example: sustainable energy supply

While India’s AI policy is based on accessibility and inclusivity, France is leveraging its nuclear energy infrastructure to power data centres, presenting itself to the world as an example of Frugal AI – in the sense of powering AI sustainably – and as an attractive destination for energy-intensive AI companies. This is an alternative definition of Frugal AI, which more commonly refers to cost-effective AI systems designed to work with limited data, low computing power and resource-constrained environments.

France’s approach could be especially relevant for countries seeking to maximise their AI potential while managing resource constraints. It aligns with recent developments in the UK, for instance, where plans for dedicated nuclear plants to support AI and data centres are being explored.

The UK government recently announced AI Growth Zones, designed to accelerate planning permissions and provide the necessary energy connections for AI infrastructure.

Will AI impact inequality and marginalisation? 

Professor Michael Barrett

These AI applications in government and workplaces may sound compelling, but what about the risks of AI in these realms – the nature of which we are only just beginning to understand? How can we imagine the future of work in this new era, and how might worker rights be protected?

Research by Michael Barrett, Professor of Information Systems and Innovation Studies, highlights a particular risk: when AI systems are built on biased data or assumptions, they can reinforce existing inequalities and cement marginalisation. This can affect us in the workplace, but more broadly in any AI application a government might use.

Such risk is particularly acute in healthcare, for example, because the patient and hospital data analysed by large language models (LLMs) may not reflect the healthcare experience of people from diverse backgrounds or the diversity of human biology.

Michael’s paper, which draws on a recent healthcare study, gives us an example: one machine learning model was trained on retinal images that excluded dark-skinned patients, leading to a 12.5 percentage point accuracy gap in diagnosing diabetic retinopathy between light-skinned patients (73% accurate) and dark-skinned patients (60.5% accurate). “This was because of physiological differences between dark-skinned and light-skinned fundi (the eye’s inner surface) that a machine learning algorithm would not understand if it had not been exposed to the data.”

Yet while highlighting the potential harm of AI bias, the paper also notes that AI can help limit such prejudice and marginalisation through synthetic data: using AI-based techniques, researchers produced synthetic photos corresponding to dark-skinned patients in order to balance the database. This increased the accuracy for dark-skinned patients from 60.5% to 71%, dramatically narrowing the gap.
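
To make that balancing step concrete, here is a minimal sketch in Python. It illustrates only the oversampling idea, under stated assumptions, and is not the researchers’ actual pipeline: the synthesise_image function is a hypothetical stand-in for whatever generative model produced the synthetic fundus photos, and the dataset sizes are invented for the example.

```python
# Minimal sketch of balancing a skewed training set with synthetic images.
# NOTE: synthesise_image is a hypothetical placeholder for a generative
# model; the study's actual method is not reproduced here.
import random

def synthesise_image(seed_image: str) -> str:
    """Placeholder: a real pipeline would call a generative model here."""
    return f"synthetic({seed_image})"

def balance_by_group(images_by_group: dict[str, list[str]]) -> dict[str, list[str]]:
    """Oversample under-represented groups with synthetic images until
    every group matches the size of the largest group."""
    target = max(len(images) for images in images_by_group.values())
    balanced = {}
    for group, images in images_by_group.items():
        extras = [synthesise_image(random.choice(images))
                  for _ in range(target - len(images))]
        balanced[group] = images + extras
    return balanced

# Invented example: a training set skewed 800 to 200 across two groups.
dataset = {
    "light_skinned": [f"img_{i}" for i in range(800)],
    "dark_skinned": [f"img_{i}" for i in range(200)],
}
balanced = balance_by_group(dataset)
print({group: len(images) for group, images in balanced.items()})
# -> {'light_skinned': 800, 'dark_skinned': 800}
```

Equal counts are only half the story: for a retrained model to close the accuracy gap, the synthetic images must capture the physiological differences between dark-skinned and light-skinned fundi that the original data lacked.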

In the workplace, as businesses use the technology more and perhaps become over-dependent on it, we also risk undermining professional expertise and critical thinking, leaving workers demotivated, learning less and expected to defer to machine-generated decisions. This will affect not just tasks but also the social fabric of the workplace, by influencing how workers relate to each other and to their organisations.

Michael and co-authors have introduced to AI policymaking the concept of ‘relational risk perspective’, which aims to maximise the potential benefits of AI while minimising the dangers.  

Key to this perspective is seeing AI as having potential for benefit as well as harm, depending on how it is developed and experienced across different social contexts.  

They also note that the risks are constantly evolving as our interaction with the technology advances. Policymakers and technologists should anticipate, rather than react to, the ways in which AI could deepen existing inequities.

Working towards responsible, inclusive AI  

So as we shape the future of AI, there is much to challenge us as workers, leaders and policymakers: how can we best leverage AI for progress, meaningful work and smart governance, while minimising risks such as techno-stress, employee demotivation, entrenched inequality, marginalisation and exploitation?

International cooperation can play a part, particularly in rules governing this powerful technology, and in the spread of principles such as the frugal innovation research pioneered at Cambridge Judge.  

Yet perhaps most important of all, the ethical framework we adopt over time for using AI may best determine how future historians judge the first AI era taking shape before our very eyes. We should remember who is currently in the driving seat: humans.

“AI is a wonderful tool if applied in a way that recognises the importance of working within environmental and social structures,” says Jaideep Prabhu. “We believe that the meaningful and varied AI research at Cambridge Judge will help in making AI as societally beneficial as it can be.”
