Human brain vs AI: what makes better decisions?

2 April 2025

The article at a glance

Companies and other organisations are rapidly adopting artificial intelligence to drive strategy and planning. But there’s a key question: does AI actually make better decisions than humans? Academics at Cambridge Judge Business School are using recent research to help answer this question.

The world is rapidly tapping artificial intelligence (AI) to transform decision-making in business, government, finance, healthcare and beyond. The technology’s analytical power is unparalleled, but a more difficult question is this: does AI actually make better decisions than humans?

It’s not an easy query to answer. Research by faculty and other academics at Cambridge Judge Business School outlined in this article sheds new light on where AI outperforms human decision-making, where it fails and what business leaders must do to integrate AI most effectively. 

While AI excels in data-driven optimisation, risk assessment and operational efficiency, it struggles when dealing with ethics, strategic foresight and unpredictability.  And then there is another human element to consider: while some people may give too much credence to AI, others (and particularly older people) distrust or are otherwise averse to AI – and that cannot simply be ignored in setting organisational policy. 

“There are profound policy questions surrounding AI, human decision-making and the future of work that Cambridge Judge researchers are grappling with,” says Michael Barrett, Vice-Dean for Strategy and University Engagement and Professor of Information Systems and Innovation Studies at the Business School. “Firm answers may be scarce at this point in time, but we are being responsible in asking the tough questions and engaging with regulators, consumers and companies.”


Where AI has a clear advantage: data analysis and modelling 

When it comes to predictive modelling and data analysis, AI has a clear advantage, as shown in a recent business simulation study of the automobile industry by 3 Cambridge Judge academics. The study found that AI, when provided with timely data of sufficient variety, veracity and volume, was faster at designing high-performing products, optimising costs and supply chains, and responding to market fluctuations in real time.

“Ignoring generative AI in corporate strategy is no longer viable,” say the co-authors – Hamza Mudassir, Fellow; Kamal Munir, Professor of Strategy and Policy and Pro-Vice-Chancellor (University Community and Engagement); and Shahzad (Shaz) Ansari, Professor of Strategy and Innovation.

“This experiment demonstrates that even untuned models can offer unique and creative approaches to strategy when properly prompted, generating strong results. If generative AI can help companies maximise shareholder value more effectively, why resist? After all, maximising shareholder value is the raison d’être for the role of the CEO,” they wrote in a Harvard Business Review article. 

These findings align with a pilot study co-authored by Lucia Reisch, Director of the El-Erian Institute of Behavioural Economics and Policy at Cambridge Judge Business School. The study demonstrated that large language models (LLMs) can very accurately predict the effects of food-related policy interventions – such as estimating whether a policy to reduce food waste will have a positive effect, and how large that effect might be. While the study did not directly compare AI performance with that of human experts, the results indicate a high degree of predictive accuracy in this domain. 

“Formulas tend to do better than people do, and algorithms often outperform human beings, including experts,” says Lucia. “One reason is that algorithms may exhibit a lower degree of inconsistency or ‘noise.’ Another is that they can, in principle, be free from cognitive biases—provided they are trained on unbiased data.” However, Lucia adds that this is, in fact, a significant challenge: “Algorithms essentially learn from the data they’re trained on. So if that data contains biases, the algorithm is likely to pick those up and reproduce them.” 


Can AI decisions surpass human judgment across industries?

As companies increasingly adopt artificial intelligence to drive strategy and planning, the debate over whether AI truly makes better decisions than humans remains central. While the human brain excels in contextual understanding and emotional intelligence, AI offers unparalleled speed and precision in processing vast amounts of data.

For instance, AI-powered diagnostic tools in healthcare can outperform traditional methods by detecting some cancers earlier and more accurately, demonstrating how AI can drive data-driven decisions. In finance, AI-driven trading systems can evaluate and execute vast numbers of trades per second, surpassing human analysts in sheer speed.

Meanwhile, corporate strategy benefits from AI-enabled digital twins, digital models that mirror real-world changes to simulate market scenarios. According to Virginia Leavell, Assistant Professor in Organizational Theory and Information Systems, “This is what makes digital twins useful.” However, as research from Cambridge Judge Business School shows, the effectiveness of AI versus human decision-making ultimately hinges on the context in which decisions are made.

But Leavell’s research suggests that the impact of digital twins goes even further than strategic simulation – they can actually help shape reality. When people start to trust the model and act on its predictions without questioning it, the model becomes more than just a tool – it starts to shape real-world decisions. “For a digital twin to become performative, it must first be taken-for-granted as ‘real’,” she explains.

This suggests that when people place unquestioning trust in AI models such as digital twins, the model itself can begin to shape decisions – sometimes more than the people using it. But it remains up to humans to decide when to follow the model and when to step back and think for themselves.

Uncertainty and ambiguity: where AI falls behind 

Despite such strengths, AI often takes second place to humans when dealing with uncertainty, ambiguity, or human-driven complexity. 

The auto industry study, cited above, found that AI-driven CEOs failed when market conditions changed unexpectedly. Unlike human executives, who build in strategic flexibility, AI models are usually optimised for short-term growth and profitability but struggle with unforeseen disruption. That’s because AI relies on historical data rather than intuition, and tends to focus on things that worked in the past rather than on new techniques that might work better in the future. 

“AI can rapidly learn and iterate in a controlled environment, making it less ideal for coping with highly disruptive events that require human intuition and foresight,” wrote Hamza, Kamal and Shaz. 


Can AI be as creative as humans?  

Conventional wisdom has long held that while AI may be terrific at number crunching and other data analysis, the human brain is far superior when it comes to creative tasks. Yet research at Cambridge Judge questions such assumptions – up to a point. 

A 2024 study co-authored by Yeun Joon Kim, Associate Professor in Organisational Behaviour, found that while human-AI co-created ideas were initially innovative, creativity later stagnated because human-AI creativity failed to refine and develop initial outputs over time. In 10 rounds of tasks, human-only teams continued to improve creatively while human-AI teams plateaued. 

The research introduces a theory of augmented learning regarding human-AI co-creation in order to enhance joint creativity. “We propose shifting the focus of augmented learning from traditional human cognitive learning to a collective learning process, where humans and AI collaboratively rearrange their levels of involvement in co-creation activities to continuously improve joint creativity over time,” the study says. “Augmented learning is an evolutionary process in which humans and AI continuously adjust their levels of involvement in multiple activities within a task over time to achieve ongoing performance enhancement.”  

Another 2024 study by David Stillwell, Professor of Computational Social Science, found that large language models (LLMs) can match human performance in structured problem-solving but underperform in creative writing, storytelling and tasks requiring emotional depth. AI lacks diversity in responses, often producing repetitive, predictable outputs that fail to challenge conventional thinking. Thus, AI-generated advertising slogans, while technically sound, lacked emotional resonance, whereas human-created slogans were more persuasive.

“When questioned 10 times, an LLM’s collective creativity is equivalent to 8-10 humans. When more responses are requested, 2 additional LLM responses equal one extra human. Ultimately, LLMs, when optimally applied, may compete with a small group of humans in the future of work,” David’s research concludes.  

This poses an intriguing question: If AI can think like 8 or 10 people, should we treat its decisions like a team decision – with all the various elements that entails – rather than the decision of a single piece of technology? If so, where does human intuition fit in? 

A key takeaway from both studies is that AI should be used for accelerating idea generation, but humans must refine and contextualise AI-generated insights. 

Algorithm aversion: why humans still trust people over AI 

A follow-up to the earlier AI pilot study, by Lucia Reisch and Micha Kaiser at the El-Erian Institute at Cambridge Judge, examined algorithm aversion. The study looked at 10 different decision scenarios (including workplace promotion and layoffs, health-related choices and the selection of a family vacation destination) and participants from 6 countries (Germany, Japan, Mexico, Sweden, the US and the UK).

The vacation scenario, for example, asked participants: “Imagine that you are planning a vacation for yourself and your family. You are not sure what would be best. Would you prefer a recommendation based on:

a. An algorithm (based on data on thousands of people and which vacations they enjoy, as well as relevant data about you and your family)
b. A travel agent (based on his 30 years of experience in the business)?” 

The research found that “majorities generally favour human decision-makers over algorithmic decision-makers in diverse nations and across diverse problems”. While even brief additional information about how and why algorithms work can reduce such algorithm aversion, such aversion is robust among a certain percentage of the population, particularly older people, so such information is unlikely to have much effect among them. 

“The findings suggest that informational interventions alone have a relatively modest effect on algorithm acceptance,” the study says, while acknowledging that this could change over time as people gain additional exposure to algorithms. 

Brains or algorithms: which make better decisions?

So, back to our initial question: does AI make better decisions than humans? 

Yes – when it comes to data-driven, logic-based decisions, where AI is unmatched in speed, accuracy and pattern recognition. No – when it comes to intuition, ethical judgment, adaptability and strategic foresight. 

Perhaps the better question is this: how can humans best use AI in order to arrive at the best decisions? Companies that enhance human intelligence by tapping AI for insight and efficiency while retaining human judgment, oversight and ethical responsibility, will gain a sustainable competitive advantage.

Tips on integrating AI decision making into your daily life

1

Leverage AI for data-driven tasks

If you have a task or project that requires handling lots of data, perhaps looking for trends or making logic-based decisions, then you can be confident that with the right prompts AI can outperform humans in outcome and efficiency.

2

Rely on your judgement in ethical or emotional contexts

Don’t rely on AI to make decisions in these types of scenarios – it can assist you by compiling supporting data, but apply your own judgement here.

3

Keep oversight of AI outputs

Whenever you’re using AI, check the outcomes carefully. Hallucinations can creep in easily, and errors in input data can distort results, so review outputs before acting on them.

4

Use AI as a creative assistant

Use AI to suggest initial ideas, but then it’s best to use your own (or your team’s) judgment to refine and evolve them.

5

Be transparent in your use of AI

Make sure you state if and how AI has helped you with tasks or projects.

6

Stay informed, but don’t over-rely

Be aware of how quickly the AI landscape is evolving and its capabilities expanding, and communicate with others to stay up to date and try new skills. But beware of over-reliance, and remember that individuals are on this journey at their own pace and within their own context.
