Artificial intelligence can transform the workplace, but managers need to avoid four key “AI dysfunctions”, including algorithmic bias, says an article co-authored by Dr Stella Pachidi of Cambridge Judge.
Artificial intelligence holds great potential to transform the workplace, but organisations need to take steps to avoid damaging AI “dysfunctions” such as algorithmic bias as this still-nascent technology develops, says an article co-authored by Dr Stella Pachidi of Cambridge Judge Business School in a special issue of the journal MIS (Management Information Systems) Quarterly Executive.
The technology is still at a relatively early stage in large companies and mostly absent from smaller non-tech companies, so managers need to steer clear of these dysfunctions and tackle challenges related to deployment and AI talent development, says the introductory editorial article co-authored by Stella, University Lecturer in Information Systems at Cambridge Judge, who was an editor of the special issue.
Many current AI systems in organisations are experimental and have never been deployed at large scale, so there are obstacles to overcome for wider rollout: integrating with legacy infrastructure, adjusting organisational culture (including drawing jurisdictional lines between the chief information officer and the chief data officer), and attracting talent at a time when data scientists and AI engineers are still scarce, the article says.
The article – entitled “Special Issue Editorial: Artificial Intelligence in Organizations: Current State and Future Opportunities” – outlines four types of AI dysfunctions that managers need to avoid:
- Algorithmic bias, in which the outcomes of a machine-learning algorithm put certain groups at a disadvantage, including algorithms that appear to be racist or that recommend sentences to judges that propagate preconceptions from past sentencing decisions (a simple check of this kind is sketched after this list).
- Unexplainable decision outcomes, in which results are difficult to explain because of the many layers involved in producing them, including some parole decisions and teacher evaluations. To build trust in such outcomes, the authors recommend being open about the data used and explaining how the model works in simple terms.
- Blurred accountability boundaries, when both machines and humans influence outcomes, for example responsibility for a traffic accident involving a driverless car or for big financial losses from algorithmic stock trading. The article recommends engaging with AI designers, business users and institutions to clarify legal liability and responsibility upfront.
- Invaded privacy, given AI’s need to process increasingly large amounts of data. Beyond compliance with government rules on personal data, such as the General Data Protection Regulation (GDPR) in European Union countries, the article recommends boosting transparency by developing auditable algorithms and performing such audits to identify what data is used and which variables feed into decision-making processes (the sketch below also shows what such an audit record might capture).
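The editorial’s recommendations are managerial rather than technical, but a minimal sketch can make “algorithmic bias” and “auditable algorithms” more concrete. The sketch below is not drawn from the article: the decision records, field names (group, outcome, input_variables) and the 0.8 “four-fifths” threshold are illustrative assumptions. It screens historical decisions for groups with disproportionately low favourable-outcome rates and bundles the result with the variables used, the kind of audit trail the authors call for.

```python
# Illustrative sketch only: the article does not prescribe any specific technique.
# It assumes hypothetical decision records with a protected-group attribute and a
# yes/no outcome; names and the 0.8 threshold are assumptions for illustration.
from collections import defaultdict

def selection_rates(decisions):
    """Rate of favourable outcomes per group, e.g. loan approvals."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favourable, total]
    for record in decisions:
        counts[record["group"]][1] += 1
        if record["outcome"] == "favourable":
            counts[record["group"]][0] += 1
    return {g: fav / total for g, (fav, total) in counts.items()}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate, a common first screen for algorithmic bias."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

def audit_record(model_version, input_variables, decisions):
    """A simple auditable trail: which variables fed the decisions, plus the
    bias screen above, so the process can be explained and reviewed later."""
    return {
        "model_version": model_version,
        "input_variables": sorted(input_variables),
        "selection_rates": selection_rates(decisions),
        "disparate_impact_flags": disparate_impact_flags(decisions),
    }

# Example with made-up data: group B is flagged because its favourable-outcome
# rate is well below group A's.
decisions = [
    {"group": "A", "outcome": "favourable"},
    {"group": "A", "outcome": "unfavourable"},
    {"group": "B", "outcome": "unfavourable"},
    {"group": "B", "outcome": "unfavourable"},
    {"group": "B", "outcome": "favourable"},
]
print(audit_record("v0.1", ["income", "employment_length"], decisions))
```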
The special issue of MIS Quarterly Executive is edited by the article’s three authors – Hind Benbya of Deakin University in Geelong, Australia; Thomas H. Davenport of Babson College in Massachusetts; and Stella Pachidi of Cambridge Judge Business School. Out of 50 submissions, they chose five papers for inclusion in the special issue – on the key challenges of developing AI systems for knowledge-intensive work; unintended consequences of AI in decision making; addressing issues of AI explainability; overcoming user resistance to AI; and designing conversational agents or “chatbots”.
“As AI technology is still maturing, awareness regarding the new management challenges it poses and the implications it raises for the workplace and the organisation are still emerging, but the most common effect will likely be on how work is conducted in the future,” the article concludes. “Therefore, companies need to begin work now on developing AI applications that create economic value and that lead to new ways of orchestrating work by humans and machines.”