Navigating the Ethical Maze of AI – addressing tomorrow’s problems today!


Darren Mascarenhas and Yitongyue Lin (Lilian)



AI without an ethical framework would be like a car without brakes – brakes don’t just slow us down, they help us feel safe when we go fast, giving us the confidence that we are in control and reducing the risk of an accident.

In our previous article, we highlighted how predictive and generative artificial intelligence (AI) has transformed not just the way we work but the industries that we work in. Google’s CEO Sundar Pichai once said that AI is the “most profound technology humanity has ever worked on” and could prove more important than electricity or fire. However, one of AI's biggest flaws is the way it handles ethical decision-making. This is because AI lacks the deep understanding and abstraction required to make ethical choices – after all, ethics is a human function based on morality and integrity. Ethics cannot currently be delegated to AI systems, but that doesn’t mean it never could be, given the right boundaries. In this article we explore the challenges that firms face when implementing AI and what individuals can do to confront the ethical debate.

The size of the AI market and recent advances

The UK AI market is estimated to be valued at over £17 billion, and early projections suggest it will grow to over £804 billion by 2035. The UK represents the third largest AI market in the world after the U.S. and China, and the government is laying the foundations for further expansion. Since 2014, it has allocated over £2.3 billion to AI initiatives, and in the 2023 Budget it committed nearly £1 billion towards AI research, with a plan to invest £900m in its AI strategy, including the development of ‘BritGPT’.

Unlike its EU counterparts, who are expected to adopt AI legislation into European law fairly shortly, the UK government has been tentative in its approach to regulation. In late 2023, it hosted the first-ever global AI safety summit at Bletchley Park in Buckinghamshire – where mathematician Alan Turing cracked Nazi Germany’s Enigma code during the Second World War – focusing on certain types of AI systems and the risks they may pose. However, many experts felt that an opportunity was missed by not achieving a consensus on ethics, regulation, and safety. With the increased use of AI and the uptake in spending pledges by the UK Government, now is the right time to be thinking about regulation and ethics, to help address the nerves around AI technology and enjoy its tremendous potential.

The AI ethical framework

Creators of AI tools have worried about the ethical implications since the early days of development, and these are now starting to come to light. Deepfake tools have been used to imitate trusted people, delivering fraudulent information or recommending risky courses of action. There are numerous GPT chatbots designed to mimic celebrities like Taylor Swift, Elon Musk and Beyoncé, impersonating these individuals without their consent. Recently, a finance worker was tricked into paying $25 million to fraudsters who used deepfake technology to pose as the company’s chief financial officer in a video conference call. This is expected to grow during 2024 as fraudsters begin their attempts to influence public opinion ahead of the upcoming US and UK elections. When AI tools are built, creators need to consider the ethical implications that come with them. The widely accepted pillars of an AI ethical framework and some practical examples are shown below.

Failing to implement a comprehensive AI ethical framework could harm a business by damaging its reputation, exposing it to liability and eroding customer trust, while also contributing to wider issues such as voter suppression, deceptive content creation and disproportionate targeting.


Collaboration to address the ethical issues


Firms face their own unique ethical challenges when designing, developing and deploying AI systems, such as bias, data security and transparency. This is why it is vital to foster a culture of ethical decision-making, where teams can respond to challenges as they arise or prevent them from happening in the first place. We believe the best way to do this is by implementing clear design principles and controls, and fostering a culture of open communication within your team, so that ethics is front and centre of any debate. Below we have listed some suggested key stakeholders and important ethical challenges for them to consider.

Next steps

AI will continue to provide businesses and individuals with new opportunities for growth and transformation. But it also raises important ethical questions, and addressing them is crucial to ensuring that AI is beneficial and safe for society while promoting trust and confidence in its continued usage. There are many use cases where AI has been positively adopted, and we need to build our understanding from these while learning from past mistakes and refining our approach. AI is a powerful tool, but we should meet it with both prudence and enthusiasm.

If you would like to talk to us about your AI plans, the ethical considerations or need help with reviewing AI frameworks in your firm, please do not hesitate to get in touch with Darren Mascarenhas or your usual Johnston Carmichael adviser.


Want to know more?

Just fill in our short form and one of our experts will get back to you shortly.