Getting to grips with AI
Wisdom and integrity are at the core of Sue Turner’s guidance for using AI. She explains why to Simon Penfold – and offers some examples of what happens when the technology is misused.
The importance of Artificial Intelligence to the UK was underlined in January when the Government published its AI Opportunities Action Plan, with the avowed intention of “ramping up AI adoption across the UK to boost economic growth, provide jobs for the future and improve people's everyday lives”.
So a timely lecture on harnessing the power of Artificial Intelligence was one of the biggest talking points at the UKSPA Summer Conference at Rothamsted in July.
The speaker, Professor Sue Turner OBE, is an experienced company director and leader across the commercial, charity and education fields who has forged a second career as founder director of AI Governance, a business established with the aim of encouraging the use of AI “with wisdom and integrity”.
The company developed from Sue’s own journey to explore how AI worked and the impact it could have.
“In my career I’ve had lots of different roles – chairing, non-executive director, chief executive and so on – of both commercial organisations and charities. In all the different roles, I've been really frustrated that we generate data in our organisations and then we keep it in silos.
“As a leader, you just have a gut feeling that if only you could find the patterns hidden in that data across the silos, you'd be able to find insights that would really help drive the organisation forward.”
It was in 2019 that Sue heard the Government was backing the creation of new master's degrees in AI and data science.
“These were conversion courses specifically to bring different people from different backgrounds with diverse thinking into this world of AI and data science. So I put my hand up and said I wanted to be in that first cohort of people doing those new master's degrees, because I want to learn this stuff and then I want to figure out how to cascade that information to as many people as possible.
“So I started the master's in September 2020 and at the same time I set up AI Governance with the mission to inspire as many people as possible to use AI with wisdom and integrity.”
The misuse of AI has become notorious, with tools such as FraudGPT and WormGPT developed to create malicious code, write more effective phishing emails, hack websites, and steal people’s money and information.
“Beyond that deliberate misuse, we have the types of things that go wrong with AI where people either misuse the tool, using it without thinking through the consequences properly, or are just really unaware of the risks,” said Sue.
“For instance, last year Serco Leisure in the UK was upbraided by the Information Commissioner’s Office because they were using AI computer vision to clock people in and out of work. It involved 2,000 employees going into 38 leisure centres across the UK, so facial recognition was checking them in and out.
“You could see why it was getting around a legitimate problem that the business might have had, but they didn't justify why they needed to use such a high order of personal data.”
The ICO issued enforcement notices instructing Serco Leisure, Serco Jersey and seven associated community leisure trusts to stop all processing of biometric data for monitoring employees’ attendance at work, as well as to destroy all biometric data that they were not legally obliged to retain.
“That’s one order of not thinking through what you're doing. Another order would be the American company Zillow, an online business helping people buy, sell and rent houses.
“They figured they were pretty good at knowing when a property was being offered for sale at a low market price. They thought they could buy it themselves and resell it at the higher price. So they set up an algorithm, they tested it, they found that they could tell the difference between low priced houses and what the market rate was. So in 2021 they started buying in property, but the failure was they hadn't tested their algorithm, the AI model they created, on enough data.
“Once they started using this in the real world – with many, many things that were different from what the machine learning model had been trained on – it failed horribly. They lost $300 million in the space of just a few months and they wiped $9 billion off their share price.”
Understanding what AI is
Much of Sue’s work now involves educating organisations to prevent exactly those kinds of horror stories.
“When I started out, I thought I would spend most of my time writing policies for organisations, helping them figure out what their governance of AI should look like, because that was my background.
“But some research in 2022 showed 58% of boards had nobody on them who understood what AI was, and 91% of organisations had no controls on their AI use.
“That was the spur for me to say, OK, I need to go upstream and not just talk about governance but actually give people the fundamentals to understand what AI is; how does it work? What are the opportunities? What are the risks? Then help them work on what they're going to use it for.
“Quite often when I do sessions like the one for UKSPA at Rothamsted, you'll get people in the audience saying ‘Tell me the one AI tool I need to have in my business tomorrow’. And the answer is, it depends.
“What is your business? How is your data organised? What problems are you trying to solve? There is no single magic bullet that everybody needs to have; that's why it's exciting working with groups and helping tailor the solutions to fit exactly what they need.”
Another common issue is organisations rushing to make use of AI without understanding what it is and what it does.
“There’s this fear that you're missing out on something and that's driving people too. I've seen examples of where, for example, a chief executive has said: ‘We need five AI applications up and running by the end of 2025.’ It's a real example from somebody in one of my sessions. I asked why? What problems are you trying to solve? And he said: ‘No idea. That's just what the chief executive said he wants, and we don't know where to go with this. Help us.’
“So that's fairly typical. I've co-authored a recent report for IT leaders, and the main reason they are going for Generative AI is simply that pressure, that fear of falling behind, rather than a clear understanding of the benefits they're trying to achieve.
“My message is always to move away from responding to hype and fear. The focus should be on adopting AI tools because of the value they're actually going to create for the organisation.”
AI maturity
Another area of discussion is identifying where organisations may be in terms of their ‘AI maturity’.
“That tends to be a wake-up call for people. How do they move forward, what do they do next? I provide an understanding of what AI is all about and then we talk about good governance, because no one wants to pull a Zillow.
“We talk about how you understand the risks. How do you put good governance in place? And fundamentally, what is responsible AI? What does that mean? What does it look like? How can we get it right in our organisations from the start?”
Away from her AI work Sue is Chair of North Somerset Environment Company, the waste and recycling company for the region, wholly owned by North Somerset Council.
“My kids call me the bin lady. It’s very practical and keeps me very grounded.
“And then I'm also two days a week Professor in Practice for AI and Digital Technologies at the University of Bristol. So that is really fascinating and a role I only started at the end of April. It's putting rocket fuel into that mission to inspire more people about using AI responsibly.”
When it comes to the ethical use of AI, the biggest issue often involves job losses. The new field of sophisticated generative AI is built on large language model (LLM) technology. According to a 2023 report from Goldman Sachs, LLMs’ ability to increase productivity by writing and analysing text could result in the automation of 300 million full-time roles across the world’s major economies.
Sue Turner said: “If a young person says to me: ‘What sort of job should I avoid for the future?’, then my answer to them is anything where you are collating information manually. So pulling different bits of spreadsheets together and passing it onwards and upwards through the organisation. If you're the person who's organising the AI to do that, fantastic. If you're the person doing it manually, using Excel or writing reports, handcrafting them, don't go into those sorts of jobs because they're not going to be around for very long at all.”
While the Government has recognised the importance of AI to the UK economy, Sue sees major gaps in their strategy.
“The broad message to companies and public sector organisations is to use AI; it's wonderful, it'll save you money, it'll make you more productive.
“There's a lot of truth in that, but helping people and organisations understand how they do that is a massive missing piece of the jigsaw. And there are risks. We can't just say all AI is wonderful.
“It can be useful, but we have to use it with wisdom and integrity, otherwise we are potentially making a lot of horrendous mistakes that we will come to regret.”
AI with wisdom and integrity: Read more about the AI Governance mission at aigovernance.co.uk

See all the slides: Click HERE to see all the slides from Sue Turner's presentation at the 2025 UKSPA Summer Conference.