Playback from Digital Leaders: Creating trusted AI services for all
by Luke Marshall-Waterfield, acting Chief Digital Officer for London
Earlier this week I spoke at the Digital Leaders Public Sector Innovation Conference about how we can harness AI in ways that are fair, transparent, secure, and accountable.
At City Hall, we see responsible AI adoption as a key part of addressing some of the biggest challenges facing London and our public services, from tackling climate change to improving transport, health, public safety, policing, and social care. AI is also a major engine for economic growth, and we want to ensure that London’s brilliant tech sector leads the way in using AI ethically and inclusively.
Like any emerging technology, AI carries risks — bias, opacity, and security vulnerabilities among them. That’s why the GLA and our partners like LOTI (London Office of Technology and Innovation) are working to ensure AI supports Londoners in a safe, transparent, and people-centred way.
Here are five of the ways we’re working to make AI in public services work in the interest of all Londoners.
1. Starting with user needs, doing the basics well
If in doubt — go back to basics. AI should serve people, not the other way around. At City Hall we champion user-centred design as the best way to identify the real challenges faced by Londoners and to assess whether AI is the right solution — ensuring it complements human-led services rather than replacing them. The GLA’s approach to data and AI is rooted in public trust, accessibility, and responsiveness to public expectations.
TL;DR: Hire a good user researcher and a data protection expert.
2. Embedding the Emerging Technology Charter and GDS AI Playbook
Our Emerging Technology Charter sets principles for safe, fair, and transparent adoption of new technologies, including AI, across London’s public services. Though we wrote it before the major leap forward in generative AI we’ve seen in the last couple of years, the principles stand up:
- Be Open — Explain how and why AI is being used.
- Respect Diversity — Ensure AI does not reinforce bias and is accessible to all Londoners.
- Be Trustworthy with People’s Data — Maintain high standards of data ethics, transparency and security.
- Be Sustainable — Consider the environmental impact of AI adoption.
This framework guides the GLA’s own AI initiatives and is embedded in much of the work we have done with LOTI to support boroughs and other public sector partners in making responsible technology choices.
Following the publication last month of the AI Playbook for Government, we’re now actively working on embedding its 10 principles in the work of the GLA Group.
3. Supporting transparency
The London Privacy Register — which we launched last year — is a key tool for improving transparency around data-driven projects. It’s a place where London’s public sector can (and should) openly document where and how data is used — including in AI applications — so that residents can understand and scrutinise its impact.
We’re also transparent about where we’re using algorithmic tools for decision-making — publishing to the government’s Algorithmic Transparency Register.
4. Embedding expertise in services through LOTI’s Data Ethics Service
LOTI’s Data Ethics Service helps London boroughs assess and mitigate risks associated with AI and data projects. It provides practical guidance, ethical review processes, and support to ensure that AI initiatives align with Londoners’ values and rights.
5. Engaging with the data community through London Data Week
London Data Week — now in its third year — will bring together data experts, policymakers, and communities to discuss how data and AI can be used for social good. By engaging with London’s diverse communities, we can co-design AI services that are both innovative and inclusive — ensuring AI works for everyone, not just a few.
In an extremely fast-moving field, these are just some of the ways London is leading on responsible AI adoption, ensuring that AI-powered public services work for all Londoners.