When city innovators met OpenAI: my Playback…

Chief Digital Officer for London
14 min read · Dec 14, 2023

Last week I attended the OpenAI/Rockefeller Foundation AI to Benefit Humanity: A Global Conversation on City Use Cases in New York. Attendees included representatives from 20 cities, academics, engineers from OpenAI, and policy leadership from the Rockefeller Foundation.

The purpose of the event was to “bring together city, industry, academic, and philanthropic leaders to explore how to safely test and deploy artificial intelligence (AI) systems to benefit vulnerable and underserved communities across 20 globally representative cities.”

It was conducted under the Chatham House Rule and, for transparency, cities were the guests of OpenAI. The following playback is not a comprehensive view of AI and contains my reflections on the conversation only.

The two-day conference started with presentations and panels from various cities, outlining their broad approaches to Artificial Intelligence (AI), their deployments and their more recent experimentation with Generative AI (GenAI), Large Language Models (LLMs) and chatbots. It moved on to a technical explanation of LLMs and frameworks from engineers, and then to structured workshops with cities to brainstorm potential deployments.

I draw a series of conclusions from this stimulating discussion at the end of this piece. By way of précis: cities are already on a journey with AI, and have started to experiment with GenAI use cases. Good city data management is crucial but, fundamentally, successful roll-out of GenAI in city services will rely heavily on human involvement: as users, digital/innovation teams, service providers and elected representatives. GenAI should be treated as a service (not a 'thing'), necessitating the kind of design, iteration and adaptation used in digital transformation and open innovation to guide development and ensure quality and public trust.

AI: the view from cities so far

Singapore — AI strategy 2.0 and deployment

This year, Singapore published a national AI Strategy 2.0 (superseding its first AI strategy in 2019) including an approach to the deployment of Generative AI. Singapore’s thought-leadership (which derives from its unique position and enviable powers as both a city-region and a nation) extends to the recently published wider SCAI Questions for AI use, the result of deliberations between government, academia and industry.

For Singapore’s Innovation team, there’s been a strong initial focus on using GenAI for internal public service productivity. Rather than setting down a priori risk regulations on GenAI, Singapore has adopted a permissive, rules-based environment where use cases are quickly triaged by experts, allowing innovation as long as core red lines on data protection and cyber-risk are not crossed. The experience gained has created a useful body of knowledge, enabling deliberation and deployment of future use cases at far greater speed.

New York City — Mature AI Action Plan, Register and new Chatbot

New York City’s expansive AI Action Plan focuses on the city’s public services and covers responsible use, procurement and deployment of AI across agencies. Outside of Singapore, it probably represents the most comprehensive approach to AI deployment in public services, and it has been accompanied by a substantial consolidation of digital, data and technology functions in the New York Office of Technology and Innovation. In addition, city services have a legal transparency obligation to report algorithms to the city’s algorithm directory, established in 2019.

New York’s MyCityChatbot, trained on city regulations and guidance, currently unifies the city’s small-business support services to assist businesses with queries. In the future, other city services are expected to be onboarded.

London — Data for London, AI use cases and participation

London is making substantial investment in preparing for AI. The focus on better city data (a prerequisite for effective digital, data and ‘training’ AI services) involves a new city data platform (2024) and improved collective data governance. The proposed Data for London library acts as a digital service to publish open data and identify useful datasets held by public, private, academic and civil society data holders across the city. In doing so it eases the sharing of data for projects and services, including AI. Data for London is supported by the London Office of Technology and Innovation team at London Councils (the association of London’s 33 boroughs), which is providing vital work in the areas of responsible AI and, more broadly, data ethics, public engagement and guidance on Generative AI use. London also adopted its Emerging Technology Charter in 2021, to guide the trialling and deployment of smart city technologies.

Reflecting the UK government’s hands-off regime of the last five years, various forms of AI have been funded, tested and used in different areas like local transportation, policing and local government. Transport for London’s Roadlab programme has experimented with AI to predict the congestion effects, emission increases and safety impacts of planned roadworks. Currently, TfL is collaborating with Google Maps to enhance their algorithm, making cycling safer and creating a more engaging journey planner. The Metropolitan Police Service introduced Live Facial Recognition (LFR) in 2019 and Retrospective Facial Recognition (RFR) technology, using computer vision for identification in specific crimes. At a local level, many specific AI applications, focusing on issues like fly-tipping, noise pollution and flooding risk, have been tested and expanded through the South London Partnership’s InnOvate programme.

Like other cities globally, we are beginning to see experimentation with Generative AI in local government, per this 2023 study by LOTI. As elsewhere, this focuses on internal productivity, or on integration into an existing service such as casework assistance and chatbots in housing or customer services. There’s also been an emphasis on explainability, where we are seeing innovative steps: Camden council explains to citizens here how AI models can be used to understand resident sentiment from various sources. The Greater London Authority and LOTI also worked together to deliver London Data Week in 2023, a series of 30 public events looking at data and AI, including this workshop on how to visualise AI processes in a responsible way.

Dublin: design, innovation and the near future

Over the past 5 years Smart Dublin has been cementing a reputation for best-in-class urban innovation. Dublin’s approach established a series of Smart Districts specialising in innovation around 5G connectivity, health, community engagement and mobility. Under the banner of ‘test, trial, learn’, Dublin adopts open innovation methods between local government, academics, businesses and the local community. The city also outlined its deep commitment to citizen engagement in emerging technology deployment, through its Academy of the Near Future programme.

As I suggest in the conclusions/observations below, the skillset of this latest generation of smart city teams (collaboration, innovation methods, participation) is a transferable starting point for new city GenAI innovation.

LLMs and Retrieval Augmented Generation (RAG) framework

Participants were given a high-level briefing from OpenAI engineers on how LLMs work via the RAG framework. In simple terms, LLMs are smart tools that understand and work with language (‘text’). They can do things like customising, writing, answering questions, coding and performing tasks. To make them even better, we can give them context and instructions, known as prompts.

Through the RAG framework, users guide the model to fetch information from specific sources, making its responses more accurate by using data not originally included in its training or settings. For example, if we wanted to apply a RAG system to urban planning in London, it could examine documents like the London Plan, boroughs’ Local Plans, past decisions and data from the Planning Datahub (all open data) to generate precise responses. This extra information is used to enhance the model’s internal knowledge base.
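The retrieval step can be sketched in miniature. This is a toy illustration, not OpenAI's implementation: the documents, the word-overlap scoring and the prompt template below are all invented for the example, and a production RAG system would use vector embeddings and an LLM API call to generate the final answer.

```python
# Minimal sketch of the "retrieval" half of RAG: given a question,
# find the most relevant document and place it into the prompt as context.
# Word-overlap scoring is a toy stand-in for embedding similarity.

def score(question: str, document: str) -> int:
    """Count shared words between question and document (toy relevance score)."""
    return len(set(question.lower().split()) & set(document.lower().split()))

def build_prompt(question: str, documents: list[str], top_k: int = 1) -> str:
    """Retrieve the top_k most relevant documents and prepend them as context."""
    ranked = sorted(documents, key=lambda d: score(question, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Use only this context to answer.\nContext: {context}\nQuestion: {question}"

docs = [
    "The London Plan sets the strategic framework for housing growth.",
    "The Planning Datahub publishes open data on planning applications.",
]
prompt = build_prompt("Where can I find open data on planning applications?", docs)
print(prompt)
```

The key design point is that the model is steered toward an authoritative local source rather than relying on whatever it absorbed in training.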

To make LLMs work well, it’s crucial to carefully choose the prompts, data and other sources they use. However, we were cautioned that these models are not perfect. Sometimes they might give overly-confident but incorrect answers (hallucinations). While we can make them more reliable, right now it’s essential for users to be aware that mistakes can happen and be ready to correct them.

The team at OpenAI suggests an iterative approach to product development by using a ‘cookbook’ of techniques. In practice this means evaluating what the model comes up with (is this a ‘good’ or a ‘bad’ response?), refining the instructions, updating the documentation but also encouraging curiosity. This helps the system become more discerning and develop a range of responses that make sense to the user.
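That evaluate-and-refine loop can be sketched as follows. The test cases and grading rules are invented for illustration: each model response is graded against phrases it must include and known errors it must avoid, and the failing cases point at which prompts, instructions or source documents to refine next.

```python
# Sketch of the evaluation step in an iterative 'cookbook' workflow:
# grade each response as 'good' or 'bad' against simple expectations.

def evaluate(response: str, must_include: list[str], must_avoid: list[str]) -> bool:
    """A response passes if it contains required facts and avoids known errors."""
    has_required = all(p.lower() in response.lower() for p in must_include)
    avoids_errors = not any(p.lower() in response.lower() for p in must_avoid)
    return has_required and avoids_errors

# (response, phrases it must include, phrases it must avoid) - illustrative cases
eval_cases = [
    ("Recycling is collected on Tuesdays.", ["Tuesday"], ["Monday"]),
    ("Bins are emptied on Monday.", ["Tuesday"], ["Monday"]),
]
results = [evaluate(resp, inc, avoid) for resp, inc, avoid in eval_cases]
print(results)  # a failing case signals a prompt or source document to refine
```

In practice the graded cases would accumulate into a regression suite, so each refinement to prompts or documentation can be checked against everything that previously worked.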

Similar to how we constantly improve the responsiveness of digital services to citizens, ongoing investment in developing these models is necessary. In a democratic setting this also poses questions about formal oversight by our elected representatives (what rules do we put in place to ensure integrity? What is our risk appetite?) and public servants (how these rules are implemented).

Potential uses for Generative AI, LLMs and Chatbots in cities

On Day 2, workshops between city representatives, OpenAI engineers and the Rockefeller Foundation turned to discussing practical and what-if city use cases. The sessions were themed around citizen/customer engagement; organisational efficiency; data and governance; and urban planning.

Administrative tasks

Could chatbots be used to simplify legal requirements, such as acknowledging and processing a complaint or dealing with a Freedom of Information request? The UK’s Department for Transport, for example, produced a study on the use of AI in consultations and responses, concluding that “the public are receptive to AI’s use in saving time and driving efficiencies, although there is recognition that the benefits of speed can be lost if there isn’t a focus on accuracy.” However, “some participants remain highly sceptical, particularly using AI to draft external correspondence which is perceived to require more empathy.” On the other hand, colleagues from Singapore cited examples where AI can be used to assess existing (human) responses to citizens from customer service interactions, to understand and improve courtesy and responsiveness.

Policy-making and other city processes

“What if someone had already figured out the answers to the world’s most pressing policy problems,” asked Christopher Ingraham in the Washington Post, “but those solutions were buried deep in a PDF, somewhere where nobody will ever read them?” While policy development is not quite as blunt as ‘finding a hidden solution’, existing practices often rely on teams conducting wide discovery of relevant documents, data or other evidence. This can be a time-consuming and assumption-laden process.

LLMs can be useful in understanding a specific urban challenge by bringing together city reports, civil society or business evidence and other data which would otherwise take extensive time and resources to assemble. While the solution itself may not be an AI, the ability of an LLM to present findings quickly and more comprehensively may provide the starting point for further policy discussion. Here, context is king: guidance should outline how and when a search was used, with caveats on completeness and reliability. As a tool to augment policy-making or deliberation, however, an LLM could be beneficial.

How a city reports its performance to itself and to citizens using a chatbot or model needs careful design (and may be some way off) but could transform current ‘broadcast’ practice. Equally, rich data flows on grants, contracts and commissioning could be improved substantially.


Common areas of chatbot experimentation are in specific or general casework, especially in service areas where information is fragmented or hard to come by. For example, chatbots supporting caseworkers in triaging homeless individuals can simplify communication, offering real-time information (e.g. shelter availability) or guiding the initial stages of the triage process. Time saved on administration could be redeployed for more one-to-one contact with clients, or to generally reduce form-filling in a complex administrative environment. In the UK, one London borough is currently scoping how a chatbot could be used with the social enterprise BEAM.

Direct-to-client chatbots may be of extra assistance to first-time homeless people, who may have no idea what resources are open to them or rely solely on word of mouth. By the same token, a sudden rise in homeless clients with particular needs, for example refugees or (as cited by one U.S. city) an increase in older homeless clients, will require different pathways, which can then be incorporated into the AI. However, the sensitivity of this area requires close involvement of users in service design and a rigorous check against bias. (On the flip side, it was noted, if we suspect pre-AI systems are already biased, could we train AI to help us understand more about the nature and incidence of that bias?)

Remote support

Farmer CHAT in India answers agricultural questions and gives personalised advice to farmers, who can upload and receive images or audio, for example to identify blight or improve practices. Buenos Aires runs a chatbot channel on WhatsApp called BOTI, which now integrates over 100 city services. BOTI’s usage rocketed during the pandemic as a resilience tool (vaccine appointments) and now prototypes are being developed in tourism, considered an area of ‘low risk’, where the aim is to create a personal tourism assistant using ChatGPT. This promises a curated data feed to enable personal experience searches (e.g. “I am a vegetarian looking for places to eat”). Learning from early iterations of the chatbot led the city’s digital team to invest further in risk management steps such as content filtering, limiting user characters per question and blocking. While BOTI’s tourism prototype is developmental, Buenos Aires’ ongoing experience iterating its chatbot channel is clearly instructive, illustrating how growing confidence and learning can lead to further innovation.
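The risk management steps cited for BOTI (content filtering, character limits, blocking) amount to an admission check before a message ever reaches the model. The sketch below illustrates the idea; the limit, term list and user IDs are invented for the example and are not Buenos Aires' actual configuration.

```python
# Sketch of pre-model guardrails for a city chatbot: cap question length,
# filter out-of-scope topics, and reject messages from blocked users.
# All values below are illustrative assumptions.

MAX_CHARS = 280                        # assumed per-question character cap
BLOCKED_TERMS = {"medical", "legal"}   # topics routed away from a tourism bot
BLOCKED_USERS = {"user-123"}           # hypothetical blocklist

def admit(user_id: str, message: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an incoming chatbot message."""
    if user_id in BLOCKED_USERS:
        return False, "user blocked"
    if len(message) > MAX_CHARS:
        return False, "message too long"
    if any(term in message.lower() for term in BLOCKED_TERMS):
        return False, "topic filtered"
    return True, "ok"

print(admit("user-456", "I am a vegetarian looking for places to eat"))
```

Checks like these are cheap, deterministic and easy to audit, which is why they tend to sit in front of the model rather than relying on the model to police itself.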


For disabled people technology can be “the difference between total exclusion and significant inclusion”, an invited civil society speaker told us. Voice-activated chatbots can cater to individuals with visual impairments, allowing them to interact audibly. Moreover, chatbots can serve as valuable tools for managing daily activities, fostering independence, and enhancing the overall quality of life for people with disabilities. However, users should be at the heart of service design, providing lived experience to steer the service rather than risk replicating failures of existing approaches in a new form.

Spatial planning

A potential use case for LLMs in urban planning lies in their ability to process and generate vast amounts of unstructured text and information efficiently. LLMs can assist in parsing and analysing diverse datasets, including urban development reports, citizen feedback and policy documents, providing valuable insights to city planners. These models can aid in natural language understanding, facilitating effective communication between planners and stakeholders, and can contribute to the creation of more accessible and comprehensible public information. Additionally, LLMs can be used to draft reports, generate summaries, and even assist in formulating policies, streamlining administrative tasks. While recognising the challenges, such as the need for ongoing development and considerations for reliability, LLMs present an innovative tool for enhancing the efficiency and effectiveness of urban planning processes.

My 10 reflections for cities…

It’s about humans. GenAI brings great collaboration opportunities but ultimately relies on humans. All good services need to be designed with users, and human artistry is required to prompt, refine, source and oversee. So GenAI is part of collaboration, not a proxy for it.

Supporting innovation. We heard about LLMs’ promise to transform productivity in language/text-related tasks, falling into two main areas: (1) daily work augmentation, including summarising and note-taking, and (2) designing bots or applying models for specific services. As cities experiment with use cases, they will gradually gain expertise and confidence. National governments should fund and support local government innovation, including sandboxes to design and test new applications.

Start with resilient use cases. Experimentation with new GenAI systems in cities should start with narrow, resilient use cases, meaning application in low-risk services (where errors do not have big real-world implications) where humans can identify unreliability and easily instruct or condition improvements through prompts, better data or other context.

Data stewardship plays a crucial role in starting the AI journey and improving existing digital and data services. Cities should conduct full data maturity assessments, invest in data capabilities and governance, and address the technology legacy of public service IT which often locks-in or poorly stewards existing data. Explaining (or developing) trust, cyber risk and privacy frameworks is also vital.

Models can enhance existing processes, including data cleansing, and are not just stand-alone. For example, a fly-tipping (illegal dumping of waste) enforcement process could incorporate a more citizen-friendly reporting interface through a Chatbot. LLMs can clean data, often a barrier to data projects, but it will be essential to combine them with other methods and human oversight. Data cleansing typically demands expertise in the relevant field, an understanding of the context, and a careful manual review to ensure informed decisions and uphold data integrity.

City open data platforms? Many cities (and governments) publish large amounts of open data on demographics, place and performance. Designing a ChatGPT-like system for interacting with city open data is technically possible, but it requires significant expertise, which cities will lack. It is also possible for third-party systems to be created over that data, and these have the potential to generate misleading narratives or manipulate data, which could damage the city’s image or lead to false conclusions. Non-city LLMs may also lack understanding of local factors, contributing to inaccurate analyses.

Organising digital, data, technology &amp; AI. Some cities are reorganising their digital, data, and technology (known as DDaT) functions, forming new teams and adopting new strategies. Cities are at different stages in their strategies and deployment of Artificial Intelligence, but specific capability will need to be grown in the DDaT area for GenAI. Because of the importance of prompts and context in the development of systems, this needs to be integrated from the content-expert side in policy or delivery areas. Clarity needs to be established, service by service, on who ‘owns’ an AI. This is a collaboration question: if a service team views the system as ‘IT’ it might not exercise the necessary controls; equally, if the DDaT (AI) function is too remote from development, the service may lack essential expertise, especially if the system is procured.

GenAI should be treated as-a-service, not a one-off. As such it will require design, iteration, testing and experimentation. Public services will need to adapt existing frameworks, such as the UK (digital) service standard and the emerging data service standard, to guide development and ensure a focus on user needs, working in the open and other Agile approaches. To accommodate the ongoing changes that successful deployments may require, GenAI may face friction similar to that which current digital transformation bids face against traditional public sector accounting rules and cultures.

The UK government’s service standard guides and governs new digital services, and is a good framework to start from when considering what other measures are needed for GenAI systems.

Transferable design skills. The adoption of Internet of Things (IoT) and 5G use cases by local government innovation and digital teams provides valuable initial expertise and tested methods like (a) user-centred service design and design-thinking more generally (b) collaboration with domain-experts on problem definition.

New forms of democratic oversight and trust frameworks are essential, as LLMs can process data in any format, posing both advantages and risks. While they enable access to previously inaccessible data, inadequate scrutiny of inputs can lead to unreliable results. Cities can also learn from each other on user design, public participation and the explainability of new technologies. Cities will also be at the forefront of testing and interpreting new legislation, for example the impact of the EU AI Act and national legislation.

Networks: There are growing city-to-city networks to share experience and expertise. Cities can discuss use cases via Bloomberg Philanthropies’ City Connect network, set up in October 2023. The Cities Coalition for Digital Rights (CCDR) also provides a forum for many European and North American cities to share good practice on smart cities, artificial intelligence and data ethics. An associated initiative to the CCDR is the Global Observatory of Urban Artificial Intelligence, based in Barcelona. The London Office of Technology and Innovation is an example of city-regional sharing, with resources like their Guidance on Generative AI open to all. It is important, though, that engagement with other cities doesn’t just celebrate successes but learns from the ‘book of failure’ as well.




@LDN_CDO &amp; Data for London Board @MayorofLondon using data to support a fairer, safer and greener city for everyone