How do we find a middle path between the idealised potential and the messy reality of government using data and AI to shape public policymaking?
Even where the underlying data is accurate, legitimate and relevant, turning information into data flattens richness and lived experience.
We should be cautious about seeing datasets as proxies for truth and reality, as there are gaps between people’s lived experiences and the data that has been collected. A care organisation collecting data on the number of visits to a vulnerable patient may have little understanding of the dignity, respect and safety felt by that patient.
Replacing existing consultation analysis with this Consult tool would save thousands of hours of work. However, this approach privileges quantitative over qualitative analysis, reducing the chance of finding a ‘golden nugget’ of data: a compelling quote or story that shifts thinking.
A more fundamental issue (sometimes glossed over by those enthusiastic about headline productivity gains) is that inserting technology into any system changes that system.
Rather than slotting neatly into existing processes, new tools create ripple effects, altering people’s behaviours and expectations around each technology. Professionals may defer to a tool, ignore it completely, or learn how to ‘game’ it to get the desired results.
As Goldman Sachs’ Head of Global Equity Research points out, ‘eighteen months after the introduction of generative AI to the world, not one truly transformative—let alone cost-effective—application has been found.’ Indeed, a recent survey of workers and C-suite executives found that 77% say generative AI has added to their workload, increasing the pressure to work longer hours in order to be more productive.
Where AI does successfully support public sector decision making, we must still carve out resources to complement that data-driven insight with other forms of qualitative, relational and experiential understanding from both publics and professionals.