The UK government has announced a new initiative to develop AI-powered technology aimed at predicting and preventing crime.

The project will create an interactive crime map covering England and Wales, using data from police, councils and social services to identify where crime is likely to occur and to spot early signs of anti-social behaviour.

Teams from business, universities and other sectors will work together under the Concentrations of Crime Data Challenge. The aim is to deliver a fully operational system by 2030, as part of the government’s £500 million R&D Missions Accelerator Programme. An initial investment of £4 million will support the delivery of prototypes by April 2026.

The technology will focus on crimes that make people feel unsafe in their neighbourhoods, including theft, anti-social behaviour, knife crime and violent crime. The government’s stated goal is to halve knife crime and violence against women and girls within the next decade.

Science and Technology Secretary Peter Kyle said the initiative is designed to provide police with the intelligence needed to prevent crime, rather than react to incidents after they happen. The project builds on existing Home Office work, such as mapping knife crime hotspots and the Safer Streets Initiative.

The Safer Streets Mission also aims to expand neighbourhood policing, with 13,000 additional officers, police community support officers (PCSOs) and special constables planned for local roles. Each neighbourhood will have a named, contactable officer to address local issues.

Representatives from organisations including Neighbourhood Watch, The Ben Kinsella Trust, Resolve, techUK and St Giles Trust have welcomed the announcement, highlighting the potential benefits of data sharing, early intervention and targeted resource allocation. Some have also noted the importance of maintaining ethical standards and avoiding unfair profiling.

Further challenges under the R&D Missions Accelerator Programme will address other areas such as energy, healthcare and economic opportunity.

While the government is investing heavily in AI initiatives across civil society, sceptics have warned of potential overreach.

Civil liberties group Big Brother Watch has been a prominent critic of predictive policing and AI-driven surveillance. The organisation argues that such systems risk undermining the presumption of innocence, expanding state monitoring and embedding bias from historic policing data. In a blog post published in April this year, the group described the prospect of such a plan as a “frightening expansion” of surveillance powers, warning that predictive tools could lead to pre-emptive interventions against people who have committed no crime. It has also raised concerns about the lack of specific legislation authorising these technologies, the potential for misidentification, and the risk of criminalising innocent citizens.

Other independent reviews and ethics committees have questioned the reliability and legitimacy of predictive policing, highlighting the need for clear legal safeguards and public debate before further deployment.

Baroness Shami Chakrabarti, a former director of Liberty and a Labour peer, has also voiced concerns about the government’s expansion of AI-led policing.

In comments to the BBC this week, she described facial recognition and predictive technologies as “incredibly intrusive” and warned that their deployment moves the UK closer to “a total surveillance society.” She raised issues around privacy, freedom of assembly and the potential for false matches, noting that the technology has so far been used “completely outside the law,” with police forces setting their own rules in the absence of specific legislation.

Baroness Chakrabarti welcomed the government’s consultation on safeguards but emphasised the urgent need for robust legal frameworks before further rollout of these systems.
