The UK government is reportedly considering standardised testing for artificial intelligence models used by banks, following regulatory concerns about inadequate oversight of their deployment.

The proposal, first reported by the Financial Times, was put forward last month by Harriet Rees, chief information officer at Starling Bank, and submitted to the Department for Science, Innovation and Technology as policymakers assess how to strengthen safeguards around widely used AI systems. Rees, who also serves as a government financial services AI “champion”, argued that independent evaluation would address gaps in current practices.

Rees said that banks currently rely on their own internal checks without any shared benchmark. “Lots of firms are using [AI models] and we can assume that [they] have done the necessary due diligence and, therefore, hopefully we’re happy. But we’ve not done that independent assessment,” she said.

The proposal follows warnings from the Bank of England’s Prudential Regulation Authority, which told lenders at meetings in October that their monitoring of AI models was “not frequent enough”, according to presentation materials from the sessions. Regulators have increasingly focused on how banks oversee third-party technologies embedded in critical operations.

Rees told the FT that a centralised approach could reduce duplication and establish consistent standards across the sector. “Given our reliance on US models, it would give [the government] the comfort that they’ve at least looked at [the models] and they know that they all are at a certain standard,” she said.

There is currently no legal requirement in the UK for AI models to be assessed before being deployed in regulated industries. While companies such as OpenAI and Anthropic have voluntarily submitted systems for review by the government’s AI Security Institute, these assessments focus on frontier risks rather than routine commercial use in banking.

Rees said an independent testing regime would act as a “fail-safe” rather than replacing firms’ own controls, and cautioned against assigning responsibility to a sector-specific regulator given the cross-industry use of general-purpose AI. She described the AI Security Institute as the “most obvious body” to lead such work and said discussions with its director-general Ollie Ilott had been positive, adding: “They agreed that there was nothing else out there like this today.”

A government spokesperson, however, indicated that ministers are not currently planning to expand the institute’s remit. “The AI Security Institute is focused on frontier-AI security research, and we are not exploring expanding its remit into assurance or any testing of third-party AI models,” the spokesperson told the paper.

