EthicAI has launched a cloud-based AI SaaS platform to address ethical issues around the adoption of AI.
The company, which was spun out of the University of Cambridge’s Leverhulme Centre for the Future of Intelligence, said a number of issues are currently preventing AI from realising its true economic potential.
BeehAIve allows firms to test their AI models and datasets against 15 ethical dimensions, assessing evidence across thousands of measurement points to prevent issues such as bias and discrimination.
The platform enables assessment of models and systems against regional and national legislation including the EU AI Act, GDPR, and the Korean AI Act, as well as industry-specific regulations and globally recognised AI standards.
BeehAIve then recommends how any gaps can be filled, which EthicAI said helps firms to prepare for cross-border compliance.
The company added that the platform offers organisations a secure hub for their AI data, ensuring information remains safe and encrypted at all times, and also makes recommendations on ways to reduce the energy consumption of AI.
“Research shows that consumer confidence in AI has fallen by 8% globally over the past five years due to concerns about how AI is designed and built,” said Tanya Goodin, founder and chief executive of EthicAI. “BeehAIve looks under the hood of organisations’ AI models and systems — both in development and post deployment — to identify blind spots and weaknesses and provides actionable insights on how to mitigate and manage risks.”