The government has set out plans to regulate the “responsible use” of artificial intelligence (AI).
It said that in 2022 AI contributed £3.7bn to the UK economy, performing tasks in everyday life without human input. Such systems include chatbots that can interpret questions and respond with human-like answers, as well as tools that pick out objects or people in images.
But observers fear its rapid growth could be exploited by malicious actors.
While much of AI’s potential can be put to good use, delivering real social and economic benefits for people, the technology could also be used to spread misinformation.
According to the BBC, a new white paper from the Department for Science, Innovation and Technology proposes rules for general-purpose AI, including, for example, the systems that underpin the chatbot ChatGPT.
The BBC reports: “As AI continues developing rapidly, questions have been raised about the future risks it could pose to people’s privacy, their human rights or their safety.
“There is concern that AI can display biases against particular groups if trained on large datasets scraped from the internet which can include racist, sexist and other undesirable material.”
Instead of giving responsibility for AI governance to a new single regulator, the government wants existing regulators – such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority – to come up with their own approaches that suit the way AI is actually being used in their sectors.
These regulators will rely on existing laws rather than being given new powers.
Michael Birtwistle, associate director at the Ada Lovelace Institute, which carries out independent research, said: “Initially, the proposals in the white paper will lack any statutory footing. This means no new legal obligations on regulators, developers or users of AI systems, with the prospect of only a minimal duty on regulators in future.
“The UK will also struggle to effectively regulate different uses of AI across sectors without substantial investment in its existing regulators.”
The white paper outlines five principles that the regulators should consider to enable the safe and innovative use of AI in the industries they monitor:
• Safety, security and robustness
• Transparency and “explainability”
• Fairness
• Accountability and governance
• Contestability and redress
Over the next year, regulators will issue practical guidance to organisations to set out how to implement these principles in their sectors.
Technology secretary Michelle Donelan MP said: “Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.”
But Simon Elliott, partner at law firm Dentons, told the BBC the government’s approach was “light-touch”, making the UK “an outlier” against global trends in AI regulation.
Lila Ibrahim, Chief Operating Officer of DeepMind and UK AI Council member, said: “AI has the potential to advance science and benefit humanity in numerous ways, from combating climate change to better understanding and treating diseases.
“This transformative technology can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly.
“The UK’s proposed context-driven approach will help regulation keep pace with the development of AI, support innovation and mitigate future risks.”
Grazia Vittadini, Chief Technology Officer, Rolls-Royce, said: “Both our business and our customers will benefit from agile, context-driven AI regulation.
“It will enable us to continue to lead the technical and quality assurance innovations for safety-critical industrial AI applications, while remaining compliant with the standards of integrity, responsibility and trust that society demands from AI developers.”
Sue Daley, Director for Tech and Innovation at techUK, said: “techUK welcomes the much-anticipated publication of the UK’s AI White Paper and supports its plans for a context-specific, principle-based approach to governing AI that promotes innovation.
“The government must now prioritise building the necessary regulatory capacity, expertise, and coordination.”