SJUK columnist Philip Ingram MBE asks the question: ChatGPT and associated AI – the future or a threat?
In an open letter in March 2023, Elon Musk, Steve Wozniak and other technology leaders made an extraordinary plea.
They wrote: “We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”
GPT-4 is the newest model behind ChatGPT, an artificial intelligence (AI) driven language model.
PC Guide magazine describes it as follows: “ChatGPT is an AI chatbot developed by OpenAI. The chatbot has a language-based model that the developer fine-tunes for human interaction in a conversational manner.
“Effectively it’s a simulated chatbot primarily designed for customer service; people use it for various other purposes too though. These range from writing essays to drafting business plans, to generating code.”
Google, not wanting to be left behind, is releasing its own AI-driven language model, called Bard; ChatGPT itself was created by US start-up OpenAI and is backed by Microsoft.
The open letter went on to define the risk by saying: “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.
“As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
David ‘C’ and Paul ‘J’ from the UK’s National Cyber Security Centre were quick to examine the issues, writing: “Large language models (LLMs) and AI chatbots have captured the world’s interest, ignited by the release of ChatGPT in late 2022 and the ease of querying it provides.
“It’s now one of the fastest growing consumer applications ever, and its popularity is leading many competitors to develop their own services and models, or to rapidly deploy those that they’ve been developing internally. As with any emerging technology, there’s always concern around what this means for security.”
They concluded: “There are undoubtedly risks involved in the unfettered use of public LLMs. Individuals and organisations should take great care with the data they choose to submit in prompts. You should ensure that those who want to experiment with LLMs are able to, but in a way that doesn’t place organisational data at risk.” However, their comments were written before the Musk and Wozniak letter.
So, what are the issues with AI, ChatGPT and other similar language-based, AI-driven chatbots? The obvious place to start was to ask ChatGPT itself. The response, returned within a few seconds, said: “There are several risks associated with AI and ChatGPT, including:
1. Bias: AI and ChatGPT can perpetuate biases that are present in the data they are trained on. This can result in unfair or discriminatory outcomes, particularly when the AI is used to make decisions that affect people’s lives.
2. Misinformation: AI and ChatGPT can be used to spread misinformation and disinformation. This can be particularly dangerous when it comes to public health or political issues.
3. Security: AI and ChatGPT can be vulnerable to hacking and other security breaches. If a malicious actor gains access to an AI system, they could use it to spread malware or launch other attacks.
4. Privacy: AI and ChatGPT can collect and store vast amounts of personal data, raising concerns about privacy and data protection.
5. Unemployment: AI and ChatGPT can automate jobs, potentially leading to mass unemployment in some sectors.
6. Ethical concerns: There are a number of ethical concerns surrounding AI and ChatGPT, particularly when it comes to issues like autonomous weapons, facial recognition, and privacy.
It’s important to recognise these risks and work to mitigate them as much as possible, while also continuing to develop and use AI and ChatGPT in responsible and ethical ways.”
Interestingly, when ChatGPT was asked about Bard, it didn’t recognise it because it was so new, highlighting that the data sets currently used are not “live” and completely up to date.
Talking to Security Journal UK, Lisa Forte said: “I think my main concern is that in an era of disinformation and conspiracy theories, some posing a serious risk to national security and democracy itself, it worries me that these tools won’t be differentiating between fact and potential disinformation.
“Bard is slightly safer in this respect because it will provide you with the sources of information it has used so you can actually check where it is pulling its answer from.
“This was a responsible move by Google. The concerns around where the data is going, the ethics behind it, how easy they may be to manipulate are all valid concerns too.”
Not content with merely being concerned, Italy became the first Western country to ban ChatGPT. The Italian watchdog said that not only would it block OpenAI’s chatbot, but it would also investigate whether the service complied with the General Data Protection Regulation (GDPR).
This was triggered by a data breach involving user conversations and payment information. The watchdog said there was no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform”.
It went on to raise concerns around age verification of users.
There is a degree of emotion around new technology, and ChatGPT contributed to that when it suggested the potential for “mass unemployment in some sectors” but failed to balance this against the potential for massive growth in the use of these tools across many sectors, which could create new employment.
What is certain is that this is an exciting time for new language applications enabled by AI and, with Google and Microsoft leading the way, there is clearly huge commercial potential.
As with all new technologies when it comes to risks, “we don’t know what we don’t know,” so it will take time for a real understanding to evolve.