AI is a double-edged sword, says Philip Ingram MBE.
Artificial Intelligence (AI) has become a buzzword in recent years, revolutionising various industries and transforming the way we live and work. AI refers to the development of computer systems that can perform tasks that typically require human intelligence.
From voice assistants like Siri and Alexa to self-driving cars and personalised recommendations on streaming platforms, AI has permeated our daily lives in numerous ways.
AI can be broadly categorised into two types: narrow AI and general AI. Narrow AI is designed to perform specific tasks, such as image recognition or natural language processing, while general AI aims to replicate human intelligence across a broad range of tasks. The applications of AI are far-reaching, spanning sectors like healthcare, finance, transportation, security and entertainment.
AI in application
In healthcare, AI has the potential to revolutionise diagnostics, enabling early detection of diseases and improving patient outcomes. AI-powered algorithms can analyse medical images, such as X-rays and MRIs, and in some studies have matched or exceeded human specialists for speed and accuracy.
In finance, AI has transformed the way transactions are processed and analysed. AI algorithms can detect patterns in large datasets, enabling fraud detection and prevention. Additionally, AI-based virtual assistants can provide personalised financial advice and help customers make informed investment decisions.
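The pattern-detection idea behind fraud screening can be illustrated with a deliberately simple sketch. Real systems use far richer features and trained models; the statistical threshold, transaction amounts and function names here are hypothetical, chosen only to show the principle of flagging outliers in a batch of transactions.

```python
# Toy illustration of anomaly detection: flag transactions that sit far
# from the typical spending pattern. Real fraud engines are vastly more
# sophisticated; the 2-sigma threshold here is an illustrative choice.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return amounts more than `threshold` standard deviations
    from the mean of the batch."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [a for a in amounts if abs(a - mu) > threshold * sigma]

# Six ordinary card payments and one suspicious outlier.
transactions = [12.50, 9.99, 14.20, 11.75, 10.40, 13.10, 950.00]
print(flag_anomalies(transactions))  # only the 950.00 payment is flagged
```

In practice the "pattern" is learned from millions of historical transactions rather than a single batch, but the underlying question is the same: does this event deviate from established behaviour?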
In security, according to the IBM report “AI and automation for cybersecurity” by Dr. Sridhar Muppidi and Gerald Parham: “As cyberattacks grow in volume and complexity, artificial intelligence (AI) is helping under-resourced security operations analysts stay ahead of threats. Curating threat intelligence from millions of research papers, blogs and news stories, AI technologies like machine learning and natural language processing provide rapid insights to cut through the noise of daily alerts, drastically reducing response times.”
It seems the potential AI offers is not just theoretical. The IBM Institute for Business Value (IBV) partnered with APQC (American Productivity and Quality Center) to survey 1,000 executives with overall responsibility for their organisation’s IT and operational technology (OT) cybersecurity systems. Respondents described their initiatives to use AI technology to support security operations and manage protection, prevention, detection and response processes. Overall, most executives globally and across industries are adopting, or considering adopting, AI as a security tool: “64% of respondents have implemented AI for security capabilities and 29% are evaluating implementation.”
Deep learning, a subset of AI, has played a pivotal role in the recent advancements of AI. It is a branch of machine learning that focuses on training artificial neural networks to learn and make decisions without explicit programming. Deep learning algorithms are inspired by the structure and function of the human brain, enabling machines to process vast amounts of data and recognise complex patterns.
As cyber threats continue to evolve, deep learning will play an increasingly vital role in enhancing cybersecurity measures. The ability of deep learning algorithms to adapt and learn from new patterns and behaviours makes them invaluable in combating emerging threats. By leveraging deep learning frameworks and platforms, organisations can stay one step ahead of cybercriminals and protect their valuable data and systems.
One of the breakthroughs in deep learning is the development of deep neural networks called convolutional neural networks (CNNs). CNNs have revolutionised computer vision, enabling accurate image recognition, object detection, and even facial recognition. These advancements have paved the way for applications such as autonomous vehicles, surveillance systems, and medical imaging analysis.
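The core operation that gives CNNs their power is convolution: sliding a small filter across an image to produce a map of where a pattern occurs. The sketch below implements that operation in plain Python; production systems use libraries such as PyTorch or TensorFlow, and the tiny image and edge-detecting kernel here are illustrative values only.

```python
# Minimal 2D convolution (no padding, stride 1), the building block of a
# CNN. The kernel below responds to vertical edges: it outputs a high
# value wherever pixel intensity jumps from left to right.

def convolve2d(image, kernel):
    """Slide `kernel` over `image` and return the feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + m][j + n] * kernel[m][n]
                for m in range(kh) for n in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A 4x4 "image" containing a vertical edge between columns 1 and 2.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 1],
    [-1, 1],
]
print(convolve2d(image, kernel))  # high responses mark the edge column
```

A trained CNN stacks many such filters in layers, learning the kernel values from data rather than hand-coding them, which is how it comes to recognise faces, vehicles or tumours rather than simple edges.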
Another significant advancement in deep learning is the use of recurrent neural networks (RNNs) for natural language processing. RNNs can understand and generate human-like text, leading to improvements in chatbots, language translation, and voice assistants.
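What distinguishes an RNN from an ordinary network is its hidden state, which carries context from earlier inputs forward through a sequence. The single-unit sketch below shows the mechanism; the weights are fixed toy values, not trained ones, so it illustrates the recurrence rather than any real language model.

```python
import math

# One recurrent unit: each step combines the new input with the previous
# hidden state, so the final state depends on the *order* of the inputs,
# which is exactly what sequence models of text need.

def rnn_step(x, h, w_x=0.5, w_h=0.8, b=0.0):
    """Compute the new hidden state from input x and previous state h."""
    return math.tanh(w_x * x + w_h * h + b)

def run_sequence(inputs):
    h = 0.0  # hidden state starts empty
    for x in inputs:
        h = rnn_step(x, h)
    return h

# The same inputs in a different order leave a different hidden state,
# showing the network "remembers" sequence structure.
print(run_sequence([1, 0, 0]))
print(run_sequence([0, 0, 1]))
```

Modern systems have largely moved from plain RNNs to gated variants (LSTMs, GRUs) and transformer architectures, but the principle of carrying context across a sequence is the same.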
While the benefits of AI are undeniable, it is essential to acknowledge the potential risks and threats that accompany its rapid development. One of the primary concerns is the misuse of AI for malicious purposes. AI-powered cyberattacks have the potential to disrupt critical infrastructure, steal sensitive data, and manipulate information on a large scale.
To address the issue of bias in AI algorithms, it is crucial to ensure diversity and inclusivity in the data used for training. By incorporating diverse datasets and involving multidisciplinary teams in the development process, we can reduce the potential biases in AI systems. Additionally, regular audits and transparency in AI algorithms can help identify and rectify biases that may have inadvertently crept into the system.
A question of ethics
Furthermore, there is a concern about the impact of AI on job displacement. As AI systems become more capable, there is a fear that certain job roles may become obsolete, resulting in unemployment and socioeconomic imbalance. However, as with all new technologies, the employment balance is likely simply to shift as new opportunities appear.
Ethical concerns surrounding AI can be addressed through the development and adoption of AI ethics frameworks. These frameworks should prioritise principles such as fairness, transparency, and accountability. Establishing regulatory bodies and industry standards can ensure that AI is developed and deployed responsibly, with due consideration for its potential impacts on society.
However, ethical concerns are already being exploited through disinformation, as the media fixation on AI as a buzzword fuels sensationalism. According to the BBC: “A US Air Force colonel ‘mis-spoke’ when describing an experiment in which an AI-enabled drone opted to attack its operator to complete its mission, the service has said.” The officer in question was Colonel Tucker Hamilton, Chief of AI Test and Operations in the US Air Force, who, speaking at a Royal Aeronautical Society conference in May this year, was linked to a comment that an AI drone had killed its operator in an experiment. The USAF quickly said “no such experiment took place.”
AI, and its subset deep learning, offer ground-breaking solutions to the ever-evolving challenges in cybersecurity. However, with ground-breaking solutions come new and novel threat opportunities, such as AI-driven deepfakes, which is another area for exploration at a future date. This is something all security professionals need to be aware of, as it is likely to be one of the biggest growth areas for the industry.