Strategies to safeguard online identity in the digital age

January 25, 2024

Maxime Hambersin, DocuSign’s Senior Director of Product Management International, explores the challenges AI poses to cyber risk, identity and trustworthiness.

It’s likely that 30th November 2022, when OpenAI’s ChatGPT launched to the public, will forever be seen as a new beginning for the age of Artificial Intelligence.

AI has of course been applied in many forms for decades, an outgrowth of machine learning and automation, and part of a continuum of making digital technologies smarter and more useful. But the recent public awakening to the power of AI has reaffirmed its potential to change not only almost everything to do with modern work, but also how we operate as individuals, because almost everything we do is mirrored in digital activity.

At the heart of every part of our online life is our identity. And AI will also upend everything we know about protecting our online identities and services. AI-generated deepfakes – both audio and video – are becoming incredibly lifelike and easy to deploy against victims. Generative AI LLMs will accelerate this by producing code that once only skilled hackers could create.

Society must actively manage the way we verify and trust each other to fight back against the potential for fraud, mistrust, and misinformation as criminals explore the dark side of AI. To do so, we need to understand the concept of identity and how to safeguard it against threats.

What online identity means – and how we safeguard it in the age of AI

Being anonymous can be important for society. Many great and challenging works of art, philosophy, and politics have been created ‘by Anonymous’ – but being anonymous means that a person cannot interact meaningfully with other parties to transact business, as identity is part of establishing trust.

Online, our identity consists of multiple layers. There are the personal details we use to verify ourselves, including email address, date of birth, place of birth and name. But identity also includes things like banking data and other tokens – even biometric data. Hence there is a strong need to protect our identities from theft and harm in the digital age.

Now, with the introduction of AI tools that can mimic identities or fast-track fraud, protecting identity has become one of the most critical elements of data security.

Below is a set of foundational steps for protecting identities from AI-enabled risks: principles individuals can apply to help safeguard their identity, and obligations organisations need to meet to mitigate risk and protect their customers and staff.

Protect yourself

Firstly, individuals must learn to better protect themselves and their identity, which means becoming more tech and threat savvy as part and parcel of living so much of life online. Those just starting out can check out information from official sources such as the National Cyber Security Centre.

This advice needs frequent repetition: practise consistent security hygiene. Choose difficult-to-crack passwords involving random words and characters. Download updates and patches for your apps and operating systems – they’ll have the latest security fixes. Two-step verification helps stop criminals using stolen information in your name – and might alert you if you’ve been hacked. Complacency accounts for much of the luck attackers enjoy.
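The “random words and characters” advice above can be sketched in a few lines of Python using the standard library’s secrets module, which is designed for cryptographic randomness (the short word list here is purely illustrative – real generators draw from thousands of words):

```python
import secrets
import string

# Illustrative word list only; a real generator would use a dictionary of
# thousands of words to make the passphrase hard to guess.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "violet", "marble", "quartz"]

def generate_passphrase(n_words: int = 4) -> str:
    """Join randomly chosen words with random digit/symbol separators."""
    separators = string.digits + "!@#$%^&*"
    parts = []
    for i in range(n_words):
        parts.append(secrets.choice(WORDS))  # secrets, not random: unpredictable
        if i < n_words - 1:
            parts.append(secrets.choice(separators))
    return "".join(parts)

print(generate_passphrase())
```

The key design choice is using `secrets.choice` rather than the `random` module, whose output is predictable and unsuitable for anything security-related.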

Absolutely everyone must stay wary. Urgent, unexpected, or too-good-to-be-true requests and offers that want you to click, download, or take immediate action may be rushing you so you don’t take care to vet the opportunity. People tend to be more aware of threats to their finances, but all their personal data is monetisable and can be used to impersonate an identity. Don’t race to offer up your data until you’ve confirmed it’s safe.

Protect the organisation

The UK’s 2023 Cyber Security Breaches Survey showed 32% of businesses recalled a cyber-attack in the previous year – and those were only the figures for those that knew. Organisations should take a digital-first approach to privacy and risk management if they are to protect themselves, their customers, and their revenue in what can be a digitally hostile environment. 

Ensure the business uses state-of-the-art ID verification to successfully verify online user identities and know who it is you’re doing business with. Many identity and agreement technologies are accredited by governing bodies and allow firms to integrate identity proofing and authentication into their workflows.

Embed cybersecurity policy and infrastructure into processes seamlessly, so users work with best practices, not around them. Complement this with training and testing to keep the fundamentals of cybersecurity top of mind, as everyone can be fallible to social engineering.

Manage user authority and access rights. This is bound up with identity management and helps mitigate many secondary threats, such as malicious attackers exploiting a security breach or the risk of internal data breaches.
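One common way to manage access rights is role-based access control, where permissions attach to roles rather than individuals. A minimal sketch, with hypothetical role and permission names, might look like this:

```python
# Hypothetical roles and permissions for illustration: users hold roles,
# and roles grant permissions. Anything not explicitly granted is denied.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "manage_users"},
}

def is_allowed(user_roles: set, action: str) -> bool:
    """Grant an action only if some held role explicitly permits it."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

print(is_allowed({"editor"}, "write"))        # an editor may write
print(is_allowed({"viewer"}, "manage_users")) # a viewer may not manage users
```

Deny-by-default is the important property: a compromised account with a narrow role limits what an attacker, or a careless insider, can reach.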

Minimising fraud with AI

Whilst AI may increase the scope and scale of cyber risk, it also has an important role in risk mitigation. In other words, it plays a role as both a threat vector and a detection tool. 

There are many use cases where AI is already being implemented in detecting fraudulent activity – for example, spotting relevant signals or patterns of unlikely user behaviour on a platform based on past experience. Yet the ideal scenario is to utilise both humans and AI in order to increase levels of trust. Both can introduce risk, but collectively they can help to optimise cyber resilience. 
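Spotting “unlikely user behaviour based on past experience” often reduces to flagging values that deviate sharply from a user’s history. A deliberately simple sketch of that idea, using a standard-deviation threshold rather than any particular vendor’s model, could look like this:

```python
from statistics import mean, stdev

def is_anomalous(history: list, latest: float, threshold: float = 3.0) -> bool:
    """Flag a value deviating from past behaviour by > `threshold` std devs."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu  # flat history: any change is anomalous
    return abs(latest - mu) / sigma > threshold

# Hypothetical daily login counts for one user over two weeks.
history = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4, 3, 4]
print(is_anomalous(history, 40))  # a sudden spike is flagged
print(is_anomalous(history, 4))   # a typical day is not
```

Production systems combine many such signals and use far richer models, but the principle is the same: the baseline is the individual’s own past behaviour, not a global rule.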

Working together, people-and-AI partnerships at banks, within tech firms, the public sector, and businesses can layer together to create stronger identity protection. AI validating identity helps mitigate fraud risk: it can use patterns within metadata to assess risk based on behaviour, without accessing someone’s personal information.

Using multiple forms of proof increases levels of trust and security. Linking the transactions you do to a one-time secure onboarding can offer high levels of trust – where that onboarding is thorough and vetted. Passports are an example: official reviewers rely on a few measures to check them, but trust them because the onboarding process of securing one is rigorous. In future, AI will support identification in the first onboarding and then in all subsequent steps, providing a backup layer that scans for proofs of trust or markers of deception.
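The layering of multiple proofs can be sketched as a weighted trust score, where each independent signal that passes contributes to overall confidence. The signal names and weights below are entirely hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical identity signals and weights for illustration only.
SIGNAL_WEIGHTS = {
    "document_check":  0.40,  # e.g. a passport scanned and validated
    "biometric_match": 0.35,  # e.g. a selfie matched to the document photo
    "device_history":  0.15,  # a device previously tied to this identity
    "behaviour_ok":    0.10,  # usage patterns consistent with past sessions
}

def trust_score(signals: dict) -> float:
    """Sum the weights of passed signals; 1.0 means every check passed."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

score = trust_score({
    "document_check": True,
    "biometric_match": True,
    "device_history": False,
    "behaviour_ok": True,
})
print(round(score, 2))  # strong but not full confidence
```

The design point is that no single proof is decisive: losing one signal lowers confidence rather than collapsing it, which is what makes layered verification resilient.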

Trusting in a safer future

The AI age is accelerating positive change as well as disruption. Identity, cybersecurity risk, and privacy are all intimately bound up, with AI promising to change both how we defend and how we are threatened. This calls for a greater level of education and ongoing awareness to better protect our identities, assets and organisations. Taking a digital-first approach to privacy and risk management means ensuring that the primary domain where identity works for us is made robust in this new age.
