How the evolution of AI will reshape cybersecurity

It’s easy to think that artificial intelligence (AI) really only appeared in 2019 when OpenAI unveiled GPT-2, the predecessor to ChatGPT.

Reality, though, is somewhat different. The foundations of AI can be traced all the way back to 1950, when mathematician Alan Turing published a paper titled ‘Computing Machinery and Intelligence’. During World War Two, Turing had also helped design the Bombe, the machine used to crack the Enigma codes protecting German military communications.

In 1952, computer scientist Arthur Samuel developed a program that could play checkers. It is widely regarded as one of the first programs able to learn from its own experience.

Fast-forward to 1997, when IBM’s Deep Blue, a chess-playing computer, beat world chess champion Garry Kasparov. This was the first time a reigning world champion had been defeated by a machine in a match played under standard tournament conditions.

Also in 1997, Dragon Systems released Dragon NaturallySpeaking (now a Nuance product), the first consumer application that could run on a PC and convert continuous human speech to text. This essentially put AI capabilities in the hands of the general public for the first time.

Jump ahead to 2011, and that’s when Apple released the virtual assistant Siri. A year later, Google researchers succeeded in training a neural network to recognise images of cats.

Following this, in 2019, OpenAI unveiled a program called GPT-2, the predecessor to ChatGPT, and made it available to researchers. Many, however, were initially underwhelmed with its capabilities.

This was followed in 2020 by the launch of GPT-3, a program that uses deep learning to undertake a variety of tasks, from writing computer code and blog posts to producing poetry and fiction.

While it was not the first program to do this, it was the first to deliver responses that were almost indistinguishable from those of a human. In 2022, OpenAI went further, overlaying a chatbot interface on the underlying model and releasing it as ChatGPT.

This is when AI really became a mainstream force, demonstrating the capabilities of so-called generative AI to millions of people around the world. Interestingly, where it took Netflix 3.5 years to reach one million users, ChatGPT reached that milestone in just five days.

AI in cybersecurity

It is interesting to understand how AI is already being used in the field of cybersecurity and how this may evolve in the future.

In 2016, security company Cylance raised US$100 million and quickly became an industry leader in predictive, machine learning-based anti-malware. This encouraged the entire security industry to pay close attention to how AI could be used in this way.

Indeed, in 2018, WatchGuard launched Intelligent AV. This added a third layer of anti-malware to the company’s Firebox security offering by using the Cylance engine.
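To make the idea of predictive anti-malware more concrete, the following is a minimal, purely illustrative sketch of the general approach: a classifier is trained on static features extracted from known-good and known-bad files, then used to score files it has never seen. The features, values and model shown here are assumptions for illustration only, not the actual engine used by Cylance or WatchGuard.

```python
# Illustrative sketch of predictive (model-based) anti-malware, not any
# vendor's real engine. A classifier learns from static features of known
# benign and malicious files, then scores files it has never seen before.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical static features per file:
# [file_size_kb, section_entropy, imported_api_count, is_packed]
X_train = [
    [120, 5.1, 210, 0],  # known benign
    [340, 4.8, 350, 0],  # known benign
    [95,  7.6,  12, 1],  # known malicious (high entropy, packed, few imports)
    [410, 7.9,   8, 1],  # known malicious
]
y_train = [0, 0, 1, 1]   # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Score a previously unseen file by the same extracted features.
unseen_file = [[280, 7.4, 15, 1]]
print(model.predict(unseen_file))        # predicted label
print(model.predict_proba(unseen_file))  # probability of each class
```

In practice, real products train on millions of samples and far richer feature sets, but the principle is the same: prediction from learned patterns rather than signature matching.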

In 2022, GitHub made Copilot generally available, a tool designed to streamline the software development process. GitHub had been acquired by Microsoft in 2018, and Microsoft is now busy building AI capabilities into many of its products.

How threat actors are using AI

Just as AI tools have been quickly embraced by mainstream users, they are also being adopted by cybercriminals. Early examples include using the tools to write phishing and spear-phishing emails that are more convincing than those crafted by many humans.

Concerningly, there are also examples of cybercriminals using generative AI tools to write malware. While early attempts have been relatively easy to spot, as the sophistication of the tools increases, so does the quality of the malware being produced.

Cybercriminals are also using tools such as ChatGPT to trawl for sensitive data. The massive volumes of data used to train AI tools can contain sensitive or confidential information collected from a range of sources. With the right prompts, that data can be unearthed without the knowledge of the organisation from which it came.

The future of AI and cybersecurity

Based on the trends to date, there are some likely developments to watch for around AI and cybersecurity.

One is that, while tools such as ChatGPT can be used to write malware, the resulting code is unlikely to evade strong security tools. As long as organisations have these protections in place, they should not be unduly concerned.

Another, more positive, development is that AI tools will help answer technical security questions. They will make it much easier for people to find the information they seek without needing specialised skills in crafting search queries.
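As a rough illustration of that point, the sketch below asks a language model a plain-English security question through the OpenAI Python SDK. It assumes the openai package (v1.x) is installed and an OPENAI_API_KEY is set in the environment; the model name is just an example.

```python
# Minimal sketch: asking a language model a plain-English security question.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY environment
# variable; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a concise cybersecurity assistant."},
        {"role": "user", "content": "Explain the difference between phishing and spear-phishing."},
    ],
)

print(response.choices[0].message.content)
```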

A third is that the role of AI supervisor will be in strong demand. These tools are only as good as the data on which they are trained, so ensuring that data remains high quality and free of sensitive or confidential details will be critical.
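A minimal sketch of the sort of check an AI supervisor might run over training data appears below: it simply flags records that look as though they contain sensitive details before the data is used. The patterns are illustrative placeholders and far from exhaustive; a real pipeline would use much more robust detection.

```python
# Illustrative sketch of screening training data for sensitive details
# before it is used. The patterns are placeholders, not an exhaustive or
# production-grade detector.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def flag_sensitive(records):
    """Return (record_index, pattern_name) pairs for records matching a sensitive pattern."""
    findings = []
    for i, text in enumerate(records):
        for name, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(text):
                findings.append((i, name))
    return findings

sample_records = [
    "The quarterly report has been attached for review.",
    "Contact jane.doe@example.com about the unsigned contract.",
    "Deploy using the key sk-ABCDEFGHIJKLMNOP1234 on the staging server.",
]

print(flag_sensitive(sample_records))  # expect the second and third records to be flagged
```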

AI is clearly in the very early stages of reshaping daily life. However, by understanding where it has come from, what it can currently do, and where it might be headed, we can be well placed to take advantage of its sizeable potential benefits while also remaining resilient to cyberthreats.

 
