AI-supported hacker attacks: a new level of danger

Artificial intelligence is a huge challenge for cyber security, but it is also the key technology in the fight against cyber criminals.

At the beginning of this year, we wrote in our blog that "2023 will be the most dangerous year ever for companies from an IT perspective". Experts were certain of this, and in view of increasing digitalization, ever more cloud infrastructures and developments such as cybercrime-as-a-service, it was certainly to be expected. Many companies have therefore taken countermeasures: to prevent malware and ransomware attacks, they have strengthened their IT security - including with highly specialized security experts from their trusted IT staffing provider, Geco.

However, the threat situation has become even more acute, and 2024 will probably top the current year in terms of cybercrime. In addition to quantum computing (keyword: encryption and decryption), experts are particularly concerned about the rapid development of artificial intelligence.

 

How do criminals use AI?

Artificial intelligence is clearly a game changer in many areas. It offers numerous advantages: it can automatically write texts, generate images, intelligently search through huge amounts of data and even write code - and it is precisely these capabilities that unfortunately also appeal to criminals. The list of possible "malicious" applications is long.

With the help of generative AI systems, attacks are becoming more sophisticated and precise - and therefore more dangerous:

  • Take phishing emails: thanks to AI translators such as DeepL and AI text generators such as ChatGPT, cyber criminals can craft almost perfectly worded phishing emails to smuggle in malware - in any language imaginable. Such emails are very difficult to recognize as fraudulent.
  • AI enables a new level of social engineering. Deepfakes, i.e. faked voices or videos, are no longer used only to trick inexperienced cell phone or internet users. Experts fear that in the near future even the remote identification used by banks, such as the well-known video identification procedure, could be defeated.
  • AI-based optical character recognition (OCR) can already solve the CAPTCHAs used for spam protection.
  • AI makes it easy to create detailed fake profiles, e.g. on LinkedIn, which lend credibility to fake emails or deepfakes.
  • Automated data collection: to carry out such attacks in the first place, attackers need as much accurate data as possible about their targets. Whereas this research used to take days, all the necessary data can now be gathered with a few mouse clicks.
  • When cracking passwords, AI can recognize patterns and compare data extremely quickly. And the list goes on.


Malware 2.0 already on the way?

In general, AI makes it easier for criminals to identify targets and vulnerabilities, strike with greater precision and thus significantly increase the efficiency of their attacks. And the worst may be yet to come: at present, AI's ability to program malware is still at a relatively manageable level and achieves its impact through quantity rather than quality - up to 300,000 new malware variants are now said to be created every day with the help of AI. According to various experts, however, the age of "Malware 2.0" could be just around the corner: AI-based malware of previously unimagined power and danger. This means malware that learns through AI, recognizes why it was detected by antivirus software, and adapts its behavior or code accordingly to evade detection.

 

AI is a key technology against cyber attacks

To counter threats like these, the IT industry must also rely on artificial intelligence. "Many experts see artificial intelligence (AI) as a key element in the next generation of IT security solutions and even as the savior of the entire industry" in the fight against cyber criminals, writes Security Insider Magazine. In practice, AI is already in use, especially as "endpoint protection": it can identify certain attack patterns in firewalls and initiate countermeasures faster and more efficiently than humans, and AI-based antivirus tools can detect unusual behavior in programs or phishing emails at an early stage.
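
The principle behind such behavior-based detection can be illustrated with a minimal sketch: an unsupervised anomaly detector is trained on "normal" activity and then flags sessions that deviate from it. The Python example below uses scikit-learn's IsolationForest; all feature names and numbers are invented for illustration and do not describe any specific security product.

```python
# Minimal sketch of behavior-based anomaly detection.
# All features and values are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Assume each row describes one network session:
# [bytes sent, bytes received, duration in s, failed logins]
normal_traffic = rng.normal(
    loc=[5_000, 20_000, 60, 0.1],
    scale=[1_500, 6_000, 20, 0.3],
    size=(1_000, 4),
)

# Train on (mostly) benign historical traffic: the model learns
# what "normal" looks like, without labeled attack samples.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# A suspicious session: huge upload, very short, many failed logins.
suspicious = np.array([[900_000, 1_000, 2, 12]])
for session in (normal_traffic[:1], suspicious):
    verdict = model.predict(session)  # +1 = normal, -1 = anomaly
    print("ALERT" if verdict[0] == -1 else "ok")
```

The appeal of this approach for security tooling is that no labeled attack samples are required: the model only needs to learn what normal activity looks like, which is exactly why it can flag previously unseen attack patterns early.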

 

Implementation still has plenty of room for improvement

Despite the large and rapidly growing threat scenario, surprisingly few companies have so far used artificial intelligence in their security operations. According to the IT experts at Blackberry, AI is used in an estimated 10 to 20 percent of the defense systems of companies and public authorities. And according to a Bitkom survey (https://www.bitkom.org/Presse/Presseinformation/KI-Herausforderung-fuer-Cybersicherheit), just one in seven (!) German companies has even looked into using AI to improve cybersecurity - despite the fact that, in the same survey, 57% said they believe generative AI will have a negative impact on IT security.

There are various reasons for this sluggish adoption: on the one hand, the high costs; on the other, the lack of experts - many IT departments are chronically overloaded and understaffed. External specialists can provide relief here at short notice!

 

AI experts from GECO ensure security

Do you want to arm your IT against the rapidly growing threat of hacker attacks carried out with artificial intelligence? At Geco, you will find experts who specialize in AI-based cybersecurity. We will be happy to advise you and look forward to your call or email!