Cybersecurity experts and insurance professionals are warning about the escalating use of artificial intelligence (AI) tools by cybercriminals and nation-state cyber operators.

According to a recent report from Lloyd's of London, the frequency, severity, and diversity of smaller-scale cyber losses are projected to increase over the next 12 to 24 months as attackers exploit the enhanced capabilities of generative AI and large language models (LLMs). The report anticipates a subsequent plateau, however, as defensive technologies catch up to counter these threats.

Lloyd's highlights the transformative impact of generative AI and LLMs on the cyber risk landscape for both attackers and defenders, and notes that the evolving technology will require businesses to adapt their resilience practices. Similarly, Britain's National Cyber Security Centre (NCSC), part of the GCHQ intelligence agency, underscores the imminent challenges that advancing AI technologies pose to cyber resilience. According to the NCSC, attackers across the skill spectrum, from less-skilled cybercriminals to sophisticated nation-state groups, are already leveraging AI to varying extents.

Experts note significant interest in AI-powered tools among criminal and nation-state entities, evident from underground forum discussions and attempts to refine these tools for malicious purposes. Phishing emails, for instance, have become more sophisticated, with improved language and construction that makes it harder for recipients to spot fraudulent content. This trend increases the likelihood of successful cyberattacks, such as data breaches or malware infections.

The NCSC predicts that AI advancements will further complicate cybersecurity efforts, making it harder for individuals to distinguish genuine emails and requests from malicious ones. AI automation is also expected to speed up attackers' exploitation of software vulnerabilities, enabling more efficient reconnaissance and exfiltration of sensitive data and amplifying the impact of attacks such as ransomware.

The Lloyd's report outlines potential future cyber risks, including automated vulnerability discovery, hostile cyber operations by nation-state groups, lower barriers to entry for cybercriminals, optimized phishing campaigns, and the emergence of single points of failure in AI-integrated services. As AI tools evolve, attackers are likely to grow bolder and better able to conceal digital forensic evidence.

While well-funded nation-state hacking groups may hold the initial advantage, the NCSC anticipates that the cybercrime ecosystem will quickly catch up, increasing the volume and impact of cyberattacks, including ransomware. Moreover, as AI-powered cyber tools become commoditized, they will be accessible to a broader range of cybercriminals, further enhancing their capabilities.

In the long term, novice cybercriminals and hacktivists are expected to conduct more effective operations with the help of AI. Although barriers to malicious AI use still exist, ongoing refinements and the availability of alternative AI models suggest these barriers may soon diminish, putting sophisticated AI tools within reach of malicious actors of all kinds.

In conclusion, the proliferation of advanced AI tools poses a significant and evolving threat to cybersecurity, requiring continuous adaptation and innovation in defensive strategies to mitigate risks effectively.