AI tools used across cyberattack chains require a systematic approach to training and internal policies – Kaspersky

8th January 2026

By: Schalk Burger

Creamer Media Senior Deputy Editor


The rapid development of AI is reshaping the cybersecurity landscape in 2026 for individual users and for businesses. Large language models (LLMs) are influencing defensive capabilities while simultaneously expanding opportunities for threat actors, says cybersecurity company Kaspersky.

AI will become a cross-chain tool in cyberattacks and be used across most stages of the kill chain. Threat actors already employ LLMs to write code, build infrastructure and automate operational tasks.

Further advances will reinforce this trend, as AI will increasingly support multiple stages of an attack, from preparation and communication to assembling malicious components, probing for vulnerabilities and deploying tools.

Attackers will also work to hide signs of AI involvement, thereby making such operations harder to analyse.

“While AI tools are being used in cyberattacks, they are also becoming a more common tool in security analysis and influence how security operations centre teams work. Agent-based systems will be able to continuously scan infrastructure, identify vulnerabilities and gather contextual information for investigations, thereby reducing the amount of manual routine work.

“As a result, specialists will shift from manually searching for data to making decisions based on already-prepared context. In parallel, security tools will transition to natural-language interfaces, enabling prompts instead of complex technical queries,” adds Kaspersky Research Development group manager Vladislav Tushkanov.
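The agent-based workflow Tushkanov describes, in which routine scanning produces analyst-ready context rather than raw data, can be illustrated with a minimal sketch. Everything here is hypothetical: the host names, package versions, and the tiny "known vulnerable" table stand in for what a real security operations centre agent would pull from vulnerability scanners and threat-intelligence feeds.

```python
# Hypothetical sketch of an agent-style triage pass over an asset inventory.
# The vulnerability table below is illustrative only; a real agent would
# consult live threat-intelligence data, not a hard-coded dict.
KNOWN_VULNERABLE = {
    ("openssl", "1.0.2"): "end-of-life, multiple CVEs",
    ("log4j", "2.14.1"): "Log4Shell (CVE-2021-44228)",
}

def triage(inventory):
    """Scan (host, package, version) tuples and return analyst-ready
    findings instead of raw scan output."""
    findings = []
    for host, package, version in inventory:
        reason = KNOWN_VULNERABLE.get((package, version))
        if reason:
            findings.append(f"{host}: {package} {version} - {reason}")
    return findings

inventory = [
    ("web-01", "openssl", "3.0.13"),
    ("app-02", "log4j", "2.14.1"),
]
for line in triage(inventory):
    print(line)
```

The point of the sketch is the shape of the output: the analyst receives a short, contextualised finding per host, which is the "already-prepared context" the quote refers to, rather than the full inventory.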

Further, AI-generated deepfakes are becoming mainstream technology, and awareness of them will continue to grow. Companies are increasingly discussing the risks of synthetic content and training employees to reduce the likelihood of falling victim to it.

Awareness is rising within organisations and among regular users, as end consumers encounter fake content more often and better understand the nature of such threats.

Deepfakes are becoming a stable element of the security agenda, which requires a systematic approach to training and internal policies.

Additionally, while the visual quality of deepfakes is already high, realistic audio remains the main area for future growth.

Content-generation tools are becoming easier to use, as non-experts can now create a mid-quality deepfake in a few clicks. As a result, the average quality continues to rise, creation becomes accessible to a far broader audience, and these capabilities will inevitably continue to be leveraged by cybercriminals, the cybersecurity company says.

Online deepfakes will continue to evolve, but remain tools for advanced users. Real-time face and voice swapping technologies are improving, but their setup still requires more advanced technical skills.

Further, wide adoption is unlikely, yet the risks in targeted scenarios will grow: increasing realism and the ability to feed manipulated video through virtual cameras will make such attacks more convincing.

Meanwhile, efforts to develop a reliable system for labelling AI-generated content will continue.

There are still no unified criteria for reliably identifying synthetic content and current labels are easy to bypass or remove, especially when working with open-source models. Therefore, new technical and regulatory initiatives aimed at addressing the problem are likely to emerge, Kaspersky adds.
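The claim that current labels are easy to bypass can be made concrete with a deliberately simplified sketch. This is not any real labelling standard: it merely models the common pattern in which a provenance label travels as metadata alongside the content, so any transform that re-encodes the content (a screenshot, a re-save, a crop) silently drops the label.

```python
# Minimal illustration (not a real standard) of why metadata-based
# "AI-generated" labels are fragile: the label lives beside the pixels,
# so re-encoding the image discards it without altering what viewers see.

def label_image(pixels):
    """Attach a provenance label as sidecar metadata."""
    return {"pixels": pixels, "meta": {"ai_generated": True}}

def reencode(image):
    """Simulate a screenshot or re-save: pixels survive, metadata does not."""
    return {"pixels": image["pixels"], "meta": {}}

original = label_image(b"\x89PNG...")
resaved = reencode(original)

print(original["meta"].get("ai_generated"))  # True
print(resaved["meta"].get("ai_generated"))   # None
```

Robust schemes therefore try to bind the label to the content itself, for example via watermarks embedded in the pixels, but as the article notes, those too can often be removed, particularly when the generating model is open source.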

Additionally, open-weight models will approach top closed models in many cybersecurity-related tasks, which creates more opportunities for misuse. Closed models still offer stricter control mechanisms and safeguards that limit abuse.

However, open-source systems are rapidly catching up in functionality and circulate without comparable restrictions. This blurs the difference between proprietary and open-source models, both of which can be used effectively for undesired or malicious purposes.

The line between legitimate and fraudulent AI-generated content will become increasingly blurred. AI can already produce well-crafted scam emails, convincing visual identities and high-quality phishing pages.

Simultaneously, major brands are adopting synthetic materials in advertising, making AI-generated content look familiar and visually normal. Therefore, distinguishing real from fake will become even more challenging, both for users and for automated detection systems, the company notes.

Edited by Chanel de Bruyn
Creamer Media Senior Deputy Editor Online
