AI Could Generate 10,000 Malware Variants, Evading Detection in 88% of Cases
Cybersecurity researchers have found that large language models (LLMs) can be used to generate new variants of malicious JavaScript code at scale, making the results more likely to evade detection.
"Although LLMs struggle to create malware from scratch, criminals can easily use them to rewrite or obfuscate existing malware, making it harder to detect," researchers from Palo Alto Networks Unit 42 said.
