![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhR9W6BvipfgpVQum9py1LOo4cleD_MO0yPeKZaqqDX-egJpBB8NQYh-s0VbDB2z_Hq_-WlK4xNjOropj_wlk4awHnqsFLaX7Ox2qFi-JkO41oulJG5tk8IMv5WDbqGuuxwBjEMsgrh8XbfqAencRn9EtGt1xDm8T-vF3OxF97eM310msPLhwlxtorBhDcR/s1600/ai.jpg)
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules.
“Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates,” Recorded Future said in a new report shared with The Hacker News.
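To make the quoted claim concrete, here is a minimal sketch (invented for illustration, not taken from the Recorded Future report) of why string-based detection is brittle: a rule that matches a literal byte string stops firing once the malware source is rewritten so the literal no longer appears contiguously, even though runtime behavior is unchanged.

```python
# Hypothetical example: a string-based "rule" in the spirit of a YARA
# string match. SIGNATURE and the sample snippets are invented names.
SIGNATURE = b"evil_beacon"  # assumed detection string

def string_rule_matches(sample: bytes) -> bool:
    # Mimics a YARA string condition: fire if the literal appears anywhere.
    return SIGNATURE in sample

original = b'url = "evil_beacon"'             # original variant: matches
rewritten = b'url = "evil_" + "bea" + "con"'  # rewritten source: same string
                                              # is built at runtime, but the
                                              # contiguous literal is gone

print(string_rule_matches(original))   # True
print(string_rule_matches(rewritten))  # False
```

An LLM asked to "rewrite this code without changing behavior" can apply exactly this kind of transformation at scale, which is the evasion the report describes.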
Source: The Hacker News