Large language models (LLMs) such as ChatGPT can write computer code as well as generate human-like text, and this dual ability poses a threat to cybersecurity. Researchers have demonstrated that a computer virus can use an LLM to rewrite its own code, making it difficult for signature-based antivirus tools to detect. The same model can also draft customized, authentic-looking emails, allowing the virus to spread itself through email attachments. David Zollikofer from ETH Zurich and Benjamin Zimmerman from Ohio State University warn that this capability could be abused to create metamorphic malware: malware that continually rewrites its own code to evade detection.
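To see why self-rewriting code defeats signature-based scanning, consider a minimal sketch (the snippets below are hypothetical stand-ins, not code from the researchers' work): two functionally identical pieces of code yield entirely different fingerprints, so a scanner matching a fixed hash or byte pattern recognizes one variant but misses the rewrite.

```python
import hashlib

# Two hypothetical snippets with identical behavior but different source
# text, standing in for an original payload and an LLM-rewritten variant.
variant_a = "def greet(name):\n    return 'Hello, ' + name\n"
variant_b = "def greet(person):\n    message = f'Hello, {person}'\n    return message\n"

# A simple signature-based scanner matches a known hash of the original.
known_signature = hashlib.sha256(variant_a.encode()).hexdigest()

for label, code in [("original", variant_a), ("rewritten", variant_b)]:
    fingerprint = hashlib.sha256(code.encode()).hexdigest()
    flagged = fingerprint == known_signature
    print(f"{label}: hash={fingerprint[:12]}... flagged={flagged}")

# The original variant is flagged; the rewrite is not, even though both
# behave identically -- the gap that metamorphic code exploits.
```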