
Fortra’s Terranova Security: how to guard against the dangers of AI

The chief information security officer at Fortra’s Terranova Security explains the dangers associated with the advancement of AI and how businesses can better prepare for attacks.

The rise of artificial intelligence has been accompanied by a growing awareness of the risks it poses to cybersecurity.

“Hackers are utilising AI to develop more advanced attacks and evade detection from security tools,” says Theo Zafirakos, chief information security officer at Fortra’s Terranova Security. “Businesses need to be aware of the various ways that hackers may manipulate them, from malware designed to bypass detection to more sophisticated and targeted phishing attacks.”

For instance, scammers are now exploiting AI technology to impersonate people, creating synthetic voices that convincingly mimic victims’ co-workers. This form of voice phishing can trick employees into handing over sensitive information.

AI can also be used to gather sensitive data. “Every industry is grappling with an enormous amount of data,” says Zafirakos. “Attackers are employing AI to analyse and collect data more quickly. Healthcare providers, manufacturers and financial services organisations handle large amounts of data to drive innovation and inform decision-making. Bad actors will target that sensitive data to either disrupt operations or gather further information.”

There are steps that organisations can take to protect themselves. One of the most important is cybersecurity awareness training, which can enhance an enterprise’s ability to identify and mitigate AI-related security threats.

“As with any other cybersecurity concern, knowledge and proper employee education are the best defence,” says Zafirakos. And AI can be put to good use here. “Chatbots can be employed to educate users on how to protect their devices and personal information. Similarly, machine learning on employee awareness levels can be utilised by team leaders to identify gaps in employee knowledge of security awareness.”

Training also helps employees detect AI-enabled or AI-generated attacks and avoid falling victim to them, and it teaches the acceptable use of AI tools in business operations, such as for enhancing productivity. Phishing awareness training, for example, teaches staff to verify the legitimacy of emails and to avoid opening unsolicited software that could be AI-generated malware.

“Detection and prevention technologies, such as intrusion prevention systems, intrusion detection systems and user-behaviour analysis, can monitor and alert users to any suspicious activity on their networks or devices in real time,” explains Zafirakos. “AI can also be used to automate threat responses to swiftly mitigate damage and prevent it from spreading to other infrastructure components. This will significantly reduce the costs associated with data protection, awareness training and data-breach responses.”
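To illustrate what user-behaviour analysis can look like in practice, the sketch below flags login events that deviate from a user’s historical pattern. It is a minimal example, not Fortra’s implementation; the event fields, thresholds and alert reasons are all illustrative assumptions.

```python
# Minimal user-behaviour analysis sketch (illustrative only; not Fortra's product).
# It baselines each user's typical login hours and locations from past events,
# then flags new events that fall outside that baseline.

from collections import defaultdict
from dataclasses import dataclass
from statistics import mean, pstdev


@dataclass
class LoginEvent:
    user: str      # hypothetical field names, assumed for this example
    hour: int      # hour of day, 0-23
    country: str   # geolocated source country


class BehaviourBaseline:
    def __init__(self, min_history: int = 20, hour_tolerance: float = 2.0):
        self.history = defaultdict(list)      # user -> list of past events
        self.min_history = min_history        # don't alert until enough data exists
        self.hour_tolerance = hour_tolerance  # allowed deviation from the usual login hour

    def score(self, event: LoginEvent) -> list[str]:
        """Return reasons the event looks suspicious (empty list = normal)."""
        past = self.history[event.user]
        reasons = []
        if len(past) >= self.min_history:
            hours = [e.hour for e in past]
            spread = pstdev(hours) or 1.0     # avoid division by zero
            if abs(event.hour - mean(hours)) / spread > self.hour_tolerance:
                reasons.append("login at unusual hour")
            if event.country not in {e.country for e in past}:
                reasons.append("login from previously unseen country")
        past.append(event)                    # update the baseline afterwards
        return reasons


# Usage: feed events as they arrive and alert on any non-empty score.
if __name__ == "__main__":
    monitor = BehaviourBaseline()
    for _ in range(25):
        monitor.score(LoginEvent("alice", hour=9, country="CA"))
    print(monitor.score(LoginEvent("alice", hour=3, country="RU")))
    # -> ['login at unusual hour', 'login from previously unseen country']
```

In a production system, such rule-of-thumb baselines would typically be supplemented by machine-learning models trained on far richer telemetry, but the flow is the same: baseline normal behaviour, score new events against it and raise alerts in real time.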

As AI continues to evolve, organisations must take proactive measures to stay ahead of emerging threats and vulnerabilities.

“Understanding how AI can disrupt or improve an organisation is essential for successful operations,” says Zafirakos. “I urge business leaders to establish an internal acceptable use policy for AI tools so that employees can use them to enhance their work, and to incorporate content related to AI risks and threats within their security awareness programmes so that everyone is equipped to protect against AI-related attacks.”

Source: Fortra and Technology Record