The rapid progress of artificial intelligence (AI) has opened up a new realm of possibilities. At the same time, it’s also unleashing a series of social engineering threats, raising concerns within the cybersecurity community. Forbes discussed how this is becoming a problem and the best way to combat it in a recent article.
Social engineering is the art of manipulating, influencing, or deceiving users to gain control over computer systems. This form of cyberattack has witnessed a new wave of innovation with the integration of AI technology. Threat actors, phishers, and social engineers are quick to embrace new tactics that provide them with a competitive edge. As such, they are leveraging AI for advanced social engineering attacks in various ways, the article explains.
One way AI assists threat actors is by improving phishing attacks. In the past, phishing emails and texts were rife with grammatical errors and spelling mistakes. That made them fairly easy to recognise. The article explains how AI-powered tools like ChatGPT can now help attackers draft realistic-looking emails that are difficult to distinguish from messages written by humans. These tools correct grammar and spelling errors and polish tone, resulting in highly convincing emails.
Scammers can also use AI to create synthetic videos and fake virtual identities that convincingly mimic real people, such as senior executives or partners. Victims can be manipulated into divulging sensitive information, conducting financial transactions, or spreading misinformation. Similarly, scammers may use AI voice cloning technology to impersonate family members and deceive victims into transferring money, under the pretext of a family emergency.
The article also reveals that researchers have used complex techniques like Indirect Prompt Injection to successfully manipulate AI chatbots into impersonating trusted entities. The chatbots then generate phishing messages to deceive users into revealing sensitive information.
AI-driven autonomous agents, clever scripting, and automation tools enable threat actors to conduct highly targeted social engineering attacks at an industrial scale. These campaigns span every stage, from selecting targets to delivering phishing emails and producing human-like responses in chat boxes or phone calls, reports the article. AI's ability to learn and evolve allows it to adapt phishing tactics based on feedback, increasing the success rate of attacks.
Having stated how AI is being used by threat actors for social engineering attacks, the article then explains how to safeguard against them. It claims that AI-based cyberattacks are expected to grow significantly in the coming years. As a result, businesses need to adopt certain best practices to mitigate the risks posed by AI-enhanced social engineering attacks.
According to the article, training users to detect social engineering attempts is of prime importance. The human element plays a critical role in social engineering attacks. Regular training, phishing tests, and simulation exercises empower users to identify, block, and report suspicious actions promptly. The article cites studies that show that organisations that invest in security training observe a significant drop in their average phish-prone percentage.
Whilst hackers are using AI to design better social engineering attacks, businesses could use the technology to deploy security controls. The article urges companies to implement security tools equipped with AI technology to analyse, detect, and respond to advanced forms of social engineering. AI can inspect the content, context, and metadata of messages and URLs, detecting tell-tale signs of phishing attempts. Additionally, AI can aid in incident response by swiftly isolating infected devices, notifying administrators, and collecting evidence for investigation.
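To illustrate the kinds of signals such tools inspect, here is a minimal, purely heuristic sketch in Python. It is not any specific product's method, and the phrases and thresholds are illustrative assumptions; real AI-based detection relies on trained models, sender reputation, and far richer context than hand-written rules like these.

```python
import re
from urllib.parse import urlparse

# Illustrative indicators only; production systems use learned models,
# reputation feeds, and message metadata rather than a fixed phrase list.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "password expires",
]

def phishing_signals(message: str) -> list[str]:
    """Return a list of human-readable phishing tell-tales found in a message."""
    signals = []
    text = message.lower()
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in text:
            signals.append(f"urgency/credential phrase: {phrase!r}")
    for url in re.findall(r"https?://\S+", message):
        host = urlparse(url).hostname or ""
        # A raw IP address in place of a domain name is a classic tell-tale.
        if re.fullmatch(r"[\d.]+", host):
            signals.append(f"IP-address URL: {url}")
        # Lookalike domains often pad the hostname with hyphens or extra labels.
        elif host.count("-") >= 2 or host.count(".") >= 3:
            signals.append(f"suspicious hostname: {host}")
    return signals
```

A message such as "Urgent action required: verify your account at http://192.0.2.7/login" would trip several of these rules at once, whereas an ordinary link to a well-formed domain would pass through cleanly.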
Additionally, the article emphasises the importance of implementing more robust authentication techniques. Multi-factor authentication (MFA) is a powerful security measure that prevents unauthorised access even if credentials are compromised. However, investing in phishing-resistant MFA solutions is essential to thwart attempts by both AI and human adversaries to trick victims into defeating their own MFA.
As AI technology advances at an unprecedented pace, hackers are increasingly developing custom AI applications to elevate social engineering tactics to a new level. Organisations must remain vigilant and adopt a defence-in-depth approach, combining AI-based security tools, training, and robust procedures to combat AI-driven social engineering threats effectively.
For example, proactive cybersecurity initiatives like social engineering penetration testing services through managed security service providers such as DigitalXRAID could help businesses identify weak points in their defences. They may also want to keep up with cybersecurity trends as attack vectors evolve. By prioritising cybersecurity measures and adapting to the evolving threat landscape, businesses can safeguard their operations and data from the amplified impact of AI-based social engineering.
Disclaimer: The views, suggestions, and opinions expressed here are the sole responsibility of the experts. No Smart Herald journalist was involved in the writing and production of this article.