AI is fueling the biggest financial scams ever—cyber safety experts are fighting back

The recent Bybit hack – the largest crypto heist in history – has highlighted an alarming trend: cybercriminals are increasingly pairing AI with advanced social engineering techniques such as deepfakes and targeted phishing to execute financial scams on an unprecedented scale.
The sophistication of these attacks underscores the urgent need for continuous innovation in cybersecurity. As AI continues to shape the future of financial fraud, the central challenge is ensuring that defenders leverage AI just as effectively, so that security measures outpace increasingly intelligent and deceptive criminal tactics.
How AI is Transforming Scams
The number of financial scams is on the rise, and while not all are driven by AI, the technology is clearly amplifying both their scale and success rate. AI is being used to personalize scams, making them more convincing and more difficult to detect. One notable recent example saw deepfake videos of Elon Musk promoting fraudulent cryptocurrency giveaways, exploiting Musk’s trusted public persona.
Victims were lured in through hijacked social media accounts and fake investment schemes, believing they were engaging with legitimate opportunities. The scam resulted in over $7 million in stolen funds before it was detected and shut down.
Another concerning trend is the rise of AI-powered phishing attacks. Unlike traditional phishing emails, which often contain errors and generic language, AI-generated phishing campaigns use machine learning to tailor their language and formatting, significantly enhancing their believability. These attacks are further amplified by AI chatbots programmed to engage with victims in real time, convincing them to divulge sensitive information or transfer funds.
AI is Turning Social Media into a Hotbed for Phishing Scams
AI-powered scams are increasingly being distributed through social media. Platforms built around targeted advertising have become prime hunting grounds for fraudsters, who can aim highly convincing scams at precisely defined demographics.
According to Gen’s Q4/2024 Threat Report, Facebook accounted for 56% of identified social media threats, followed by YouTube at 26%, X (formerly Twitter) at 7%, and Reddit and Instagram at 5% and 4% respectively. Scammers exploit these platforms to distribute deceptive online ads (malvertising), fake e-shops, and phishing campaigns, leveraging AI to enhance the believability and reach of their schemes.
As AI continues to enhance scam sophistication, social media platforms remain one of the most vulnerable spaces, requiring both users and businesses to be increasingly vigilant. Without stronger safeguards, cybercriminals will continue to manipulate these platforms, making it imperative for security measures to evolve in tandem with these emerging threats.
The AI Arms Race Between Cybercriminals and Cybersecurity Defenders
While AI is being leveraged by fraudsters, it is also a crucial tool in countering cybercrime. AI-driven security systems can detect fraudulent activity in real time by analyzing behavioral patterns and identifying anomalies. These technologies help flag suspicious behavior, detect deepfake content and prevent financial fraud before it occurs.
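To make the anomaly-detection idea concrete, here is a minimal sketch using scikit-learn's IsolationForest to score transactions against an account's historical behavior. The three features and the simulated data are illustrative assumptions for the example, not a description of any vendor's production system.

```python
# Minimal sketch: flagging out-of-pattern transactions with an Isolation Forest.
# The features (amount, hour of day, merchant risk score) are illustrative
# assumptions; production systems use far richer behavioral signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated history of "normal" transactions: [amount, hour, merchant_risk]
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=1000),  # typical purchase amounts
    rng.normal(loc=14, scale=3, size=1000) % 24,    # mostly daytime activity
    rng.uniform(0.0, 0.3, size=1000),               # low-risk merchants
])

# Learn the account's historical behavior.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# Score new activity: a large 3 a.m. transfer to a high-risk merchant.
new_txns = np.array([
    [40.0, 15.0, 0.1],     # consistent with past behavior
    [9500.0, 3.0, 0.9],    # out-of-pattern: likely flagged for review
])
for txn, flag in zip(new_txns, model.predict(new_txns)):
    status = "FLAG FOR REVIEW" if flag == -1 else "ok"
    print(txn, status)
```

In practice, models like this are retrained continuously and combined with rule-based checks, so an anomalous score typically triggers a human review rather than an automatic block.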
AI-powered fraud detection tools can now recognize scam patterns across multiple platforms, including social media, email and messaging apps. Automated threat response mechanisms help intercept phishing attacks before they reach users, while AI-enhanced content verification can identify and mitigate deepfake scams. As threats continue to evolve, these technologies play an essential role in securing both individuals and businesses from financial fraud.
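As a rough illustration of text-level scam-pattern recognition, the sketch below trains a tiny TF-IDF plus logistic-regression classifier to score incoming messages. The handful of training examples is invented for demonstration; real systems learn from millions of labeled samples across email, SMS and social platforms, and use many signals beyond text.

```python
# Minimal sketch: text-based phishing scoring with TF-IDF features and
# logistic regression. The training messages are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your account is locked, verify your password immediately",
    "Urgent: confirm your wallet seed phrase to claim your prize",
    "Send 0.5 BTC now to double your investment, limited offer",
    "Meeting moved to 3pm, see updated agenda attached",
    "Your invoice for March is attached, thanks for your business",
    "Lunch on Friday? The new place downtown looks good",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing/scam, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(messages, labels)

# Score an incoming message before it reaches the user.
incoming = "Verify your account now or your funds will be frozen"
print("phishing probability:", clf.predict_proba([incoming])[0][1])
```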
Staying One Step Ahead of AI-Powered Scammers
Individuals should remain vigilant about unsolicited financial requests, verify identities during high-stakes interactions, and use multi-factor authentication to secure their accounts. While it's helpful to look for deepfake indicators, such as unnatural blinking or mismatched lip movements, AI-generated videos are becoming so realistic that spotting them with the naked eye is quickly becoming impossible. That's why it's essential to rely on verification practices, not just visual cues. It is also crucial to avoid oversharing personal information on social media, as scammers can exploit this data to craft highly targeted and convincing attacks.
Businesses should adopt a proactive approach. Employee training on AI-driven scam tactics is essential, as is implementing strict financial verification procedures that require multiple approvals for large transactions. Companies should deploy AI-based fraud detection systems to identify anomalies in financial transactions and proactively monitor brand impersonation attempts on social media and the web. Additionally, fostering a security-aware culture within the organization strengthens overall defense.
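As one sketch of the multiple-approvals principle mentioned above, the example below enforces dual control for transfers above a threshold. The threshold, the in-memory objects and the approver model are assumptions made purely for illustration; a real deployment would sit inside payment and identity infrastructure with audited, tamper-resistant approval records.

```python
# Minimal sketch of a dual-approval rule for large transfers. The threshold,
# roles, and in-memory structures are illustrative assumptions.
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000  # transfers above this need two distinct approvers

@dataclass
class Transfer:
    amount: float
    destination: str
    approvers: set[str] = field(default_factory=set)

    def approve(self, employee_id: str) -> None:
        # A set means the same employee cannot count as two approvals.
        self.approvers.add(employee_id)

    def is_authorized(self) -> bool:
        required = 2 if self.amount > APPROVAL_THRESHOLD else 1
        return len(self.approvers) >= required

wire = Transfer(amount=250_000, destination="ACME Vendor Ltd")
wire.approve("alice")
print(wire.is_authorized())  # False: still needs an independent second approver
wire.approve("bob")
print(wire.is_authorized())  # True: dual control satisfied
```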
As AI continues to shape both cyber threats and defenses, security strategies must evolve at the same rapid pace. Integrating AI-driven security automation is no longer optional – it is essential for staying ahead of increasingly sophisticated fraud tactics.