

Visual deepfakes, such as face swaps, reenactments or attribute manipulation, create videos or images that make someone appear to say or do things they did not. These can be used to impersonate individuals for financial scams or to defame public figures (Akhtar, 2023; Singh & Dhumane, 2025). Audio deepfakes, including voice cloning and text-to-speech manipulation, allow scammers to mimic someone’s voice on phone calls. This enables “vishing” (voice phishing) to extract credentials, authorise payments, or manipulate employees (Zhang et al., 2025; Pedersen et al., 2025). Multimodal attacks combine audio and video, increasing believability and bypassing single-channel checks (Singh & Dhumane, 2025).
Text-based generative models can also contribute to scams by producing phishing messages or social media communications that match a target’s writing style. Combined with synthetic media, this can prime victims for later deception (Lyu, 2024; Pedersen et al., 2025). Human observers and many automated detection systems struggle with realistic fakes, especially when the media have been compressed, edited, or generated with novel methods not represented in detector training data (Singh & Dhumane, 2025; Akhtar, 2023).
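To make the detection difficulty concrete, the snippet below is a minimal sketch of one classic multimedia-forensics heuristic, error level analysis (ELA), which re-saves a JPEG and highlights regions that recompress differently from the rest of the image. The file names are hypothetical placeholders, and the technique is illustrative only: heavy compression and modern generators routinely defeat it, which is exactly the limitation the paragraph above describes.

```python
# Error level analysis (ELA): a classic JPEG forensics heuristic.
# Regions edited after the original save often recompress differently,
# showing up as brighter areas in the difference image.
from io import BytesIO

from PIL import Image, ImageChops  # pip install Pillow


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return a difference image highlighting recompression artefacts."""
    original = Image.open(path).convert("RGB")

    # Re-save at a known JPEG quality, then compare with the original.
    buffer = BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    diff = ImageChops.difference(original, resaved)

    # Scale the (usually faint) differences so they are visible.
    extrema = diff.getextrema()  # per-band (min, max) tuples
    max_diff = max(band_max for _, band_max in extrema) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda px: int(px * scale))


if __name__ == "__main__":
    # "suspect.jpg" is a hypothetical input file for demonstration.
    error_level_analysis("suspect.jpg").save("suspect_ela.png")
```

In practice, cues like this feed into the broader forensic pipelines surveyed by Singh and Dhumane (2025) rather than serving as a verdict on their own.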
Deepfake scams are criminal offences. Individuals can take the following actions to reduce risk:

- Verify unexpected or urgent requests through an independent channel, such as calling back on a known number, before sharing credentials or authorising payments.
- Enable multi-factor authentication on financial, email, and work accounts (see the sketch after this list), so a cloned voice or face alone cannot authorise access.
- Limit the public exposure of personal media, such as photos and voice recordings, that could be used as training material for deepfakes.
- Stay alert to signs of manipulation, such as unnatural lip movement, odd audio artefacts, or pressure to act quickly.
- Report suspicious activity promptly to the relevant organisation and to law enforcement.
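As an illustration of the multi-factor authentication point above, the following is a minimal sketch of a time-based one-time password (TOTP, RFC 6238) generator in pure Python, the mechanism behind many authenticator apps. The base32 secret shown is a hypothetical example; real deployments should use a vetted library and per-user secrets provisioned over a secure channel.

```python
# Minimal TOTP (RFC 6238) sketch: the "something you have" factor
# behind most authenticator apps. Illustrative only.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Generate the current time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period

    # HOTP (RFC 4226): HMAC-SHA1 over the big-endian time counter.
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()

    # Dynamic truncation: 4 bytes at an offset given by the low nibble.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


if __name__ == "__main__":
    # Hypothetical base32 secret for demonstration only.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and depends on a secret the scammer does not hold, a convincing cloned voice or face alone is not enough to pass this check.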
By following these measures, individuals can significantly reduce their risk of falling victim to deepfake scams, while remembering that perpetrators face legal consequences for these offences.
Deepfake scams exploit both technological realism and human trust. While detection tools are improving, they are not perfect. The most effective defence is a combination of technical safeguards, cautious verification practices, reduced public exposure of personal media, and prompt reporting of suspicious activity (Singh & Dhumane, 2025; Zhang et al., 2025; Pedersen et al., 2025). Staying alert, using multi-factor authentication, and confirming unexpected requests through independent channels remain the most practical steps individuals can take to protect themselves.
Akhtar, Z. (2023). Deepfakes generation and detection: A short survey. Journal of Imaging, 9(1), Article 18. https://doi.org/10.3390/jimaging9010018
Lyu, S. (2024). DeepFake the menace: Mitigating the negative impacts of AI-generated content. Organizational Cybersecurity Journal: Practice, Process and People, 4(1), 1–18. https://doi.org/10.1108/OCJ-08-2022-0014
Pedersen, K. T., Pepke, L., Stærmose, T., Papaioannou, M., Choudhary, G., & Dragoni, N. (2025). Deepfake-driven social engineering: Threats, detection techniques, and defensive strategies in corporate environments. Journal of Cybersecurity and Privacy, 5(2), Article 18. https://doi.org/10.3390/jcp5020018
Singh, S., & Dhumane, A. (2025). Unmasking digital deceptions: An integrative review of deepfake detection, multimedia forensics, and cybersecurity challenges. MethodsX, 15, 103632. https://doi.org/10.1016/j.mex.2025.103632
Zhang, B., Cui, H., Nguyen, V., & Whitty, M. (2025). Audio deepfake detection: What has been achieved and what lies ahead. Sensors, 25(7), Article 1989. https://doi.org/10.3390/s25071989
Should you find any content in these articles in any way distressing, please seek support via New Zealand telehealth services.