
Virtual Wellness New Zealand
Digital Safety

Deepfake Scams

Deepfake scams use artificial intelligence to create highly realistic synthetic audio, video, images and other media in order to commit fraud, extortion, identity theft or reputational damage. These crimes exploit human trust and the plausibility of manipulated content to deceive victims. Because individuals can be targeted across multiple platforms and media, awareness and protective measures are essential (Akhtar, 2023; Pedersen et al., 2025).

How deepfake scams work across media

Visual deepfakes, such as face swaps, reenactments or attribute manipulation, create videos or images that make someone appear to say or do things they did not. These can be used to impersonate individuals for financial scams or to defame public figures (Akhtar, 2023; Singh & Dhumane, 2025). Audio deepfakes, including voice cloning and text-to-speech manipulation, allow scammers to mimic someone’s voice on phone calls. This enables “vishing” (voice phishing) to extract credentials, authorise payments, or manipulate employees (Zhang et al., 2025; Pedersen et al., 2025). Multimodal attacks combine audio and video, increasing believability and bypassing single-channel checks (Singh & Dhumane, 2025).

Text-based generative models can also contribute to scams by creating phishing messages or social media communications that match a target’s writing style. Combined with synthetic media, this can prime victims for later deception (Lyu, 2024; Pedersen et al., 2025). Human observers and many automated detection systems struggle with realistic fakes, especially when compressed, edited, or generated using novel methods not included in detection training data (Singh & Dhumane, 2025; Akhtar, 2023).
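
To make the forensic difficulty concrete, the sketch below illustrates error level analysis (ELA), a classic multimedia-forensics heuristic: a JPEG is recompressed at a known quality and the residual difference is examined, since locally edited regions often recompress differently from the rest of the image. This is a toy illustration using the Pillow library, not a reliable deepfake detector; the file name is hypothetical, and repeated recompression by social platforms is exactly what erodes such signals.

```python
# Toy error level analysis (ELA), a classic multimedia-forensics check.
# Assumes the Pillow library (pip install Pillow); "suspect_photo.jpg"
# is a hypothetical file name. ELA highlights regions whose compression
# residual differs from the rest of the image, which *may* indicate
# local editing -- but platform recompression weakens the signal, which
# is one reason automated detection struggles in the wild.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    # Re-save at a known JPEG quality, then diff against the original.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    # Stretch the faint residual so edited regions become visible.
    max_diff = max(high for _, high in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, int(px * 255.0 / max_diff)))

# Example: error_level_analysis("suspect_photo.jpg").show()
```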

Principal safety risks

  1. Financial fraud and account takeover. Impersonation of executives, family members or vendors can lead to unauthorised transfers or disclosure of credentials (Pedersen et al., 2025).
  2. Identity theft and biometric spoofing. Synthetic images or voices can be used to bypass security systems or create fake identities (Singh & Dhumane, 2025).
  3. Reputational harm and extortion. Non-consensual sexual content, fabricated statements, or false “evidence” can be used to blackmail or damage reputations (Akhtar, 2023; Lyu, 2024).
  4. Organisational risk. Deepfake-enabled social engineering may trick staff into revealing credentials, installing malware, or approving fraudulent transactions (Pedersen et al., 2025).
  5. Erosion of trust. High-quality deepfakes amplify disinformation, making it harder to distinguish real from fake content (Lyu, 2024; Singh & Dhumane, 2025).

Practical protections for individuals

Deepfake scams are criminal offences. Individuals can nevertheless take the following actions to reduce their risk:

  1. Be sceptical of unexpected requests. If a call, video or message asks for money, personal information or urgent action, verify through a separate channel. Contact the person or organisation directly using a known phone number or email rather than responding to the request itself (Pedersen et al., 2025).
  2. Use multi-factor authentication. Enable multi-factor authentication on email, banking and social media accounts. This adds a layer of security and makes it harder for attackers to gain access through impersonation (Singh & Dhumane, 2025).
  3. Verify media authenticity. Be cautious of videos or audio claiming to show a person doing or saying something unexpected. Ask for original files, corroborating evidence or other verification before taking any action; a simple metadata check, sketched after this list, can offer one weak signal (Akhtar, 2023).
  4. Limit publicly available audio and video. Reduce the amount of high-quality images or voice recordings you post online, as these can be used to train AI models for impersonation (Zhang et al., 2025).
  5. Report suspicious activity. If you suspect a deepfake scam, report it to the platform, local law enforcement and your financial institution. Preserve all evidence, including messages, videos and call logs (Pedersen et al., 2025).
  6. Protect personal information. Avoid sharing sensitive details publicly or with unverified contacts, as scammers may combine deepfakes with personal information to increase credibility (Singh & Dhumane, 2025).
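
As one small, non-conclusive check for point 3 above, file metadata can sometimes hint at a file's origin. The sketch below is a minimal illustration using the Pillow library; the file name is hypothetical, and absent metadata proves nothing by itself, since legitimate platforms routinely strip EXIF data.

```python
# Minimal sketch: inspect EXIF metadata as one weak authenticity signal.
# Assumes the Pillow library (pip install Pillow); "received_photo.jpg"
# is a hypothetical file name. Genuine camera files often carry maker,
# model and timestamp tags; AI-generated or re-encoded media often do
# not -- but platforms also strip EXIF, so treat absence as a prompt to
# verify through a known channel, never as proof of fakery.
from PIL import Image
from PIL.ExifTags import TAGS

def summarise_exif(path: str) -> dict:
    exif = Image.open(path).getexif()
    # Map numeric EXIF tag IDs to human-readable names.
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = summarise_exif("received_photo.jpg")
if not tags:
    print("No EXIF metadata found; verify through another channel.")
else:
    for name, value in tags.items():
        print(f"{name}: {value}")
```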

These measures can significantly reduce the risk of falling victim to a deepfake scam. It is also worth remembering that these offences are criminal acts with legal consequences for perpetrators.

Conclusion

Deepfake scams exploit both technological realism and human trust. While detection tools are improving, they are not perfect. The most effective defence is a combination of technical safeguards, cautious verification practices, reduced public exposure of personal media, and prompt reporting of suspicious activity (Singh & Dhumane, 2025; Zhang et al., 2025; Pedersen et al., 2025). Staying alert, using multi-factor authentication and confirming unexpected requests through independent channels remain the most practical steps individuals can take to protect themselves.

References

Akhtar, Z. (2023). Deepfakes generation and detection: A short survey. Journal of Imaging, 9(1), Article 18. https://doi.org/10.3390/jimaging9010018

Lyu, S. (2024). DeepFake the menace: Mitigating the negative impacts of AI-generated content. Organizational Cybersecurity Journal: Practice, Process and People, 4(1), 1–18. https://doi.org/10.1108/OCJ-08-2022-0014

Pedersen, K. T., Pepke, L., Stærmose, T., Papaioannou, M., Choudhary, G., & Dragoni, N. (2025). Deepfake-driven social engineering: Threats, detection techniques, and defensive strategies in corporate environments. Journal of Cybersecurity and Privacy, 5(2), Article 18. https://doi.org/10.3390/jcp5020018

Singh, S., & Dhumane, A. (2025). Unmasking digital deceptions: An integrative review of deepfake detection, multimedia forensics, and cybersecurity challenges. MethodsX, 15, Article 103632. https://doi.org/10.1016/j.mex.2025.103632

Zhang, B., Cui, H., Nguyen, V., & Whitty, M. (2025). Audio deepfake detection: What has been achieved and what lies ahead. Sensors, 25(7), Article 1989. https://doi.org/10.3390/s25071989


Do you require support?

Should you find any content in these articles in any way distressing, please seek support via New Zealand telehealth services.
More Risk Areas

Online Radicalisation
Cyberbullying
Catfishing