In July 2025, a chilling case emerged from Assam that rocked India’s digital world — the deepfake cybercrime involving Archita Phukan, a popular Assamese influencer whose face was stolen and turned into a tool of exploitation.
A fake Instagram account called ‘babydoll archi’ surfaced, featuring glamorous AI-generated content built around Archita’s superimposed face. The account amassed over 1.3 million followers with a mix of bold reels, trendy dance videos, and suggestive imagery, all manufactured through AI. None of the followers suspected the deception until Archita’s family discovered viral images falsely showing her with adult film star Kendra Lust.
The person behind the account? Pratim Bora, her ex-boyfriend and a mechanical engineer from Tinsukia. Using private images sourced from Archita’s social media, he weaponized deepfake tools to humiliate and monetize her likeness. His actions led to a police complaint in Dibrugarh and his arrest under the IT Act.
The Real-Life Consequences of a Digital Crime
Archita’s ordeal is not just a cautionary tale of technology gone wrong; it is a harsh reminder of the psychological trauma that digital impersonation can cause. Her public image was shattered, her personal life disrupted, and her mental health severely affected.
Legal expert Adv. Nrupen Vadakken explains that Sections 66C (identity theft), 66E (violation of privacy), and 67 (publishing obscene material) of the Information Technology Act offer protection, but often only after irreversible damage has already been done.
Deepfake Technology: Innovation or Invasion?
According to Kerala tech analyst Nandakumar, deepfakes go beyond morphing — they replicate expressions, voices, and even eye movements using AI models trained on real images. With enough data, anyone’s identity can be recreated digitally.
Apps that promise facial transformations or AR filters are often the gateway to collecting exactly this kind of facial data, yet most users remain unaware of what they are handing over.
How to Protect Yourself from Deepfake Exploitation
To safeguard your digital identity:
Avoid posting high-resolution, multi-angle selfies
Keep social media private
Use reverse image search tools such as PimEyes or Google Reverse Image Search to check where your photos appear online
Apply watermarks or subtle filters to your photos (a simple watermarking sketch follows this list)
Limit app access to your photo gallery
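For readers comfortable with a little scripting, here is a minimal watermarking sketch in Python using the Pillow library. The file names and the watermark text are placeholders, and the styling is only one of many reasonable choices.

```python
from PIL import Image, ImageDraw, ImageFont

def add_watermark(input_path, output_path, text="@my_handle"):
    """Overlay a semi-transparent text watermark near the bottom-right corner."""
    base = Image.open(input_path).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (255, 255, 255, 0))
    draw = ImageDraw.Draw(overlay)

    # Pillow's built-in bitmap font; ImageFont.truetype() gives larger, nicer text
    font = ImageFont.load_default()

    # Position the text with a small margin from the bottom-right corner
    text_width = draw.textlength(text, font=font)
    x = base.width - text_width - 10
    y = base.height - 25
    draw.text((x, y), text, font=font, fill=(255, 255, 255, 128))

    # Merge the translucent overlay onto the photo and save as JPEG
    watermarked = Image.alpha_composite(base, overlay)
    watermarked.convert("RGB").save(output_path, "JPEG")

# Example usage with placeholder file names
add_watermark("selfie.jpg", "selfie_watermarked.jpg")
```

A visible watermark will not stop a determined attacker, but it makes stolen copies easier to trace and helps establish that an image originated with you.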
Detection tools include:
Microsoft Video Authenticator
Deepware Scanner
Sensity AI
Legal Protections and Rapid Action in India
India is catching up fast. In addition to the IT Act, the new Bharatiya Nyaya Sanhita (BNS) includes provisions against digital defamation, cyberstalking, and harassment. Victims are encouraged to contact cybercrime cells immediately and to preserve digital evidence such as screenshots, URLs, and copies of the offending content.
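Preserving evidence also means being able to show later that nothing was altered. One common practice, sketched below in Python with placeholder folder and file names, is to record a SHA-256 hash and a timestamp for every screenshot or downloaded file at the time it is collected.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_evidence_hashes(folder="evidence"):
    """Record a SHA-256 hash and timestamp for each file saved as evidence."""
    records = []
    for path in sorted(Path(folder).iterdir()):
        if path.is_file() and path.name != "evidence_log.json":
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            records.append({
                "file": path.name,
                "sha256": digest,
                "recorded_at_utc": datetime.now(timezone.utc).isoformat(),
            })
    # Keep the log with the evidence and share a copy when filing the complaint
    Path(folder, "evidence_log.json").write_text(json.dumps(records, indent=2))
    return records

log_evidence_hashes()
```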
Archita Phukan’s story is a wake-up call: AI is not dangerous in itself, but how we use it determines whether it builds or destroys lives. Her courage in confronting this issue publicly should inspire others to take back control of their digital identity.
You Must Know:
What happened in Archita Phukan’s deepfake case?
Her ex-boyfriend created a fake Instagram account that used AI-generated explicit content built from her face, leading to widespread defamation and emotional trauma.
What laws cover deepfake crimes in India?
Sections 66C, 66E, and 67 of the IT Act, and BNS laws concerning defamation, identity theft, and stalking.
How can someone detect if their image is misused?
Use tools like PimEyes, Google Reverse Image Search, or TinEye to monitor where your images appear online (a short perceptual-hashing sketch after this section shows one way to automate such checks).
Can deepfake victims get justice in India?
Yes, many arrests have been made in recent cases. Prompt reporting to cybercrime cells is key.
What apps or tools can detect deepfakes?
Sensity AI, Microsoft Video Authenticator, and Deepware Scanner are leading tools in this space.
Are deepfakes always illegal?
Not always. They’re legal in satire or entertainment, but criminal when used for defamation, impersonation, or harassment.
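For readers who want to automate the monitoring mentioned in the FAQ above, the sketch below is one illustrative approach rather than a definitive method: perceptual hashing, here via the third-party Pillow and imagehash Python packages, can flag when an image found online is likely a copy of one of your photos, even after resizing or light edits. The file names and the distance threshold are placeholder assumptions.

```python
from PIL import Image
import imagehash  # pip install imagehash

def looks_like_my_photo(original_path, suspect_path, threshold=8):
    """Compare perceptual hashes; a small Hamming distance suggests the suspect
    image is derived from the original (resizing and mild edits usually survive)."""
    original = imagehash.phash(Image.open(original_path))
    suspect = imagehash.phash(Image.open(suspect_path))
    distance = original - suspect  # Hamming distance between the two hashes
    return distance, distance <= threshold

# Example usage with placeholder file names
distance, match = looks_like_my_photo("my_photo.jpg", "downloaded_image.jpg")
print(f"Hash distance: {distance}, possible match: {match}")
```

Heavily edited or fully face-swapped images may not match this way; for suspected deepfakes themselves, the dedicated detection tools listed earlier remain the better option.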