For Media, Social Networks and Messengers: MTS Web Services Unveils a New‑Generation Deepfake Detector
A new Russian deepfake‑detection system aims to reduce the spread of synthetic media across digital platforms and strengthen national information security

Reducing Disinformation Risks
MTS Web Services (MWS) has introduced a next‑generation deepfake detector designed for media platforms, social networks and messaging apps. The system automatically filters synthetic content before publication, helping prevent the circulation of fabricated videos involving public figures or emergencies.
Current detection accuracy for media generated by Veo 3 and Sora 2 ranges from 84% to 93.9%, depending on the analysis method, with developers targeting accuracy above 98%. The technology is already being tested on the MTS Link and MTS Defender platforms, as well as by a Russian government service and three major banks.

For Russia’s IT sector, this marks an important milestone: a domestic deepfake‑detection solution that supports national information‑security goals. The rollout reduces disinformation risks, helps safeguard online platforms and strengthens public trust in digital media. If the tool proves competitive, it could enter global markets and contribute to international anti‑deepfake standards.
Export Prospects and Broad Application
From an export standpoint, the solution shows strong potential. MWS is already working with banks in CIS countries and evaluating integrations with foreign messengers and state platforms. If the detector reaches 98%+ accuracy and maintains performance parity with international competitors, it could be in demand across the CIS, the Middle East and Africa, regions facing rising threats from AI‑generated content.
Deployment models may include SaaS licensing, on‑premise delivery or integration into media outlets, social platforms, government services and financial institutions.
Within Russia, demand for the system is expected to be widespread: media platforms, social networks, messengers, government services, online education and fintech. Its adoption aligns with ongoing digital‑transformation initiatives and tightening requirements for AI‑content verification. As such, the detector could become an essential element of Russia’s ‘digital economy’ infrastructure and national data‑security strategy.
Global Race Against Deepfakes
MTS AI first announced a deepfake‑detection service in January 2025. The solution identifies the major deepfake types, detects AI‑generated speech in audio with 99% accuracy across entire conversations, and flags prohibited content such as depictions of alcohol or narcotics.

In May 2025, MTS integrated VisionLabs’ DeepFake Detector into its MTS ID KYC identification service, achieving 99.3% accuracy in detecting altered videos and images. This enabled banks, microfinance organizations and insurers to more effectively counter digital fraud. The algorithms continuously retrain to identify fakes produced by emerging tools.
In August, HSE University and MTS partnered to advance deepfake detection and develop multimodal AI models for content generation, along with an expanded dataset to test biometric‑identification resilience.
By late October, Russia’s Federation Council began drafting legislation to ban malicious deepfakes. Key steps include formalizing a legal definition of ‘deepfake’ and distinguishing creative from harmful content. The effort highlights the intersection of technological development and regulatory frameworks.
Worldwide, synthetic‑content detection is accelerating. In 2023, roughly 500,000 deepfakes circulated online; by 2025, the number reached into the millions. AI now produces near‑indistinguishable images, audio and video — tools used for political manipulation, personal compromise or fabricated evidence. Solutions include ensemble detection systems, real‑time analysis and explainable AI approaches.
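The ensemble approach mentioned above can be sketched in a few lines: several specialized detectors each score a sample, and a weighted average of those scores drives the final decision. This is a minimal illustration only; the detector names, weights and threshold below are assumptions for the sketch, not details of the MWS system or any real product.

```python
from typing import Callable, List, Optional

# A detector maps raw media bytes to a probability that the sample is synthetic.
Detector = Callable[[bytes], float]

def ensemble_score(detectors: List[Detector], sample: bytes,
                   weights: Optional[List[float]] = None) -> float:
    """Weighted average of per-detector fake probabilities (0.0 to 1.0)."""
    if weights is None:
        weights = [1.0] * len(detectors)  # equal weighting by default
    total = sum(weights)
    return sum(w * d(sample) for w, d in zip(weights, detectors)) / total

# Toy stand-ins for real models (hypothetical names, fixed outputs for illustration):
visual_artifacts    = lambda s: 0.9   # e.g. frame-level artifact analysis
temporal_coherence  = lambda s: 0.7   # e.g. inter-frame consistency check
audio_lip_sync      = lambda s: 0.8   # e.g. audio/video synchronization check

score = ensemble_score([visual_artifacts, temporal_coherence, audio_lip_sync],
                       b"media-bytes")
is_fake = score >= 0.75  # decision threshold is an assumption, tuned in practice
```

Averaging independent detectors tends to be more robust than any single model, since a new generation tool must evade every check at once; real systems would retrain the component models continuously, as the article notes.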

Meeting Global Standards
The launch of the MWS detector represents meaningful progress for Russia’s digital‑security landscape. Over the next one to two years, adoption is expected across media companies, social networks, government platforms and financial‑technology services. If accuracy targets (98%+) are achieved by 2026–2027, commercial expansion into CIS markets and emerging regions is likely.
By 2030, the system could become part of comprehensive anti‑fake platforms that combine detection, moderation and AI‑content monitoring. In the long term, global markets will increasingly require compliance with international standards such as AI‑content labeling, cross‑platform interoperability and certification — further driving demand for reliable detection technologies.