Russia Moves to Bar AI From Manipulating Users
Russia is drafting legislation that would prohibit artificial intelligence systems from manipulating human behavior, a move that, if adopted, could shape regulatory approaches beyond the country.

Russia’s Ministry of Digital Development, Communications and Mass Media (Mintsifry) is preparing a draft law on artificial intelligence that includes a ban on using AI systems to influence or steer human behavior. The same package outlines additional baseline safeguards, including a citizen’s right to opt out of AI-based services and a requirement to disclose when AI is used in products and services. According to media reports, the draft is currently undergoing interagency review.
Today, AI systems can influence individual choices across a wide range of contexts, from recommendations on online marketplaces to voice assistants embedded in public service platforms. Government policy aims to manage these risks without slowing technological progress, an approach set out in the national AI strategy and reflected in the mandate of the presidential commission on AI development.

Toward a “Trusted AI” Model
The proposed law is expected to serve as a foundation for step-by-step regulation. Initial provisions would establish general prohibitions and obligations, followed by sector-specific rules and guidance. Services where AI could affect human rights, financial security or safety are likely to receive the closest scrutiny. Businesses would need to label AI interactions, offer users the option to switch to a human operator and assess AI-driven scenarios for signs of undue influence.
Defining manipulation remains the most complex issue. Policymakers will need to draw clear lines between acceptable recommendations and undue pressure, and determine how to link a service's outputs to changes in user behavior. Compliance is also becoming a market factor: buyers already evaluate not only functionality but also regulatory maturity. If Russia establishes a clear "trusted AI" framework, it could strengthen the global competitiveness of its solutions. At the same time, access to export markets will depend on alignment with foreign regulatory regimes.

A Gradual Path to Regulation
Russia adopted a code of ethics for AI in 2021, setting out core principles for responsible use, including prioritizing human rights and interests. In the years since, industry participants and experts have increasingly called for comprehensive AI regulation.
In December 2025, the Ministry of Justice proposed mechanisms to respond to harmful AI-generated content. In March 2026, a State Duma committee approved a ban on political campaigning using AI-generated human images. The proposed ban on manipulation extends this regulatory trajectory.
Similar efforts are underway globally. The European Union has already adopted and begun enforcing the AI Act, which prohibits certain “unacceptable risk” systems, including those that rely on manipulative techniques or exert undue influence on human behavior.

New Market Opportunities for AI Governance
If adopted, the provision would become a core principle of Russia’s AI regulatory framework. The government is expected to continue expanding AI adoption while introducing mandatory safeguards in areas involving human rights, consumer choice, politics, biometrics, safety and critical infrastructure. As a result, residents would be better protected from hidden digital influence, potentially increasing trust in AI systems and easing their deployment across sectors.
In the next few years, demand is likely to grow for AI system audits, legal reviews of algorithms, AI content labeling tools and solutions for AI governance and compliance. For Russian developers, the shift creates both constraints and opportunities: those that embed transparency and human oversight into their products early are likely to gain a competitive edge.