AI in Russia to Become More Predictable
Russia is moving from experiments to clear rules in artificial intelligence. A forthcoming law could make the domestic market more transparent and help Russian AI technologies find their way to global markets.

AI as a Separate Category
Russian lawmakers are preparing a bill to regulate artificial intelligence. According to expert commentary from Olga Chernokoz of the Chamber of Commerce and Industry, the initiative effectively recognizes AI as an independent legal category. Expected requirements include “human-in-the-loop,” algorithmic transparency, and stronger protection of personal data.
A parliamentary working group has already drafted a definition of AI, while the Ministry of Digital Development has presented a regulatory concept stretching to 2030. This marks a shift from theory and pilot projects to a formal legal framework. A law of this type would create predictable rules and stimulate demand for data audits, MLOps oversight, and AI compliance tools.

For the public, the changes mean AI systems will become more transparent and less prone to discrimination. For the state, the law offers a chance to build an approach comparable in scale to the EU’s AI Act (which entered into force in August 2024, with phased application from 2025 to 2027). The U.S. also moved ahead with a presidential executive order in October 2023. In this context, Russia’s alignment with external markets takes on added importance.
Export Potential and Certification Rules
Clear rules could make Russian AI products more attractive in regulated export markets, particularly the European Union. Compliance will require transparency and risk assessment standards. Inside Russia, new rules are expected to lead to certification and auditing of AI in sensitive industries such as healthcare, education, and manufacturing. Regulators are also likely to mandate human oversight in critical applications.
For businesses, this means checking internal processes against the updated National AI Strategy, preparing risk management documentation, ensuring robust data governance, and embedding explainability into models. Companies may need to budget separately for internal compliance and risk control systems.

A Global Trend
Russia’s forthcoming legislation reflects broader global momentum on AI governance. In 2024, the country updated its National AI Strategy, refining principles of safety and ethics while laying the groundwork for regulation.
Meanwhile, the EU’s AI Act, whose provisions apply in stages through August 2026 (and, for certain high-risk categories, 2027), is set to become the global benchmark. The 2023 U.S. presidential executive order on AI reaffirmed standards for safety, transparency, and data protection, building on the 2022 “Blueprint for an AI Bill of Rights.” China also issued interim rules for generative AI in 2023, including mandatory labeling of synthetic content such as deepfakes, security requirements, and provider liability.

Opening a New Market Niche
This evolution shows that Russia is transitioning from strategy to concrete legislative drafts. Predictable requirements for data and models will drive demand for compliance infrastructure, creating opportunities for domestic integrators and independent software vendors.
Still, regulatory risks remain. Overly strict rules, if not accompanied by detailed implementation guidelines, could slow down pilot projects. Synchronization with industry regulators and international practices, particularly those of the EU and U.S., is critical to preserving export potential.