17:52, 10 February 2026

Russia to Require Security Certification for AI Used in Critical Infrastructure

Russian authorities are developing a mechanism to verify the safety of artificial intelligence systems deployed in the most sensitive sectors of the economy.

The Ministry of Digital Development has drafted legislation that would introduce mandatory certification for high-risk and critical-risk artificial intelligence systems used at information infrastructure facilities. After testing at a specially designated site, such systems would need to receive confirmation of compliance with security requirements set by FSTEK (Federal Service for Technical and Export Control) and the FSB (Federal Security Service). Only then could they be deployed by government agencies and in the energy, transport, financial, and telecommunications sectors.

The draft law also proposes a ban on the use of artificial intelligence in critical infrastructure if the rights to that technology are owned by foreign entities.

Artificial Intelligence Center

Risk classification and the maintenance of a registry of approved systems would be handled by the National Center for Artificial Intelligence in Public Administration (Natsionalnyi tsentr iskusstvennogo intellekta v sfere gosudarstvennogo upravleniia) under the Russian government. A separate document would later define the certification requirements in detail, including the conditions for adding AI systems to the registry of Russian or Eurasian software.

Light-Touch Regulation

The Ministry’s proposal fits into Russia’s broader approach to regulating artificial intelligence. The framework avoids hard bans or pressure on developers. Instead, it focuses on clear, advance rules designed to enable AI deployment across the economy and public administration while managing risks. Discussions on AI regulation are taking place, for example, within the State Council’s commission on digital transformation, which brings together regional authorities, relevant federal agencies, and business representatives.

Softer policy tools are already in place alongside this effort. In 2021, the AI Alliance, with government backing, adopted a voluntary Code of Ethics for Artificial Intelligence. The document outlines baseline principles for developers and users, including respect for human rights, transparency and explainability of algorithms, the prevention of discrimination, and personal accountability for how the technology is used.

As a result, Russia’s AI regulation is taking shape as an open and predictable system. Rather than tightening controls, policymakers have emphasized trust, security, and the creation of conditions that allow domestic AI developers to grow and apply their products across a wide range of industries.
