00:26, 28 January 2026

Yury Chekhovich: “Artificial Intelligence in Russia Is Being Brought Out of the ‘Grey Zone’”

A council on ethics in artificial intelligence has been established in Moscow.

Photo: Provided by the speaker

A consultative council on ethics in artificial intelligence has been established in Moscow, with participation from representatives of traditional religious denominations. The agreement was signed during the International Christmas Educational Readings. Representatives of the faith communities stressed the need to seek balanced solutions that combine technological development with adherence to ethical standards.

According to Yury Chekhovich, PhD in physics and mathematics, an expert in academic ethics, machine learning, and AI, head of Laboratory No. 42 at the Trapeznikov Institute of Control Sciences of the Russian Academy of Sciences, and founder of the academic integrity service domate, the initiative is an important signal that discussions around AI in Russia are moving beyond a purely technological or bureaucratic agenda.

“This is particularly relevant for education, where AI has already become an everyday working tool for both students and faculty. At the same time, it is important to state clearly from the outset: using AI is not in itself reprehensible. Modern algorithms can be a useful tool for authors, helping with information search, text structuring, editing, and reducing routine workload. Studies conducted in 2025 show that more than 90% of students use AI on a regular basis,” he said.

New Rules for Everyone

However, Chekhovich argues that this practice effectively remains in a “grey zone.” There are no formal, widely accepted rules governing the use of AI in education, and therefore no clear criteria defining what is permissible, who is responsible, or where ethical boundaries lie.

“In effect, every participant in the process — students, faculty, and universities — is forced to act at their own risk, relying either on fragmented internal regulations or on intuition alone. The situation is further complicated by technological factors. According to statements from major plagiarism-detection providers, they identify no more than 25% of cases involving AI-generated text in student work. It is obvious that the race between generative models and detection tools cannot be a sustainable strategy: algorithms are still improving faster than systems designed to detect them,” Chekhovich noted.

He adds that detection services, including domate, are continually updating their algorithms and are now able to identify AI use in a significantly larger share of cases. However, the core problem remains the absence of clear ground rules. What is needed, he argues, is not simply tighter control, but the establishment of clear boundaries for AI use, along with defined standards of ethical and acceptable application.

Ethical Boundaries

“Such tools already exist. For example, the DoTrace service is built around these principles: it helps maintain ethical boundaries by generating reports that document the author’s contribution and record interactions with artificial intelligence algorithms. That is why today it is more important not to strengthen prohibitive logic, but to develop clear and transparent frameworks for the ethical use of AI in education,” Chekhovich said.

According to him, conscious regulation shifts the focus from a mindset of “prohibit and catch” toward teaching responsible and thoughtful use of algorithms. Creating a unified digital environment with clearly defined principles of AI ethics could be a real step in this direction, and meaningful dialogue between technological, academic, and value-based communities is genuinely necessary.
