A First Step: AI Will Be Used to Evaluate Russia’s Nationwide Assessment Tests
Russia is beginning to test AI-powered evaluation tools for nationwide student assessments, aiming to reduce teacher workload while preserving fairness and transparency in the exam process.

Socratic Judges?
Rosobrnadzor, Russia’s federal education quality authority, has long maintained that teachers—not machines—must remain the moral compass of the learning process. As its head, Anzor Musaev, said in 2024, “A machine will never master the human, emotional part. In today’s world, with new technologies in schools and universities, the teacher’s role is to help students navigate the flow of information and distinguish right from wrong.”
While this stance has not changed, the agency is launching an experiment: testing AI for automated evaluation of the All-Russian Verification Works (VPR). The goal is pragmatic—free educators from routine paperwork. The only policy update so far is scheduling: schools may now conduct VPR no earlier than April 1, pushing assessments closer to the end of the academic year.

As for subject selection, the agency insists that the current random assignment of two subjects remains optimal, preventing a ‘race for results’ and rote preparation.

Searching for Anomalies
AI tools will not create exam questions for VPR, OGE, or EGE. Their current function is narrower: technical support for maintaining fairness.
Algorithms are being trained to compare handwriting samples and detect anomalies such as suspicious behavior during exams. The system does not make final decisions—humans always review flagged cases.
According to Musaev, handwriting mismatches are rare, but cases of impersonation—when someone arrives with another student’s passport—do occur. Here, AI acts not as a judge but as a guardian of exam integrity.
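The article does not specify how this comparison works under the hood, but the human-in-the-loop pattern it describes can be sketched in a few lines. The example below assumes that a handwriting model (not named in the piece) has already converted each sample into a numeric feature vector; the script only measures similarity between vectors and queues low-similarity cases for a human examiner. The threshold, vector size, and function names are illustrative assumptions, not details of Rosobrnadzor's system.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.75  # hypothetical cutoff; a real system would calibrate this


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two handwriting feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def flag_for_review(reference: np.ndarray, exam_sample: np.ndarray) -> dict:
    """Compare feature vectors and flag low-similarity cases.

    The function never issues a verdict: it only attaches a flag so that
    a human examiner reviews the case, mirroring the article's description.
    """
    score = cosine_similarity(reference, exam_sample)
    return {
        "similarity": round(score, 3),
        "needs_human_review": score < SIMILARITY_THRESHOLD,
    }


if __name__ == "__main__":
    # Dummy vectors standing in for embeddings produced by some
    # handwriting-recognition model (not specified in the article).
    rng = np.random.default_rng(0)
    reference = rng.normal(size=128)
    same_writer = reference + rng.normal(scale=0.1, size=128)  # small natural drift
    other_writer = rng.normal(size=128)                        # unrelated sample

    print(flag_for_review(reference, same_writer))   # high similarity, not flagged
    print(flag_for_review(reference, other_writer))  # low similarity, flagged
```

The design point matches the article: the code attaches a review flag rather than making the final decision.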

Preparing the Ground
The road to AI-assisted grading has been long. Digital assistants appeared first: in 2023, Russian schools began testing a tool called OKO VPR, which analyzed student data and suggested how to improve results without resorting to excessive drilling.
By 2025, Rosobrnadzor automated the process of entering VPR results into federal systems—a clear sign that any repetitive task should be handled by technology. During the 2025 VPR cycle, more than 30,000 schools used this software. Meanwhile, global research demonstrated new possibilities: the CHECK-MAT project showed that AI models can read and grade handwritten solutions to complex math problems, and AINL-Eval 2025 focused on teaching algorithms to recognize scientific texts produced by other AI systems.
The large-scale introduction of VPR in 2016 laid the foundation for this progress by generating a massive data pool for analysis and automation.

Career Growth for AI
If the first phase of testing succeeds, AI could evolve from a monitor into an analyst. By examining aggregated results, it may detect trends invisible to the human eye—identifying schools or topics where students repeatedly struggle. These ‘digital risk zones’ could help educators tailor interventions more precisely. This analytical potential may eventually bring AI to Russia’s major exams: OGE and EGE.
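The article names no concrete method for spotting such zones, but one plausible reading is a simple outlier search over aggregated scores. The sketch below, with invented column names, sample values, and an arbitrary z-score cutoff, flags school-and-topic combinations that fall well below the overall average; it illustrates the idea rather than describing any actual Rosobrnadzor tool.

```python
import pandas as pd

# Toy aggregated results; in practice these would come from the federal VPR database.
# Schools, topics, and scores are invented for illustration.
results = pd.DataFrame({
    "school":    ["A", "A", "B", "B", "C", "C"],
    "topic":     ["fractions", "geometry", "fractions", "geometry", "fractions", "geometry"],
    "avg_score": [0.82, 0.79, 0.45, 0.77, 0.80, 0.48],
})


def find_risk_zones(df: pd.DataFrame, z_cutoff: float = -1.0) -> pd.DataFrame:
    """Flag school/topic cells whose average score sits well below the overall mean.

    A cell counts as a 'risk zone' if its z-score is below z_cutoff. The cutoff
    is an arbitrary illustrative choice, not a documented rule.
    """
    mean, std = df["avg_score"].mean(), df["avg_score"].std()
    out = df.copy()
    out["z_score"] = (out["avg_score"] - mean) / std
    return out[out["z_score"] < z_cutoff].sort_values("z_score")


if __name__ == "__main__":
    # Prints the two weakest cells (school B / fractions, school C / geometry),
    # which an analyst could then investigate and address with targeted support.
    print(find_risk_zones(results))
```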
Long-term, Russia could develop its own AI-driven assessment systems attractive to international partners. The main constraints, however, are ethical rather than technological: algorithms must be transparent and fair, and student data must be protected. Rosobrnadzor emphasizes that in any disputed case, the final decision must remain with the teacher.