Russian Scientists Develop Technology to Make AI Answers More Reliable
Researchers at Novosibirsk State University have unveiled a new approach that significantly improves the accuracy of large language models, reducing the risk of AI “hallucinations” and bringing more trustworthy artificial intelligence closer to real-world use.

Hope for Trustworthy AI
Large language models (LLMs) are increasingly used in education, healthcare, law and business. Yet their core weakness remains the same: a tendency to “hallucinate.” In the AI community, this term describes situations where models confidently produce incorrect or entirely fabricated information. In critical fields, this behavior undermines trust and limits adoption. Russian researchers now say they have developed a solution that could fundamentally change this equation.
A research team from Novosibirsk State University, working with colleagues from Lomonosov Moscow State University, Immanuel Kant Baltic Federal University, MISIS, Far Eastern Federal University and ITMO University, has created a software library called RAGU (Retrieval-Augmented Graph Utility). The tool boosts the accuracy and reliability of LLM outputs by reducing hallucinations through the integration of knowledge graphs: structured representations of the relationships between real-world facts and concepts.
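The article does not publish RAGU's code or API, but the general technique it describes, grounding a model's answer in facts retrieved from a knowledge graph, can be illustrated with a minimal Python sketch. Everything below (the toy triple store, the function names, the prompt wording) is a hypothetical assumption for illustration, not RAGU's actual implementation:

```python
# Minimal sketch of knowledge-graph-grounded generation
# (a hypothetical illustration of the technique, not RAGU's API).

# A toy knowledge graph: (subject, relation, object) triples.
KNOWLEDGE_GRAPH = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("warfarin", "is_a", "anticoagulant"),
]

def retrieve_facts(question: str, graph: list[tuple[str, str, str]]) -> list[str]:
    """Return triples whose subject or object is mentioned in the question."""
    q = question.lower()
    return [
        f"{subj} {rel.replace('_', ' ')} {obj}"
        for subj, rel, obj in graph
        if subj in q or obj in q
    ]

def build_grounded_prompt(question: str) -> str:
    """Prepend retrieved facts so the model answers from explicit evidence
    rather than from its parametric memory alone."""
    facts = retrieve_facts(question, KNOWLEDGE_GRAPH)
    context = "\n".join(f"- {fact}" for fact in facts) or "- (no facts found)"
    return (
        "Answer using ONLY the facts below. If they are insufficient, say so.\n"
        f"Facts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    print(build_grounded_prompt("Can I take aspirin with warfarin?"))
```

In a full system of the kind described here, the retrieval step would query a large, automatically constructed graph rather than a hand-written list of triples, but the grounding principle is the same.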
How RAGU Works: Smarter, Not Bigger
RAGU’s key innovation lies in combining effectiveness with efficiency. Instead of relying on massive language models with tens of billions of parameters, the team trained a relatively compact model of about 600 million parameters to build knowledge graphs in multiple steps. This design allows the system to capture context and logical relationships without requiring massive computing resources. By comparison, conventional approaches often require models up to 50 times larger to achieve similar results.
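The multi-step idea can be sketched as a pipeline in which one compact model is called several times with narrow subtasks (first find the entities, then name the relation between each pair) instead of asking one huge model to do everything at once. The sketch below is a hypothetical illustration: the `compact_model` function is a stub standing in for a call to a small (~600M-parameter) LLM, and none of the names come from RAGU itself:

```python
# Hypothetical sketch of multi-step knowledge-graph construction.
# `compact_model` stands in for a compact LLM; it is stubbed with canned
# answers here so the pipeline structure is runnable as-is.

from itertools import combinations

def compact_model(prompt: str) -> str:
    """Placeholder for a compact-LLM call (an assumption, not a real API)."""
    canned = {
        "entities": "Novosibirsk State University; RAGU",
        "relation": "developed",
    }
    return canned["entities"] if "entities" in prompt else canned["relation"]

def extract_entities(text: str) -> list[str]:
    # Step 1: a narrow subtask -- list the entities mentioned in the text.
    answer = compact_model(f"List the entities in: {text}")
    return [e.strip() for e in answer.split(";")]

def extract_relation(text: str, a: str, b: str) -> str:
    # Step 2: another narrow subtask -- name the relation for one pair.
    return compact_model(f"In '{text}', what is the relation between {a} and {b}?")

def build_graph(text: str) -> list[tuple[str, str, str]]:
    # Step 3: assemble (subject, relation, object) triples from the pieces.
    entities = extract_entities(text)
    return [(a, extract_relation(text, a, b), b)
            for a, b in combinations(entities, 2)]

if __name__ == "__main__":
    print(build_graph("Novosibirsk State University developed RAGU."))
```

The point of the decomposition is that each prompt asks for one narrow judgment, the kind of task a small model can handle reliably, which is consistent with the article's claim that a staged design lets a compact model match the results of far larger ones.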

This architecture opens the door to what researchers describe as the democratization of reliable AI. Organizations with limited computing power can deploy accurate, trustworthy models without relying on hyperscale cloud providers or expensive infrastructure.
Why It Matters Beyond Academia
The impact of RAGU extends far beyond research labs. More reliable AI systems directly affect everyday life. Students could receive accurate explanations from educational chatbots, doctors could rely on well-grounded diagnostic suggestions, and lawyers could be guided to the correct regulatory references. As AI becomes more deeply embedded in public-sector and commercial services, the value of dependable outputs grows sharply.

Globally, the AI community has been racing to address hallucinations. Since 2022, scientific publications on the topic have surged. Companies such as Microsoft and OpenAI have issued guidance, while researchers worldwide experiment with verification chains and structured learning methods. The Russian team’s work fits squarely into this global effort, offering an original and efficient approach.
From Research to Global Markets
RAGU has strong export potential. Hallucinations are a challenge for AI developers in the United States, China and Europe alike. Within the next one to two years, the technology could be integrated into Russian AI platforms, while international publications and conference presentations may pave the way for collaboration with global players such as Hugging Face or Google.
Equally important, the project highlights the growing capabilities of Russia’s AI research ecosystem. Collaboration among leading universities strengthens scientific and technological self-sufficiency and lays the groundwork for commercializing domestic innovations.

RAGU is not just another algorithm. It represents a concrete step toward responsible AI that serves people without distortion or error. As neural networks become ever more embedded in daily life, such technologies move from being useful to being essential. That this solution is emerging from Russian laboratories is, for its creators, a point of scientific and technological pride.