Science and new technologies
15:41, 19 July 2025

Russian Scientists Advance AI Safety with Confidence-Aware Neural Networks

Artificial intelligence is everywhere — from healthcare diagnostics to industrial control systems. Now, Russian researchers are pioneering a new method to enhance AI reliability.

Why Confidence Matters

A team of scientists from Skoltech's Artificial Intelligence Center and the Kharkevich Institute for Information Transmission Problems of the Russian Academy of Sciences (IITP RAS) has developed a method called Confidence Aware Training Data. This approach enables neural networks to evaluate the reliability of their own predictions. Validated on real-world datasets, including medical blood typing tasks, the method promises significant improvements in healthcare AI applications.

A Leap Toward Safe AI

The challenge with most neural networks is that they are often overconfident or, conversely, needlessly uncertain in their predictions. This inconsistency is particularly dangerous in critical fields like healthcare and industrial control. The new method not only makes predictions but also assigns each one a confidence score, paving the way for more responsible AI use. Scientifically, it marks progress toward ‘safe AI’ capable of mathematical self-correction and adaptive probability estimation.
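The article does not describe the method's internals, but the core idea it describes, a prediction paired with a confidence score that can flag risky outputs, can be sketched with ordinary softmax confidence. The function names and the 0.9 threshold below are illustrative assumptions, not details from the Skoltech/IITP work:

```python
import math

def softmax(logits):
    """Convert raw model outputs (logits) into class probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_confidence(logits, threshold=0.9):
    """Return (predicted_class, confidence, flagged), where `flagged`
    marks predictions whose confidence falls below `threshold`."""
    probs = softmax(logits)
    confidence = max(probs)
    prediction = probs.index(confidence)
    return prediction, confidence, confidence < threshold

# A high-margin prediction passes; a near-tie is flagged for review.
print(predict_with_confidence([4.0, 0.5, 0.2]))
print(predict_with_confidence([1.1, 1.0, 0.9]))
```

In a deployment, flagged predictions would be routed to a human reviewer (say, a lab technician in the blood-typing setting) instead of being acted on automatically.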

“We’ve taught neural networks not just to make decisions, but to flag when those decisions are risk-prone. This is a critical enhancement for fields like medicine, where errors carry high stakes,” the researchers note.

Strategic and Societal Impact

For Russia’s IT sector, the development strengthens the nation’s standing in responsible AI. With growing global concern over ethical and safe AI practices, innovations like this may give Russia a competitive edge. Societally, the method offers tangible benefits: fewer misdiagnoses in healthcare, greater automation safety in industry, and stronger public trust in AI-driven systems.

Potential for Global Deployment

The new system could soon see implementation in Russian medical technology firms and industrial platforms. Integration into domestic tools such as Yandex Practicum or national medical software registries would highlight Russia’s capacity to build not only innovative but also safe AI technologies. International markets with strict safety standards—such as Europe and Asia—are likely prospects for export.

Historical Context and Technological Evolution

The concept of confidence estimation in AI has matured rapidly over the past five years. In 2021–2022, calibration layers emerged. By 2023, ensemble models like CAMEL (featured at WACV 2025) were being used in retinal image analysis. In 2024, deep ensembles gained traction. And by 2025, models that adapt in real time based on confidence, like OT VP, had started surfacing. The Russian-developed method fits naturally into this trend and could represent its next stage.
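Of the techniques named above, deep ensembles are the simplest to illustrate: several independently trained models vote, and their disagreement serves as an uncertainty signal. The sketch below is a generic illustration of that idea, not code from any of the systems cited:

```python
import statistics

def ensemble_confidence(member_probs):
    """Given per-member class-probability lists from an ensemble,
    return (mean_probs, per_class_spread). A large spread means the
    members disagree, a common proxy for predictive uncertainty."""
    n_classes = len(member_probs[0])
    mean_probs = [statistics.mean(m[c] for m in member_probs)
                  for c in range(n_classes)]
    spread = [statistics.pstdev([m[c] for m in member_probs])
              for c in range(n_classes)]
    return mean_probs, spread

# Three members that agree vs. three that disagree on class 0.
print(ensemble_confidence([[0.90, 0.10], [0.88, 0.12], [0.92, 0.08]]))
print(ensemble_confidence([[0.90, 0.10], [0.20, 0.80], [0.55, 0.45]]))
```

When the spread is high, the averaged prediction should be treated as low-confidence, the same "flag it, don't act on it" behavior the article attributes to the new method.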

What’s Next?

Looking forward, research is expected to focus on soft labeling, real-time model adaptation, and hybrid confidence systems. Industrial applications in automotive, manufacturing, and health safety are imminent. Regulatory bodies may soon require AI systems to report confidence levels. Meanwhile, Russian startups are likely to package confidence modules into commercial AI offerings.
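"Soft labeling" here usually means training on smoothed probability targets rather than hard one-hot labels, which discourages overconfidence. A minimal sketch of standard label smoothing, where the `epsilon` value is an illustrative choice rather than anything from the article:

```python
def smooth_labels(hard_label, n_classes, epsilon=0.1):
    """Spread a small amount of probability mass (epsilon) uniformly
    across all classes; the target class keeps the rest. Training on
    these soft targets penalizes overconfident predictions."""
    off_target = epsilon / n_classes
    return [1.0 - epsilon + off_target if c == hard_label else off_target
            for c in range(n_classes)]

# Hard label 2 of 4 classes becomes a soft target summing to 1.
print(smooth_labels(2, 4))
```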
