Science and new technologies
14:03, 18 November 2025

A New Frontier in the Fight Against Bots: How Russian Scientists Learned to Identify AI by Its Answers

Russian researchers at Samara University have developed a method to detect internet bots that impersonate humans and bypass traditional verification systems.

What Makes AI ‘Non-Human’?

In an era when every third social media comment may be written not by a person but by an algorithm, and news posts in messaging channels are generated by large language model (LLM) bots, the question "who is speaking?" has become central to the trustworthiness and safety of the information space. Scientists at Samara University named after Korolev have now proposed a novel way to distinguish AI from humans: not by response speed, writing style, or grammar, but by the "age" of the information used in its answers.

Imagine asking an AI assistant a simple question: "What is the population of St. Petersburg as of 1 January 2025?" A human who does not know the answer might say, "I'm not sure, but I think around 5.5 million," or "Let me check and get back to you." An AI, however, often replies with confident but dated citations: "According to Rosstat data published in 2023…" or "See also the city administration statistics page…".

This is not an error; it is a feature. Large language models are trained on data frozen at a cutoff date, say in 2023, and they do not update in real time. When they face a question beyond that boundary, they do not "reason": they reproduce fragments of their training corpora, often with source-like citation patterns, as if writing an academic paper rather than chatting.

The Samara researchers noticed that this is a distinctive fingerprint of AI: not its behavior, not its grammar, but the structure of its answers to questions about fresh events.
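The paper itself does not publish code, but the fingerprint can be illustrated with a toy heuristic. The Python sketch below is an assumption-laden illustration, not the Samara algorithm: it flags an answer as bot-like when every year it cites predates the period the question asks about and no human-style hedging appears. The marker list and score weights are invented for the example.

```python
import re

# Toy heuristic illustrating the "information age" fingerprint described
# above. This is NOT the Samara algorithm; markers and weights are invented.

HEDGE_MARKERS = ("i'm not sure", "i think", "let me check", "around", "roughly")

def years_cited(text: str) -> list[int]:
    """Extract four-digit years that appear in the answer."""
    return [int(y) for y in re.findall(r"\b(19\d{2}|20\d{2})\b", text)]

def staleness_score(question_year: int, answer: str) -> float:
    """Score in [0, 1]; higher means more bot-like.

    +0.5 if every cited year predates the period the question asks about,
    +0.5 if the answer contains no human-style uncertainty markers.
    """
    lowered = answer.lower()
    cited = years_cited(answer)
    score = 0.0
    if cited and max(cited) < question_year:
        score += 0.5  # all references are older than the asked-about period
    if not any(marker in lowered for marker in HEDGE_MARKERS):
        score += 0.5  # confident tone, no hedging
    return score

# The St. Petersburg example from the article:
bot_reply = "According to Rosstat data published in 2023, about 5.6 million."
human_reply = "I'm not sure, but I think around 5.5 million."
print(staleness_score(2025, bot_reply))    # 1.0 -> bot-like
print(staleness_score(2025, human_reply))  # 0.0 -> human-like
```

On the article's own example, the dated-citation reply scores as bot-like while the hedged reply scores as human-like; a real system would, of course, need far richer signals than two string checks.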

“In our research, we examined the limitations of LLMs caused by the aging of the information on which these models were once trained. Traditional LLMs lack post-training update mechanisms for most topics, and over time the knowledge embedded in them becomes outdated, making their chatbot responses inaccurate and irrelevant.”

Why This Matters — and Why No One Noticed Earlier

Until now, bot detection relied on activity patterns: frequency of posts, timing, text repetition. Methods developed in the U.S. and Europe, such as BotArtist (2023), used machine learning to identify automated accounts, but they did not differentiate between human-controlled bots and AI agents—now distinct threats.

A human may be biased or emotional, but they know what is happening now. AI may be polite and coherent, but it is locked in the past. This temporal “staleness” is its weakness, and the Samara method weaponizes it.

From Lab to Platforms: How It Will Be Implemented

The method is still experimental, published in the Russian journal Artificial Intelligence and Decision Making, but it can already be adapted for real-world systems. Imagine a platform where users discuss elections or public policy. The moderation system quietly inserts a "control question" in the background, such as "How many new jobs were created in Russia in October 2024?" A reply built on outdated references exposes a bot; human-like uncertainty signals a real user.
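How might such a hook look inside a moderation pipeline? The paper does not prescribe any specific integration, so the following Python sketch is purely illustrative: the control question, the routing labels, and every function name are hypothetical.

```python
import re

# Illustrative moderation hook; the published method does not prescribe
# any specific integration. Question, check, and labels are invented.

CONTROL_QUESTION = "How many new jobs were created in Russia in October 2024?"
CONTROL_YEAR = 2024  # the period the control question refers to

def cites_only_stale_years(reply: str, asked_year: int = CONTROL_YEAR) -> bool:
    """True if every year the reply cites predates the asked-about period."""
    years = [int(y) for y in re.findall(r"\b(19\d{2}|20\d{2})\b", reply)]
    return bool(years) and max(years) < asked_year

def screen_reply(reply: str) -> str:
    """Route a reply to the background control question."""
    if cites_only_stale_years(reply):
        return "flag_for_review"  # bot-like: built on outdated references
    return "pass"                 # human-like or inconclusive

# Canned replies standing in for real accounts:
print(screen_reply("According to a 2023 ministry report, about 120,000."))  # flag_for_review
print(screen_reply("No idea, honestly. Maybe check the Rosstat site?"))     # pass
```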

This approach could be integrated into news-site comment sections, educational-platform forums, social-network polls, and support chats of government services. Within one to two years, Russian IT companies and public agencies could gain a powerful countermeasure against disinformation, without violating user privacy.

Global Potential and Challenges

International approaches have traditionally focused on behavioral signatures: tweet frequency, nickname changes. The Samara method instead analyzes content itself, through the lens of knowledge refresh cycles; its authors describe it as the first of its kind. If refined into an API and offered to Google, Meta, Telegram, and others, it could become a global standard.

Malicious actors will adapt, training bots to mimic human uncertainty. AI detectors must evolve accordingly, and ethical frameworks must ensure that hidden testing does not infringe on user rights.

Future Outlook: Who Wins — Humans or AI?

This research signals the dawn of a new era: not just fighting bots, but confronting intelligence that cannot admit imperfection. Humans forget, doubt, and revise their views—AI does not. Samara’s discovery highlights a philosophical truth: true intelligence requires awareness of one’s own limits.

In the next 3–5 years, detection systems will grow more sophisticated, analyzing reasoning chains, pauses, and emotional tone. Russian scientists have taken an important step: the method is not a cure-all, but one of the few in the world that focuses not on how a bot speaks, but on what it does not know.
