Point of view
16:25, 15 September 2025

“There’s No Such Thing as Pure AI”: Robert Vasiliev on What Artificial Intelligence Really Is – and Whether It Could Take Over the World

Artificial intelligence is one of the most talked‑about technologies of our time: feared, worshipped, debated over dinner tables. At the same time, it’s entering our daily lives and workplaces. But what exactly lies behind the term? We spoke with Robert Vasiliev, Vice President of the Association of AI Laboratories (ALRII), to cut through the hype. Where does AI genuinely deliver results, where is it overrated, and what might the future hold?

Robert, let’s start with the popular myths. AI is surrounded by misconceptions. Which ones do you think are most damaging to understanding the technology?

Probably the biggest myth is that we lump everything under the label "artificial intelligence." In reality, it's a patchwork of specific models, each with a defined beginning and end, operating within its own algorithm. What we see in ChatGPT or other large language models is actually a combination of approaches. That is what multimodality means.

We call it AI, we call it a neural network, but in practice it’s not one single entity — it’s a complex of different solutions tuned for different tasks. Even when people talk about AGI — Artificial General Intelligence — they usually mean a more advanced version of the multimodal approach.

So, to sum up: there’s no such thing as “pure” AI in a monolithic sense. What exists are multimodal architectures — various solutions built for specific tasks — and collectively we call this AI.

Some argue AI might take over the world or enslave humanity. Is there any truth to that, even theoretically?

In some very speculative sense, yes — if we drift into extreme futurism. But in practice, AI today is always used as a tool to support decision-making. And in critical domains, the final decision is still made by humans.

An algorithm can suggest the optimal course of action for a given scenario, but the last word rests with people. The “takeover” only becomes plausible if we hand over those final, critical decisions to machines.

Right now, though, even in the most critical systems, humans decide. Which means nothing like that will happen unless someone consciously hands such powers to AI.

What is AI genuinely good at today, and what still lies beyond its competence?

The key is to remember my answer to the first question. AI is not one thing, but a set of tools. It excels at what it’s designed for.

At present, AI is very good at analyzing data and predicting the target variables we aim to optimize. It has also become skilled at generating additional content.

In truth, it can mimic almost anything at a surface level. What it does less well is actual execution of tasks. Most language models are tuned to “convincingly chat” with users. They are good at giving the impression of knowledge.

But what we get is plausible-sounding generated text, and there's no guarantee it's true. Generative methods have become excellent at pleasing the user, but whether the answers are accurate and expert-level is another matter. That's why new methods are emerging to improve reliability and precision.

In which industries has AI already demonstrated outstanding effectiveness, and where has it fallen short of expectations?

It seems to work well almost everywhere, but especially where data is well collected, well structured, and well maintained. Finance, telecoms, the internet sector, and high tech are prime examples. Wherever data is aggregated and cleaned, AI shows its best.

Which directions in AI development do you see as dead ends, or at least overrated?

The biggest gap so far is in areas requiring physical interaction with the world. We’ve learned to collect, digitize, and train models on data. But the internet is not the entire world.

Imagine you're a brain in a box. Instead of five senses, you get fragments: texts, images, explanations. "Here's a cat," "here's a dog." But you don't see the world and you don't touch it. All you do is process text and images, never the real world itself.

Humans, on the other hand, learn differently: by touching, moving, falling, recovering, trying, speaking. AI doesn’t yet have that. It can’t learn directly from the real world “here and now.”

The next stage is bringing models into autonomous physical form. Once they start interacting with reality, new opportunities — and challenges — will appear.

But overall, I don’t see an industry where AI can’t be applied. The limit is companies’ willingness to test, adopt, and adapt solutions.

Some researchers talk about AI reaching a “plateau.” Do you agree? What breakthroughs might we see in the next 5–10 years?

I don't see a plateau. Sure, many foundational approaches were developed decades ago. But training convolutional neural networks on GPUs, for example, is a relatively recent advance.

The explosive growth of the last few years — large language models, multi-agent frameworks — has only accelerated progress. So, no, it’s not plateauing.

The next leap will come from new training techniques, improved adaptation methods, and especially from moving AI into the physical world — where it interacts with reality as we do.

How do you assess Russia’s AI ecosystem? Does the country have a chance to become a technological leader?

Despite the size of markets and funding, there aren’t many independent digital ecosystems globally. There’s the Chinese internet, the American one, the Indian one — all sovereign structures with their own data.

Some sectors in Russia are doing quite well, and there are unique insights and advantages. But it’s not yet enough to create truly unique datasets and AI systems.

The challenges are real, but so are the opportunities. Leadership, however, requires far greater digital independence and infrastructural maturity.

Imagine yourself in 2035. How do you think AI will be woven into everyday life by then?

Humans adapt quickly. Five years ago, would you have believed you could chat on your phone about anything with a model that has hundreds of billions of parameters?

By 2035, robotics is set to boom. AI will step out of screens and into the physical world. We’ll see robots and androids walking alongside us, helping, taking part in daily life.

Could widespread adoption of AI and automation lead to major social shifts — the disappearance of professions or changes in education?

That's already happening. Today you can draft a paper quickly if you know how to evaluate AI's output. With your own outlet or a base of expertise to draw on, you can prepare material in an evening.

This is a challenge for education. The old “submit a paper, get a grade” model is dying. A new criterion will be how well a person understands how the model works and how to interact with it effectively.

Professions will shift. Those unable to work with AI will lose competitiveness. Then it will become a basic skill. But the deeper your expertise, the better you can leverage AI, ask precise questions, and get more valuable results.

And finally, a personal question. Do you fear AI? Or do you believe humanity will cope with the challenges it brings?

I don’t fear it. Maybe because I live inside a professional bubble, working with AI on specific tasks. And as for humanity? I believe we’ll rise to the challenge.
