bg
Cybersecurity
16:17, 10 May 2026
views
12

Russian Firm AppSec Solutions Launches AI Firewall to Protect LLM Systems

AppSec Solutions has developed an AI firewall designed to secure systems powered by large language models, or LLMs. The platform filters prompts sent to LLMs, inspects incoming requests and reduces the risks associated with unsafe AI usage.

The product is aimed at mitigating attempts to attack neural networks, bypass model restrictions or trigger the leakage of confidential information. AppSec.AIGate performs prompt inspection, access control, security policy enforcement and automatic attack blocking. The technology is becoming increasingly relevant as AI services are integrated more deeply with corporate datasets, internal systems and customer-facing applications.

As enterprises continue adopting security tools for generative AI systems, demand for those products is expected to grow sharply. According to estimates from ONSIDE and Just AI, Russia’s domestic generative AI market was projected to expand from 13 billion rubles in 2024 to roughly 58 billion rubles (about $740 million) in 2025, while reaching approximately 778 billion rubles (around $9.9 billion) by 2030. The more actively companies embed LLMs into real business operations, the more urgent the need becomes for securing those systems.

Data from ICT.Moscow indicates that about 77% of the GenAI market in 2025 was concentrated in the B2B segment. At the same time, roughly 55% of projects remained at the pilot stage, about 30% were in scaling phases and approximately 15% had already reached production deployment. That distribution suggests the market is moving beyond experimentation toward operational adoption, creating sustained demand for AI security technologies.

Approaches to AI Regulation

Russian companies have already begun deploying enterprise AI assistants and autonomous agents at scale. According to Yandex B2B Tech and Yandex Cloud, businesses are using those systems for technical support, enterprise search, customer operations and workflow automation. Against that backdrop, major players including Sber, through its GigaChat enterprise product line, are promoting corporate AI platforms with advanced data protection capabilities, including support for private cloud and on-premises deployments.

At the same time, policymakers in 2026 are debating regulatory approaches for AI systems and the status of “trusted” models. Kommersant reported that a new version of a draft law proposes training sovereign national models on government datasets and linking trusted-model status to inclusion in a special registry. That discussion is intensifying attention around AI security and increasing the relevance of technologies such as AI firewalls.

Embedding Language Models Into Business Operations

In February 2024, Russia updated its National AI Development Strategy through 2030. The revised framework outlined goals and implementation measures for using artificial intelligence to support scientific and technological development as well as national strategic priorities.

That shift tied AI development more directly to the country’s broader digital sovereignty agenda. Against that backdrop, Yandex accelerated its push into enterprise generative AI services. Yandex B2B Tech launched the AI Assistant API, allowing companies to build AI assistants tailored to specific operational tasks. Yandex Cloud also reported a 1.6-fold increase in demand for machine learning services. Together, those developments marked the beginning of large-scale integration of language models into business processes.

By 2025, Russia’s generative AI market had grown to 58 billion rubles (about $740 million). Most of that activity came from the B2B sector, highlighting how LLMs were evolving into a core layer of enterprise infrastructure. During the same period, Swordfish Security introduced an AI security framework tailored to Russian operational environments, including a threat map and mitigation measures.

Russian-language cybersecurity communities also began circulating analyses of the OWASP Top 10 risks for LLMs. Those discussions focused on threats such as prompt injection, where attackers manipulate model inputs to influence system behavior. Against that backdrop, the 2026 launch of AppSec Solutions’ AI firewall represented a logical next step in the market’s evolution. The security platform performs prompt inspection, access control, security policy enforcement and attack blocking, including attempts to bypass model safeguards or extract sensitive information.

Becoming a Standard Layer of Security Architecture

AppSec Solutions’ market entry signals a broader maturation of Russia’s generative AI sector. Companies are moving beyond simply deploying LLMs and are beginning to build dedicated security layers around them, including policy enforcement, filtering, auditing, access control and defenses against data leakage and manipulation. Over the next several years, analysts expect broader adoption of AI Gateway, LLM Firewall, AI DLP and AI security assessment technologies. Large enterprises are likely to pilot those systems first before the capabilities become embedded directly into enterprise platforms, chatbots and domestic LLM stacks.

Over the longer term, within the next three to five years, AI firewalls could become a standard architectural component for enterprise language model deployments. That scenario becomes particularly plausible if regulators introduce formal requirements for the secure use of AI in critical industries. Even so, those tools are expected to remain only one layer within a broader cybersecurity ecosystem. They will likely operate alongside DLP, IAM, SIEM/SOC systems, DevSecOps practices and model risk management frameworks, complementing rather than replacing comprehensive security strategies.

AppSec.AIGate analyzes not the request itself, but the context and semantics of the request. In other words, it evaluates what the user is trying to obtain from the model and what data the user is providing to it. A security engineer will see alerts appear on the dashboard stating that a specific request was blocked because it represented an attempt to compromise the model. Other alerts may indicate that a request contains sensitive information or that it was blocked because its content belongs to prohibited categories
quote
like
heart
fun
wow
sad
angry
Latest news
Important
Recommended
previous
next