AI as critical infrastructure. How Gemini 3 is changing the enterprise operating model

Although Google boasts about Gemini 3's record-breaking performance, the real revolution lies in the quiet transformation of the AI assistant into an autonomous operating framework for the entire enterprise. When the model gains direct access to the core of business processes, it becomes a new, critical attack vector that traditional security systems are unable to effectively protect against.

8 Min Read
Gemini 3

The tech world is alive with headlines about the new leap in performance Google is offering with the launch of Gemini 3. Benchmarks, token processing speeds and ‘human’ conversational fluency, however, are just a facade. The real revolution – and the associated risks – is taking place quietly, in the architecture of IT systems. Experts are increasingly loudly pointing out that with this update, artificial intelligence is no longer just a tool in the hands of an employee. It is becoming the operational backbone of the enterprise, and this is completely changing the rules of the game in cyber security.

Until now, the relationship between business and generative artificial intelligence has resembled working with a capable intern. Models such as early versions of Copilot or ChatGPT were helpers: they summarised reports, prompted the content of emails, generated code. If the ‘intern’ made a mistake, the consequences were limited and easy to catch. With the advent of the Gemini 3 era, this metaphor loses its meaning. We are no longer dealing with an assistant, but with a new operational foundation.

AI leaves the chat window

Google makes no secret of the fact that full integration is the goal. Gemini 3 is not just a chatbot in a browser window; it is a technology that permeates the working environment. What is being created is what could be called a unified AI grid. In this ecosystem, the model’s interactions extend to email, cloud documents, storage and collaboration tools.

The most important change that IT managers need to understand is the transition of AI to an ‘active infrastructure’ role. The system does not passively wait for the user’s command. With native integrations, the model is constantly ‘listening’, processing and combining facts from the company’s various data sources. This is a huge process convenience, but at the same time the point at which AI becomes the new security perimeter (security perimeter). Every document that the model has access to becomes part of this perimeter – a point that must be protected with the same rigour as email servers or databases were once protected.

An agent who can do too much?

Gemini 3 accelerates the trend of equipping AI with agentic capabilities. This is a key term for understanding today’s threat landscape. The model is no longer just there to answer questions (Q&A), but is capable of autonomous action. It can transcribe documents, forward them, respond to content in the inbox and even control APIs.

This is where the risk of operational depth comes in. The attack surface grows exponentially, going beyond classic security controls. If an agent’s permissions are configured too broadly – which often happens in the rush to implement innovations – and its actions are not verified by a human-in-the-loop, the company exposes itself to uncontrollable processes. A misinterpretation of one email can set off a chain of events in ERP or CRM systems that will be costly and difficult to undo.

PDF as a weapon, or invisible attacks

In the new reality, traditional firewalls and EDR (Endpoint Detection and Response) systems are proving insufficient. Why? Because the threat no longer comes in the form of a `.exe’ file or a malicious script, but in semantic form.

We are talking about the phenomenon of Indirect Prompt Injection. This is a technique in which the attacker does not need to crack passwords or take over a user account. All they need to do is craft a document – such as a PDF CV or a web page – that contains hidden instructions for the AI model. When Gemini 3 processes such a file (e.g. summarising it for an HR employee), it will execute the instructions sewn into it. The user will not see anything suspicious, but the model can exfiltrate the data or change the parameters of its work in the background.

Moreover, this problem scales with multimodality. Since Gemini 3 ‘sees’ and ‘hears’, any data format becomes an attack vector.

Audio: transcriptions of recordings may contain commands that are inaudible or unintelligible to humans, but interpretable to AI as system commands.

Image: manipulated screenshots or images can influence a model’s decisions in ways that classic content security filters cannot detect.

Therefore, treating malicious media as viable attack vectors, rather than scientific curiosities, is becoming a necessity for SecOps teams.

A race against time and cost

Despite these threats, business is not slowing down. Security readiness reports (GenAI Security Readiness) are ringing alarm bells: companies are deploying AI far faster than they can secure it. There is often a lack of basic ‘guardrails’ (guardrails), monitoring of agent activities or testing pipelines to check resistance to hostile attacks (adversarial testing).

However, a technical and economic nuance is worth noting. Initial analyses indicate that Gemini 3 in the Pro Preview version shows a high degree of robustness in terms of security, provided it is configured appropriately – e.g. with enforced security prioritisation and a self-assessment layer. However, such a configuration comes at a price: it drastically increases the computational effort (and cloud costs).

In comparison, competitor models such as Claude 4.5 Haiku offer similar levels of security at significantly lower operating costs. This presents IT decision-makers with a dilemma: should they invest in a powerful but ‘heavy’ model and its security features, or seek optimisation? The key conclusion, however, is one: a model alone is not a security strategy. Even the best algorithm without proper configuration, prompt engineering and multi-layered security features will remain vulnerable to attacks.

New task for the board

The lessons from the Gemini 3 capability analysis are clear: the responsibility for AI is shifting from innovation departments directly onto the shoulders of the board of directors and CISOs. The decisive question to ask suppliers and IT teams is no longer: “How intelligent is this model?”. It is the question of 2023.

Artificial intelligence today creates an extreme edge in corporate security. If companies allow it to grow into their processes without being aware of the risk of ‘underestimated dependency’, the consequences could be far-reaching. Gemini 3 is a powerful tool, but it is up to us to make it the foundation of success or the weakest link in the security chain.

Share This Article