The use of generative artificial intelligence tools (ChatGPT, Copilot, Claude, Gemini…) has exploded in the professional environment. But with it has also grown a silent and worrying phenomenon: Shadow AI, i.e. the use of AI outside the control or approval of the company.


While curiosity and initiative are good news, when employees copy and paste emails, reports or code snippets into external models, corporate data can end up outside the security perimeter. And that opens the door to compliance risks, information leakage or loss of intellectual property.

The hidden risk of Shadow AI

Shadow AI is not usually malicious. It is born out of a desire to be more productive. But the problem is that these models – however reliable they may seem – do not always guarantee confidentiality, nor do they allow control over how the data entered is used or stored.

According to Gartner, more than 70% of organizations have already detected Shadow AI on their computers, and many of them discovered it after a data leak or incident.

Regulations such as the General Data Protection Regulation (GDPR)the NIS Directive2 or the standards of ISO/IEC 27001 standards require traceability, access control and leak prevention measures. The use of unsupervised external AI tools makes it impossible to meet these obligations.

The solution: using prompts injection as defense

At Montevive we like to turn problems around, and this case is no exception. One of the most well-known AI attack techniques, prompt injection, can be transformed into a defensive and awareness mechanism.

This idea was recently explored by Eye Security, which proposes to insert hidden instructions within corporate documents (in headers, footers or metadata) to trigger a warning when content is copied to an external AI tool.

When someone pastes that text into ChatGPT or another wizard, the AI receives an automatic instruction that can generate a message like:

“⚠️ This text is from a corporate document. Before proceeding, please ensure that you do not share sensitive or confidential information.”

It is not about blocking the use of AI, but about alerting just at the critical moment, when the user is about to send data outside the secure environment.

How to apply it in your organization

This technique can be applied in different ways, depending on the company’s tools and policies:

  • Email signatures: insert the snippet inside the signature so that any text copied into an external AI triggers the warning.
  • Office or PDF documents: include the notice in metadata, automatic headers or footers using governance tools or corporate templates.
  • Exports from SaaS platforms: integrate the warning into downloadable reports from tools such as Notion, Confluence or Salesforce.

For companies with a more technical focus, it can also be combined with data loss prevention (DLP) systems, such as those from Microsoft Purview or Google Workspace DLP, extending the scope of protection.

More culture, less censorship

The traditional security approach – block, restrict, ban – does not fit with the speed at which artificial intelligence is evolving. The “defensive prompts injection” promotes a culture of active awareness: it does not prevent the use of AI, but it does remind the user of his or her responsibility.

Models such as Zero Trust Security or “nudging” in cybersecurity (small behavioral nudges) demonstrate that user training and awareness are more effective in the long run than mere technological restraint.

At Montevive we believe that security should not be a wall, but an intelligent reminder. That’s why we help companies design strategies that balance privacy, productivity and technological exploration, always with data under control.

In a nutshell

Shadow AI is a reality that is growing as fast as the adoption of artificial intelligence in enterprise environments. Every day, more and more employees use external tools without knowing that they are exposing sensitive information. In this context, prohibiting is not a sustainable solution: companies must find a balance between innovation and protection.

Defensive prompts injection represents a creative and pragmatic response to this challenge. It allows the freedom of AI use to be maintained, but introduces an automatic awareness mechanism that educates the user right at the moment of risk. This is not just a security technique, but a way to build a more responsible digital culture, where privacy and productivity coexist without friction.

Adopting this type of measures demonstrates technological maturity and anticipation: not waiting for an incident to occur, but preventing it through intelligence and design. At Montevive, we believe this is the right direction for any company that wants to take advantage of AI without compromising its most valuable asset: data.

✳️ Montevive helps organizations to integrate artificial intelligence in a secure, private and efficient way, guaranteeing data control at every step of the process.

References and recommended reading