Generative AI services like ChatGPT, Copilot, and Gemini have brought organizations massive opportunities, but also a new, less-discussed risk landscape. Many employees experiment with and adopt AI tools on their own, without IT knowing; this phenomenon is called Shadow AI. Left unmanaged, AI use can lead to data leaks, regulatory violations, and reputational damage. An AI firewall brings AI adoption under control.

Why Is Shadow AI a Risk?

Many assume AI use is harmless: the traffic is encrypted and outsiders can’t see it. But the real risks come from the service provider itself and the lack of organizational control:

  • Data leaks: An employee might accidentally input customer data, personal information, or source code into an AI service. In free or Plus versions, this data can be used to train models.
  • Compliance: GDPR and client contracts may prohibit data transfers outside the EU.
  • No audit trail: IT doesn’t know who is using which service and with what content.
  • Quality management: AI can generate content that employees use as-is, without sources or validation.

The AI Firewall Concept

New solutions introduce the idea of an AI firewall, which sits between the user and AI services. Its tasks include:

  • AI Firewall: Blocks sensitive information from being entered into consumer-grade AI services.
  • AI Security Posture Management (AI-SPM): Provides visibility into which services are being used and with what content.
  • Governance and compliance: Enables auditing, reporting, and meeting regulatory requirements (e.g., GDPR).

In practice, an AI firewall makes it possible to leverage AI securely: IT gains visibility and control, and employees can use an official AI platform instead of facing outright bans.
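
To make those functions concrete, here is a minimal Python sketch of a single AI-firewall check. The names, patterns, and services are invented for illustration, not any vendor's actual API: the outgoing prompt is scanned for sensitive data, and every decision is written to an audit trail.

```python
import re
from datetime import datetime, timezone

# Illustrative detection patterns; real DLP rule sets are far broader.
SENSITIVE = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.\w{2,}"),
    # Finnish personal identity code (hetu), e.g., 131052-308T
    "fi_personal_id": re.compile(r"\b\d{6}[-+A]\d{3}[0-9A-Y]\b"),
}

AUDIT_TRAIL = []  # AI-SPM / governance: every decision is recorded

def check_prompt(user: str, service: str, prompt: str) -> bool:
    """Return True if the prompt may be sent, False if it must be blocked."""
    hits = [name for name, rx in SENSITIVE.items() if rx.search(prompt)]
    AUDIT_TRAIL.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "service": service,
        "violations": hits,
        "action": "allow" if not hits else "block",
    })
    return not hits

ok = check_prompt("pm@example.fi", "chat.openai.com",
                  "Summarize: Matti Meikäläinen, 131052-308T, matti@firma.fi")
print(ok, AUDIT_TRAIL[-1]["violations"])  # False ['email', 'fi_personal_id']
```

In a real deployment this kind of check runs in a forward proxy or browser extension, since the prompt has to be inspected before it is encrypted toward the AI service.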

How Is Control Implemented in Practice?

Since there are hundreds of AI services globally, API integration alone isn't enough. A combination of methods is needed; a simplified sketch of how the layers combine follows the list:

  • API integrations – with enterprise AI services like Microsoft Copilot and ChatGPT Enterprise for deep visibility and policy control.
  • Category and domain filtering – solutions like Cisco Umbrella, Zscaler, or NGFW block all other AI services based on URL and DNS classification.
  • Allowlist – only the company’s official AI tools are permitted.
  • Endpoint agent – ensures visibility for remote work, where traffic goes straight to the internet without passing through a VPN.
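
The domain-level layer can be illustrated with the small Python sketch below. The list contents are invented; the evaluation order is the point: allowlist first, then category blocking, and everything else passes through. Note that this layer sees only destinations, never prompt content, which is why it complements rather than replaces prompt inspection.

```python
# Invented example data; in practice the categories come from the filtering
# vendor's feed and the allowlist from IT.
ALLOWLIST = {"copilot.microsoft.com"}           # official company AI platform
CATEGORY = {                                    # vendor's domain classification
    "chat.openai.com": "Generative AI",
    "gemini.google.com": "Generative AI",
    "github.com": "Technology",
}

def dns_verdict(domain: str) -> str:
    """Domain-level decision: allowlist wins, then category blocking."""
    if domain in ALLOWLIST:
        return "allow"                          # sanctioned service
    if CATEGORY.get(domain) == "Generative AI":
        return "block"                          # consumer AI, not approved
    return "allow"                              # ordinary traffic

for d in ("copilot.microsoft.com", "chat.openai.com", "github.com"):
    print(d, "->", dns_verdict(d))
```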

Example Scenario: Me and Customer Data

As a project manager, I received an Excel spreadsheet from a client and wanted a quick report. I copied the data into ChatGPT’s free version and asked the AI to analyze it.

  • The prompt contained customer lists and contract information → the data ended up under OpenAI’s control.
  • The company lost control, GDPR agreements were violated, and the client could demand an explanation.

If an AI firewall had been in place, I would have received a warning: “You cannot input customer data into a personal ChatGPT account.” Instead, I would have been redirected to use Copilot Enterprise, where the data stays within the organization’s tenant and Purview DLP protects it.

Paid Version – A Solution or Not?

Many ask: “Doesn’t the risk go away if we use the paid version of ChatGPT?”

ChatGPT Plus (~$20/month, personal account) is not a solution: data may still be stored, and IT has no oversight. ChatGPT Enterprise, by contrast, does not use company data for model training and gives IT auditing and administrative controls. That makes it a genuinely viable enterprise option, though it still requires trust in the vendor.

Cisco Umbrella and Other Protection Layers

Services like Cisco Umbrella classify generative AI as a category of its own. This makes it possible to block the entire category and permit only the official AI platform (e.g., copilot.microsoft.com).

But it’s important to note that dozens of new AI services and websites appear worldwide every week. Not all are categorized immediately. This means some new AI tools may briefly appear under general categories (e.g., Business Tools, Technology). IT must therefore complement protection with custom block/allow lists to ensure control even for brand-new services.
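
One pragmatic way to close that gap is to triage newly seen, uncategorized domains from the resolver logs. The sketch below uses invented log data and a deliberately crude keyword heuristic; it produces candidates for manual review, not automatic blocks.

```python
# Invented resolver-log sample: (domain, vendor category or None).
LOG = [
    ("chat.openai.com", "Generative AI"),
    ("brand-new-ai-tool.io", None),       # not yet categorized
    ("some-gpt-wrapper.app", None),       # not yet categorized
    ("intranet.example.fi", None),
]

AI_HINTS = ("ai", "gpt", "chat", "llm", "copilot")  # crude name heuristic

def review_candidates(log):
    """Uncategorized domains whose names hint at AI -> manual review queue."""
    return [
        domain for domain, category in log
        if category is None and any(hint in domain.lower() for hint in AI_HINTS)
    ]

print(review_candidates(LOG))
# ['brand-new-ai-tool.io', 'some-gpt-wrapper.app'] -- after review, confirmed
# services go onto the custom blocklist
```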

The Finnish Reality – SMB Challenges

In Finland, the majority of companies are small and medium-sized. Many lack an internal IT department that could actively monitor AI use or build complex AI firewall policies. This makes Shadow AI particularly problematic:

  • Employees (or external contractors) adopt new tools independently.
  • Data can move into uncontrolled cloud services without anyone noticing.
  • Resources and expertise for building detailed security policies are limited.

For SMBs, the key is not to start with heavy platforms but with the basics:

  • Define a clear AI usage policy for staff.
  • Block free versions (e.g., ChatGPT Free, Gemini Free).
  • Allow only official AI services with clear contracts and data protection guarantees (e.g., Copilot Enterprise).
  • Test and verify that the blocks and policies actually work as intended.

This way, AI benefits can be realized without exposing an SMB to unnecessary risks.

Three Key Tips for AI Governance

  • Visibility first – map which AI services employees use (via proxy/DNS logs or surveys); a small log-analysis sketch follows this list.
  • Block all others, allow the official one – banning generative AI in general and permitting only approved services removes most risks.
  • Educate and instruct – explain why data must not be entered into free AI tools. Clear guidelines prevent more problems than technical controls alone.
  • Also: update your NDAs to define AI use explicitly (when handling client data).
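
For the "visibility first" tip, even a few lines of scripting over exported proxy or DNS logs go a long way. A minimal sketch, with invented log lines and a hand-picked set of known AI domains (real log formats vary by product):

```python
from collections import Counter

# Known AI service domains to watch for; extend as new services appear.
AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "copilot.microsoft.com"}

# Invented log lines in "user domain" form.
LOG_LINES = [
    "anna chat.openai.com",
    "anna chat.openai.com",
    "ben gemini.google.com",
    "ben intranet.example.fi",
    "carla copilot.microsoft.com",
]

usage = Counter(
    (user, domain)
    for user, domain in (line.split() for line in LOG_LINES)
    if domain in AI_DOMAINS
)
for (user, domain), hits in usage.most_common():
    print(f"{user:6} {domain:25} {hits}")
```

The resulting table is usually enough to start the policy discussion: it shows who is already using which services, and how often.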

Conclusion

AI adoption at work is no longer “if” but “when.” Organizations can’t stop employees from experimenting with new tools – but they can manage the risks.

  • For large organizations, the solution is an AI firewall and centralized control with standardized services.
  • For SMBs, it’s enough to start with clear guidelines, basic blocking, and selecting one approved AI platform.
  • When AI is introduced in a controlled way, it becomes a strength – not a risk.

 

Hannu Rokka, Senior Advisor

5Feet Networks Oy