It’s no secret that generative AI is transforming the way we work. Tools like ChatGPT, Microsoft Copilot, and Google’s Gemini (formerly Bard) promise to boost productivity, streamline repetitive tasks, and unlock new insights from data. But as AI adoption spreads—from the back office to the shop floor—it’s opening up new cybersecurity challenges that many organizations haven’t yet prepared for.
As PMMI members explore AI-driven innovation in packaging and manufacturing, it’s crucial to understand the risks beneath the surface—and how to manage them before they become a liability.
Shadow AI: The New Shadow IT
Just as “shadow IT” once described the unsanctioned purchase of cloud applications used outside of IT’s visibility, “shadow AI” is emerging as a fast-moving threat. Employees may be pasting sensitive customer data, product specs, or internal financials into free AI tools to get quick answers—without realizing that these platforms might retain, learn from, or expose that information.
💡 Reality check: Some free AI tools log user inputs to improve their models. That means your proprietary data could become part of someone else’s AI output.
Prompt Injection: The AI Version of a Cyberattack
Generative AI systems rely on user prompts to function and train the large language model over time. But what if those prompts are manipulated?
Prompt injection is an emerging attack where hackers trick AI systems into revealing secrets, rewriting code, or taking unintended actions—sometimes by embedding malicious instructions in files, web pages, or even emails. As more business tools incorporate AI (think: CRM assistants or customer support bots), the risk of prompt abuse grows.
Data Exposure Through Integrations
Many AI tools integrate with cloud storage platforms, calendars, messaging apps, and CRMs. That convenience can create unintended exposure.
If your IT and security teams aren’t aware of AI’s permissions, you may be inadvertently opening the door to a breach.
What Your Organization Can Do Now
You don’t need to slam the brakes on AI adoption—but you do need to establish some guardrails.
1. Establish an AI Usage Policy
Make it clear which tools are approved, how they can be used, and what types of data are off-limits. Communicate this early and often—especially to departments experimenting with AI for customer service, marketing, or R&D. And provide training to your employees on the responsible use of AI and precautions that staff are responsible for.
2. Vet AI Tools Like You Would Any Vendor
Ask questions like:
3. Audit Permissions and Integrations
If your AI connects to cloud tools, make sure it follows the rule of least-privilege for access. Don't let it read everything unless it absolutely has to.
4. Educate Staff on What’s Safe to Share
Employees may not know that copying a pricing sheet into ChatGPT could violate confidentiality agreements. Awareness is your first line of defense.
Embracing AI Responsibly
AI isn't the enemy—it’s an incredible tool when used thoughtfully. But as with any technology shift, security must evolve in parallel.
By putting policies, guardrails, and awareness in place now, PMMI members can harness the power of AI without compromising their intellectual property or customer and supplier trust.
Have a strategy to share? Let’s keep the conversation going at cyberhealth@pmmi.org.