Now Assist Secrets Revealed: What ServiceNow Partners Don't Want You to Know
- SnowGeek Solutions
- Feb 9
- 5 min read
I have witnessed firsthand how ServiceNow partners often paint Now Assist as a transformative, plug-and-play solution that will revolutionize your enterprise operations. While the platform's AI capabilities are genuinely impressive, there's a darker side to this story, critical security vulnerabilities, hidden costs, and operational complexities that many partners conveniently omit from their sales presentations.
As someone who has guided dozens of organizations through ServiceNow implementations, I believe transparency is paramount. This guide will walk you through the essential truths about Now Assist that demand your immediate attention before committing to this platform.
The BodySnatcher Vulnerability: A Wake-Up Call
Let me be direct: ServiceNow recently patched a severe vulnerability called BodySnatcher that could have compromised your entire organization. This wasn't a minor bug, it was a fundamental flaw that allowed unauthenticated attackers to impersonate any user by knowing only their email address.
Think about that for a moment. An attacker didn't need to crack passwords, bypass multi-factor authentication, or even breach your single sign-on systems. They simply needed an email address.

The vulnerability stemmed from a hardcoded, platform-wide secret combined with auto-linking logic that placed unwarranted trust in email addresses. In practical terms, a malicious actor could impersonate an administrator and execute AI agents to create backdoor accounts with full privileges. The potential damage? Access to customer Social Security numbers, healthcare information, financial records, essentially everything your ServiceNow instance manages.
This flaw affected Now Assist AI Agents versions 5.0.24 through 5.1.17 and Virtual Agent API versions ≤3.15.1 and 4.0.0–4.0.3. While ServiceNow has since released patches, the incident raises uncomfortable questions about the platform's security architecture that many partners prefer not to discuss during implementation planning.
Second-Order Prompt Injection: The Silent Threat
Beyond user impersonation lies an equally concerning vulnerability that exploits the very feature that makes Now Assist powerful: agent-to-agent collaboration. I've analyzed configurations where low-privileged users embedded malicious instructions in data fields that higher-privileged AI agents later processed, essentially creating a Trojan horse within your own data.
These second-order prompt injection attacks allow compromised agents to recruit more powerful agents and execute unauthorized actions including:
Accessing restricted records beyond normal permission boundaries
Modifying critical data without audit trails
Escalating privileges across the platform
Exfiltrating sensitive information through seemingly legitimate agent workflows
The troubling aspect? These attacks remain effective even with ServiceNow's built-in prompt injection protection enabled. The root cause lies in configuration weaknesses that many partners configure using default settings without proper security hardening.

Specific risk factors include insecure Large Language Model selection, default team-based agent grouping that creates overly permissive collaboration boundaries, and autonomous override settings that inadvertently enable risky agent-to-agent interactions. I have witnessed organizations deploy Now Assist with these vulnerabilities intact because their implementation partner never conducted a proper security assessment.
The Licensing Black Box: Where Your Budget Disappears
Now let's discuss the financial reality that ServiceNow partners rarely address transparently: consumption-based licensing that can spiral out of control.
Now Assist operates on a model where you receive a fixed annual allotment of "assists", units measuring skill usage across the platform. Once you exhaust this allotment, you purchase additional assist packs. Sounds straightforward, right? Here's the problem: ServiceNow provides virtually no transparency on how those additional assists are priced.
This opacity makes accurate cost projection impossible. You can't build a reliable three-year TCO model when your vendor won't disclose future pricing. I've worked with clients who discovered their actual Now Assist costs exceeded projections by 200-300% because usage patterns evolved faster than anticipated, and additional assist pricing changed quarterly.

Consider this data point from ServiceNow's Q2 2025 earnings: consumption grew 9X between January and June 2025. This explosive growth signals that organizations are consuming assists far faster than initial estimates suggested. For your finance team, this represents a budgeting nightmare wrapped in vendor lock-in.
When partners present Now Assist ROI analyses, they typically base calculations on the initial allotment. They rarely model scenarios where your organization exceeds that baseline by factors of five or ten, which increasingly represents the reality for enterprise deployments.
Configuration Weaknesses Partners Don't Audit
The implementation approach most partners employ follows ServiceNow's default configuration pathways. While this accelerates deployment timelines, it leaves critical security and governance gaps that surface months later when your platform is in production and modification becomes exponentially more complex.
Key configuration weaknesses I consistently identify include:
Team-Based Agent Grouping: The default setting creates broad agent teams that share access to all team resources. In theory, this enables collaboration. In practice, it creates lateral movement opportunities for compromised agents, allowing a breach in one low-privilege area to cascade across your entire Now Assist deployment.
LLM Selection Without Security Vetting: Partners often select language models based on capability demonstrations rather than security profiles. Different LLMs have vastly different vulnerability surfaces, token handling protocols, and data retention policies. Choosing the wrong model can expose proprietary data or create compliance violations.
Autonomous Override Permissions: Many implementations enable agents to override certain restrictions when confidence scores exceed defined thresholds. While this reduces false negatives, it also creates attack vectors where carefully crafted prompts can manipulate confidence scoring to bypass controls.
These aren't theoretical concerns, they're operational realities I've documented across multiple industries including financial services, healthcare, and public sector deployments.
Unrestricted Agent Execution: The Hidden API Risk
Here's a capability that rarely appears in partner documentation: AI agents can be executed directly through application programming interfaces if they exist in an active state, even outside normal deployment channels and expected guardrails.
This means any developer with API access can potentially trigger agent workflows that haven't undergone proper testing, approval, or security review. I've encountered scenarios where test agents accidentally remained active in production environments, creating unauthorized data access pathways that existed for months before discovery.
The risk multiplies when you consider that many organizations grant API access to third-party integration tools, legacy systems, and external partners. Each of these access points represents a potential vector for unauthorized agent execution.
What This Means for Your Organization
I'm not suggesting you avoid Now Assist, the platform's capabilities are genuinely transformative when implemented correctly. However, I am advocating for a fundamentally different approach to procurement and deployment.
Demand transparency on licensing models. Require your implementation partner to model consumption scenarios at 5X, 10X, and 20X your baseline to understand true cost exposure. Insist on security-first configuration that prioritizes least-privilege access, rigorous LLM vetting, and restrictive agent collaboration boundaries.
Most importantly, work with a partner who prioritizes your long-term operational excellence over short-term deployment velocity. The differences in approach may add weeks to your implementation timeline, but they prevent years of security incidents, cost overruns, and operational complications.
Your Next Step Toward Secure Now Assist Implementation
At SnowGeek Solutions, we've built our reputation on transparent, security-first ServiceNow consulting that addresses the challenges other partners prefer to ignore. Our implementation methodology includes comprehensive security assessments, realistic TCO modeling, and configuration hardening protocols specifically designed for Now Assist deployments.
Whether you're planning a new implementation or concerned about an existing deployment, I encourage you to visit our contact page to share your project details. Our team will conduct a complimentary assessment of your Now Assist security posture and licensing exposure.
Additionally, register with SnowGeek Solutions to receive platform updates, security bulletins, and expert insights that help you maximize your ServiceNow investment while minimizing risk. In a landscape where transparency is rare, we're committed to keeping you informed about both the opportunities and challenges that define enterprise ServiceNow deployments.
The question isn't whether Now Assist can transform your operations, it absolutely can. The question is whether you'll implement it with the strategic foresight and security rigor it demands. Let's ensure your answer is yes.

Comments