Like cloud computing in its early days, AI today is another black box—powerful, mysterious, and slightly terrifying. But just like we learned to ask the right questions about the cloud, we need to ask the right ones about AI.
Back then people were asking:
"Can we even put this data in the cloud?"
"Is it safe?"
"What happens if the vendor goes down?"
I remember being both optimistic and skeptical—a position I still find myself in today with AI.
Back then, I had the sense that everything was about to change. I feel that same shift happening now. But if we don’t learn from our cloud-era mistakes, we’ll be rebuilding this mess again in ten years. Only this time, the damage will be harder to detect and more difficult to repair.
The AI wave is moving faster than most orgs can handle. And while the conversation around "governance" and "responsible AI" is well-intentioned, it’s not enough to actually protect the systems we’re deploying.
Let me put it plainly: you can’t govern what you don’t understand.
And right now, a lot of teams are flying blind—adopting models, tools, and pipelines without a clear picture of how data flows, where decisions happen, and what’s vulnerable.
I’ve sat in more meetings than I can count where a company says they're “AI-ready” because their vendor has a SOC 2 report. That’s not readiness. That’s paperwork.
Security isn't something you can outsource to an audit. It has to be embedded—culturally, architecturally, operationally.
We’ve all seen it. A tool gets deployed, a board gets briefed, and someone says,
"Don’t worry—we’re compliant."
Cool. But have you actually tested what happens when your AI system gets hit with malformed inputs? Or verified that all your guardrails are in place? Or understood how the agentic system access outside tools and data stores?
There’s a critical distinction between being compliant and being resilient. Compliance might keep you from getting fined. But it won’t help you recover when your AI workflow starts leaking data through a misconfigured retrieval layer—or worse, makes a decision your team can't explain.
Real security is proactive. It’s system-aware. It doesn’t rely on secrecy.
In fact, the best systems are those that stay secure even when attackers understand how they work. If you spent any time with me, you’ve heard me say… Compliance doesn’t equal Security, but Security equals Compliance. Let’s focus on Security first, compliance will be addressed.
In the early cloud days, people used to think encryption solved everything. Now, it’s AI's turn to wear the cape. And much like back then, there’s a dangerous assumption in the air:
"We don’t need to worry about this yet—it’s just a pilot."
That mindset is what gets us into trouble. Pilots become platforms faster than anyone expects. What starts as a test quickly gets embedded in business logic, powering decisions about customers, claims, prices, and outcomes. Suddenly, the “experiment” is mission-critical.
And it’s still insecure.
Security isn’t something you apply after a system stabilizes.
It’s the thing that allows it to stabilize in the first place.
To make this real, let me show you what I look for when evaluating the readiness of an AI system.
Can you explain how the system makes decisions?
If your answer is “we just feed it prompts,” we have a problem.
Is your AI integrated with live systems or data pipelines?
Then you need to treat it like you would any exposed API—complete with access controls, input validation, and observability.
Do you have a feedback loop when things go wrong?
You don’t need perfection, but you do need a process. What happens if a model produces a hallucinated answer? Who’s responsible? How is it flagged?
Are you auditing your vendors or just trusting them?
If your vendor says their model is “secure,” ask how they handle prompt injection. Ask what testing they've done. Ask if they’re building for transparency—or just retrofitting policies after deployment.
Despite all this, I’m still a believer.
I’ve never been more excited about the potential AI brings to the enterprise—especially in security. We’re already seeing AI-powered anomaly detection, smarter threat modeling, and real-time alerting that actually works. And we’ve only just scratched the surface.
But we can’t innovate without infrastructure. And we can’t scale without trust.
That’s why I’ve always argued for clarity over cleverness. Obfuscation doesn’t keep systems safe. Understanding does. And if you love the magic of AI like I do, you should want to understand how it works—so you can protect it, scale it, and make it real for the long term.
I didn’t join dais because they were selling AI. I joined because they were building the thing I wished every enterprise had when they rolled out cloud in 2008:
At dais, we think governance isn’t just about oversight. It’s about enablement. Our architecture bakes in observability, permissioning, orchestration, and secure integration by design—not as an afterthought.
Because if we want to future-proof our systems, we have to stop treating security like something you “add later.”
It has to be the blueprint.
If you’re reading this and your team is halfway through an AI integration or stuck in the planning phase—ask yourself:
If you can’t answer all three, that’s okay. That’s where we come in.
Let’s build it right the first time.