GrailCast Live Ep. 4 Rewind: What Legal AI Actually Looks Like in Practice ft. Vanessa Gage
Vanessa Gage, General Counsel and Chief Compliance Officer at Cedar, joined DataGrail CEO Daniel Barber for Episode 4 of GrailCast Live to talk candidly about how in-house legal teams are actually navigating AI today.
The conversation covered late adoption, hot takes on overhyped tools, untapped opportunities for legal AI, and the one question every GC should ask before buying anything.
Listen to Vanessa’s full conversation with Daniel Barber on GrailCast Live.
AI crept in before anyone called it a strategy
Vanessa’s honest take: she was a late adopter. The early promise of AI for legal work didn’t match the reality she found in the tools. But over the last 12 to 18 months, something shifted.
“I don’t think of AI as this thing I go do anymore,” she said. “AI is now just a part of how I operate.”
That evolution didn’t come from a big platform overhaul. It came from AI features quietly showing up inside the tools Cedar already used: their contract lifecycle management (CLM) platform, their governance, risk, and compliance (GRC) tooling, their contract markup. Rather than chase new AI-specific products, Vanessa took a deliberate approach: max out what existing tools could do, then identify the real gaps.
That discipline paid off. Cedar already ran a high-tech legal stack, and adding more tools would have increased burden without clear return. Instead, the team focused on use cases first, and tools second.
The A-ha moment that changed how she works
Vanessa described a specific moment that reframed how she thinks about AI entirely.
She was reviewing a legislative response letter that had been written two years earlier. A colleague flagged that some of the underlying law might have changed. In a pre-AI world, that meant reading articles, pulling legislation, cross-referencing sources, and spending the better part of a day to find a two-sentence answer.
With AI? Fifteen seconds. A clear, relevant summary of what had changed in a 2,500-page piece of legislation, mapped to her specific question.
“That was really changing how I think and operate,” she said.
It’s a simple example. But it captures exactly why AI is becoming indispensable for in-house teams: not because it replaces legal judgment, but because it removes the friction that used to stand between a question and an answer.
Hot take: the tool is not the transformation
Vanessa’s biggest concern with current AI adoption is the assumption that buying a capable tool means the hard work is done.
She shared two cautionary examples from Cedar’s own experience. The first: a GRC tool with impressive questionnaire-completion features. When fed a single, high-quality questionnaire as a training dataset, the output was excellent. When the team fed it the full historical repository, quality collapsed. Older questionnaires were inaccurate. Company positions had changed.
The second: CLM tools trained on playbooks that were no longer current. When contract positions shift over time and the database isn’t kept clean, the AI starts working against you instead of for you.
The takeaway isn’t that these tools fail. It’s that the foundational data management work still matters enormously, and most organizations underestimate how much it takes to get there.
The untapped opportunity: mapping controls to your actual business
Vanessa pointed to one area where she believes AI still has significant room to grow: helping legal and compliance teams map regulatory obligations to the specific realities of their business.
The concept works like this: feed an AI system everything it needs to know about your company. Ask it which laws apply. Then, rather than stopping at identification, use it to surface where those laws actually intersect with your operations, your products, your data flows.
Today’s tools are getting better at the identification layer. The application layer, where law meets operational context with precision, is still largely manual.
DataGrail’s AI Privacy Agent, Vera, gives privacy teams a policy engine that is tailored to the business, so that the right questions surface faster and compliance decisions are grounded in context, not guesswork.
The discipline behind the hype
What makes Vanessa’s perspective stand out is the combination of genuine curiosity about AI and serious operational discipline. She isn’t anti-AI. She has clearly experienced the step-function change that well-deployed AI can deliver. But she has also watched tools underperform when the underlying data, processes, or use-case fit weren’t right.
For legal leaders navigating this moment, that balance matters. Mature privacy and legal programs are not built by chasing the newest tool. They are built by understanding the problem deeply, choosing technology that fits the organization, and doing the data management work that makes everything else possible.