AI Governance in Education: Building Responsible Innovation with Keith Stouder of ACT

In a recent episode of Risk Management: Brick by Brick, host Jason Reichl sits down with Keith Stouder, Vice President of Privacy and AI Governance at ACT, for a masterclass in building AI governance frameworks that enable innovation rather than stifle it. From quantifying AI risk with hard numbers to implementing a "green until red" adoption strategy, Keith shares insights from nearly 30 years in IT that could transform how your organization approaches artificial intelligence.
To find out how TrustLayer manages risk so that people can build the physical world around us, head to TrustLayer.io.
Keith opens with a governance approach that challenges conventional wisdom: "Most companies, I think, start with a red light and then work towards green. We have a secured enclave, so we're green until we need to turn red because we want the adoption."
This isn't reckless abandon—it's strategic enablement. ACT manages one of the world's largest educational data stores with hundreds of millions of student records across 22 states. They can't afford data breaches or ethical violations. But they also recognize that overly restrictive policies drive employees to unsanctioned tools, creating shadow IT risks that are far worse than controlled experimentation.
The key? Building a secured enclave through Microsoft Azure where data never leaves the company and doesn't feed external AI models. Within those guardrails, ACT encourages exploration, shares use cases on an internal governance website, and meets biweekly to discuss what teams are learning. The message is clear: we trust you to innovate responsibly within these boundaries.
From Gut Feelings to Hard Numbers: Quantifying AI Risk
Here's where Keith gets into the mechanics that separate mature AI governance from checkbox compliance. When asked how ACT evaluates AI risk differently from traditional operational risks, his answer reveals a sophisticated quantitative approach.
"For education-related records, the average cost to the company is about $150 to $300 per record," Keith explains. "What we approach risk from, in most cases, is to come up with scenarios because scenarios usually paint what are your threats, what vulnerabilities do you have, what countermeasures do you have in place—so what are your controls? And that gives you the residual risk."
From there, ACT uses Monte Carlo simulations and statistical modeling to move beyond vague "high-medium-low" categories. Keith can walk into executive meetings and say: "There's a probability of 40 to 50% that we might incur a loss of X dollars. But if we add remediation or controls, in this case for AI, we might be able to lower that risk to a tolerance level that is acceptable to leadership."
This approach transforms risk conversations from subjective debates into data-driven business decisions. It also provides the concrete ROI analysis that executives need to approve AI investments.
The Two-Tier Strategy: Meeting Users Where They Are
Not everyone at ACT is a data scientist eager to experiment with AI models. Keith's team recognizes this reality and built a two-tier adoption strategy that accommodates different skill levels and use cases.
First tier: Browser-based Microsoft Copilot available to all employees for task management, email drafting, and administrative work. No company data involved—just productivity enhancement for everyday tasks.
Second tier: Copilot 365 within a secured Azure enclave for approved teams working with proprietary data. This requires management approval, documented use cases, and adherence to established governance policies.
"You're going to have some early adopters," Keith acknowledges. "The data science and the research team have been great at adoption, probably going faster than we can manage because they want to kick the tires on it and see what the value is."
For less tech-savvy employees, Keith's team takes a different approach: "Those are the people that we are meeting with, usually in team meetings within the business unit, and just giving demos of what you can do with Copilot in the browser and Bing and how that could be used." By showing concrete examples of how AI can automate time-consuming tasks, they help people envision practical applications in their own work.
Human-in-the-Loop: The Non-Negotiable Principle
Despite ACT's progressive approach to AI adoption, Keith draws a clear line when it comes to high-stakes decisions. The organization uses AI as one reader for written exam portions, with a human providing the second review—but they'll never use two AI readers.
"We will probably never use two AIs because we want humans in the loop," Keith states firmly. "We're able to now, at a lower cost, get through a lot more of the exam reviews than we would previously."
This principle extends beyond exam scoring. Before any AI-generated content appears in production, it must be reviewed and approved. Any use of learner personal information or intellectual property requires advance permission. Quality assurance testing checks for bias, ethical concerns, and prompt injection vulnerabilities using OWASP-style frameworks adapted for AI models.
The message: AI augments human judgment; it doesn't replace it—especially in education where stakes are high and trust is paramount.
The Vendor Liability Problem Nobody's Solving Yet
When the conversation turns to contracts and vendor relationships, Keith's frustration becomes evident. "Most software companies don't want liability. So whatever means they can take, they will probably pass that on to the customer or the client."
This creates a fundamental challenge for organizations trying to use AI responsibly. Vendors provide powerful tools but push all risk management responsibility onto their customers. Some are even lobbying for a 10-year enforcement ban on AI regulations—a prospect Keith finds deeply troubling.
"You can't encourage people to drive fast in the car if you don't have safety features," he argues. "We have to have consideration for that."
For now, ACT manages this risk through rigorous third-party vendor assessments. Keith's team asks pointed questions: How are you using AI? Are you processing our information with it? What controls do you have in place? What testing are you doing? They're also watching for AI-specific sections to appear in SOC 2 audits and other third-party attestations.
As for AI-specific insurance coverage? Keith hasn't seen it emerge yet but wouldn't be surprised when it does. The liability landscape is simply too uncertain for the current status quo to persist.
Keeping Governance Current in a Rapidly Evolving Field
Perhaps the hardest challenge in AI governance isn't building the initial framework—it's keeping pace with the technology's breakneck evolution. Keith's approach combines multiple touchpoints to ensure the governance committee stays current.
The committee meets every four to six weeks, with task groups convening biweekly to share discoveries. A Microsoft Teams channel serves as a continuous information feed where members post relevant articles, share business unit applications, and promote adoption through peer learning.
Most importantly, ACT implemented mandatory 30-minute AI training during annual compliance sessions. Every employee, regardless of role, goes through an introduction covering what AI is, how it's best used, organizational challenges, and acceptable use policies.
This isn't about control—it's about enablement through education. When everyone understands both AI's potential and its risks, they make better decisions without constant oversight.
Final Thoughts
Keith concludes with a perspective that captures the moment we're living through: "I think what's happening is science fiction is becoming nonfiction. We've talked about AI for 50 years, but now it's hitting mainstream, and everyone can participate."
The organizations that will thrive aren't those that restrict AI out of fear or adopt it recklessly without guardrails. They're the ones building governance frameworks sophisticated enough to quantify risk, flexible enough to enable innovation, and principled enough to keep humans in the loop for decisions that matter.
As Keith emphasizes, successful AI governance requires getting past subjective risk ratings to actual numbers, meeting employees where they are rather than where you wish they were, and recognizing that your organization—not your vendors—bears ultimate responsibility for using AI ethically and legally.
To hear more insights about building AI governance that enables rather than restricts, tune in to this episode of Risk Management: Brick by Brick.
👉 Spotify: https://bit.ly/3J5WB2j
👉 Apple Podcasts: https://apple.co/4hiik3Q
👉 YouTube: https://youtu.be/MS8GpoeRAF0
Podcast Host: Jason Reichl
Executive Producer: Don Halliwell