
Your Board Asked About AI Risk — Here's What to Actually Do

Every company shipping LLM features today has AI risk. Most can't articulate what that risk actually is, which is exactly what boards are worried about. This is the framework we use to turn a vague board question into a concrete risk register and governance plan.

Why this matters

The board asked. You need an answer that isn't "we use OpenAI so we're fine." What the board is worried about isn't the technology—it's the liability, the data exposure, the regulatory uncertainty, the reputational tail risk. Boards don't care about model architectures. They care about whether you've identified the risks, measured them, and built controls around them. This post is how you walk into that meeting prepared.


1. What the board is actually asking

When a board member asks "what's our AI risk," they're really asking four questions at once, even if they don't say so directly:

  • Liability: If an LLM feature causes harm (wrong diagnosis, biased decision, leaked data), who gets sued? Us or the vendor?
  • Data exposure: What customer, employee, or proprietary data are we sending to these models? Who owns that data? Can we get it back?
  • Regulatory: The EU AI Act exists now. SEC guidance on AI disclosure is live. Are we compliant? Are we disclosing material risks?
  • Reputation: If one of our AI features fails publicly, how bad is the headline?

Have answers to all four before you walk into the room. If you don't have them yet, that's the primary risk you're managing this quarter.

2. The three categories of AI risk that matter for governance

Forget the academic taxonomy. Your board understands risk categories because they use them for operational risk, credit risk, and market risk. AI risk breaks into three categories that map to your existing risk frameworks:

  • Model risk: The model itself is wrong, biased, unreliable, or adversarially manipulated. Includes hallucinations, prompt injection, model poisoning, distribution shift. This is the "AI does something unexpected" bucket.
  • Data risk: Sensitive data leaks through the model, is logged by the vendor, or is used to train the next version. Includes data exfiltration, privacy violations, data retention failures.
  • Operational risk: The model fails, the vendor goes down, the model gets discontinued, or your team doesn't understand how the model actually works in production. Includes vendor lock-in, lack of reproducibility, skill gaps.

Every AI feature you ship creates risk in at least one of these buckets. Some features create risk in all three. The board wants to know which risks you've identified, which ones matter, and what you're doing about them.

3. What "we use a hosted model" does and doesn't cover

This is where most conversations break down. Engineering teams often assume that using OpenAI or Anthropic means the vendor owns the risk. That's backwards. You own the risk of how you deploy it. The vendor owns the risk of the model itself. It's a shared responsibility model, and your board needs to understand the split:

Vendor's responsibility: Model safety (training data filters, constitutional AI, RLHF quality), platform uptime, infrastructure security, terms of service compliance.

Your responsibility: Input validation, output monitoring, data governance (what you send to the model, what you do with the response), access controls, incident response, disclosure to customers if your use of the model causes harm.

If your product is "wrap an API call in a UI and ship it," you own the risk of that wrapper and the risk of whatever sensitive data you decide to run through it. That's not transferable to the vendor.
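The "your responsibility" items above—input validation and data governance in particular—are concrete engineering work, not policy documents. A minimal sketch of a guard layer that screens PII before a prompt ever reaches a hosted model (the function names and regex patterns are illustrative; a real deployment would use a dedicated PII-detection library, not two regexes):

```python
import re

# Illustrative patterns only — real PII detection needs a proper library.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Replace detected PII with placeholders; return findings for the audit trail."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, findings

def guarded_prompt(user_input: str) -> str:
    """Input-validation layer that runs before any vendor API call."""
    clean, findings = redact_pii(user_input)
    if findings:
        # This log line is the audit trail your risk register points at.
        print(f"audit: redacted {findings} before model call")
    return clean  # pass `clean`, never `user_input`, to the model
```

The point is the control boundary: nothing crosses into the vendor's API without passing through code you own, log, and can show an auditor.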

4. The regulatory landscape in 60 seconds

This is moving fast. You don't need to be a lawyer, but you need to know the three things that will hit you first:

  • EU AI Act (in force, with high-risk obligations phasing in): If your AI system is "high-risk" (used in hiring, credit decisions, fraud detection, authentication, etc.), you need documented risk assessments, data governance, human oversight, and audit trails. NIST AI RMF is a good scaffold for this.
  • NIST AI Risk Management Framework: Not a law, but it's the standard. Four functions (GOVERN, MAP, MEASURE, and MANAGE). Start with MAP and MEASURE—you need to know what risks exist before you can govern them.
  • SEC guidance on AI disclosure: If AI risk is material to your business (it probably is), you may need to disclose it in your risk factors. Don't overstate, don't understate.

Talk to your legal team. But also understand that regulatory compliance is a floor, not a ceiling. Your board cares more about demonstrated governance than regulatory checkbox-ticking.

5. Building an AI risk register

Your board understands risk registers because you already use them for operational risk and security risk. Build one for AI and use the same format. You need:

  • Risk ID: A name, not jargon. "Customer data exfiltration via model context window" is better than "data risk #3."
  • Category: Model, data, or operational.
  • Description: What could go wrong, why it matters.
  • Likelihood: High/medium/low. Base this on real data: is this scenario plausible given your current controls?
  • Impact: High/medium/low. Financial loss, regulatory action, customer churn, reputational damage.
  • Current controls: What are you already doing to prevent or detect this?
  • Owner: Who's accountable?
  • Target state: What control do you want to add or improve?

Start with the risks that are high likelihood and high impact. You'll probably find 8–15 risks worth tracking. That's normal. You don't need to solve all of them tomorrow. You need to be able to explain why each one matters and what you're doing about it.
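The field list above maps directly onto a simple record. A sketch in Python—field names follow the list, and the sample entry is illustrative, not a real assessment:

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    risk_id: str                  # plain-language name, not jargon
    category: str                 # "model" | "data" | "operational"
    description: str
    likelihood: str               # "high" | "medium" | "low"
    impact: str                   # "high" | "medium" | "low"
    current_controls: list[str] = field(default_factory=list)
    owner: str = "unassigned"
    target_state: str = ""

def top_priorities(register: list[AIRisk]) -> list[AIRisk]:
    """High-likelihood, high-impact risks — the ones to brief the board on first."""
    return [r for r in register if r.likelihood == "high" and r.impact == "high"]

# Illustrative entry
register = [AIRisk(
    risk_id="Customer data exfiltration via model context window",
    category="data",
    description="PII sent in prompts is retained in vendor logs",
    likelihood="high",
    impact="high",
    current_controls=["prompt redaction"],
    owner="CISO",
    target_state="automated PII scanning on all model inputs",
)]
```

Whether this lives in a spreadsheet, a GRC tool, or a repo matters less than keeping every field filled in and the owner column real.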

6. The assessment you should run before the next board meeting

You need real data on your current risk posture. Run these three assessments:

  • OWASP LLM Top 10 gap analysis: Go through the OWASP Top 10 for LLMs (prompt injection, insecure output handling, training data poisoning, etc.). For each item, write down how exposed you are. This takes a day, max. It gives you a map of where your defenses are weak.
  • Data flow mapping: Trace what data actually flows into your LLM features. What production data? What customer data? What employee data? How is it stored? How long? Who can access it? This is often a revelation.
  • Third-party model inventory: List every model you're using (OpenAI GPT-4, Anthropic Claude, open-weight Llama variants, fine-tuned models). For each one: terms of service, data retention policy, SLA, incident response contact, security certifications. You need to know what you're relying on.

These three pieces become the foundation of your board presentation. They show that you've done the work.
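The model inventory in particular is worth encoding as structured data with a completeness check, so a missing field shows up as a gap rather than a blank cell nobody notices. A sketch—the entry values are hypothetical, and the required fields mirror the checklist above:

```python
# Hypothetical inventory entry — fields mirror the checklist in the text.
MODEL_INVENTORY = [
    {
        "model": "openai-gpt-4",
        "terms_of_service": "reviewed 2024-05",
        "data_retention": "30 days, training opt-out confirmed",
        "sla": "99.9% (enterprise tier)",
        "incident_contact": "vendor security portal",
        "certifications": ["SOC 2 Type II"],
    },
]

REQUIRED_FIELDS = {"model", "terms_of_service", "data_retention",
                   "sla", "incident_contact", "certifications"}

def inventory_gaps(inventory: list[dict]) -> dict[str, list[str]]:
    """Flag entries with missing or empty fields — each gap is an operational risk."""
    gaps = {}
    for entry in inventory:
        missing = [f for f in sorted(REQUIRED_FIELDS) if not entry.get(f)]
        if missing:
            gaps[entry.get("model", "unknown")] = missing
    return gaps
```

Run the gap check whenever a team adds a model; an entry that can't be completed is itself a finding for the risk register.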

7. How to present findings to the board

Three principles:

  • Quantify, don't dramatize. Don't say "AI is risky." Say "We've identified 12 material AI risks, 3 are high likelihood and high impact, and we have mitigation plans for all 3." That's honest and specific.
  • Show controls, not just risks. For each high-impact risk, describe the control you're building or reinforcing. "We're mapping every data flow into models and setting guardrails on PII." That's actionable.
  • Use the format they already understand. A risk register slide with your AI risks, likelihood, impact, and owner looks like every other risk register they've seen. That's good. It means they can evaluate it using the same mental model.

Have a slide that shows which risks are trending down (you added a control and measured the result). That's the conversation they want to have.

8. The governance structure you need

Define who owns AI risk. It's not "engineering owns it" and it's not "security owns it." It's both. You need:

  • An AI risk owner: Someone with budget and authority. Often a VP of Eng or Chief Information Security Officer (CISO). This person is accountable to the board.
  • A cross-functional AI governance committee: Engineering, security, product, legal, compliance. They meet quarterly to review the risk register, assess new AI features before they ship, and respond to incidents.
  • A pre-deployment review checklist: Before any AI feature ships to production, it goes through this checklist. What data does it touch? What model is it using? What controls are in place? Who tested it?
  • An incident playbook: If an AI feature fails in production, who gets called? What's the first hour response? Who tells customers? You don't need this until you need it—and then you need it immediately.

This sounds heavyweight. It's not. A quarterly meeting, a one-page checklist, and a playbook take a few days of work to build and save you weeks of reactive firefighting later.
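The one-page checklist can be as simple as a gate function that refuses to pass until every question has an answer. A sketch—the field names are hypothetical and follow the four questions above; a real version might run as a CI check on the feature's deployment manifest:

```python
# Hypothetical checklist encoded as a gate; questions follow the text.
CHECKLIST = [
    ("data_classified", "What data does it touch?"),
    ("model_inventoried", "What model is it using?"),
    ("controls_documented", "What controls are in place?"),
    ("tested_by", "Who tested it?"),
]

def ready_to_ship(feature: dict) -> tuple[bool, list[str]]:
    """Return pass/fail plus the unanswered checklist questions."""
    open_items = [question for key, question in CHECKLIST
                  if not feature.get(key)]
    return (not open_items, open_items)
```

A feature that can't answer all four questions doesn't ship—that single rule is most of the governance structure in practice.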

Field observation: Teams that embed this governance in Q1 and run it through Q4 ask very different questions in year two. Their question is no longer "Do we have AI risk?" It's "How do we prioritize between model robustness, data isolation, and operational resilience?" That's the conversation that makes your board confident you've got it under control.

Field observation: The companies we work with that have sailed through AI risk conversations with boards are the ones that showed up with a documented register, named owners, and a quarterly governance rhythm. It's not the companies with the most sophisticated controls. It's the ones that can explain what they're doing and why.

The short version

Your board is asking about AI risk because liability, regulation, and reputation all hinge on your ability to identify and manage it. You need a documented risk register (model, data, operational categories), a shared responsibility model that explains what the vendor owns versus what you own, and a governance structure with an owner, a review cadence, and a deployment checklist. Start by running an OWASP LLM Top 10 gap analysis, mapping your actual data flows, and inventorying your models and their terms of service. Then build your risk register and present it to the board in the format they already understand. Quarterly governance meetings and incident playbooks are the last mile. This isn't a security problem. It's a governance problem, and boards know how to evaluate governance.

Want us to build your AI risk register?

We run the OWASP LLM Top 10 gap analysis, map your data flows, inventory your models, and hand you a board-ready risk register with controls. Senior AI security practitioners.