What Risk Practitioners Actually Think About AI in Risk Management

The gap everyone sees, the AI that impressed them, and the honest feedback — straight from the people who do this for a living.

Mark Scales

Over the past year, I’ve been showing StartRisk to experienced risk professionals and asking them to put it through its paces. Not a sales demo — I give them access, ask them to set up a real company, and tell me what they think. No filter.

I’ve now had these conversations with dozens of practitioners: Big Four consultants, bank auditors, people who’ve built GRC platforms from scratch, and independent advisors working across Asia-Pacific, North America, the UK, and the Middle East. Their experience spans industries, geographies, and every level of risk maturity.

What surprised me wasn’t that they had opinions; it was how consistent those opinions were. Not just about StartRisk, but about where risk software actually stands and where AI fits into the picture.

Here’s what I’ve learned.

The Middle of the Market Isn’t Well Served

Among the risk practitioners I spoke to, there’s a clear consensus: the risk software market has a hole in it. There are spreadsheets, and there are enterprise GRC platforms. There’s almost nothing in between.

One practitioner, who’d spent years working with Archer at a major university, put it bluntly. He said he had “a passionate, deep hatred” for it — not because it was a bad product, but because it was wildly over-engineered for organisations that just needed to manage risk effectively. The features were there. The usability wasn’t.

Another, coming from 16 years at one of Australia’s largest banks, described the GRC world he lives in as “beyond fathomable” in terms of admin overhead. When he looked at what we’re building, his phrase was telling: “less risk admin and more risk management.” He called it a sweet spot in the market that he’d never seen anyone else occupy.

If experienced practitioners — people who’ve used the enterprise tools — are saying the mid-market is underserved, that’s not a niche problem. That’s the majority of organisations. Most businesses don’t have a dedicated risk team. Most can’t justify six-figure GRC implementations. They need something that works without requiring a department to run it.

AI Can Do the Translation — But Context Matters

I’ve always said that a big part of risk consulting for me has been translation. Someone tells you about their business — their worries, their plans, what keeps them up at night — and you convert that into risk language. Statements, controls, framework terminology. You’re a human interface between plain English and risk-speak.
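
To make that translation step concrete, here’s a minimal sketch, assuming an OpenAI-style chat API (the openai Python package) and a cause/event/impact risk statement format. The prompt, schema, and model choice are my own illustrations, not StartRisk’s implementation.

```python
import json
from dataclasses import dataclass

from openai import OpenAI  # assumes the openai package, v1.x client

@dataclass
class RiskStatement:
    cause: str                    # what could trigger the risk
    event: str                    # what could happen
    impact: str                   # the consequence for the business
    suggested_controls: list[str]

SYSTEM_PROMPT = (
    "You are a risk consultant. Translate the owner's plain-English concern "
    "into a structured risk statement. Respond with JSON using the keys "
    "cause, event, impact, and suggested_controls (a list of strings)."
)

def translate_concern(concern: str) -> RiskStatement:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": concern},
        ],
    )
    return RiskStatement(**json.loads(resp.choices[0].message.content))

# e.g. translate_concern("A key supplier sits in a conflict zone and could go offline overnight")
```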

AI is now great at that translation. Several practitioners tested our AI by setting up companies they’d actually worked with — real businesses with specific risk profiles — and were impressed that the system produced relevant risks from minimal input. One deliberately tried to break it with a geopolitical supply chain scenario involving a specific conflict zone. His response: “I want to applaud however you set up that AI.” Then, after a pause: “Once this gets big, I’m going to have no jobs.” He was half-joking. Maybe less than half.

But there’s a consistent and fair criticism. The AI-generated risks are well matched to business type at a first pass, but they don’t yet go deep enough into geographic, political, or industry-specific nuance. One practitioner tested companies across multiple countries and felt the outputs didn’t pick up on those country-specific differences.

The real value of risk management isn’t in generating a list — it’s in generating the right list for your organisation. AI gets you 80% of the way there much faster than any human process. But that last 20% — the context that comes from actually understanding the operating environment — is still where experienced practitioners or risk consultants will earn their keep.

Risk Identification Is Solved. What Comes Next Isn’t.

A theme that came up across multiple conversations: most of the industry’s energy has gone into the front end of risk management — identifying risks, writing statements, recommending controls. And AI has made that dramatically faster.

But what happens after that? One practitioner, who manages governance for a major bank, made the point well. The initial setup gets strong marks. The ongoing assurance piece — reviewing, incorporating real-world incidents, proving that controls actually work over time — is where the real gap opens up.

He flagged something I’ve heard from several people: without a way to record incidents and tie them back to specific risks, periodic risk reviews stay theoretical. You’re attesting that controls are working, but no real-world evidence is feeding into the assessment. Building that feedback loop is the next step in maturing StartRisk.
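
As a sketch of what that linkage could look like in data terms, here’s a hypothetical schema (the field names are mine, not StartRisk’s data model). The point is simply that each incident points back at a specific risk, so a periodic review can draw on real-world evidence rather than attestation alone.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Incident:
    occurred: date
    description: str
    risk_id: str                        # ties the incident back to a specific risk
    control_failed: str | None = None   # which control, if any, didn't hold

@dataclass
class Risk:
    risk_id: str
    statement: str
    controls: list[str]
    incidents: list[Incident] = field(default_factory=list)

def review_evidence(risk: Risk, since: date) -> dict:
    """Summarise the real-world evidence for one risk ahead of a periodic review."""
    recent = [i for i in risk.incidents if i.occurred >= since]
    return {
        "risk_id": risk.risk_id,
        "incidents_since_last_review": len(recent),
        "controls_implicated": sorted({i.control_failed for i in recent if i.control_failed}),
    }
```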

Risk identification is rapidly becoming table stakes. The real opportunity — for organisations and for the industry — is in applying AI to the full management cycle: the ongoing, iterative work of monitoring, reviewing, and adjusting. That’s largely uncharted territory, and it’s where the biggest gains are waiting.

The AI Governance Question Nobody’s Answered Yet

The sharpest question came from practitioners who regularly present to boards and directors themselves. They raised a tension that I think defines the next chapter of AI in risk management.

If the AI generates a risk statement and it’s accurate, what does the board want to see? Evidence that someone reviewed it? Evidence that someone changed it? What if there’s nothing to change? How do you demonstrate human oversight in a system that’s designed to reduce the need for human intervention?

This isn’t a theoretical question. As AI tools get better, every organisation using them will face it. And boards are going to start asking — not because they doubt the AI, but because governance requires them to.

I think the answer has three parts.

First, the audit trail needs to show the thinking, not just the sign-off. “Reviewed by: Jane Smith” tells a board nothing. What they need to see is what the review actually tested — what criteria the output was assessed against, whether it was accepted or modified, and on what basis. A decision record, not a signature. This scales even when the reviewer agrees with the AI, because the rationale for agreement is documented. Over time, this gives boards a traceable thread from AI output to human judgement to organisational decision.
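
As a sketch, a decision record of that kind might carry fields like these; the structure and names are my own illustration, not a standard:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ReviewDecision:
    """One reviewer's decision on one AI output: the thinking, not just the sign-off."""
    output_id: str                  # which AI-generated statement was reviewed
    reviewer: str
    criteria_tested: list[str]      # e.g. relevance to the business, severity calibration
    outcome: str                    # "accepted", "modified", or "rejected"
    rationale: str                  # recorded even when the reviewer agrees with the AI
    modified_text: str | None       # the human's version, where the outcome was "modified"
    decided_at: datetime
```

Even an “accepted, no changes” record carries a rationale a board can interrogate, which is what makes this a decision record rather than a signature.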

Second, calibration reporting. Rather than asking boards to trust that every AI output was individually reviewed, show them how the system performs. Sample AI-generated assessments, compare them against independent expert judgement, and report the results: agreement rates, divergence cases, patterns. Boards understand sampling and statistical confidence. Give them a governance metric they can track over time — one that actually measures AI reliability rather than just confirming a human was present.
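
The arithmetic behind that kind of report is straightforward to sketch. Here’s a minimal, illustrative version; the data shapes and fixed sample size are assumptions of mine:

```python
import random

def calibration_report(ai_ratings: dict[str, str],
                       expert_ratings: dict[str, str],
                       sample_size: int = 50,
                       seed: int = 0) -> dict:
    """Sample risks rated by both the AI and an independent expert, and report agreement."""
    shared = sorted(set(ai_ratings) & set(expert_ratings))
    sample = random.Random(seed).sample(shared, min(sample_size, len(shared)))
    divergent = [r for r in sample if ai_ratings[r] != expert_ratings[r]]
    return {
        "sampled": len(sample),
        "agreement_rate": 1 - len(divergent) / len(sample) if sample else None,
        "divergent_cases": divergent,   # the cases worth walking a board through
    }
```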

Third, independent review of the oversight process itself. The question isn’t just “is someone reviewing the AI?” — it’s “is the review process effective?” This is where third-party audit comes in. Just as organisations audit their internal controls, the AI governance framework needs periodic independent assessment. Are reviewers qualified? Is the challenge function genuinely independent? Are escalation triggers working? Boards are used to this from internal audit and assurance — the concept translates directly.

But there’s a harder question underneath all of this, and several practitioners raised it directly.

What happens when the AI is better than the reviewer?

Today, we assume the human is the expert and the AI is the tool. But AI is improving fast. If it consistently outperforms a generalist reviewer — someone who isn’t a risk specialist but is responsible for signing off risk assessments — then you have an inversion. The human-in-the-loop becomes the weaker link.

This cuts both ways. AI will increasingly have advantages that humans simply can’t match: total recall across every dataset it can access, perfect consistency in applying criteria, the ability to process and cross-reference thousands of risks and controls simultaneously, and no fatigue or cognitive bias affecting its judgement on a Friday afternoon. On the other hand, humans bring things AI still can’t access: organisational politics, stakeholder dynamics, ethical trade-offs, institutional memory that never made it into a document, and the kind of contextual judgement that comes from actually sitting in the room when the decision was made.

The question isn’t whether AI will outperform humans in some areas — it already does. The question is what organisations do about it. A few factors to consider:

  • Liability inversion. Right now, the assumed risk is “we relied on AI and it got it wrong.” Soon the risk may equally be “a human overrode the AI, and the human was wrong.” Governance frameworks need to account for both directions.
  • The deskilling trap. If humans stop doing the cognitive work because AI handles it, they gradually lose the ability to meaningfully oversee it. This is the autopilot problem from aviation — and it argues for actively maintaining human competence even when AI is doing the heavy lifting.
  • Reviewer qualification thresholds. If the AI is better than a generalist, organisations need to define what level of expertise qualifies someone to review and override AI output. A rubber stamp from a non-specialist isn’t oversight — it’s theatre.
  • Escalation design. Not every AI output needs the same level of review. High-impact, high-ambiguity decisions need specialist human judgement. Routine, well-bounded outputs may need less. The governance model should reflect that distinction rather than applying a blanket “human reviews everything” rule (a minimal sketch of that tiering follows this list).
  • Transparency obligations. Regulators and stakeholders will increasingly ask not just “was a human involved?” but “what did the human actually contribute?” Organisations that can answer that clearly will be ahead of the curve.
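
To make the escalation-design point concrete, here’s the tiering sketch referenced above; the tiers and thresholds are illustrative, not a prescription:

```python
def review_tier(impact: str, ambiguity: float) -> str:
    """Route an AI output to a review tier.

    impact: "low", "medium", or "high" business impact.
    ambiguity: 0.0 (routine, well-bounded) to 1.0 (novel, poorly bounded).
    Thresholds are illustrative; a real framework would calibrate them.
    """
    if impact == "high" or ambiguity > 0.7:
        return "specialist_review"    # qualified human judgement, full decision record
    if impact == "medium" or ambiguity > 0.3:
        return "generalist_review"    # documented decision record
    return "sampled_audit"            # periodic sampling instead of per-item sign-off
```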

None of this means human oversight is obsolete. It means the nature of oversight needs to change. The human role shifts from generating and reviewing outputs to governing the system, challenging assumptions, and owning decisions the AI genuinely can’t make. And board reporting needs to reflect that shift — not “a human signed off” but “here’s how our oversight system is performing, here’s where humans added value, and here’s where the AI outperformed us.”

That’s a more honest governance posture. And honestly, it’s a more useful one.

Where This Is Heading

Every practitioner I spoke to circled the same set of challenges — just from different angles. The mid-market needs tools that actually work for them. AI has solved the hardest part of getting started, but the real test is what happens after the initial setup: ongoing assurance, evidence-based reviews, and controls that hold up under scrutiny. And sitting over all of it is a governance question that gets harder as AI gets better — one that boards are only just starting to ask.

These aren’t separate problems. They’re layers of the same one. Organisations that treat risk identification, ongoing management, and AI governance as a connected system will build something that’s genuinely resilient. Organisations that solve them in isolation will end up with a better-looking version of the same tick-box compliance they already have.

That’s what we’re building toward at StartRisk — not just faster risk registers, but a platform that supports the full cycle: from first risk statement through to board-level reporting, with governance baked in rather than bolted on.

If any of this sounds like the conversation you’re trying to have inside your organisation, we should talk.