AI Governance for Small and Medium Businesses and Not-for-Profits

Your staff are already using AI — has your governance kept up?

Mark Scales


AI governance isn’t about slowing innovation. For regulated small and medium businesses and not-for-profits, it’s about creating clarity, confidence, and control before AI use runs ahead of governance.

If your staff are already experimenting with tools like ChatGPT, Copilot, or Gemini — and they almost certainly are — then AI governance is no longer a “future” issue. It’s a current risk and opportunity that needs a clear, proportionate response.

In this article we offer an AI governance policy and staff guidance template drafted specifically for regulated small and medium business and not-for-profit environments. It’s the same template we use with our clients, so we know it’s not shelfware but a practical risk control that enables safe, valuable AI use.

If you want the template, there’s a downloadable Word document here.


Why Regulated Small and Medium Businesses and Not-for-Profits Need AI Governance

For regulated organisations, AI introduces a unique tension:

  • The opportunity: faster drafting, better analysis, reduced admin load, improved consistency
  • The risk: privacy breaches, poor decision-making, regulatory non-compliance, reputational damage

Unlike large enterprises, small and medium businesses and not-for-profits:

  • Don’t have dedicated AI governance teams
  • Often operate with lean IT and risk functions
  • Are held to the same regulatory expectations with far fewer resources

That’s exactly why clear, simple governance matters more, not less.

A good AI policy doesn’t try to solve AI forever. It answers three practical questions:

  1. What are we comfortable with staff using AI for?
  2. Where are the hard “no’s”?
  3. How do we stay in control as this evolves?

Section 1: Purpose - Setting Organisational Posture, Not Just Rules

The Purpose section of the template does more than introduce the document. It sets your organisational posture toward AI.

For regulated small and medium businesses and not-for-profits, this is critical.

Why this section matters

Regulators, auditors, and boards are less concerned with whether you use AI, and more concerned with:

  • Whether its use is intentional
  • Whether risks have been considered
  • Whether leadership has set clear expectations

Tailoring considerations

When adapting this section, your organisation should:

  • Explicitly link AI use to business or mission outcomes
  • Acknowledge the regulated context (privacy, client vulnerability, donor trust, accreditation)
  • Reinforce that AI is being adopted deliberately, not accidentally
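
For example (illustrative wording only, to be adapted to your context), a not-for-profit’s purpose statement might open: ‘We adopt AI deliberately to reduce administrative load and free resources for client services, while protecting the privacy of the people we serve and the trust of our donors and regulators.’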

This is where you quietly answer the regulator’s unspoken question:

“Did this organisation think before it acted?”


Section 2: Guiding Principles - Translating Risk Appetite into Behaviour

The guiding principles are the heart of the policy. They express your risk appetite for AI in plain English.

For regulated small and medium businesses and not-for-profits, risk appetite for AI is usually:

  • Moderate to cautious for client-facing or regulated decisions
  • More open for internal productivity and drafting tasks

That’s normal - and appropriate.

Why this section matters

Principles guide behaviour when:

  • Tools change
  • Use cases evolve
  • Staff face grey areas not covered by rules

Key principles and how to tailor them

Value and Impact

Ask: Where does AI genuinely help us?

Regulated organisations should prioritise:

  • Admin reduction
  • Drafting and summarisation
  • Internal analysis and brainstorming

…not autonomous decision-making.

Transparency and Control

If staff can’t explain how an AI output was produced or reviewed, it shouldn’t be relied on — especially in regulated contexts.

Documentation

This principle is often overlooked but is vital for auditability. Documentation doesn’t need to be complex; it just needs to show:

  • What AI is used
  • For what purpose
  • With what safeguards
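
In practice, this can be as light as one row per tool in a shared register. A purely illustrative entry (the tool and safeguards shown are examples, not recommendations):

  • Tool: ChatGPT (organisational account)
  • Purpose: drafting internal communications and summarising meeting notes
  • Safeguards: no client or donor data entered; human review before any output is used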

Human in the Loop

This is the clearest expression of risk appetite.

For regulated small and medium businesses and not-for-profits, the line is simple:

AI can assist — but humans remain accountable.

Risk-Aware

This reframes AI governance from fear to confidence. It signals that:

  • Risks are acknowledged
  • Controls are proportionate
  • Innovation is still encouraged

Section 3: Acceptable Use - Turning Risk Appetite into Clear Boundaries

This section answers the question staff care about most:

“What can I actually use — and what can’t I?”

Why this section matters

In the absence of clarity, staff will:

  • Use personal accounts
  • Upload sensitive information
  • Make inconsistent judgment calls

That’s not a people problem - it’s a governance gap.

Tailoring considerations

Regulated small and medium businesses and not-for-profits should be explicit about:

  • Approved vs experimental tools
  • Use of organisational credentials
  • Hard boundaries around client, donor, or sensitive data

This is where risk appetite becomes operational:

  • Low tolerance for data exposure
  • Higher tolerance for low-risk internal efficiency gains
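
To make this concrete, an acceptable-use entry might read something like this (the tool named is illustrative only): ‘Staff may use Microsoft Copilot under their organisational account for internal drafting and summarisation. Client, donor, or employee personal information must not be entered into any AI tool, approved or otherwise.’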

Section 4: Key Risks of AI Usage - Naming the Risks You’re Accountable For

This section is not about being alarmist. It’s about being realistic.

Why this section matters

Boards and regulators expect organisations to be able to articulate:

  • What could go wrong
  • Why it matters
  • Who escalates concerns

The risks outlined (privacy, misinformation, bias, reputation) are particularly acute in regulated environments where trust is central.

Tailoring considerations

Organisations should:

  • Use examples relevant to their sector (e.g. client communications, funding submissions, regulatory reporting)
  • Clarify escalation pathways
  • Reinforce that uncertainty is a signal to pause, not push ahead
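
An escalation pathway can be a single sentence, for example (the role named is illustrative): ‘If you are unsure whether a task is appropriate for AI, pause and check with the designated AI policy owner before proceeding.’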

Section 5: Approach to AI Adoption - Balancing Opportunity and Control

This section is where many policies fail - by either banning too much or allowing too much.

Why this section matters

A structured approach signals maturity, not resistance.

For regulated small and medium businesses and not-for-profits, the most defensible posture is:

  • Proven tools first
  • Defined use cases
  • Pilot before scale
  • Education over prohibition

This aligns with a measured risk appetite - enabling benefit while avoiding regulatory shock.


Section 6: Governance and Oversight - Keeping It Light but Real

Governance doesn’t have to mean committees and paperwork.

Why this section matters

Without ownership, AI use becomes invisible - and invisible risk is the hardest to manage.

Tailoring considerations

For small organisations:

  • Keep governance centralised but light
  • Focus on visibility, not bureaucracy
  • Ensure someone is accountable for:
    • Reviewing use cases
    • Maintaining tool lists
    • Updating guidance as AI evolves

This is where AI governance quietly integrates into your broader risk management framework.

For organisations with existing governance structures, don’t reinvent the wheel. Incorporate AI governance into an existing governance forum.


Staff Guidance: Where Policy Becomes Culture

The staff guidance section is just as important as the policy itself.

Why this matters

Policies set rules. Guidance shapes behaviour.

Good guidance:

  • Explains the why
  • Builds confidence
  • Reduces fear of “getting it wrong”

For regulated small and medium businesses and not-for-profits, this is how you:

  • Empower staff
  • Reduce shadow AI use
  • Create consistent, defensible practice

Final Thought: AI Governance Is a Risk Control - Not a Brake

For regulated small and medium businesses and not-for-profits, AI governance is about:

  • Clear intent
  • Proportionate controls
  • Aligned risk appetite
  • Practical guidance staff can follow

When done well, an AI policy doesn’t slow organisations down - it removes uncertainty, protects trust, and allows teams to use AI with confidence, not caution.

This is exactly where AI-enabled risk management tools like StartRisk can support organisations: helping them document intent, assess emerging risks, and keep governance aligned as technology evolves - without adding unnecessary complexity.

If you’d like a free, no-obligation discussion about how to tailor this template to your organisation - including aligning it to your regulatory context and risk appetite for AI - you can book a conversation with one of our experienced risk consultants here.

Don’t forget to download the AI Governance template here. It’s a simple and easy place to start your organisation’s AI journey!