Improving board reporting in an AI world

One of the recurring patterns I see in organizations—across sectors and governance models—is that insufficient time, attention, or understanding given to board reporting eventually shows up as frustration in the boardroom.

You’re probably familiar with the symptoms, whichever side of the boardroom table you sit on. Directors feel they are wading through operational detail. Executives complain directors are “in the weeds.” Strategic conversations get lost, and the quality of dialogue suffers.

Whether management is bringing forward a new strategy, an analysis, or a brief on a decision to be made, the temptation to stick a cover sheet on a management report and circulate it to the board can be high (I’ve done it myself!). After all, executives are working at pace and board reporting cycles are always tight.

But a board report is not simply an update: it’s a tool for oversight and decision-making. It should clarify what matters strategically, what risks require attention, what trade-offs exist, and what, precisely, is being asked of directors.

Writing effectively for a board is a distinct skill, needing clarity about the difference between operational management and governance oversight. Many executives have never been explicitly taught how to make that shift. The growing interest in AI raises an interesting question: can it help?

Let me be completely clear: AI is not (currently) a substitute for human judgement. It does not understand context, history, political nuance, or fiduciary duty in the way experienced leaders do. It can oversimplify, hallucinate, and generate inaccuracies. Any use of AI in board reporting must sit firmly within your organizational guardrails around privacy, confidentiality, and regulatory compliance. But within those boundaries, AI can be used thoughtfully to strengthen board reporting, by sharpening thinking rather than replacing it.

First, you need a strong board reporting template that ensures all reporting going to the board answers the following questions:

  • What is the purpose of this report?

  • How does it connect to our strategy?

  • What options have been considered?

  • What are the risks and financial implications?

  • What is management recommending, and why?

  • What thought-starter questions should directors consider, and where is board input required?

  • What happens next?


Once that foundation is in place, you can experiment with using AI to interrogate and refine draft material (again—always within the context of your organizational AI use policy!).

Ask AI to “interview” you to help you establish what, exactly, you’re seeking from the board. If the ask is not clear in your mind, it won’t be clear to directors. Here are some prompts that might then help you create a clearer board report:

  • Can you help reframe this operational report into a board-level paper focused on oversight and decision-making?

  • What have I left out, and why might it matter?

  • Where am I describing activity rather than outcomes or impact?

  • What strategic implications are implied but not clearly articulated?

  • Where might this inadvertently invite the board into operational detail?

  • What appears to be missing?

Used in this way, AI can expose gaps and increase governance-level clarity. Crucially, the judgement behind all board reporting must remain human. Before a paper goes forward, you must pause and review the whole of its content. Is its purpose explicit? Has important nuance been captured? Is it accurate and complete? Does it truly help the board focus on what matters most?

When board reporting improves, the impact is very human: more strategic conversations, greater clarity, and a better use of everyone’s time and attention. Within the guardrails I’ve described, I can see real scope for using AI—judiciously and thoughtfully—to help frame issues in ways that support those outcomes.

I also know that many governance thinkers remain deeply sceptical about AI in board work. That scepticism makes sense. Governance rests on judgement, context, accountability, and experience—none of which we should be handing to a tool. At the same time, AI is already finding its way into executive workflows. Rather than pretend it’s not happening, organizations need to approach it consciously: with clear boundaries and expectations.

This space is evolving extraordinarily quickly. What we describe as leading practice today will shift, probably within months. The fundamentals, however, will not: governance is, at its core, human work. If AI can be used in ways that genuinely support clearer thinking, better conversations, and stronger oversight—without diluting accountability—then it deserves careful, thoughtful engagement.

Shona McGlashan is a Fellow of the Chartered Governance Institute. As principal at McGlashan Consulting, she works with boards and executive teams on board effectiveness, culture, EDI, and leadership. She lives and works on the lands of the xʷməθkwəy̓əm (Musqueam), Skwxwú7mesh (Squamish), and Səl̓ílwətaʔ/Selilwitulh (Tsleil-Waututh) Nations.
