Why AI belongs in your ESG conversations

ESG frameworks were built to hold organizations accountable for the things that matter beyond the bottom line: environmental impact, social responsibility, and governance integrity. For years, ESG practitioners, governance professionals, and dedicated board directors have worked to make these frameworks meaningful—to move them from checkbox compliance toward something that actually influences how people within organizations act and make decisions.

But as things stand today, AI—the technology that is arguably having the fastest and most far-reaching impact on our workforces, our communities, and our decision-making—is almost entirely absent from the ESG conversation.

Glass Lewis recently examined AI governance disclosure among S&P 100 companies. Just over half disclosed board-level AI oversight, and fewer than one in three disclosed both oversight and a formal AI policy. If that's where the largest, most heavily scrutinized companies are, the picture elsewhere is almost certainly more inconsistent.

Canada has no comprehensive AI legislation to fill that void. The Artificial Intelligence and Data Act died on the order paper in early 2025, and standards are still emerging and far from settled. That means the organizations doing this well are rare enough that I'd genuinely like to find them (more on that below).

While I’m not sure that anyone has this figured out yet, I don’t think we have the luxury of waiting for the perfect framework to emerge before starting the conversation. Here are three areas where I suggest we start:

Human-centred AI use

The "S" in ESG asks us how organizations treat people—employees, members, clients, communities. AI use absolutely belongs in that conversation.

How is your organization using AI in ways that advance the wellbeing of the people it serves and employs? What decisions about workforce reduction, restructuring, or automation are being made—with what oversight, and what consideration for the people affected? Are AI tools being deployed in ways that actually support people, or are they creating added stress and pressure around productivity, job losses, and “AI brain fry”?

These are questions that organizations are confronting right now, but are they even visible to boards? Boards need to consider:

  • Developing a clear position on workforce impacts of AI adoption

  • Measuring the human outcomes, not just the efficiencies

Ethical AI use

This connects to both the "S" and the "G". We know that AI systems carry and reproduce bias. They reflect the data they were trained on, and that data is based on the world as it has been, not as it should be. Organizations that are using AI in hiring, lending, service delivery, or any decision that affects people have an ethical obligation to understand what's embedded in the tools they're using—and to actively mitigate harm.

This is a governance question, not a tech question. Who in your organization is accountable for understanding and managing AI-related bias? What guardrails exist? When something goes wrong (and eventually, something will), who is accountable?

Sustainable AI use

Let’s turn to the “E” in ESG. AI has an environmental footprint that is large and often undisclosed. Training and running AI models consumes significant energy and water. The infrastructure behind the tools we're all using daily has a material environmental impact that most organizations haven't begun to quantify, let alone report on.

This doesn't mean organizations should stop using AI. But it does mean that organizations with serious climate commitments need to be asking whether their AI use is consistent with those commitments, and whether they have any visibility into the environmental costs of the tools they've adopted.

Do boards know the environmental footprint of their organizations’ AI use, and have they assessed how that aligns with existing sustainability commitments?

Doing something is better than doing nothing

These frameworks are going to evolve. I suspect that the questions I’m posing in 2026 will look hopelessly outdated in 18 months.

We still have to have these conversations now, because AI is no longer an emerging issue on the horizon. It's here, it's moving fast, and organizations are making important decisions about it right now. As far as I can tell, they're mostly doing it outside any ESG accountability framework.

Governance professionals and ESG practitioners are well placed to lead this conversation. We understand how to build accountability structures around complex, fast-moving issues. We know that what gets measured—and disclosed—is what gets managed.

AI governance doesn't need its own separate framework; it needs to be part of the one we're already building.

Who's already doing this well?

I’d love to know which organizations—in Canada or elsewhere—are integrating AI considerations meaningfully into their ESG reporting and strategy. Are there practitioners, frameworks, or conversations worth paying attention to? Who’s doing this well, and what can we learn from each other?

If you’re working on this, let me know how it’s going. Where are you making the biggest impact? What huge questions remain?
