Guest blog: Dr. Cletus Kadzirange (GBS Oxford University, United Kingdom)
By now, almost everyone has heard that artificial intelligence is revolutionising the commercial world. In addition to creating customer insights and automating procedures, it offers advice on hiring, pricing, and medical diagnosis. Around board tables, the atmosphere is frequently positive—AI is quick, intelligent, and full of potential. 
While boards are positive about possibilities, are they prepared to govern AI?
This is a governance question, not a technological one. The most progressive boards are starting to realise that monitoring AI requires far more than a digital strategy, because AI has the potential to affect reputation, social license, compliance, ethics, brand, and more besides. Questions boards should consider centre on accountability, transparency and long-term risk management:
  • Who is at fault when AI fails? This is a question of accountability. Apple's credit card algorithm made headlines in 2019, when it was revealed to be offering women much lower credit limits than men with comparable financial profiles. Apple pointed to its banking partner, Goldman Sachs. Regardless of who is at fault, boards cannot afford to wash their hands. Instead, they need to lean in, consider who is responsible for the performance and outputs of AI systems, and satisfy themselves that oversight is adequate. Before systems behave in unpredicted ways (and they will), boards should confirm that escalation processes and remedial procedures are in place. Accountability is not about assigning blame, but about having foresight: to minimise the possibility of unintended outcomes and to respond well when they occur. The best companies embed clear accountability lines and practices during the design and implementation of AI systems, to facilitate good governance responses downstream.
  • Is it possible to see inside the black box? This is a question of transparency. Understanding AI's conclusions can be a challenge, even for the people who designed and trained the system! However, businesses that cannot explain the workings of their AI systems are coming under great pressure from consumers and authorities who want greater openness. Consider COMPAS, the system used by US courts to assess recidivism risk when sentencing criminals. Investigative journalists discovered the system was skewed against Black defendants. When challenged, the corporation that built the system refused to reveal its inner workings, citing trade secrets. Predictably, public disapproval and general suspicion rose sharply. The lesson here is that transparency is a reputational issue as much as a technological one. Boards should ensure management understands how AI systems work, and that credible non-technical explanations are available if required.
  • Are we ready for the new wave of regulation? This is a question of long-term risk. Regulation of AI is advancing rapidly. The Artificial Intelligence Act, approved by the EU in March 2024, established stringent requirements for high-risk systems. A Presidential Executive Order signed in October 2023 moved the US in a similar direction. Provisions such as these expose businesses that cannot demonstrate responsible AI practices to the risk of fines, legal action and even prohibitions on system use. Boards can get ahead of the regulatory curve by regularly reviewing their AI policies against current and proposed regulations, and by calling for reports that confirm systems remain fair in use.
AI is no longer a back-office technology. Already, it has emerged as an important enabler, influencing operational, strategic and reputational performance. Consequently, boards that dismiss AI as someone else's problem may be blindsided. Boards need to ask questions to ensure AI literacy is adequate, risks have been well assessed and governance practices are fit for purpose. This is not a matter of dreading the unknown: it is about providing effective steerage and guidance.
Has your board discussed AI governance in a genuine, systematic way yet? If not, it might be time to get started.
About Dr. Cletus Kadzirange:
Cletus is a pracademic in corporate governance and company law who consults, trains and writes on various aspects of corporate law, directors' duties and governance. His specific expertise lies in implementing forward-thinking governance frameworks and sustainable practices that foster long-term value and ethical stewardship.