Peter Crow

Are we prepared to govern AI?

4/9/2025

Guest blog: Dr. Cletus Kadzirange (GBS Oxford University, United Kingdom)
By now, almost everyone has heard that artificial intelligence is revolutionising the commercial world. Beyond generating customer insights and automating processes, it now informs decisions on hiring, pricing and medical diagnosis. Around board tables, the mood is frequently positive: AI is fast, intelligent and full of potential.
While boards are positive about possibilities, are they prepared to govern AI?
This is a governance question, not a technological one. The most progressive boards are starting to realise that monitoring AI requires far more than a digital strategy, because AI has the potential to affect reputation, social license, compliance, ethics, brand, and more besides. Questions boards should consider centre on accountability, transparency and long-term risk management:
  • Who is at fault when AI fails? This is a question of accountability. Apple's credit card algorithm made headlines in 2019, when it emerged that women were being offered much lower credit limits than men with comparable financial profiles. Apple pointed to its banking partner, Goldman Sachs. Regardless of where fault lies, boards cannot afford to wash their hands. They need to lean in, establish who is responsible for the performance and outputs of AI systems, and satisfy themselves that oversight is adequate. Before systems behave in unexpected ways (and they will), boards should check that escalation processes and remedial procedures are in place. Accountability is not about assigning blame; it is about foresight, both to minimise the possibility of unintended outcomes and to respond well when they occur. The best companies embed clear lines of accountability during the design and implementation of AI systems, making good governance responses possible downstream.
  • Is it possible to see inside the black box? This is a question of transparency. Understanding how AI reaches its conclusions can be a challenge, even for the people who designed and trained the system. Yet businesses that cannot explain the workings of their AI systems are coming under growing pressure from consumers and regulators who demand greater openness. Consider COMPAS, the system used by some US courts to assess recidivism risk when sentencing offenders. Investigative journalists found the system was biased against black defendants. When challenged, the company that built it refused to reveal its inner workings, citing trade secrets. Predictably, public disapproval and general suspicion rose sharply. The lesson is that transparency is a reputational issue as much as a technological one. Boards should ensure management understands how AI systems work, and that credible non-technical explanations are available when required.
  • Are we ready for the new wave of regulation? This is a question of long-term risk. Regulation of AI is advancing rapidly. The EU's Artificial Intelligence Act, approved in March 2024, established stringent requirements for high-risk systems, and a US Presidential Executive Order signed in October 2023 moved in a similar direction. Provisions such as these expose businesses that cannot demonstrate responsible AI practices to fines, legal action and even prohibitions on system use. Boards can get ahead of the regulatory curve by regularly reviewing their AI policies against current and proposed regulations, and by calling for reports that confirm systems are fair in use.
AI is no longer a back-office technology. It has already emerged as an important enabler, influencing operational, strategic and reputational performance. Consequently, boards that dismiss AI as someone else's problem may be blindsided. Boards need to ask questions to ensure AI literacy is adequate, risks have been well assessed and governance practices are fit for purpose. This is not a matter of dreading the unknown; it is about providing effective steering and guidance.
Has your board discussed AI governance in a genuine, systematic way yet? If not, it might be time to get started.
About Dr. Cletus Kadzirange:
Cletus is a pracademic in corporate governance and company law who consults, trains and writes on various aspects of corporate law, directors' duties and governance. His specific expertise lies in implementing forward-thinking governance frameworks and sustainable practices that foster long-term value and ethical stewardship.

