• Published on

    Preparing for board meetings: how?

The way board directors prepare for board meetings is changing. Gone are the days when most directors simply turned up, opened the supplied packs and relied on instinct as they sat through presentations by management (read: worked it out on the fly). Most directors these days are well-intentioned, having diligently read the papers before the meeting (received via a portal tool, PDF stack or thick package of printed materials). Some augment their reading with additional enquiries, in an effort to fill in blanks or formulate suitable questions to ask during the meeting. Though a small coterie still rely on their instinct to listen carefully and discern in real time (read: work it out on the fly, during the board meeting), the world is moving on, and rapidly so. The emergence of AI assistants is proving a boon for smart directors: they are embracing a new generation of tools to enhance their preparation—on the basis that better preparation is an antecedent of better decisions.
Preparation takes time, of course, and many directors say, "It'd be fine if I had the time." My response is curt: "Given the duties you owe, and the importance of governing with impact, what else might be more important than preparing well?"
    In the spirit of collegial learning, how useful are Shekshnia and Yakubovich's insights, and how are you using AI to augment your board meeting preparations (if at all)? Please comment below.
  • Published on

    Are we prepared to govern AI?

    Guest blog: Dr. Cletus Kadzirange (GBS Oxford University, United Kingdom)
    By now, almost everyone has heard that artificial intelligence is revolutionising the commercial world. In addition to creating customer insights and automating procedures, it offers advice on hiring, pricing, and medical diagnosis. Around board tables, the atmosphere is frequently positive—AI is quick, intelligent, and full of potential. 
    While boards are positive about possibilities, are they prepared to govern AI?
    This is a governance question, not a technological one. The most progressive boards are starting to realise that monitoring AI requires far more than a digital strategy, because AI has the potential to affect reputation, social license, compliance, ethics, brand, and more besides. Questions boards should consider centre on accountability, transparency and long-term risk management:
• Who is at fault when AI fails? This is a question of accountability. Apple's credit card algorithm made headlines in 2019, when it was revealed that it gave women much lower credit limits than men with comparable financial backgrounds. Apple blamed its banking partner, Goldman Sachs. Regardless of who is at fault, boards cannot afford to wash their hands. Instead, they need to lean in, consider who is responsible for the performance and outputs of AI systems, and satisfy themselves that all is in order. Before systems behave in unpredicted ways (and they will), boards should check escalation processes and remedial procedures. Accountability is not about assigning blame, but about having the foresight to minimise the possibility of unintended outcomes and to respond well when they occur. The best companies embed clear accountability lines and practices during the design and implementation of AI systems, to facilitate good governance responses downstream.
• Is it possible to see inside the black box? This is a question of transparency. Understanding AI's conclusions can be a challenge, even for the people who designed and trained the system! However, businesses that cannot explain the workings of their AI systems are coming under great pressure from consumers and authorities who want greater openness. Consider COMPAS, the system used by US courts to assess recidivism risk when sentencing offenders. Investigative journalists discovered the system was skewed against black defendants. When challenged, the corporation that built the system refused to reveal its inner workings, citing trade secrets. Predictably, public disapproval and general suspicion rose sharply. The lesson here is that transparency is a reputational issue as much as a technological one. Boards should ensure management understands how AI systems work, and that credible non-technical explanations are available if required.
• Are we ready for the new wave of regulation? This is a question of long-term risk. Regulation of AI is advancing rapidly. The Artificial Intelligence Act, which was ratified by the EU in March 2024, established stringent requirements for high-risk systems. A Presidential Executive Order signed in October 2023 moved the US in a similar direction. Provisions such as these expose businesses that cannot demonstrate responsible AI practices to the risk of fines, legal action and even prohibitions on system use. Boards can get ahead of the regulatory curve by regularly reviewing their AI policies against current and proposed regulations, and by calling for reports to confirm that systems are fair in use.
AI is no longer a back-office technology. Already, it has emerged as an important enabler, influencing operational, strategic and reputational performance. Consequently, boards that dismiss AI as someone else's problem may be blindsided. Boards need to ask questions to ensure AI literacy is adequate, risks have been well assessed and governance practices are fit for purpose. This is not a matter of dreading the unknown: it is about providing effective steerage and guidance.
Has your board discussed AI governance in a genuine, systematic way yet? If not, it might be time to get started.
    About Dr. Cletus Kadzirange:
    Cletus is a pracademic in corporate governance and company law who consults, trains and writes on various aspects of corporate law, directors' duties and governance. His specific expertise lies in implementing forward-thinking governance frameworks and sustainable practices that foster long-term value and ethical stewardship.

  • Published on

    What lies ahead, in 2025?

I had the good fortune to catch up with a dear friend and professional associate yesterday; someone I had not had the chance to interact with for nearly nine months.
Tony and I chatted about all manner of things: his new barn (read: man cave and office); our exploits with Rosa (read: 1951 MG Y-type); geopolitics; ChatGPT; and more besides. What was fascinating was that we both found ourselves chatting as if the last time we spoke was yesterday. Before we knew it, some 75 minutes had passed. My father told me that this is a good thing; a sign of true friendship.
One aspect of our conversation that piqued my attention was Tony’s investigations around artificial intelligence and board reports—or, more specifically, his application of large language model tools to discern and make sense of board reports. The rapid progress over the past twelve months is a sight to behold. Tony summarised his experiments and findings. Did you know that if you feed ChatGPT a set of board papers and ask it to summarise the key points, including nuances and appropriate questions to ask in a board meeting, the likelihood of the responses being both insightful and relevant is high? You can also use it to discern whether directors have read and understood the board papers! I have been a sceptic about the application of AI tools for some time but, on the strength of what was outlined, I’m ready to believe ChatGPT (or Claude, or another tool) can be a real boon for directors struggling to make sense of large data sets. While context still eludes ChatGPT (and all other LLMs), as do meaning and reasoning, the direction and pace of travel seem reasonable. Certainly, progress is rapid.
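For readers curious about the mechanics, the approach Tony described can be sketched in outline. The snippet below is a hypothetical illustration only: it shows how a set of board papers might be combined into a single summarisation prompt for an LLM. The function name, the prompt wording and the commented-out client call are my assumptions, not a record of Tony's actual setup.

```python
def build_board_pack_prompt(papers: list[str]) -> str:
    """Combine board papers into a single summarisation request for an LLM."""
    combined = "\n\n---\n\n".join(papers)  # separate papers so the model can tell them apart
    return (
        "You are assisting a company director preparing for a board meeting.\n"
        "Summarise the key points in the papers below, note any nuances or "
        "inconsistencies, and suggest questions worth raising in the meeting.\n\n"
        + combined
    )

# Illustrative inputs (invented for this sketch):
prompt = build_board_pack_prompt([
    "CEO report: revenue up 4%; customer churn rising in one segment.",
    "CFO report: cash runway 14 months; capital expenditure deferred.",
])

# The assembled prompt can then be sent to the LLM of your choice, e.g.
# (assumed client API, shown for shape only):
# response = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": prompt}],
# )
```

The point is less the code than the discipline it implies: the quality of the summary depends heavily on supplying the full pack and asking explicitly for nuances and questions, not just a précis.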
I went to bed after our call pondering a plethora of options, including whether board directors might be supplanted by machinery in future. Of this, I am doubtful. But where LLMs could be quite valuable is in distinguishing between lights in the distance: those that are sunlight at the end of the tunnel, and those that are a train heading towards me at great speed.
And so, with 2025 underway, is your board ready for what lies ahead? Can it, for example, confidently distinguish between [sun]light at the end of the tunnel and a train headlight? Has it carefully considered options having read widely, invoked various tools (including AI tools) and debated alternatives; or does it remain reliant on what management feeds up in the board report? To rely on management reports as the sole source of ‘truth’ is not smart; it never has been.
  • Published on

    When AI writes the news

Today is the last day of 2024, the day many people reflect on the year gone by and ponder what might lie ahead: everything from checking off goals set twelve months earlier, to setting goals and resolutions for the year ahead. I am amongst those who 'reflect and set' around this time of the year. Normally, the exercise involves reading back through notes and notebooks, and pondering goals. This year, I asked for help; not from anyone who knows me really well, but from a newly released LinkedIn feature, Coauthor.
This is what Coauthor, an AI tool, curated, in both textual and infographic form:

What happens when a board advisor steps into new territories while staying true to core principles? 2024 showed me. The year brought significant evolution in how I serve boards and directors, particularly through co-founding govern& with Jurate Stanisauskiene to help boards in the Baltics achieve sustainable outcomes.

    The year brought meaningful progress:

• Co-founded govern& to serve boards in the Baltic region
    • Joined Editorial Board of Advances in Corporate Governance Journal
    • Completed significant governance review work for Baptist Union of New Zealand
    • Expanded influence through international speaking engagements

    Yet the core mission remained constant: helping boards govern with impact.

While I may not have highlighted these specifics 'by hand', the general tenor of the summation by Coauthor is pretty good—save one word: expert. While my record implies a level of expertise in several areas, I make no claim to be an expert director, expert advisor or even a governance expert. To use 'expert' in this way is, I think, self-aggrandisement. I am, straightforwardly, someone with a deep interest in the performance of organisations and the contributions of boards of directors.
    So far, so good. But what of the future? How does AI do when looking ahead? What does Coauthor have to say in relation to 2025? This:

    govern& will expand its impact in the Baltics while I continue advancing thought leadership in corporate governance. The focus remains helping boards see around corners and make decisions that drive sustainable outcomes.

This is a reasonable attempt, as far as it goes. What Coauthor does not, and cannot, 'know' is what sits in the wings, much less how other as yet unknown factors might influence me in 2025. My intent to finish writing Boardcraft: The art of governing with impact is not mentioned, nor is a significant initiative to support boards in several developing nations, nor speaking engagements at conferences in New York and Milan. And therein lies a critical limitation. When AI writes the news, it can but summarise the past. And, generally speaking, it does this very well. Making statements about what might lie ahead is much more difficult; anything requiring mimicry of human traits—such as intuition, reasoning, sense-making and undeclared preferences—is beyond its capabilities.
Boards need to bear this in mind when considering if, how and where AI might 'fit' when weighing strategic options. AI can be an incredibly powerful enabler, and its application to drive efficiencies and expose new sources of competitive advantage should be explored. But great caution is needed: as attractive as the outputs from LLMs appear to be, their predictive power beyond the next word, or ability to credibly simulate social traits, is rather more limited.
Regardless, thank you for your support during 2024, and best wishes for what lies ahead in 2025.
  • Published on

    Artificial intelligence and board work

Several times in the past six weeks, I have been asked to share some thoughts on artificial intelligence and board work; specifically, the impact of emerging AI capabilities on corporate governance and, even, the need for boards of directors. The rapid emergence and now widespread awareness of ChatGPT has been a catalyst for many of these enquiries, it seems. I have been fascinated by the unfolding situation, not only because of a longstanding interest (I studied artificial intelligence at university nearly four decades ago), but also because the speed with which awareness has spread, and expectations have climbed to such stratospheric heights, is unprecedented. Claims have been made that computer-based tools will soon supplant the need for human directors and, with it, board meetings. Some, especially those with jaundiced perceptions of boards, their work and any value they add, have confided this may be a good thing. Others have reserved judgement—for now at least—saying the situation is far too fluid and complex to make anything approaching an informed or reliable decision, much less widespread change.
That so many people are questioning 'conventional' corporate governance practices feels a little bit like Groundhog Day. While I do not claim any particular expertise in the topic of artificial intelligence, I have read widely, asked many questions (of myself and others) and pondered both the purported capability and potential impact of artificial intelligence on board work.
The departure point for my enquiry has been, as always, definitional. What is artificial intelligence, and what conception of governance does one hold? My responses to these questions are as follows:
    • Artificial intelligence: the development of computer-based capability to perform tasks that normally require human intelligence. In effect, the simulation of [aspects of] human intelligence by machines.
• Governance: the steering and guiding of an organisation [towards an agreed goal], by the board of directors. In effect, governance is the work of the board—the means by which companies are directed and controlled.
So, the proposition to be considered is, "Can a computer replace a social group charged with steering and guiding an organisation in a complex and dynamic environment?"
    Those people wondering whether AI might be a viable mechanism to support or even replace boards have much to ponder. What is the role of a board of directors in companies? How might the operating context beyond the organisation be assessed? Where does accountability for statutory compliance and overall performance lie? And, to whom should the Chief Executive and management of the company report? If one holds the view that the board is the ultimate decision-making authority within a company (a responsibility delegated by shareholders), and that this (decision-making despite uncertainty and ambiguity) is 'core business', the board has a vital role to play.
    My early training in computers and technology taught me that computers respond to instruction; they cannot 'think' autonomously or handle ambiguity, and they lack feelings and intuition. They do what they are 'told'; if the 'telling' is poor, the result is likely to be poor: the phrase "garbage in, garbage out" springs to mind.
But that was then. Computing power is far greater today than it was even five years ago, much less forty. Has the evolutionary development of computing capability reached the point whereby computers can displace humans? For a large and growing list of tasks and activities, yes, of course. The analysis of data is a relevant case in point. But for many other enquiries, the answer remains a resounding no. How might a computer make sense of the unspoken feelings, intuition and biases of staff, customers and board directors, and reach a credible decision? For this, a much higher order of capability is necessary. And, with that, I stand with those reserving judgement.
What of the future? AI may become a viable mechanism to expedite board decision-making, of course. But the likelihood of directors being supplanted any time soon is low (those failing in their duties excepted). For that, artificial general intelligence (AGI) is likely to be necessary, and some moral and ethical questions will need to be resolved as well. If that is achieved, I may take a stronger position.
    Regardless of whether this muse is sound or not, directors, shareholders, regulators and their various advisors need to be alert, because the situation may change quite quickly.