• Published on

    Who’s looking at you?

    Have you ever wondered who is looking at your website, and why? My new website was published seven days ago (well, a very similar website), so I decided to look at the analytics, to get an idea.

    To my astonishment, some 40,600 total visits (page hits) have been recorded over the past seven days, from just over 8,500 unique visitors. Extrapolated, that points to over two million page hits per year.

    This sounds impressive. I’m not convinced, though, and closer inspection shows the numbers are not quite what they seemed at first glance. When ‘include Crawlers/Bots’ is de-selected, a clearer picture emerges: the total visit count drops to 10,600-odd. It is good to know that about three quarters of the traffic to petercrow.com is not by or from real people; that they are AI tools and other systems hoovering up data justifies our investment in appropriate security. And that one-in-five visits is from a mobile device suggests our selection of a tool that automatically provides desktop-, tablet-, and mobile-friendly display options was a good decision too.

    Turning now to the ‘real visitors’: if one-in-four unique visitors are not bots, about 2,100 people visited some part of the site over the past seven days. Some (most?) will have been curious about the new site. But others looked at one or more Musings articles; and some checked other aspects of the capabilities and credentials material.

    Even if only one or two per cent of these ‘real people’ are genuinely interested (20–40 per week), and ten per cent of those get in touch, my decades-long quest (to provoke candid conversations that help boards govern with impact) has probably been worthwhile. Onward.
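    For readers who like to see the working, the back-of-envelope arithmetic above can be sketched in a few lines of Python (the inputs are the approximate figures quoted from the analytics tool):

    ```python
    # Approximate figures reported by the analytics tool over seven days
    total_hits_week = 40_600   # page hits, crawlers/bots included
    human_hits_week = 10_600   # page hits with 'include Crawlers/Bots' de-selected
    unique_visitors = 8_500    # unique visitors, bots included

    # Extrapolate total page hits to a full year
    yearly_hits = total_hits_week / 7 * 365
    print(f"Projected yearly page hits: {yearly_hits:,.0f}")  # 2,117,000 -> "over two million"

    # Share of traffic that is not from real people
    bot_share = 1 - human_hits_week / total_hits_week
    print(f"Bot share of traffic: {bot_share:.0%}")  # 74% -> "about three quarters"

    # Real people, using the one-in-four unique-visitor assumption
    real_people = unique_visitors / 4
    print(f"Estimated real visitors: {real_people:,.0f}")  # 2,125 -> "about 2,100"
    ```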

  • Published on

    On boardcraft

    In recent months, there has been a rising level of interest in Boardcraft. Word is getting out it seems, so a précis is probably timely. Curious? Grab a coffee and read on...

    Boardcraft is a term I coined: a governance-focused initiative to help boards operate well in practice—not just describe on paper what they are supposed to do. At its core, Boardcraft is about treating board work (that is, corporate governance) as a practical craft, to help boards move from a compliance mindset to a performance mindset.

    Why is this important? Many boards comply with prevailing statutes and governance codes, yet they, or the companies they govern, still perform poorly. The underlying problem is a barrier lying in plain sight: a board cannot comply its way to performance.

    Boardcraft offers a pathway forward for boards wanting to perform well and govern with impact. 

    The big shift is this: Effective governance is not a product of structures, policies, or independence per se; it emerges from the quality of thinking, interaction, and decision-making in the boardroom. What is more, Boardcraft is not something I dreamt up at a whiteboard or while driving my old car: it is the product of ground-breaking research conducted a decade ago. In essence, it helps boards understand:

    • The capabilities, activities and behaviours necessary if boards are to exert influence beyond the boardroom, especially on organisational performance
    • How to make high-quality decisions together
    • How to handle conflict and disagreement
    • How chairs can lead effective discussions
    • The board's role in shaping strategy, not just approving management's proposals

    Ultimately, Boardcraft is a mindset to help boards improve their judgement, oversight, steerage and guidance; work as a functional group and make great decisions (think: positive board dynamics); and drive high levels of organisational performance. In effect, to govern with impact.

    Boards and directors interested in learning about Boardcraft, the Strategic Governance Framework (the underlying foundation), and how to embrace a Boardcraft mindset in practice have several options:

    • Workshops and board development sessions (half-, full- and two-day options, fully curated)
    • Tailored coaching and mentoring for chairs
    • Governance diagnostics (to assess how well a board functions)
    • Real-world case studies, rather than textbook or theoretical models

    Want to learn more? Check this article, and get in touch with your questions. I'm available globally.

    PS: The headline picture is not a photo of me; it is an AI-generated image. Pretty good eh?

  • Published on

    Is an elephant [in the room] obscuring our view?

    The rise of artificial intelligence capabilities over the past 4–5 decades (you read that correctly, not 4–5 months or even 4–5 years) has brought some awkward questions into stark relief.
    • How might AI enable or impair our strategic priorities?
    • Are the data in management reports to the board accurate, and conclusions credible?
    • As directors, we’re supposed to govern with impact. But what matters most amongst the many priorities in the reports from management—and how might we decide?
    • Are the so-called experts that management keeps putting in front of us actually experts, or are they just AI-junkies who have generated content that appears to be informed?
    These questions, and many others like them, highlight an overarching question that has become very real for many directors, more so as AI-generated content has started to pervade boardrooms, executive suites and beyond:
    The report behind the question brings the problem into stark relief: Many conclusions developed from academic research and peer-reviewed articles may not be reliable. Indeed, many may not be worth the paper (screen) they are written on, despite the seemingly attractive arguments put up by the authors.
    This being the case, how might directors validate the data and reporting in board packs?
    If boards are to govern with impact, they must first ensure the reports they receive are not only accurate but credible. This is a demanding expectation, but it is the baseline. Fortunately, we are not the first people to ponder this matter: This muse explores some of the core considerations.
    The elephant in the room is not AI, per se; it is the directors’ ability to distinguish between what matters and what does not—the signal and the noise.
  • Published on

    What if a board chair was an animal?

    “If a high-performing board chair was an animal, what animal would it be?”
    This was the opening question to panelists at a High Performing Chair conversation hosted by the Institute of Directors in Tauranga last evening. I had the privilege of serving on the panel alongside Debbie Ireland and Nathan Flowerday to offer some comments about our experiences chairing the boards of large, medium and smaller organisations. 
    The opening question set the tone for what followed, for it got those in attendance thinking about the capabilities and attributes of an effective chair, and what distinguishes a good chair from a great one. The responses from the panelists were instructive; three different perspectives drawing out critical attributes common amongst highly effective chairs:
    • Wolf: sometimes out the front, sometimes amongst, and sometimes leading from the rear
    • Kea: naturally inquisitive, tenacious, asking questions
    • Lion: power by presence, overseeing, exercising strength when needed
    Panelists went on to respond to a wide range of questions from both the moderator and the floor, covering such matters as meeting management, chair–chief executive relations, communications, tenure, balancing priorities, handling crises, continuing development, and strategic decision-making. 
    Thanks to Brian Staunton for your expert moderation of the panel, and to the Institute for hosting the conversation. I came away better informed than before, and hope those in attendance did too.
  • Published on

    Preparing for board meetings: how?

    The way board directors prepare for board meetings is changing. Gone are the days when most directors simply turned up to the meeting, opened the supplied packs and relied on their instinct as they sat through presentations by management (read: worked it out on the fly). Most directors these days are well-intentioned, having diligently read papers before the meeting (received via a portal tool, PDF stack or thick package of printed materials). Some augment their reading with additional enquiries, in an effort to fill in blanks or formulate suitable questions to ask during the meeting. Though a small coterie still rely on instinct alone, the world is moving on, and rapidly so. The emergence of AI assistants is proving a boon for smart directors: they are embracing a new generation of tools to enhance their preparation—on the basis that better preparation is an antecedent of better decisions.
    Preparation takes time, of course, and many directors say, "It'd be fine if I had the time." My response is curt: "Given the duties you owe, and the importance of governing with impact, what else might be more important than preparing well?"
    In the spirit of collegial learning, how useful are Shekshnia and Yakubovich's insights, and how are you using AI to augment your board meeting preparations (if at all)? Please comment below.
  • Published on

    Are we prepared to govern AI?

    Guest blog: Dr. Cletus Kadzirange (GBS Oxford University, United Kingdom)
    By now, almost everyone has heard that artificial intelligence is revolutionising the commercial world. In addition to creating customer insights and automating procedures, it offers advice on hiring, pricing, and medical diagnosis. Around board tables, the atmosphere is frequently positive—AI is quick, intelligent, and full of potential. 
    While boards are positive about possibilities, are they prepared to govern AI?
    This is a governance question, not a technological one. The most progressive boards are starting to realise that monitoring AI requires far more than a digital strategy, because AI has the potential to affect reputation, social license, compliance, ethics, brand, and more besides. Questions boards should consider centre on accountability, transparency and long-term risk management:
    • Who is at fault when AI fails? This is a question of accountability. Apple's credit card algorithm made headlines in 2019, when it was revealed it gave women much lower credit limits than men with comparable financial backgrounds. Apple blamed its banking partner, Goldman Sachs. Regardless of who is at fault, boards cannot afford to wash their hands. Instead, they need to lean in, consider who is responsible for the performance and outputs of the AI systems, and satisfy themselves everything is OK. Before systems behave in unexpected ways (and they will), boards should check escalation processes and remedial procedures. Accountability is not about assigning blame, but about having foresight, to not only minimise the possibility of unintended outcomes but also respond well. The best companies embed clear accountability lines and practices during the design and implementation of AI systems, to facilitate good governance responses downstream.
    • Is it possible to see inside the black box? This is a question of transparency. Understanding AI's conclusions can be a challenge, even for the people who designed and trained the system! However, businesses that cannot explain the workings of their AI systems are coming under great pressure from consumers and authorities who want greater openness. Consider COMPAS, the system used by US courts to determine recidivism risk when sentencing criminals. Investigative journalists discovered the system was skewed against black defendants. When challenged, the corporation that built the system refused to reveal the inner workings, citing trade secrets. Predictably, public disapproval and general suspicion rose sharply. The lesson here is that transparency is a reputational issue as much as a technological one. Boards should ensure management understands how AI systems work, and that credible non-technical explanations are available if required.
    • Are we ready for the new wave of regulation? This is a question of long-term risk. Regulation of AI is advancing rapidly. The Artificial Intelligence Act, which was ratified by the EU in March 2024, established stringent requirements for high-risk systems. A Presidential Executive Order signed in October 2023 moved the US in a similar direction. Provisions such as these expose businesses that cannot demonstrate ethical AI practices to the risk of fines, legal action and, even, system usage prohibitions. Boards can get ahead of the regulatory curve by regularly reviewing their AI policies against current and proposed regulations, and by calling for reports to confirm that systems are fair in use.
    AI is no longer a back-office technology. Already, it has emerged as an important enabler, influencing operational, strategic and reputational performance. Consequently, boards that dismiss AI as someone else's problem may be blindsided. Boards need to ask questions to ensure AI literacy is adequate, risks have been well-assessed and that governance practices are fit-for-purpose. This is not a matter of dreading the unknown: it is about providing effective steerage and guidance.
    Has your board discussed AI governance in a genuine, systematic way yet? If not, it might be time to get started.
    About Dr. Cletus Kadzirange:
    Cletus is a pracademic in corporate governance and company law who consults, trains and writes on various aspects of corporate law, directors' duties and governance. His specific expertise lies in implementing forward-thinking governance frameworks and sustainable practices that foster long-term value and ethical stewardship.