Is an elephant [in the room] obscuring our view?
The rise of artificial intelligence capabilities over the past 4–5 decades (you read that correctly, not 4–5 months or even 4–5 years) has brought some awkward questions into stark relief.
- How might AI enable or impair our strategic priorities?
- Are the data in management reports to the board accurate, and the conclusions credible?
- As directors, we’re supposed to govern with impact. But what matters most amongst the many priorities in the reports from management—and how might we decide?
- Are the so-called experts that management keeps putting in front of us actually experts, or are they just AI-junkies who have generated content that appears to be informed?
These questions, and many others like them, point to an overarching question that has become very real for many directors, all the more so as AI-generated content has started to pervade boardrooms, executive suites and beyond:
The report behind the question brings the problem into sharp focus: many conclusions developed from academic research and peer-reviewed articles may not be reliable. Indeed, many may not be worth the paper (or screen) they are written on, despite the seemingly attractive arguments put forward by the authors.
This being the case, how might directors validate the data and reporting in board packs?
If boards are to govern with impact, they must first ensure the reports they receive are not only accurate but credible. That is a demanding expectation, but it is the baseline. Fortunately, we are not the first to ponder this matter. This muse explores some of the core considerations.
The elephant in the room is not AI, per se; it is the directors’ ability to distinguish between what matters and what does not—the signal and the noise.