How is your board of directors mitigating AI risk?

Arlen Meyers, MD, MBA, is the President and CEO of the Society of Physician Entrepreneurs on Substack and an advisor at Cliexa, MI10, and Thinkhat.

The impact of artificial intelligence (AI) on the corporate healthcare and life science (HLS) industries is becoming increasingly profound, offering the promise of enhanced efficiency, innovation, and competitive advantage. Yet, as with every new technology, AI also introduces a host of complex risks, including data privacy and security concerns, ethical dilemmas, and regulatory uncertainty.

As such, boards of directors need to understand AI risk management so they can adequately oversee their HLS organization’s AI adoption and implementation.

So what are some of the key considerations when it comes to the use of AI in a corporate healthcare structure? A new paper published in Health Affairs in January 2025 outlines four key strategic areas of consideration to ensure AI’s effective integration. While the paper specifically provides guidance on pressing healthcare issues for the incoming presidential administration, these recommendations are important for boards to consider as well.

1. Ensure the safe, effective, and trustworthy use of AI.

“Trustworthiness itself is complex and encompasses concepts of fairness, equity, mitigation of bias, and sustainability,” the authors write. They continue: “health AI has experienced challenges in areas such as evaluation of accuracy and reliability in settings in which tools are deployed, translation of goals into practice, problems in data management, decision errors, insufficient workflow integration, and inequitable application, among others.”

These issues are not surprising, and they spring largely from two factors: (1) the relative novelty of the technology for those outside the technology industry, and (2) the quality of the data at hand.

As AI evolves, its capacity for analysis and discernment will undoubtedly improve, but for that to happen in a positive way, its access to diverse, complex, and accurate data sets needs to be ensured. That is not yet a reality, given historic inequities in the healthcare landscape and the way they are reflected in data sets.

To address this issue, boards can begin identifying ways their healthcare institutions can improve data diversity in the short, mid, and long term. One place to start is by defining a clear scope of use and application for AI.

As the authors explain, “Although the common use of AI can suggest a single technology, the term actually refers to a set of technologies that can be applied in different ways and with different goals.” They continue, “Clear and concrete definitions of healthcare AI technologies and their applications are critical to ensuring equitable use and to providing stakeholders with a common understanding of the range of technologies, applications, and lessons learned, thereby ensuring that governance strategies are appropriately and reliably formulated.”

2. Promote the development of an AI-competent workforce.

At best, tools are useless to those who do not know how to use them; at worst, they are dangerous. This is as true for AI as for any other tool. As such, the authors of the paper recommend the development and support of an AI-literate workforce.

“Health care personnel must be informed and discerning users of AI and active participants in establishing the value propositions and requirements of these tools,” the authors state. “In the same way that training programs for physicians and allied health professionals require prerequisites of study in biology, chemistry, statistics, and anatomy, basic knowledge of AI and its applications is needed for all health care personnel.”

This recommendation is important for boards to consider for a number of reasons: (1) AI tools are of little to no use to people who do not know how to use them, (2) proper training in AI tools mitigates errors and misunderstandings by staff, and (3) AI training will undoubtedly increase the adoption of new technologies.

Such training not only supports the proper and ethical use of these tools; it also protects the investment organizations make in them by promoting adoption. While the authors of the paper recommend expanding AI education and training in higher education and beyond, it is incumbent upon healthcare organizations to take up the mantle of AI education within their own institutions.

3. Support research on AI in health and healthcare.

“AI has emerged as a powerful tool for revolutionizing biomedical research, care delivery, and population health,” the authors observe. “Its ability to process and organize vast amounts of multiscale and multimodal data, recognize patterns therein, and make informed decisions can accelerate and improve human decision making and understanding across a broad range of problem domains.”

Yet, the full scope of AI’s use in the HLS industries is still opaque because, as the authors note, “many research projects use these technologies in supporting ways that are not directly captured as focused on AI.”

This point is very much a call to action for boards: help government and regulatory partners understand the nuance and scope of AI usage in their HLS organizations. Federal agencies have taken some steps to investigate this scope. As the authors note, “Some of the recent high-profile health-related research AI programs have been developed by the NIH and include the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity program and the Bridge to Artificial Intelligence program.” Still, the paper calls for more to be done.

This provides organizations with the opportunity to fill the void and offer partnership to federal entities, not only helping them understand AI’s use and application in the HLS industries but also helping shape its continued implementation and evolution.

4. Clarify responsibility and liability in the use of AI.

Liability and the use of AI are intertwined issues that the HLS industry is scrambling to resolve. And as the paper points out, HLS organizations are at the vanguard of addressing this issue while government regulators struggle to catch up. “Liability for injury arising from the use of AI in medical settings is a subject of concern for physicians and academics, but it has received relatively little policy focus,” the authors write. “In the United States, neither courts nor the federal government have tackled these issues directly.”

Indeed, liability within a corporate structure, arising from anything, let alone AI, is of paramount importance to the direction and recommendations of boards. For AI in particular, one of the primary concerns is the “responsibility gap”: a situation in which no clear individual or entity can be held accountable for harm caused by an organization’s use of AI. In the context of healthcare, that gap can be deadly and poses a number of legal issues, which is why discussing, addressing, and mitigating it is so important.

To do this, partnering with the government offers a holistic solution. “Policy makers should support and coordinate efforts by professional societies to streamline the responsible adoption of medical AI by clarifying the responsibility and liability landscape for health care professionals,” the authors recommend. And what better place to start than hand in hand with boards actively working to resolve the same issue?

Boards sit at the crux of AI tension

Boards sit in a unique and important position when it comes to developing guiding principles, ensuring compliance, and monitoring emerging regulations in the still-developing AI market. But they must first prioritize AI literacy early, both in the boardroom and within their company structure.

The future of AI is now. Is your board ready?