Last year, hospitals got off to a fast start using AI. On the clinical side, AI boosted early intervention, such as augmenting the diagnosis of heart issues or malnutrition and identifying the best candidates for certain kinds of brain surgery. AI advances helped in charting, budgeting, and administrative and operational efficiencies, too.
This huge leap forward in applications brought some quick, incremental wins, but now what? Frankly, many hospitals were in a “jump on the bandwagon” phase with AI, aggressively trying it out without having a big picture strategy.
Now that their AI feet are wet, so to speak, hospital administrators are stepping back and asking important questions about how to organizationally structure ongoing AI management.
Laying the Foundation for Sound AI Management
With AI tools becoming more adept, the next generation of AI will require more organizational sophistication and internal controls that ensure the best utilization of AI across the organization. That said, having that end goal in mind is not the same thing as knowing how to get there.
Let’s look at three questions hospital leaders are asking as they evaluate how their AI journey has gone so far (in most cases, not as smoothly as planned) and consider what’s needed to prepare their organizations for 2025 and beyond.
1. Who should oversee AI decisions and strategy?
With internal requests for AI tools coming at healthcare leaders from every direction, from board members and department heads to frontline physicians and nurses, every hospital needs a structure in place to make strategic decisions as to the selection and adoption of AI vendors and solutions. Yet there’s often a disconnect among leadership as to who should have a seat at the table.
A recent Accenture poll of healthcare executives found that nearly a third of hospital CEOs (28%) said they should be responsible for redefining jobs and roles impacted by AI, but only 5% of their C-Suite peers agreed. In contrast, 80% of respondents said the chief digital officer or chief digital and AI officer would be best suited to lead AI initiatives, even though only 6% of the top 100 U.S. health systems currently have a chief AI officer (CAIO) in place.
So what’s the correct approach?
In our experience helping hospitals achieve AI readiness, we see the benefits of a “both/and” approach to decision-making. Successful hospitals assemble a cross-disciplinary team to make strategic decisions about the selection and adoption of AI solutions. That might include the CEO to provide input on big-picture goals and budgetary and administrative impacts; the CIO or CDO to contribute the technical considerations; and select clinical leaders to weigh in on clinical requirements and staffing.
This approach breaks down silos, promotes unity and compromise, and ensures all perspectives are considered when selecting AI initiatives and vetting potential vendors, leading to better decisions and a greater chance of system-wide support for implementation.
2. What data governance do we need to support AI?
Next-gen tools consume data voraciously, but that doesn’t mean their output is always high-quality or complete. Before implementation, hospitals must have infrastructure in place to continuously clean data, test it for accuracy, and remove bias.
Unfortunately, that’s not happening in the majority of organizations: University of Minnesota researcher Paige Nong found that only 61% of hospitals that use AI of any kind tested those tools for accuracy and just 44% tested them for bias. That’s a recipe for disaster.
Adequate data management involves creating a framework that includes appropriate policies and standards as well as procedures for auditing and cleaning data. Hospitals also need to understand exactly how data flows through their system and have systems in place to identify and protect private health information in compliance with HIPAA regulations.
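The accuracy and bias testing described above can be automated as part of that framework. The following is a minimal, hypothetical sketch of a pre-deployment audit that computes an AI tool's overall accuracy on a held-out validation set along with a simple bias metric (the gap in accuracy between demographic groups); the field names and thresholds are illustrative placeholders, not part of any specific hospital's policy.

```python
# Hypothetical pre-deployment audit sketch. Assumes each validation record
# carries the tool's prediction, the ground-truth outcome, and a group label
# (e.g., a demographic category the hospital monitors for bias).
from collections import defaultdict

def audit_predictions(records, accuracy_floor=0.85, max_group_gap=0.05):
    """records: list of dicts with 'prediction', 'actual', and 'group' keys.
    Thresholds are illustrative; a governance team would set its own."""
    correct_by_group = defaultdict(int)
    total_by_group = defaultdict(int)
    for r in records:
        total_by_group[r["group"]] += 1
        if r["prediction"] == r["actual"]:
            correct_by_group[r["group"]] += 1

    # Per-group accuracy, overall accuracy, and the worst between-group gap.
    group_accuracy = {
        g: correct_by_group[g] / total_by_group[g] for g in total_by_group
    }
    overall = sum(correct_by_group.values()) / len(records)
    gap = max(group_accuracy.values()) - min(group_accuracy.values())

    return {
        "overall_accuracy": overall,
        "group_accuracy": group_accuracy,
        "accuracy_gap": gap,
        "passes": overall >= accuracy_floor and gap <= max_group_gap,
    }
```

A check like this would run on a regular schedule, not just once at onboarding, since model performance can drift as patient populations and data pipelines change.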
Due to the complexity of healthcare data governance, having knowledgeable staff or outside experts dedicated to this task before any AI tool is onboarded is critical. Hospitals that have "built the plane while flying it" (launching an AI tool and figuring out data management later) can testify to the difficulties that approach creates.
3. How do we critically evaluate potential AI initiatives and vendors?
There are a lot of great AI tools and vendors out there. But many of them are brand new and untested, and an even greater number are simply not the right fit for your organization.
That’s why it’s vital to develop an AI strategy that aligns technology investment with the overall organizational mission and vision and core value proposition. According to the Accenture study, only about half of healthcare executives report strong alignment between the organization’s overall strategy and its technology goals. And some 83% of healthcare C-Suite executives are piloting generative AI tools without having any plan in place.
A disciplined, enterprise-wide AI strategy should identify the types of AI solutions that fit with the hospital's larger technology strategy and accomplish the organization's most pressing goals, such as boosting revenue, improving internal efficiency, reducing clinician workload and stress, and fundamentally changing the way the hospital's business gets done. This allows AI decision-makers to narrow down which solutions to prioritize and in what order.
It also helps hospitals set realistic ROI expectations. For instance, an AI tool may improve employee satisfaction and retention but not show up strongly on the bottom line. That's still a win for the organization in terms of meeting its strategic objectives.
Finally, hospitals must have a vetting process in place for vendors. It’s tempting to let a demo of the “shiny new thing” (the fancy AI tool) sweep you off your feet. But in the end, the tool will only be as good as the vendor behind it.
Hospital leaders should enlist their multi-disciplinary team (see step 1) to create a list of priorities based on required functionality and a scoring system to assess each vendor against them. Alternatively, the hospital can enlist the help of a firm that specializes in selecting AI vendors.
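A scoring system like the one described above is often implemented as a weighted scorecard. The sketch below is purely illustrative: the criteria, weights, and vendor ratings are hypothetical placeholders that a hospital's cross-disciplinary team would replace with its own priorities.

```python
# Hypothetical weighted vendor scorecard. Criteria and weights are
# placeholders; each vendor is rated 1-5 on each criterion by the team.
def score_vendor(ratings, weights):
    """Return the weighted average rating (1-5 scale) for one vendor."""
    total_weight = sum(weights.values())
    return sum(ratings[c] * weights[c] for c in weights) / total_weight

# Example: weights reflect how much each criterion matters to the hospital.
weights = {
    "hipaa_compliance": 5,
    "clinical_validation": 4,
    "data_security": 5,
    "integration_effort": 3,
    "support_quality": 2,
}

# Example ratings for one candidate vendor (1 = poor, 5 = excellent).
vendor_a = {
    "hipaa_compliance": 5,
    "clinical_validation": 4,
    "data_security": 5,
    "integration_effort": 2,
    "support_quality": 3,
}
```

Scoring every vendor against the same weighted criteria keeps the comparison grounded in the hospital's priorities rather than in whichever demo was most impressive.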
Examples of key vetting questions include: Does the vendor demonstrate robust compliance with healthcare regulations like HIPAA by properly validating AI algorithms and clearly documenting any clinical decision-making processes? Can the vendor explain how patient data is processed, stored, and protected in a way that satisfies the hospital’s most stringent requirements?
Selecting only vendors that have the most important pillars in place will help ensure any AI implementation is safe, secure, and beneficial for the hospital and its patients.
The bottom line: Incremental adaptation enables better preparation
Now that the bloom is off the rose with AI, more hospital administrators and tech leaders are pursuing "measured adoption," a gradual, strategic approach to AI instead of a land grab.
Healthcare organizations that take the time to develop a strategic infrastructure and approach won't fall behind; in fact, they'll end up ahead of the pack, with AI investments that meet expectations and accomplish goals. You don't have to go it alone. Innovative offers an AI Foundation Assessment that makes sure a hospital is AI-ready before it rushes into solutions it may regret later.