Healthcare’s Guide to Generative AI
When generative AI hit the front pages, companies across industries moved quickly to explore the opportunities the technology had to offer.
In a surprising turn of events, healthcare is among the industries at the forefront of experimenting with generative AI. Historically, healthcare has been slow to adopt new technologies, trailing other industries by as much as a decade, because risks like data breaches and system outages are far more consequential in an industry where lives are regularly at stake.
However, we’re already seeing electronic health record (EHR) systems add generative AI capabilities that make clinical documentation quicker and easier for providers. And that’s just the beginning. Providers are exploring the possibilities of using generative AI to support everything from diagnostics to precision medicine.
With its ability to automate manual tasks and quickly provide context-appropriate information, generative AI can help address long-standing issues in healthcare like clinician burnout, access to care, and patient engagement. Healthcare leaders need to take steps now to prepare for generative AI, or they risk falling behind their peers and missing out on opportunities to improve the care experience for both clinicians and patients.
How Generative AI Can Revolutionize Healthcare
Generative AI offers benefits across the healthcare spectrum, from patient engagement to care delivery and beyond. While the universe of potential use cases is vast, there are three specific areas where generative AI can make a substantial difference now if adopted and implemented correctly.
Empowering Patients
Adding generative AI to online patient portals can make it easier for patients to find support groups, obtain information on their diagnoses based on context from their charts, schedule their own appointments, and search through doctors’ notes.
For example, patients currently tend to find information about their conditions by searching online, which can surface inaccurate or incomplete results. A generative AI platform that limits its search results to credible medical research can help patients find more accurate information about their conditions. Generative AI could even analyze their medical information to provide more customized results: if a patient has comorbidities that affect how their condition is managed, the platform could tailor its information to those specific needs.
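To make this concrete, here is a minimal sketch of the grounding pattern described above: retrieve passages from a curated library of credible sources, add context from the patient’s chart, and only then ask a generative model to answer. The names used here (retrieve_passages, generate_answer, and so on) are hypothetical placeholders for illustration, not any specific vendor’s API.

```python
# Minimal sketch of grounding patient-facing answers in vetted sources.
# All names here are hypothetical placeholders standing in for whatever
# retrieval and model interfaces your platform actually provides.

from dataclasses import dataclass

@dataclass
class Passage:
    source: str   # e.g., a peer-reviewed guideline or patient-education page
    text: str

def retrieve_passages(query: str, library: list[Passage], top_k: int = 3) -> list[Passage]:
    """Toy keyword retrieval over a curated library of credible sources.
    A production system would use a proper search or vector index."""
    terms = set(query.lower().split())
    scored = sorted(library, key=lambda p: -len(terms & set(p.text.lower().split())))
    return scored[:top_k]

def generate_answer(question: str, passages: list[Passage], chart_context: str) -> str:
    """Placeholder for a call to a generative model. The key point is that the
    prompt is restricted to retrieved, credible passages plus chart context."""
    context = "\n".join(f"[{p.source}] {p.text}" for p in passages)
    prompt = (
        "Answer using ONLY the sources below, tailored to this patient's chart.\n"
        f"Chart context: {chart_context}\nSources:\n{context}\nQuestion: {question}"
    )
    return prompt  # in practice: return your model's completion of this prompt

library = [
    Passage("Guideline summary", "Type 2 diabetes management includes diet, exercise, and metformin."),
    Passage("Patient education page", "Kidney disease can change which diabetes medications are safe."),
]
print(generate_answer(
    "How is type 2 diabetes usually managed?",
    retrieve_passages("type 2 diabetes management", library),
    chart_context="Patient has type 2 diabetes and stage 3 chronic kidney disease.",
))
```

The design choice that matters is constraining the model to retrieved, vetted material rather than letting it answer from its general training data alone.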
Generative AI can help patients feel more engaged with and in control of their own healthcare by providing them with initial information that will inform their discussions with their care team. In the long run, this can encourage patients to follow their care plans and may even contribute to overall population health management.
Managing Care
Generative AI can make it easier for both the patient and provider to manage care by automatically scheduling follow-ups, proactively identifying necessary tests, and routinely setting up prescription refills.
For example, if a patient is due for a mammogram, generative AI can be used to proactively notify the clinician to schedule the appointment with the patient — and enable the patient to book their appointment through the patient portal. This way, things like routine testing and preventive care won’t fall through the cracks, leading to fewer gaps in care.
Generative AI can also make it easier for clinicians to stay on top of care management while also giving patients the power to actively manage their own care, encouraging them to follow their care plan.
Supporting Clinicians
Clinicians are suffering from record levels of burnout as they try to handle high patient volumes and an excess of administrative responsibilities, all while dealing with chronic staffing shortages. Generative AI can help address burnout by taking many of those administrative tasks off clinicians’ plates, for example by drafting communications and summarizing notes. Rather than starting from scratch, clinicians receive a draft they simply verify and distribute.
For example, if a clinician has ordered labs for 50 patients, and 45 of those patients’ results are normal, they can automate a standard communication to all 45 of those patients notifying them of their results, which the clinician would then verify before sending. That way, the clinician can turn their attention to the five patients who have abnormal labs.
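As a rough illustration of that workflow, the sketch below separates normal from abnormal results, drafts a standard notification for each normal result, and queues every draft for clinician sign-off rather than sending anything automatically. The record structure and the draft_message helper are assumptions for illustration, not part of any particular EHR’s interface.

```python
# Sketch of drafting result notifications for clinician review, using
# hypothetical result records; nothing is sent until a clinician verifies it.

from dataclasses import dataclass

@dataclass
class LabResult:
    patient_name: str
    test: str
    is_normal: bool

def draft_message(result: LabResult) -> str:
    """Stand-in for a generative model drafting a patient-friendly note."""
    return (
        f"Hi {result.patient_name}, your recent {result.test} results came back "
        "within the normal range. No action is needed. Reply with any questions."
    )

def triage(results: list[LabResult]):
    drafts_for_review = [draft_message(r) for r in results if r.is_normal]
    needs_clinician = [r for r in results if not r.is_normal]
    return drafts_for_review, needs_clinician

results = [
    LabResult("A. Patel", "lipid panel", True),
    LabResult("J. Lee", "lipid panel", False),
]
drafts, abnormal = triage(results)
print(f"{len(drafts)} drafts queued for clinician sign-off")
print(f"{len(abnormal)} abnormal results routed directly to the clinician")
```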
In this way, generative AI can free clinicians to focus on delivering care, enabling them to practice at the top of their licenses. It also allows them to spend more face time with patients and give each patient their undivided attention.
The benefits here are twofold: the patient receives a better care experience, and the clinician has a more manageable workload with fewer administrative tasks.
Understanding Risk
Generative AI offers substantial benefits for both patients and clinicians, making it a crucial new tool for healthcare providers. However, this technology is not without its risks, and the healthcare industry has a particularly low risk tolerance due to its strict regulations and its mandate to protect patients.
For the healthcare industry, three risks are particularly important to explore: AI bias, hallucinations, and data breaches.
AI Bias
AI bias is a risk for any organization using AI, but it’s especially problematic in healthcare. The pandemic years amplified healthcare disparities, which refer to variations in access to care and quality of care between socioeconomic groups. AI bias can exacerbate healthcare disparities and result in inappropriate or insufficient care delivered to marginalized populations.
It’s crucial that the underlying model is designed to mitigate the risk of bias. The AI must also be appropriate to the healthcare organization’s patient population. For example, an AI platform trained on data gathered primarily from white male patients would not be appropriate for a population that largely consists of Black female patients.
This is also why it’s so crucial to keep a human in the loop: humans can monitor the AI’s recommendations and evaluate them to ensure there is no hint of bias, misrepresentation, or misinterpretation. Remember, generative AI is not here to replace clinicians; it’s here to support them. All generative AI tools require proper oversight to ensure they are working correctly.
AI Hallucination
An AI hallucination occurs when an AI platform returns an answer that is untrue or not supported by its training data. Hallucinations are particularly challenging because they can be difficult to identify: if you are not already familiar with the topic you are searching for, you may not be able to tell when you are receiving inaccurate information.
AI hallucinations can present a serious issue for healthcare, especially when generative AI is used in patient portals. Consider our earlier example of a patient searching for information on their medical condition: if the AI hallucinates an answer, the patient is in danger of acting on incorrect information, which could have negative health consequences.
One unique challenge is educating users. Healthcare organizations can train clinicians and staff on how to use generative AI, but they are unlikely to have the same opportunity to teach patients skills like prompt engineering. If a healthcare organization plans to add generative AI capabilities to its patient portal, the platform needs to be sophisticated enough to respond correctly to prompts from the general population.
Health leaders will want to select a platform that offers algorithmic transparency and cites sources to validate outputs. They’ll also want to implement validation procedures for the AI’s outputs and test it thoroughly before deploying it to their patient population.
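One simple validation procedure is to check that every source an answer cites appears on an approved list before the answer reaches a patient, and to route anything else to human review. The sketch below assumes the platform returns its citations alongside the answer; the response format and the approved-source list are illustrative assumptions.

```python
# Sketch of a guardrail that holds back answers whose citations are missing or
# not on an approved source list. The response format shown here is an
# assumption; adapt it to whatever your platform actually returns.

APPROVED_SOURCES = {
    "cdc.gov",
    "nih.gov",      # example entries only
}

def validate_response(answer: str, cited_sources: list[str]) -> tuple[bool, str]:
    """Reject answers that cite nothing or cite unapproved sources."""
    if not cited_sources:
        return False, "No citations returned; route to human review."
    unapproved = [s for s in cited_sources if s not in APPROVED_SOURCES]
    if unapproved:
        return False, f"Unapproved sources cited: {unapproved}; route to human review."
    return True, answer

ok, result = validate_response(
    "Annual screening is recommended for many adults in this age range.",
    ["cdc.gov"],
)
print("Show to patient" if ok else "Hold for review", "->", result)
```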
Data Breaches
Healthcare is among the most highly regulated industries. Generative AI, by contrast, exists in a regulatory vacuum. These realities come into conflict around data protection.
In healthcare, many regulations, most notably HIPAA, center on protecting personally identifiable information (PII) and protected health information (PHI). As such, a data breach represents a significant liability for healthcare organizations. Not only is it a legal issue, but a breach can also put patients at risk if their information ends up in the wrong hands.
Healthcare leaders need to make sure they’re taking all possible precautions to safeguard PII and PHI. They also need to ensure that the generative AI vendor they’re working with is doing the same.
The need to safeguard PII and PHI can impede the ability to train generative AI on an organization’s own data, which can exacerbate issues related to AI bias and accuracy of the tool. This is a challenge the healthcare industry will have to grapple with in the coming years.
Getting Started with Generative AI
While the risks of adopting generative AI represent a challenge for the healthcare industry, the benefits are critical to addressing long-standing problems in healthcare. Healthcare providers should explore generative AI to not only improve their patient experience and support their staff, but to stay competitive with their peers.
Looking to adopt generative AI but unsure where to start? Use our list of questions below to help you lay the right groundwork for successfully deploying generative AI for your organization:
Are you using the right data and training model?
Your tool is only as good as the data it’s trained on: higher-quality, more representative data yields more accurate and useful outputs.
Look for AI platforms that offer algorithmic and data transparency — you need to know if the data set the tool was trained on is appropriate for your organization. This is particularly important if you are using a third-party platform or off-the-shelf solution. For example, if you’re using a tool trained on urban population data, it may not be appropriate to use in a rural hospital setting. You’ll likely need to customize off-the-shelf solutions if they don’t fully address your organization’s needs.
If you’re using an in-house platform, you should create a system that makes it easy for people to input data and verify that it is correct. Your organization should also regularly validate the data to make sure it is accurate and being used correctly.
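Even lightweight automated checks can catch obvious data-quality problems before records reach an in-house model. The sketch below applies a few illustrative rules; the field names and thresholds are assumptions, not a standard schema.

```python
# Lightweight data-quality checks for records feeding an in-house model.
# Field names and rules are illustrative assumptions, not a standard schema.

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found; an empty list means the record passes."""
    problems = []
    for field in ("patient_id", "age", "encounter_date"):
        if not record.get(field):
            problems.append(f"missing {field}")
    age = record.get("age")
    if isinstance(age, (int, float)) and not (0 <= age <= 120):
        problems.append(f"implausible age: {age}")
    return problems

records = [
    {"patient_id": "123", "age": 42, "encounter_date": "2024-01-15"},
    {"patient_id": "456", "age": 430, "encounter_date": ""},
]
for rec in records:
    issues = validate_record(rec)
    print(rec["patient_id"], "OK" if not issues else issues)
```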
Have you considered how your generative AI use will impact your health equity program?
As understanding of the impact of social determinants of health (SDoH) grows, healthcare organizations are increasingly focused on advancing health equity in their communities.
While generative AI offers opportunities to improve health equity — by freeing up doctors to see more patients and analyzing patterns to identify existing healthcare disparities within a population, for example — it can also inadvertently contribute to health inequities. Generative AI is vulnerable to biases gathered from the data it analyzes, which can perpetuate the issues that cause and exacerbate health inequity.
Work closely with your health equity team to understand how generative AI can help their work and what steps to take to mitigate the potential risks. Consider also speaking with clinicians, community leaders, and patients to gain a better understanding of the practical use cases for generative AI that will make a real difference in your community.
Do you have the right oversight?
You’re likely feeling the pressure to adopt generative AI. Before you do, put proper oversight and AI governance in place so you can move quickly, but carefully, and adopt generative AI in your organization in a way that mitigates risk.
Consider who in your organization should oversee your use of generative AI. These stakeholders may include your ethics board, quality officers, and informatics officers. Their guidance can help you protect your people and organization while still moving forward on your generative AI goals.
It’s crucial to include the right people in the oversight role without making your governance structure too unwieldy. Remember that being nimble but safe is the key to success with generative AI.
Are you taking the necessary steps to protect patient information?
Not just any generative AI platform will do. Make sure you select a HIPAA-compliant platform that is appropriate to your organization’s needs.
Keep in mind that, in any cloud platform, data security and compliance are a shared responsibility between the vendor and the user. That means it’s your responsibility to take appropriate risk mitigation measures; the responsibility does not fall solely on the vendor.
Be sure that what goes into the generative AI platform is HIPAA-compliant and encrypted. Feeding unencrypted patient data into the platform can be a huge security risk in the event of a hack or leak.
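One common precaution, alongside encryption in transit and at rest, is to redact obvious identifiers before any text leaves your environment. The sketch below shows a very rough redaction pass built on regular expressions; the patterns are illustrative only, and real de-identification requires purpose-built tooling and review.

```python
# Very rough sketch of redacting obvious identifiers before text is sent to an
# external platform. Real de-identification needs purpose-built tooling; these
# regex patterns are illustrative assumptions only.

import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # SSN-like numbers
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),    # phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),    # email addresses
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"), "[DATE]"),       # dates
]

def redact(text: str) -> str:
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

note = "Pt Jane Doe, DOB 04/12/1961, phone 555-867-5309, email jdoe@example.com."
print(redact(note))
# Note that the name "Jane Doe" is not caught, which is exactly why simple
# patterns alone are not sufficient for de-identification.
```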
You should also look for any potential vulnerabilities that could compromise the security of the platform. For example, consider a patient who falls for a phishing scam. If the phishing scam is delivered to the email address they use for their patient portal, could this compromise the generative AI platform connected to the portal?
What’s the best use case to test generative AI for your organization?
It can be difficult to decide where to begin when adopting generative AI. Look for a task or function that is low risk, highly manual, and creates unnecessary process overhead — these types of functions are great candidates for testing generative AI.
It may be tempting to apply generative AI to complicated processes from the very beginning, such as accounts receivable and accounts payable (AR/AP). However, it’s critical to test the tool on low-risk functions first; you don’t want to make a mistake where risk is high and the margin for error is slim. Once you’ve tested generative AI on an appropriate use case and learned how best to use it, you can scale up to tackle more complex processes and functions.
What investments do you need to make to adopt generative AI in your organization?
There are certain capabilities you’ll need to have in place to make generative AI work for your organization.
Start by assessing what technology you have now, where you’re already using AI, and your organization’s data maturity level. You may need to make investments to upgrade technology or to improve your data management. You also want to make sure you’re making the most of your available tools before adopting and deploying a potentially expensive new platform.
Determine what changes need to be made to adopt generative AI and how much capital and time it will take to make those changes. From there, you can construct a roadmap to help guide you through the process of deploying generative AI. You can also use this roadmap to track your progress as you move forward on your generative AI journey.
Moving Forward
Generative AI has the potential to bring healthcare into a new era, one that enables better care delivery and better working conditions for clinicians. Healthcare leaders need to move quickly but thoughtfully to take advantage of these opportunities and keep pace with their competitors. Taking the right first steps will be key to adopting and deploying generative AI in a way that serves both patients and clinicians.