As healthcare practices begin using AI to automate tasks, cybersecurity implications follow. In fact, hackers are enlisting AI to enhance attacks on healthcare organizations. The Health Sector Cybersecurity Coordination Center (HC3), part of the U.S. Department of Health and Human Services, released a threat briefing in July 2023, Artificial Intelligence, Cybersecurity and the Health Sector. The briefing states that generative AI poses several cybersecurity risks, including phishing attacks, rapid exploitation of vulnerabilities, automated attacks, complex malware, and more evasive ransomware. For example, ChatGPT and similar tools can be used to create effective phishing email templates.

How AI is Changing Healthcare Cybersecurity  

A recent article in the HIPAA Journal highlights a few AI-powered threats to be aware of: 

More effective phishing emails

AI eliminates the need for hackers to be skilled at crafting spear phishing emails. It also removes language constraints and lowers the barrier to using phishing to obtain user login credentials, deploy malware, and steal healthcare data. In addition, malicious emails generated by AI are more likely to bypass email filters, because they often lack the grammar and spelling mistakes filters key on, use unique lures, and are sent from trusted domains.

Trickier polymorphic malware

Polymorphic malware continuously changes its structure and digital appearance. Because it can mutate, rewriting its code and changing its signature, this type of malware has always been difficult for traditional antivirus software to identify. When AI generates the mutations, the complexity and speed of the changes escalate, making the malware even harder for antivirus software to detect.
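
To see why signature matching struggles here, consider a minimal Python sketch (with harmless placeholder bytes standing in for malware): a hash-based signature catches only the exact sample seen before, so even a trivial mutation slips past.

```python
# Illustrative sketch: why hash-based signatures miss polymorphic code.
# The "payloads" below are harmless placeholder bytes, not real malware.
import hashlib

KNOWN_BAD_SIGNATURES = {
    # Signature databases store fingerprints of previously observed samples.
    hashlib.sha256(b"do_evil(); sleep(10); exfiltrate();").hexdigest(),
}

def signature_match(sample: bytes) -> bool:
    """Classic antivirus check: flag only exact, previously seen fingerprints."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_SIGNATURES

original = b"do_evil(); sleep(10); exfiltrate();"
# A polymorphic engine rewrites the sample while preserving its behavior,
# e.g., by reordering instructions or padding with junk bytes.
mutated = b"sleep(10); do_evil(); exfiltrate(); # junk: 8f2a"

print(signature_match(original))  # True:  the known sample is caught
print(signature_match(mutated))   # False: a small rewrite defeats the hash
```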

Faster credential hacking and database vulnerability detection

Using the latest technology, hackers can attempt logins at a rate of thousands of potential passwords per second, and AI lets them work even faster. Similarly, AI can analyze software and systems to predict vulnerabilities before patches are even available, in part by crawling cybersecurity forums and similar platforms to detect hacking trends. AI can also render CAPTCHA ineffective by learning the source code behind CAPTCHA challenges and/or using optical character recognition to solve them.
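
Some back-of-the-envelope arithmetic shows why guess rate and password length matter so much; the attack speeds below are illustrative assumptions, not measured figures.

```python
# Rough sketch: worst-case brute-force time at different guess rates.
# The rates here are illustrative assumptions, not benchmarks.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(alphabet_size: int, length: int, guesses_per_sec: float) -> float:
    """Time to try every possible password of a given length, in years."""
    keyspace = alphabet_size ** length
    return keyspace / guesses_per_sec / SECONDS_PER_YEAR

# Compare an 8-character lowercase password with a 12-character mixed-case
# alphanumeric one, at a slow online rate and a fast offline rate.
for guesses in (1e3, 1e9):
    print(f"{guesses:>13,.0f} guesses/sec:",
          f"8-char lowercase ~{years_to_exhaust(26, 8, guesses):.2g} yr,",
          f"12-char mixed ~{years_to_exhaust(62, 12, guesses):.2g} yr")
```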

Increased ability to manipulate data

AI can exploit vulnerabilities in connected medical devices to alter patient data, and it can manipulate conversational AI chatbots. Either form of interference can disrupt communication between healthcare providers and patients, threatening patient safety. Attackers who gain this kind of access can also steal sensitive patient information.

Overall, generative AI will make it easier for hackers to craft better phishing attempts, write malicious code, and infiltrate systems.  

Preparing for Evolving, AI-Powered Threats  

Healthcare IT executives are taking note of this growing threat. A recent survey by executive search firm Heidrick & Struggles found that nearly half of Chief Information Security Officers (CISOs) cited AI and machine learning as the most significant risk their organizations face.

AI will impact healthcare leaders in two ways: it will open opportunities for automation, and it will present heightened security risks as bad actors leverage AI themselves. As practices prepare to integrate generative AI tools, it’s important to be aware of the computing power that models like GPT-3 and GPT-4 require; new IT architectures, or “AI supercomputers,” are being built to support these large language models, as hardware is already becoming a bottleneck for AI. Overall, AI promises to be a valuable tool in healthcare practices’ tool belts, enabling more informed decision making, less burdensome administrative tasks, and more face time with patients.

On the defense side, practices must think beyond legacy cybersecurity methods to protect against AI-based threats. The traditional approach to virus protection—identifying a virus’s entry point, software signature, and pattern to push out a matching defense—is inadequate for threats that use AI to cloak their characteristics. 

Standard antivirus programs require the threat’s profile to protect against it. Hackers today are using AI to conceal that profile—or even to constantly morph it, making it impossible to detect with typical methods. In response, security experts have started to train AI to proactively identify threats posed by AI. 

These AI-based security tools are autonomous programs that scour networks for unusual traffic and for changes in workstation activity or user behavior. Their sensors constantly assess the entire network, not just areas of known vulnerability. And while these programs operate outside the direct control of IT staff, they still require 24/7 monitoring.
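
As a rough illustration of how this kind of behavior-based detection works, here is a minimal sketch using scikit-learn’s IsolationForest on made-up activity features; commercial tools rely on far richer telemetry, and every number below is hypothetical.

```python
# Minimal sketch of behavior-based anomaly detection on synthetic data.
# Features per user-day: [logins/hour, MB transferred, after-hours sessions].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline "normal" activity, drawn from an invented distribution.
normal_activity = rng.normal(loc=[5, 40, 0.2], scale=[2, 15, 0.3], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

# New observations: one typical day, one that resembles bulk data exfiltration.
today = np.array([
    [6, 55, 0],    # ordinary workstation activity
    [45, 900, 7],  # many logins, huge transfer, heavy after-hours use
])
print(detector.predict(today))  # expected [ 1 -1]: -1 marks the outlier
```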

Human oversight of these programs is important for reviewing AI-generated alerts and whitelisting the activity behind false alarms. Examples would include an orthopedist consulting a new database for more information on a particularly unusual case, or an EMR vendor updating a piece of software and needing to modify the underlying operating system for the update. False alarms that go unchecked can lead to system downtime or other frustrating obstacles to care.
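
A toy sketch of that human-in-the-loop step might look like the following, where alerts matching changes an administrator has already approved are suppressed rather than escalated; the hosts and change descriptions are invented for illustration.

```python
# Hypothetical sketch: suppress alerts that match pre-approved changes.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    detail: str

# Approved-change allowlist an IT team might maintain (entries are made up).
APPROVED_CHANGES = {
    ("ortho-ws-03", "new external database connection"),  # unusual-case research
    ("emr-server-01", "operating system modification"),   # vendor software update
}

def triage(alert: Alert) -> str:
    if (alert.host, alert.detail) in APPROVED_CHANGES:
        return "suppress"  # known-good: don't interrupt care with a false alarm
    return "escalate"      # everything else goes to a security analyst

print(triage(Alert("emr-server-01", "operating system modification")))   # suppress
print(triage(Alert("emr-server-01", "outbound traffic to unknown host")))  # escalate
```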

In January 2023, NIST published a new framework for AI risk management, along with a roadmap, playbook, explainer video, and various perspectives on the guidance. The need for safeguards—where humans can evaluate AI’s impacts and stop the program when those impacts are harmful—runs throughout these documents. 

IBM’s 2023 Cost of a Data Breach Report showed that the one-third of healthcare organizations that self-detected a data breach (before it was disclosed by the attacker) were also able to contain it faster, and security AI contributed to that speed. According to an article in Healthcare IT News summarizing the report’s findings, “With AI, organizations experienced a data breach lifecycle that was 108 days shorter compared to those in the study that did not deploy these technologies – 214 days versus 322 days. The researchers said that deploying security AI and automation extensively lowered data breach costs by nearly $1.8 million more than organizations that didn’t deploy these technologies.”

What Your Practice Can Do To Prepare 

As your practice prepares for AI’s growing role in healthcare, we suggest the following investments: 

Security Powered by AI 

As hackers turn to AI to infiltrate healthcare systems and avoid being flagged, AI-powered security solutions are a must. (Examples include a full 24/7 Security Operations Center with managed detection and response (MDR) and extended detection and response (XDR).) Our clients are investing in AI systems backed by our IT security analysts to flag suspicious activity before a human would notice. For example, if a user clicks a link that launches software with no match in the known-good baseline, AI can compare the activity with normal behavior and flag the unusual software. Then, our security analysts can assess the flagged risk and determine whether it is normal or abnormal. From there, they can whitelist or block the flagged software or website to avoid disruption to availability or productivity.
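
A greatly simplified sketch of that flag-then-review workflow, assuming a hypothetical per-workstation baseline of previously seen software (this is not our production system, just an illustration of the logic):

```python
# Simplified sketch: compare a software launch against a workstation baseline,
# then apply any prior analyst ruling. All names are hypothetical.
workstation_baseline = {
    "front-desk-02": {"emr_client.exe", "outlook.exe", "chrome.exe"},
}
analyst_decisions = {"remote_tool.exe": "block"}  # earlier analyst rulings

def review_launch(host: str, executable: str) -> str:
    if executable in workstation_baseline.get(host, set()):
        return "allow"  # matches this workstation's normal behavior
    # Unfamiliar software: apply an existing analyst ruling if there is one,
    # otherwise hold it for human review.
    return analyst_decisions.get(executable, "flag for analyst")

print(review_launch("front-desk-02", "emr_client.exe"))   # allow
print(review_launch("front-desk-02", "remote_tool.exe"))  # block
print(review_launch("front-desk-02", "unknown_app.exe"))  # flag for analyst
```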

Our team is working to build AI-powered software tailored to the systems that ambulatory practices and federally qualified health centers use. When the software finds a threat within one healthcare organization, it will flag that threat across multiple healthcare organizations.

Infrastructure Assessments 

Whether as part of integration monitoring or as part of risk management in general, healthcare IT service providers like Physician Select Management can offer consultative guidance on setting up your IT infrastructure to protect against AI-enabled threats. Partners like this can advise on, and even implement, the safest software configurations for certain specialties. This capability may deliver immediate ROI by decreasing cyber insurance premiums for your practice.

An IT infrastructure assessment gives you a clear understanding of your IT environment and includes a detailed analysis of the technology, networks, systems, and processes that support practice operations. The result helps practices make informed decisions to improve performance, security, and efficiency. 

Cybersecurity Audits and Remediation 

In addition to infrastructure assessments, it is a good idea to have your practice’s cybersecurity defenses and overall strategy audited. A full security assessment of your cloud, hybrid, and/or on-premises networks and devices can help protect against the latest security threats, including AI-enabled attacks. By auditing your cybersecurity posture, you can prevent costly data breaches, reduce the long-term costs of cybersecurity threat mitigation, and lessen the risk of HIPAA fines and OCR audit findings. A healthcare IT expert like Physician Select Management can help identify your current vulnerabilities and give your practice a template for future cybersecurity fixes.