The goal of the computer model, which Santeon is developing together with software provider Pacmed, is better use of scarce ICU capacity. "As demand for care increases and staff shortages mount, we must look for innovative solutions to keep care running," says Santeon project leader Roald van Leeuwen. "AI can play an important role in this." Ultimately, the use of AI in healthcare should make ICU care not only more efficient but also better.
Ideal tool
AI certainly has that potential, notes the Organisation for Economic Co-operation and Development (OECD) in a recent report. According to its authors, 97 percent of available data currently goes unused for healthcare purposes. AI can help unlock this mountain of data and turn it into actionable information. That could save lives, according to the OECD: in Europe alone, as many as 163,000 people may have died from medical errors last year, about a third of them caused by errors in communication. "AI is an ideal tool to improve communication by delivering the right information to the right people at the right time in the right context," the authors write. "This prevents errors, saves lives and improves healthcare outcomes."
Great potential, great danger
But the report, tellingly titled "AI in health: huge potential, huge risks," also highlights the downside. Flawed or biased data, as well as misuse, can actually lead to poorer treatment outcomes, according to the OECD. This problem can be exacerbated by ambiguities around accountability and liability. Moreover, cumbersome AI applications risk being imposed on professionals who are already overburdened.
The drafters of the "International Scientific Report on the Safety of Advanced AI" reach similar conclusions. "Many of the potentially most dangerous applications of AI are in the healthcare domain," writes the international group of scientists chaired by leading AI researcher Yoshua Bengio.
Active misuse
In addition to the intrinsic dangers already mentioned, the group points to several potential forms of active misuse. One is data theft, with malicious actors siphoning data out of AI models. According to some predictions, cybercrime damages in healthcare will exceed $10 trillion next year. Ironically, cybercriminals themselves increasingly use AI in the process.
Improper use of AI by health insurers is also high on the list. Several examples from the US show that this is no hypothetical concern. There, AI models have predicted hospital stays that coincide exactly with the reimbursement ceiling of the patient's policy; care after the "predicted" discharge date is not reimbursed, even if the patient has not recovered. The issue is now the subject of several lawsuits.
Vendor lock-in
The market concentration that accompanies the continued rollout of AI may also cause problems. As the number of players in the AI market shrinks, healthcare providers grow more dependent on the remaining vendors. Such vendor lock-in could not only cause financial damage, it could also leave providers without a backup in the event of a system failure.
No good tests
A major problem, according to the scientists, is that there is as yet no good way to test the most vulnerable AI models for dangers such as these. "No existing technique currently provides quantitative guarantees regarding the safety of AI models and systems," the drafters write. "Developers do not yet sufficiently understand how their models work." The scientists also criticize the lack of access for outside testers.
Coordinated policies
The OECD argues that inaction carries dangers of its own, pointing among other things to a growing (digital) health divide, slowing scientific development and declining public trust. The organization therefore advocates operationalizing a number of widely accepted principles. "Currently, AI is designed, developed and implemented based on local datasets in individual, mostly capital-rich healthcare facilities around the world. Such customized applications carry the danger of fragmentation because they do not scale well. To unlock the value of AI for as many people as possible, strong coordinated policies are needed both within and across national borders."
Phased approach
Santeon seems to have taken that message to heart. Seven top clinical hospitals work together under the Santeon banner, jointly accounting for an annual turnover of 3.2 billion euros and 11 percent of hospital care in the Netherlands. Despite its ambition to scale up, Santeon is for now opting for a phased approach to the wider application of AI.
The Pacmed Critical model has been in use at Maasstad Hospital and OLVG since last year. Catharina Hospital and MST will also start using Pacmed Critical in their ICUs in the coming months. At Maasstad Hospital, the software has already been updated based on initial experience; at OLVG, the update will go live this summer. In addition, the software will be expanded into a broader package that supports ICU care providers in all kinds of decisions, from admission to discharge.
Responsible use
According to Santeon, the project enables "practical learning on how to responsibly and structurally deploy AI in healthcare." The first phase focused mainly on solving the inevitable challenges that come with deploying AI responsibly.
"By bringing AI to the ICU, we are learning to solve these challenges in practice," responds project leader Van Leeuwen. "In this way, we can make the implementation of AI more and more accessible to hospitals. The continuation of the collaboration focuses on further scaling up and expanding AI in the ICU within the Santeon hospitals. Learning is a continuous process."