Care providers have an ethical duty to investigate the use of artificial intelligence. So say ActiZ, the industry association for elderly care, and knowledge center Vilans. The advance of AI seems inevitable. Yet almost a year after the introduction of ChatGPT, many questions remain about the use of algorithms, machine learning and large language models in healthcare. Taking stock.
The report ActiZ and Vilans published late last year reads, to all intents and purposes, as an exploration. Ten tips give the reader an impression of the state of play for AI in elderly care. Most are sound recommendations: start from vision and strategy, position AI as support rather than replacement, respond in good time to laws and regulations, and collaborate.
The sting is in the tail. "However, due to the great potential added value and the widening healthcare gap, it is no longer ethically defensible for an organization to completely ignore the (im)possibilities of AI," the authors conclude.
Inevitable advance?
In other words, doing nothing is not an option. Is large-scale deployment of AI in healthcare inevitable? Many commentators point in that direction. "The changes are dramatic," said Robert Califf, head of the U.S. Food and Drug Administration (FDA), at the recent global technology trade show CES in Las Vegas. "We are going to witness healthcare being guided and supported by algorithms and AI."
Promise
For a sector that is increasingly overburdened, overstretched and understaffed, AI holds a lot of promise. Just consider the administrative relief AI can bring. By handing routine paperwork over to computers, whether speech-driven or otherwise, the professional gains more time for the patient or client.
A thousand years behind
What's more, AI can also give a major boost to the quality of diagnostics and treatment. Business magazine Forbes recently calculated what it would take for doctors to truly keep up with their professional literature. "Every 26 seconds, a new scientific study or research paper appears somewhere," Forbes noted. "Even if a doctor were to read two articles every night, by the end of the year he would be about a thousand years behind the current state of knowledge."
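For illustration, the arithmetic behind that claim is easy to reproduce. A minimal sketch in Python, using only the figures quoted above; the exact outcome depends on the assumptions, but it lands in the same order of magnitude as Forbes's estimate:

```python
# Back-of-the-envelope check of the Forbes claim, using only the figures
# quoted in the article: one new paper every 26 seconds, two articles
# read per night.
SECONDS_PER_YEAR = 365 * 24 * 3600

papers_per_year = SECONDS_PER_YEAR / 26    # ~1.2 million new papers a year
read_per_year = 2 * 365                    # a diligent doctor's annual intake

backlog = papers_per_year - read_per_year  # unread papers after one year
years_behind = backlog / read_per_year     # time to clear it at the same pace

print(f"New papers per year: {papers_per_year:,.0f}")
print(f"Backlog after one year: ~{years_behind:,.0f} reading-years")
```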
Prevention
Unlike individual doctors, AI is capable of handling such quantities of information. As a scientific super-brain, AI can do even more than track and sift the literature. One example is EvidenceHunt, an AI-driven search tool for clinical evidence.
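For a sense of what such sifting looks like in practice, here is a minimal sketch of LLM-based abstract triage, assuming the OpenAI Python client; the model name and prompt are illustrative assumptions, not EvidenceHunt's actual method:

```python
# A minimal sketch of LLM-based literature triage: the kind of sifting
# such tools automate. Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_relevant(question: str, abstract: str) -> bool:
    """Ask the model whether an abstract bears on a clinical question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {"role": "system", "content": "Answer strictly YES or NO."},
            {"role": "user", "content": (
                f"Clinical question: {question}\n\n"
                f"Abstract: {abstract}\n\n"
                "Is this abstract relevant to the question?"
            )},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

# Usage: filter a pile of abstracts down to the relevant handful.
# relevant = [a for a in abstracts if is_relevant("Does drug X lower HbA1c?", a)]
```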
Research and development of new drugs and treatment methods can also be faster, smarter and more effective with the use of AI. Moreover, through its ability to analyze longitudinal patterns, AI helps in the development of truly preventive medicine and in the transition from output financing to value-driven payment models.
AI arms race
For now, it is industry, even more than healthcare itself, that is seizing the opportunities of AI. Microsoft is reported to have put at least $13 billion into OpenAI, the maker of ChatGPT. A partnership with EHR manufacturer Epic should ensure that ChatGPT integrates seamlessly with electronic health records.
Google has partnered with private hospital chain HCA Healthcare, which generates annual revenues of $60 to $70 billion. The generative AI model Med-PaLM should help healthcare providers streamline administrative and clinical documentation. With Vertex AI Search, Google is additionally marketing an AI-driven search engine aimed specifically at medical professionals.
Amazon Web Services (AWS) does not want to be left behind in the AI arms race, as evidenced by a recent $4 billion investment in AI developer Anthropic. AWS is emphatically targeting healthcare and life sciences, according to Dan Sheeran, who heads that business at AWS. In addition, AWS is rolling out an AI-driven administration tool under the name "AWS HealthScribe."
Mixed feelings
Within healthcare, all this AI firepower is viewed with mixed feelings. "We're curious, we're excited, we're cautious," Mayo Clinic CIO Cris Ross told CNBC. "And we won't bring anything to patient care if it's not ready to be included in that."
Sumit Rana, executive vice president for R&D at Epic, calls himself a "skeptical optimist" in Healthcare IT News. "I have three rules of thumb for AI. Does a human make the final decision? Is it sufficiently clear what role AI plays? And does the application provide enough information about the use of the AI, the outcomes, and the decisions that result?"
Concentration of power at Big Tech
Reassuring words. But the question is who will actually be at the wheel before long. In an opinion piece in Politico, two prominent legal scholars warn of the growing concentration of power at Big Tech. "Amazon, Google, Facebook and Microsoft are about to take control of the development of AI. Leaving such revolutionary technology to a few unregulated mega-corporations is short-sighted at best and dangerous at worst."
Vertical integration of the AI chain
In this regard, Ganesh Sitaraman and Tejas Narechania point to the vertical integration of the AI chain, or "AI stack." This chain starts at chip manufacturing and runs through cloud infrastructure to AI model development and application distribution. Large parts of the AI stack are already in the hands of Big Tech. Chip manufacturer Nvidia, also an AI partner of Microsoft, controls roughly 85 percent of the global market for AI chips, and Amazon, Google and Microsoft together account for two-thirds of global cloud capacity.
A pittance
Because of this trend toward vertical integration and the enormous capital intensity of operations, new players stand no chance in this market, Sitaraman and Narechania argue. To serve all users of the Bing search engine with ChatGPT, Microsoft needs to set up 20,000 specialized servers, they calculate. The accompanying investment of $4 billion is a pittance compared to what Google would need to bring AI to its much larger user base.
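The scale of those numbers is easy to make concrete. A quick sketch using the figures Sitaraman and Narechania cite; the tenfold Google-to-Bing volume ratio is an illustrative assumption, not a number from their piece:

```python
# Back-of-the-envelope sketch of the Sitaraman/Narechania figures.
# The 20,000 servers and $4 billion come from the article; the 10x
# Google-to-Bing query-volume ratio is an illustrative assumption.
bing_servers = 20_000
bing_cost_usd = 4e9

cost_per_server = bing_cost_usd / bing_servers  # $200,000 per specialized server
volume_ratio = 10                               # assumed Google/Bing ratio
google_cost_usd = bing_cost_usd * volume_ratio  # if cost scales with volume

print(f"Per-server cost: ${cost_per_server:,.0f}")
print(f"Implied Google outlay at {volume_ratio}x volume: ${google_cost_usd / 1e9:.0f} billion")
```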
Bias
In addition to the dominance of Big Tech, AI is dogged by a host of other burning issues, all of great importance to healthcare. The most discussed criticism revolves around "bias." If the source data are unrepresentative, skewed, or simply non-existent for specific patient groups, AI-driven care can produce suboptimal results or, worse, exclude entire groups.
Traceability of data
Another concern is the traceability of data, the raw material of AI. Several studies have shown that AI can easily "remember" personal data. From there, it is a small step to "re-identifying" individuals, even when the data have been anonymized.
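The classic route to re-identification does not even require a model's memory: linking "anonymized" records to a public register via quasi-identifiers is often enough. A minimal sketch with entirely hypothetical records:

```python
# Illustrative linkage attack on "anonymized" data (hypothetical records).
# Names are removed, but quasi-identifiers (ZIP code, birth date, sex)
# can be joined against a public register to restore identities.
import pandas as pd

anonymized = pd.DataFrame({
    "zip": ["1012", "3511"],
    "birth_date": ["1961-03-02", "1987-11-23"],
    "sex": ["F", "M"],
    "diagnosis": ["diabetes", "hypertension"],
})
public_register = pd.DataFrame({
    "name": ["A. Jansen", "B. de Vries"],
    "zip": ["1012", "3511"],
    "birth_date": ["1961-03-02", "1987-11-23"],
    "sex": ["F", "M"],
})

# The join re-identifies individuals without any direct identifier
# ever appearing in the health data.
reidentified = anonymized.merge(public_register, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```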
Paper reality?
Moreover, without recalibrating and streamlining the underlying work processes, the deployment of AI can end up cementing suboptimal ways of working. By itself, the deployment of AI does not guarantee "smart" care. And in a fragmented field like healthcare, broad application of AI can easily produce a deluge of disjointed point solutions.
Nor is the promise of greater professional freedom a given. If AI takes the lead in handling administration and bureaucratic control, the professional can easily end up on the computer's leash. An ever-higher wall then rises between record-keeping and actual care delivery. Ironically, AI can thus start to feed a "paper" reality.
Learning and development mode
So caution is called for, ActiZ and Vilans also agree. But that should not be a license for inaction. "Even if organizations decide not to use an AI application (yet), they can benefit from building up knowledge. More and more digitization is taking place all around us, including in healthcare. Staying abreast of developments and getting your organization into a learning and development mode is therefore certainly valuable. It also makes any later entry into certain developments a lot easier."
How organizations are to get into that learning and development mode, ActiZ and Vilans unfortunately do not say. That is quite a challenge in a sector where time, training capacity and research budgets are scarce. Perhaps ChatGPT knows the answer.