Will the EU become the global AI watchdog thanks to the AI Act? With the introduction of binding rules, Europe is taking a giant step toward safe and transparent AI systems. For a social sector like healthcare, such a framework is essential. But rules are one thing. To safeguard public goals, the government needs the same raw computing power as AI developers, argues AI pioneer Yoshua Bengio.
If all goes according to plan, the AI Act will officially take effect this spring. From that moment, all autonomously acting computer systems and algorithms that make decisions, generate content or assist people will be subject to strict new rules. European legislators want the AI Act to align as closely as possible with existing European laws and regulations. Within the health domain, for example, the AI Act complements the Medical Device Regulation (MDR). The General Data Protection Regulation (GDPR) also remains in force alongside the AI Act.
Human control
As a first step, the EU has identified where AI can go wrong. The EU looks primarily at the degree of human control and oversight. Technical robustness, security, privacy and data governance also weigh heavily. Transparency is another area of concern. Is it clear where the underlying data comes from? Are the source code and algorithms open to scrutiny? Are the choices of the AI system in question traceable and explainable?
In the EU's view, things can also go wrong when it comes to diversity, non-discrimination, fairness and the environment.
Risk Classification
Based on these criteria, the EU has established a risk classification. The AI Act distinguishes four categories. Systems that automatically rank people based on behavior, socioeconomic status and personal characteristics, also known as "social scoring", are classified as unacceptable. The same goes for applications that rely on biometric identification, covert manipulation or emotion recognition in the context of work or school. Such applications must be phased out in Europe within six months of the AI Act taking effect. That means the end of an application like Clearview AI, which scours the internet and camera systems for material to feed automatic facial recognition.
Healthcare is 'high risk'
After "unacceptable" comes the label "high risk. This includes applications in the medical field, education and critical infrastructure. It also includes access to key services, insurance estimates and "intelligent" partial applications in regulated products, such as cars, elevators and power tools.
Specifically, the European rules mean that manufacturers must clearly indicate when users are dealing with a chatbot, biometric categorization or emotion recognition. Deepfakes must also be clearly labeled, so that users can recognize them as such.
Copyright
A separate category covers general-purpose AI models and foundation models, such as GPT and Bard. Here, according to the European legislator, there is a "specific transparency risk". To remain active in Europe, their creators are given one year to provide more transparency and documentation. For example, creators must clarify the extent to which they use copyrighted data. All AI applications that do not fall into one of the three categories above will soon be considered "low risk" in Europe.
Competitive Advantage
According to AI developers, a legal framework inhibits progress. Moreover, many companies consider legislation unnecessary because socially responsible use is one of the foundations of AI development. The actions of OpenAI show how unconvincing that latter argument is. The modus operandi of the creator of ChatGPT was "open", until the company closed itself off to protect its competitive advantage.
Sanctions
To get tech companies to cooperate, the AI Act provides for hefty penalties. Violators can be fined up to 35 million euros per violation or 7 percent of global annual turnover. Administrative violations can bring fines of up to 7.5 million euros or 1.5 percent of turnover. A special "data regulator" is to enforce the AI Act.
Room to maneuver
The question is whether the rules will be applied as strictly as they are written. The AI Act gives companies considerable leeway. For a start, companies themselves determine which risk category their product falls into. Even the classification itself is not set in stone. For the high-risk category, there is a whole laundry list of exceptions under which an application falls outside the AI Act. And more broadly, the AI Act does not apply to the use of AI for military and defense purposes.
US executive order
That does not alter the fact that the EU is the first major economy to establish a comprehensive legal framework for AI. For MIT Technology Review, this is reason to call the EU the "international AI police" and "go-to tech regulator". Yet other countries and parties are not sitting still. U.S. President Biden issued an executive order last fall intended to regulate the development of AI. The order relies on principles similar to those used by the EU, but for now it is little more than a mandate to the National Institute of Standards and Technology (NIST) to establish a framework.
Medical liability
What the growing government involvement means for the healthcare sector is not yet well understood. The American Medical Association (AMA) took a first step in November with the publication of a working paper. The American context colors the piece. For example, the AMA raises the issue of physician liability when AI-driven technology is used. According to the AMA, that liability should remain limited to the currently applicable legal rules; plenty of lawsuits and damage claims already flow from those in the US. The AMA also warns against the use of AI by health insurance companies. AI-driven decision models should not lead to insureds being denied necessary care.
Lawsuits
That concern is not imaginary. Several examples have surfaced in the US of AI models that make the predicted hospital stay coincide exactly with the policy's reimbursement ceiling. Care after the "predicted" discharge date, even though the patient has not recovered, is not reimbursed. The issue is now the subject of several lawsuits. The AMA calls for stricter oversight.
WHO recommendations
This is also what the World Health Organization (WHO) advocates. In a paper published in January, the WHO gives forty recommendations for the careful use of large language models (LLMs) in healthcare. As far as the WHO is concerned, there should be laws and regulations tailored to healthcare. A special agency should assess and approve LLMs for healthcare, including review after they have been put into use.
Inclusive design
In addition, WHO is making the case for inclusive design. This process should involve not only scientists and designers, but also healthcare providers, healthcare professionals and healthcare users. Developers should also be able to predict potential side effects.
Supercomputer
A pointed contribution to the debate over the regulation of AI comes from computer scientist Yoshua Bengio, known as one of the godfathers of AI. No matter how many regulations are devised, he says, the AI race is ultimately about financial weight and raw computing power. Recently, he called on the Canadian government to build a $1 billion supercomputer. "Of course we have to start with regulation," Bengio said. "But eventually the government will have to take back control by building its own infrastructure. I hope governments realize as soon as possible that they need that muscle."
Searching
The Netherlands has not reached that point yet, especially where specific applications in healthcare are concerned. Last year, a "Guideline for Quality AI in Healthcare" was published. The document is supported by various parties, but is not enforceable or otherwise binding. The Ministry of Health, Welfare and Sport (VWS) is still finding its way. Through field interviews, VWS is trying to map the questions surrounding AI in the healthcare sector. In addition, the ministry has commissioned TNO to conduct an exploratory study.
AI is one of the core themes during Care & ict 2024. The largest health tech event in the Netherlands will be held from April 9 to 11 at Jaarbeurs in Utrecht.