Digital poison: 100 false samples are enough to break AI medical diagnosis
With that small amount, an AI system designed to read X-rays or allocate transplants can learn to fail.
Mario Vega Barbas / Farhad Abtahi / Fernando Seoane Martinez / Iván Pau de la Cruz
You don't have to be a computer genius to sabotage the artificial intelligence used in healthcare. It would be enough to slip 100 to 500 manipulated images into a database of millions.
That small dose of "digital poison" may represent barely one millionth of the training data. With that little, an AI system designed to read X-rays or allocate transplants could learn to fail. And not at random: it could keep working with near-perfect accuracy for most of the population while failing systematically for certain groups of people.
The most alarming thing is not the simplicity of the attack, but our current blindness. These sabotages are statistically invisible to standard quality controls. By the time the anomalies are discovered, the damage has already been done.
The myth of safety in numbers
There is a popular belief that the sheer volume of data used to train AI is a shield in itself. We tend to think that, in an ocean of millions of medical records, a few drops of false information will float around harmlessly. The evidence strongly refutes this assumption.
Two research teams, from Sweden's Karolinska Institutet (SMAILE) and the Polytechnic University of Madrid (InnoTep), reviewed 41 key studies on security in medical AI published in recent years. From that review we can conclude that the success of an attack does not depend on the percentage of corrupted data, but on the absolute number of poisoned samples.
This means that what we are facing is a structural vulnerability: AI systems are inherently susceptible to small, intelligent and targeted attacks.
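To see why percentages are misleading here, consider a back-of-the-envelope illustration in Python. The dataset sizes below are hypothetical and chosen only to make the point; they are not figures from the studies reviewed.

```python
# Illustrative arithmetic only: dataset sizes are hypothetical.
poisoned_samples = 100  # the attacker's effort stays fixed

for dataset_size in (1_000_000, 10_000_000, 100_000_000):
    share = poisoned_samples / dataset_size
    print(f"{dataset_size:>11,} training samples -> poison share {share:.6%}")

# The share shrinks as the dataset grows, but the 100 poisoned samples are
# still present on every training pass; scale alone does not dilute them away.
```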
The mechanics of repeated lying
How can a small amount of data fool such a complex system? The method of attack echoes the old adage of totalitarian propaganda: "A lie repeated a thousand times becomes the truth."
A similar phenomenon of indoctrination occurs in machine learning: the system does not see the data only once, but reviews it in repeated training cycles. If a small set of erroneous patterns is inserted, the system will process it many times across those cycles. In this way, the malicious samples multiply their impact on the final model.
At that point, we have a system that has internalized a false reality. Far from breaking down, the "poisoned" AI works normally for the vast majority of patients: it "makes mistakes" only in the precise situations the attack was designed to target.
The result of the attack is not a broken model, but a compromised one. It keeps its general usefulness intact, yet discriminates selectively against, for example, a target group. This is not a random error; it is a mathematically precise bias hidden beneath an overall appearance of accuracy.
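As a rough illustration of how a targeted attack can leave overall accuracy intact while degrading a specific subgroup, here is a minimal, self-contained sketch. Everything in it is an assumption for demonstration purposes: the data is synthetic, the subgroup is encoded as an explicit feature, a plain scikit-learn logistic regression stands in for a real diagnostic model, and it does not reproduce the experimental setups of the studies reviewed.

```python
# Toy illustration of targeted data poisoning on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.normal(size=(n, 2))                   # two "clinical" features
    group = rng.random(n) < 0.01                  # small target subgroup (~1%)
    y = (x[:, 0] + x[:, 1] > 0).astype(int)       # "true" diagnosis
    return np.column_stack([x, group.astype(float)]), y, group

X_train, y_train, g_train = make_data(50_000)
X_test, y_test, g_test = make_data(20_000)

# Poison: take ~100 training samples from the target subgroup whose true label
# is "disease" (1) and relabel them as "healthy" (0).
candidates = np.flatnonzero(g_train & (y_train == 1))
poison_idx = candidates[:100]
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 0

clean = LogisticRegression().fit(X_train, y_train)
poisoned = LogisticRegression().fit(X_train, y_poisoned)

for name, model in [("clean", clean), ("poisoned", poisoned)]:
    pred = model.predict(X_test)
    print(name,
          "overall:", round(accuracy_score(y_test, pred), 3),
          "subgroup:", round(accuracy_score(y_test[g_test], pred[g_test]), 3))

# Overall accuracy barely moves (the subgroup is only ~1% of patients), while
# the subgroup's accuracy can degrade; the size of the effect depends entirely
# on this hypothetical setup.
```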
An involuntary accomplice
Perhaps the most ironic conclusion of our work is that the very laws designed to protect us can aggravate this danger. Regulations such as the General Data Protection Regulation are essential for safeguarding patient privacy, but they can also act as an involuntary shield for attackers.
Detecting the kind of targeted sabotage described above would require cross-referencing information from thousands of patients across different hospitals. The law, however, specifically restricts precisely this type of tracking and data linkage.
This creates a "security paradox": by protecting patients' privacy, we also blind the systems designed to protect them. As a result, these attacks can remain hidden for a long time.
A defense based on plurality
In this context, traditional cybersecurity is not enough. In our research we propose a defensive approach for healthcare called MEDLEY (a medical diagnostic system based on deep diversity). Against the single-minded thinking of one optimized model, we propose the value of disagreement.
Our proposal is to build "digital medical boards" made up of different AI systems, of different origins, designs and providers. With this variety, an attacker might manage to corrupt one of them, but repeating the feat across all the rest would be very complicated.
Each consultation would pass through one of these "digital medical boards". Of course, given the diversity of the AI systems, their results may at times diverge radically. When that happens, no false unanimity should be imposed. Instead, we should accept that there is no consensus and call for human review.
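A minimal sketch of how such a "digital medical board" could be wired together, assuming a handful of independently built models and a simple agreement threshold. The function name, the stand-in models and the threshold are all hypothetical illustrations, not the actual MEDLEY design.

```python
# Hedged sketch of an ensemble "digital medical board": several independent
# models vote, and substantial disagreement is escalated to a human reviewer
# instead of being papered over with a forced consensus.
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

@dataclass
class BoardDecision:
    label: Optional[int]        # None when the board abstains
    votes: list
    needs_human_review: bool

def consult_board(models: Sequence[Callable[[list], int]],
                  patient_features: list,
                  min_agreement: float = 0.8) -> BoardDecision:
    """Collect one vote per model; escalate if agreement falls below threshold."""
    votes = [m(patient_features) for m in models]
    top = max(set(votes), key=votes.count)
    agreement = votes.count(top) / len(votes)
    if agreement < min_agreement:
        # No false unanimity: record the dissent and request human review.
        return BoardDecision(label=None, votes=votes, needs_human_review=True)
    return BoardDecision(label=top, votes=votes, needs_human_review=False)

# Example with stand-in models (in practice: different architectures,
# training data and vendors).
models = [
    lambda x: int(x[0] > 0.5),        # e.g. a rule-based system
    lambda x: int(x[0] + x[1] > 1),   # e.g. a logistic model
    lambda x: int(x[1] > 0.4),        # e.g. a neural network from another vendor
]
print(consult_board(models, [0.6, 0.3]))
```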
The age of technological innocence for artificial intelligence is over. We cannot accept a "black box" that dictates verdicts. If we want machine learning to be a positive force in our healthcare, we must understand its limitations and compensate for them with rigorous procedures and human judgment.
Authors: Mario Vega Barbas, Associate Professor, Polytechnic University of Madrid (UPM); Farhad Abtahi, Senior Specialist, Research Infrastructure, Karolinska Institutet; Fernando Seoane Martinez, Professor and Head of Research, Karolinska Institutet; and Iván Pau de la Cruz, Professor of Telematics, Polytechnic University of Madrid (UPM).
