AI Can Now Fake X-Rays So Well That Even Doctors Can’t Tell The Difference

We’re officially entering a world where you can’t even trust your own bones, or at least the pictures of them.

Getty Images

New developments in AI have reached a point where software can generate fake X-rays so realistic that even veteran radiologists are being fooled. This isn’t just a clever bit of tech; it’s a serious security risk that could see medical records tampered with or insurance companies conned by “injuries” that don’t actually exist.

When a doctor can’t tell the difference between a real fracture and a bunch of pixels cooked up by an algorithm, the entire foundation of medical trust starts to look shaky. According to a new study published in Radiology, here’s how the technology got this good, and why the medical community is scrambling to find ways to verify what’s actually real.

Fake medical images are no longer easy to spot.

Most people assume that something like an X-ray would be hard to fake, especially in a way that could fool professionals. That used to be true, but it’s quickly changing. According to the study, AI-generated X-rays now look realistic enough that even experienced radiologists can mistake them for real scans. In some cases, they couldn’t reliably tell the difference at all.

Doctors struggled when they didn’t know fakes were included.

One of the more surprising findings was how much awareness changed things. When radiologists weren’t told that fake images were part of the test, they spotted them correctly only around 40% of the time, which is little better than guessing. It shows that when people aren’t actively looking for something suspicious, these images can pass as completely normal without raising any red flags.

Even when warned, it wasn’t easy.

When doctors were told in advance that some of the images were fake, their accuracy improved. On average, they got it right about 75% of the time. That’s better, but still far from perfect. It means even when professionals are actively trying to spot fakes, a large number still slip through.

Experience didn’t make much difference.

You might expect that more experienced doctors would be better at spotting fake images, but the study didn’t really show that. Radiologists with decades of experience didn’t consistently outperform those earlier in their careers. This suggests the issue isn’t about skill or training in the traditional sense. The images are simply realistic enough to challenge how people normally assess them.

AI struggled to spot its own fake images, too.

It wasn’t just humans being tested. Several advanced AI systems were also asked to tell real X-rays apart from fake ones, and they didn’t perform perfectly either. Accuracy varied quite a bit, but even the models involved in generating the images couldn’t reliably detect them all. That highlights how convincing these deepfakes have become.

The fakes often look almost too perfect.

Researchers did notice some subtle patterns in the fake images. Things like bones looking unusually smooth, spines appearing too straight, or lungs looking overly symmetrical. Fractures in fake images also tended to look cleaner and more uniform than real ones. But these details are easy to miss, especially when someone is reviewing images quickly as part of a normal workload.
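As a toy illustration of the “too symmetrical” cue, one crude heuristic is to mirror an image left-to-right and measure how closely the two halves match; a suspiciously high symmetry score could flag a scan for closer human review. This is a hypothetical sketch for illustration only, not the method used in the study:

```python
import numpy as np

def symmetry_score(image: np.ndarray) -> float:
    """Return a 0..1 score for how closely the image matches its mirror."""
    mirrored = np.fliplr(image)
    diff = np.abs(image.astype(float) - mirrored.astype(float))
    # 1.0 means perfectly left-right symmetric; lower means more asymmetry.
    return 1.0 - diff.mean() / 255.0

# A perfectly symmetric "scan" scores 1.0; random noise scores lower.
symmetric = np.tile([[10, 20, 20, 10]], (4, 1)).astype(np.uint8)
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(symmetry_score(symmetric))  # 1.0
```

Real screening tools would need far more robust statistics, but the idea is the same: quantify patterns that are normal in real anatomy and rare in generated images.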

This opens the door to real-world risks.

The concern isn’t just about being fooled in a test. If fake medical images can pass as real, they could be used in situations where accuracy really matters. For example, someone could submit a fake X-ray as part of an insurance claim or legal case. If it looks convincing enough, it could influence decisions before anyone realises it’s not genuine.

There’s also a risk inside hospital systems.

Another issue is what could happen if these images were introduced into hospital networks. If someone gained access to a system, they could potentially insert fake scans into patient records. That could lead to incorrect diagnoses or unnecessary treatments, especially if the images aren’t flagged as suspicious.

Trust in medical records could become a problem.

At the moment, medical imaging is generally trusted as a reliable source of information. Doctors rely on scans to guide decisions every day. If it becomes harder to tell what’s real and what isn’t, that trust could start to weaken. Even a small number of fake cases could have a wider impact on confidence in the system.

New safeguards are likely to become essential.

Researchers say stronger protections will be needed as this technology develops. One option is adding hidden digital watermarks to images so they can be verified later. Another idea is linking scans to secure digital signatures at the moment they’re created, making it easier to confirm where they came from and whether they’ve been altered.
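The signature idea can be sketched in a few lines: the imaging device signs each scan the moment it’s captured, and anyone receiving the scan later can verify that it hasn’t been altered. The key name and workflow below are hypothetical assumptions, not a description of any real hospital system:

```python
import hashlib
import hmac

# Hypothetical secret key held by the imaging device or hospital PACS.
SECRET_KEY = b"hospital-signing-key"

def sign_scan(image_bytes: bytes) -> str:
    """Create an HMAC-SHA256 signature when the scan is captured."""
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_scan(image_bytes: bytes, signature: str) -> bool:
    """Later, confirm the scan is unmodified and came from a trusted source."""
    expected = hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(expected, signature)

# An unaltered scan verifies; a tampered one does not.
scan = b"...raw pixel data..."
sig = sign_scan(scan)
print(verify_scan(scan, sig))          # True
print(verify_scan(scan + b"x", sig))   # False
```

A production system would more likely use public-key signatures so verifiers don’t need the secret, but the principle is identical: any change to the pixels invalidates the signature.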

Training doctors to spot fakes may become part of the job.

As the technology improves, spotting fake images may become a skill doctors need to learn, just like interpreting scans in the first place. That could include learning to recognise subtle patterns or using tools designed to flag suspicious images before they’re used in decision-making.

This could be just the beginning.

Right now, the focus is on X-rays, but the same technology could be used to create more complex scans like CT or MRI images. Those types of images carry even more detail, which could make the problem harder to manage if similar deepfake techniques are applied to them.

AI in healthcare isn’t the problem, but it does add new risks.

AI is already helping in areas like diagnosing disease and analysing scans more quickly. In many ways, it’s improving healthcare rather than harming it. Of course, like any powerful tool, it comes with risks if misused. This study shows how important it is to keep human oversight and safeguards in place as the technology moves forward.

The main takeaway is how quickly things are changing.

What feels like advanced technology today can quickly become normal. Deepfake X-rays might sound extreme, but they’re already realistic enough to cause concern. The bigger issue isn’t just that they exist, but how prepared systems are to deal with them as they become more common.