
Dec 6, 2025
AI
Podcast
What We’re Missing in Radiology’s AI Moment
There are people who comment on AI in radiology, and then there are people who have shaped the conditions that allow the rest of us to comment at all. Dr. Woojin Kim falls in the latter category. His career spans clinical radiology, startup innovation, and national-level strategy work through the ACR. When he talks about where imaging is going, he is not speculating. He is describing what he is already seeing.
Our conversation revealed something I did not expect. The story of AI in imaging is not about algorithms, at least not primarily. It is about what humans stop noticing when the tools get good enough.
The relentless pace of learning and the systems required to keep up
Dr. Kim reads four to six hours of AI research every day. That number sounds theatrical until he explains the machinery behind it. His workflow is designed to collapse the reading burden into something cognitively survivable. NotebookLM turns papers into audio digests he listens to in airports. Readwise Reader becomes a single clearinghouse for newsletters, papers, and blog posts. Recall gives him the summary of a 30-minute lecture without demanding the full 30 minutes.
The point is not productivity. The point is maintaining literacy in a field that now updates itself on a weekly cycle. He estimates skimming roughly 200 pieces of content per day. Not because he wants to. Because falling behind is no longer a single event. It is a slope.
For radiology leaders trying to orient themselves, this is a useful reminder. AI literacy is not a one-time course. It is a continuing obligation.
The problem no one sees until it is too late
Of everything we discussed, the most consequential idea was what Woojin called the silent degradation problem.
Imagine you properly evaluate an FDA-cleared model. You deploy it responsibly. It works well. Then, quietly, conditions shift. The CT scanner is upgraded. Patient demographics change. The protocol drifts. Nothing dramatic happens, except the model becomes slightly worse every week.
The danger is not the degradation itself. It is that no one notices.
Radiologists trust systems that performed beautifully at the start. There are no warnings. No error messages. Nothing that signals the slow slide into unreliability. It is the clinical equivalent of a smoke detector that never chirps.
His argument is clear. This field will not survive without continuous monitoring. Periodic check-ins and retrospective audits are not enough. Continuous, because model decay does not wait for quarterly review meetings.
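To make the stakes concrete, here is a minimal sketch of what continuous monitoring might look like in code. This is not from the conversation and not a clinical standard; the metric (a rolling mean of model confidence scores), the window size, and the alert threshold are all illustrative assumptions. The point is only that drift detection has to run on every prediction, not on a quarterly calendar.

```python
# Minimal sketch of continuous monitoring for silent model degradation.
# The metric, window size, and threshold below are illustrative
# assumptions, not a validated clinical standard.
from collections import deque
from statistics import mean

class DriftMonitor:
    """Compares a rolling window of recent model confidence scores
    against a frozen baseline captured at deployment time."""

    def __init__(self, baseline_scores, window_size=500, threshold=0.10):
        self.baseline_mean = mean(baseline_scores)
        self.window = deque(maxlen=window_size)
        # Alert if the rolling mean shifts by more than 0.10 (assumed).
        self.threshold = threshold

    def record(self, score):
        """Record one prediction score; return True once drift is detected."""
        self.window.append(score)
        if len(self.window) < self.window.maxlen:
            return False  # not enough recent data to compare yet
        recent_mean = mean(self.window)
        return abs(recent_mean - self.baseline_mean) > self.threshold

if __name__ == "__main__":
    # Hypothetical stream: confidence slides slowly downward, the way it
    # might after a scanner upgrade or protocol drift.
    baseline = [0.82, 0.79, 0.85, 0.81] * 200
    drifting_stream = [0.80 - i * 0.0005 for i in range(1000)]

    monitor = DriftMonitor(baseline_scores=baseline)
    for i, score in enumerate(drifting_stream):
        if monitor.record(score):
            print(f"possible silent degradation detected at prediction {i}")
            break
```

A real deployment would track calibrated performance against ground truth where available and input-distribution statistics where it is not, but the shape of the loop is the same: every prediction feeds the monitor, and the monitor, not a human, raises the flag.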
Upskilling, deskilling, and the quiet reshaping of human cognition
We talked about AI as a crutch, and whether it dulls the cognitive muscles we rely on in clinical work. Woojin’s answer is nuanced but pointed.
AI can absolutely deskill clinicians. The evidence is already here. He referenced a study in which gastroenterologists became worse at detecting polyps after AI cues were removed. The problem was not incompetence. It was over-reliance.
Yet, when designed correctly, AI can strengthen cognition by taking on the repetitive load and freeing clinicians for higher-order reasoning. The distinction is in how the tool is positioned. Assistant or authority. Scaffold or substitute.
This tension is not theoretical. It is already shaping training pathways for early-career radiologists who will never know a world without AI overlays, prompts, and structured templates.
Context engineering and why radiology cannot escape it
Woojin made an argument that deserves more attention. Prompt engineering is not enough. Not even close. The future is context engineering, which asks: What does the model know about the environment in which it is acting?
For radiology, this means:
Prior studies
Clinical notes
Department workflows
The radiologist’s reading patterns
The protocols of that specific institution
A model looking only at pixels is a model pretending to be useful.
He pointed to work showcased at the SIIM Hackathon, where AI agents leveraged the Model Context Protocol (MCP) to deliver results that were meaningfully closer to how radiologists actually think. Context is not ornamentation. It is the difference between an algorithm that guesses and one that assists.
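As a rough illustration of the difference between pixels-only and context-aware systems, here is a hypothetical sketch of context assembly. It does not reflect the SIIM Hackathon code or any specific product; every field name and data source is invented to show the shape of the idea.

```python
# Hypothetical sketch of "context engineering" for a radiology model.
# Field names and sources are invented for illustration; a real system
# would pull these from PACS, the EMR, and institutional configuration.
from dataclasses import dataclass, field

@dataclass
class StudyContext:
    """Everything the model should know beyond the pixels."""
    prior_reports: list = field(default_factory=list)       # prior studies
    clinical_notes: str = ""                                 # referring context
    protocol: str = ""                                       # institution-specific rules
    reader_preferences: dict = field(default_factory=dict)   # e.g., template style

def build_request(image_findings: str, ctx: StudyContext) -> str:
    """Fold the surrounding context into the request instead of
    sending image findings alone."""
    priors = "\n".join(ctx.prior_reports) or "No priors available."
    return (
        f"Institutional protocol: {ctx.protocol}\n"
        f"Clinical notes: {ctx.clinical_notes}\n"
        f"Prior reports:\n{priors}\n"
        f"Reader preferences: {ctx.reader_preferences}\n"
        f"Current findings: {image_findings}\n"
        "Draft an impression consistent with the priors and protocol."
    )

# The same finding reads very differently with and without this context.
ctx = StudyContext(
    prior_reports=["2023: stable 4 mm right upper lobe nodule."],
    clinical_notes="Former smoker, annual low-dose CT screening.",
    protocol="Lung-RADS reporting required.",
    reader_preferences={"style": "structured"},
)
print(build_request("4 mm right upper lobe nodule, unchanged.", ctx))
```

The design point is that the model call is the last step; most of the engineering effort goes into deciding which context sources matter and how to serialize them.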
Bias, synthetic data, and the information hiding in the images we do not understand
One of the most sobering parts of the discussion was the question of unseen signals.
We talked about the well-known study by Dr. Judy Gichoya’s team showing that AI models can identify patient race from X-rays with startling accuracy, even when radiologists cannot. Woojin uses this as a cautionary tale when considering synthetic data generation.
If a model can detect signals we cannot perceive, then artificially generated images may contain biases we do not even know to look for. He cited work from Mayo Clinic showing that synthetic pelvic X-rays produced by a model often displayed worse osteoarthritis for images associated with Black patients, not because of biology, but because of downstream socioeconomic patterns embedded in the training data.
This is not bias in the cartoonish sense of "bad training sets." It is bias emerging from deep statistical structures that clinicians cannot intuit. Once those signals are incorporated into synthetic datasets, the next generation of models will inherit them without question.
We are building on foundations we cannot fully see.
AI agents, invented languages, and the risk of losing interpretability
Woojin does not engage in sci-fi theatrics. But he is realistic about the risks of agentic AI. As systems begin coordinating with each other, efficiency pressures may push them to develop intermediate communication formats that optimize for machine speed rather than human interpretability.
Humans already represent information poorly. Our language is a lossy compression scheme. If AI decides to compress further, the reasoning pathways may become inaccessible.
This is why he believes some level of chain-of-thought transparency must be preserved, even if current explanations are more storytelling than genuine introspection.
The alternative is opacity at precisely the moment these systems gain the autonomy to make decisions.
What is coming next in radiology (not a prediction, but a trajectory)
Woojin avoids predictions, but he watches trends. And the trends are unmistakable:
Large multimodal models capable of generating draft radiology reports
Agentic AI that handles multi-step workflows instead of single-step tasks
Context-aware systems that integrate EMR data, priors, and radiologist preferences
A shift from narrow tools to broader AI ecosystems that operate across the workflow
He referenced early results showing meaningful productivity gains from automated draft report generation. The research is moving fast, and commercialization will not lag far behind.
For radiologists wondering how to prepare, his advice is direct:
Learn how these tools work.
Experiment with them.
Develop critical thinking habits strong enough to resist automation bias.
AI will not replace radiologists. But radiologists who do not understand AI may find themselves practicing in a field that no longer resembles the one they trained for.
Closing reflection
What struck me in this conversation was not the technical complexity, though there was plenty of that. It was the recurring theme that radiology’s future depends on our ability to notice what is changing before it changes us entirely.
Models degrade quietly. Bias hides in places we cannot see. Tools reshape cognition long before anyone measures the effect. Context becomes the ingredient that determines whether AI helps or harms.
In other words, AI is not forcing radiology to think faster. It is forcing radiology to think deeper.
And for all the speed in the field, that might be the real work ahead.