Every few months, it happens again.
A shiny LinkedIn post appears — full of hope, energy, and hashtags — announcing yet another European digital pathology + AI initiative. Slides! AI! Precision oncology! Federated data! The future is here! 🚀
The most recent trigger for this particular bout of reflection was this LinkedIn post by Nicola Fusco, announcing a new AI-driven digital pathology effort (you can find it here:
https://www.linkedin.com/posts/nicolafuscomd_digitalpathology-ai-precisiononcology-activity-7427280150892965889-lyRj/).
If you’ve been around European digital pathology long enough, your reaction is probably the same as mine:
“Looks great. Now… where will this be in five years?”
A short tour of the graveyard
Over the past decade, Europe has funded an impressive collection of digital pathology + AI projects:
- BigPicture
- ExaMode
- AIDPATH
- CLARIFAI
- PIE
- …
All of them were successful.
All of them delivered reports.
Most of them produced papers.
Some even produced software.
And yet, today, they are — functionally — dormant, frozen, archived, or politely referred to in the past tense.
This is not because the ideas were bad. Quite the opposite.
It’s because Europe is very good at paying to prove something is possible, and very bad at paying to keep it running once it is.
“Is this all… by design?”
At some point during today’s discussion, a dangerous thought emerged.
What if these projects are meant to fail?
What if they’re not really about building lasting platforms, but about probing boundaries — technical, legal, organisational — and then quietly shutting everything down once those boundaries are mapped?
A bit like that Alberta Tech YouTube short that made the rounds recently (this one:
https://www.youtube.com/shorts/wqbWsMLJ48s), which hints at systems learning their limits through failure.
Tempting theory. Very flattering, actually — it would imply long-term strategic thinking.
Unfortunately, reality is much less cinematic.
If these projects were designed to fail on purpose, we would see:
- explicit handover plans
- structural post-project funding
- clear operational ownership
- contractual transitions to long-lived infrastructures like ELIXIR or EMBL
We almost never do.
What we see instead is simpler (and sadder):
projects are evaluated on deliverables, not durability. PDFs matter more than uptime. Demos matter more than users. Nobody is paid to be there in year five — so nobody is.
This is not evil genius.
It’s institutional short-termism.
But isn’t ELIXIR / EMBL supposed to fix this?
Partially — but only at the wrong layer.
ELIXIR and EMBL are phenomenal at:
- data standards
- archives
- FAIR pipelines
- long-term stewardship of reference datasets
They are not chartered to:
- run WSI viewers
- maintain hospital-grade services
- integrate LIS/IMS systems
- deal with vendor formats
- keep clinical-ish platforms alive at 03:00 on a Sunday
Digital pathology AI lives in the messy middle:
too operational for research infra, too clinical for academia, too fragmented for easy vendor takeover.
So it falls through the cracks.
So… is the new project different?
Maybe. Slightly.
Industry involvement, single-site clinical embedding, and product intent (as seen in the LinkedIn post) do improve the odds. But unless someone answers the boring questions early (who pays after month 30? who runs this in year 6?), the new project will follow the same arc as all the others.
What we’re doing differently — and why it matters
At our own hospital, we’ve quietly decided to stop playing this game.
Instead of chasing large, shiny, time-boxed projects, we focus on:
- small, composable tooling
- real users, now
- integration first, papers second
- boring glue code between LIS, IMS, AI, and people
- software that must survive contact with daily pathology work
No illusion of a grand platform.
No assumption that “industry will pick it up”.
No pretending that operations magically fund themselves.
We build things that:
- can break,
- can be fixed,
- are still useful even if no EU logo ever touches them,
- and are REAL!
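To make "boring glue code" concrete: much of it is nothing more exotic than wrapping calls to an AI service so that a transient network hiccup doesn't silently drop a case. A minimal sketch, assuming a hypothetical inference endpoint (`analyze_slide` and every name below are illustrative, not a real API):

```python
import time

def with_retry(fn, attempts=3, delay=0.0):
    """Call fn(); on failure, retry up to `attempts` times with exponential backoff.

    Unglamorous, but this is exactly the kind of code that keeps a
    pathology workflow alive at 03:00 on a Sunday.
    """
    last_err = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as err:  # in production: catch specific, known errors
            last_err = err
            time.sleep(delay * (2 ** i))  # back off before the next attempt
    raise last_err

# Stand-in for a flaky AI endpoint: fails twice, then succeeds.
calls = {"n": 0}
def analyze_slide():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return {"slide_id": "S-001", "tumor_fraction": 0.42}

result = with_retry(analyze_slide, attempts=5)
print(result["slide_id"])  # the call survives two transient failures
```

Nothing here would make it into a deliverable report, and that is precisely the point: it is the layer flagship projects are never funded to write or maintain.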

Ironically, this actually maps boundaries — technical, organisational, political — far better than most flagship projects ever did.
A realistic conclusion (no sarcasm here)
European digital pathology + AI does not fail because it is naïve.
It fails because nobody is mandated — or funded — to care after the applause ends.
The solution is not fewer projects.
It’s fewer illusions.
Until funding mechanisms reward continuity, maintenance, and unglamorous operational work, we will keep building beautiful prototypes… and visiting them later as historical exhibits.
And yes — the next LinkedIn post will still look amazing.