Rhazes AI vs Heidi, OpenEvidence, and iatroX: three different answers to the clinician AI problem

Most comparison articles make the same basic mistake.

They treat clinical AI products as though they are all competing on one flat scoreboard: which one is smartest, which one is fastest, which one is “best for doctors”.

That is no longer a very useful way to look at the market.

Rhazes AI, Heidi, OpenEvidence, and iatroX do not really represent four versions of the same answer. They represent four different bets about where clinician AI becomes habitual. One is trying to become the workspace. One is trying to sit inside the documentation-and-follow-up workflow. One is trying to become the evidence engine. One is trying to become a provenance-first knowledge and education layer.

That is a much more useful frame than a flat leaderboard.

The practical question is not "Which one wins?"

It is: which part of the clinician workflow is each product trying to own, and which of those bets is most relevant to the user in front of you?

The market is no longer just “scribe vs evidence search”

A year or two ago, a simpler framework more or less worked.

Some tools were scribes.
Some tools were evidence tools.
Some were revision tools.
Some were diagnostic curiosity tools.

That framework is now too narrow.

The market is fragmenting and converging at the same time. Fragmenting, because products are differentiating around different jobs. Converging, because many tools now reach slightly into adjacent jobs as well. The result is that clinical AI is increasingly being shaped by workflow centres of gravity rather than by single-feature labels.

That is what makes these four products such a useful comparison set.

They map four distinct strategic centres of gravity:

  • Rhazes as the unified workspace bet
  • Heidi as the documentation-to-workflow bet
  • OpenEvidence as the evidence-engine bet
  • iatroX as the provenance-first knowledge-and-education bet

That framing is much more useful than asking which one has the best demo.

Rhazes: the unified workspace bet

The clearest way to understand Rhazes is not as “another scribe”.

The stronger interpretation is that it is making a workspace-platform bet.

A workspace product is trying to do more than one job well enough that the clinician stops leaving the environment. The promise is not only better note creation or faster answers. It is reduced fragmentation. Fewer tabs. Less copying and pasting. Fewer separate systems for documentation, clinical support, coding logic, audit tasks, and knowledge access.

That is a very ambitious category move.

If that bet works, the product becomes sticky not because it is the single best tool at one isolated task, but because it becomes the place where multiple adjacent tasks happen with lower friction. The competitive moat is not only model quality. It is workflow centrality.

That makes Rhazes strategically interesting for buyers who care about:

  • workflow consolidation
  • organisation-level deployment
  • EHR-adjacent use
  • documentation plus admin plus support tasks in one place
  • reducing tool sprawl

But that same ambition also creates the usual workspace-platform risk: breadth can dilute depth. The wider the product tries to stretch, the more important it becomes to ask which parts are genuinely strong and which parts are “good enough because they are already there”.

Heidi: the documentation-to-workflow bet

Heidi sits in a different place.

The simplest description of Heidi used to be: documentation AI. That is now too narrow. The better description is that Heidi is making a documentation-to-workflow adjacency bet.

That means the centre of gravity still begins with documentation burden, but the product expands outward from there into the tasks that surround notes:

  • follow-up actions
  • communication
  • evidence support
  • workflow assistance across the clinical day

This is a strong strategy because it starts from a pain-point that is obvious, painful, and frequent. Documentation drag is one of the easiest clinical AI problems to understand. Once a product earns trust there, it becomes easier to move into adjacent supportive functions.

That makes Heidi especially relevant for clinicians whose primary pain-point is:

  • after-hours charting
  • note burden
  • summary creation
  • communication overhead
  • wanting support across the consult workflow without adopting a full workspace platform

Compared with Rhazes, the difference is subtle but important.

Rhazes feels like it wants to be the operating environment.
Heidi feels like it wants to be the care partner that expands outward from documentation.

That is not a small distinction. It changes both the sales motion and the day-to-day user expectation.

OpenEvidence: the evidence-engine bet

OpenEvidence is a different product shape again.

Its centre of gravity is not the note, and not the broader workflow environment. It is the evidence question.

This is the clearest example in the group of a product making the evidence-engine bet: if a clinician wants to ask a medical question in natural language and receive a rapid, evidence-oriented answer, can the product become the first place they go?

That is a powerful category if the product becomes routine, because first-click behaviour compounds. Once a doctor reaches first for an evidence engine when uncertain, that tool becomes more than a search box. It becomes a habit layer inside clinical reasoning.

This is why OpenEvidence is best understood as answering the question: "I broadly know what I am dealing with, but I want rapid evidence-oriented clarification."

That is not the same problem as:

  • write my note
  • manage my admin
  • unify my clinical workflow
  • reinforce my learning systematically

Compared with Rhazes and Heidi, OpenEvidence is narrower in one sense and stronger in another. It is narrower because it is not trying to be the workspace. It is stronger because its centre of gravity is cleaner.

That clarity matters. Products often become sticky when they own one repeated cognitive moment very well.

For readers who want the UK-relevance angle on that product shape, the natural internal route is the piece on OpenEvidence for UK doctors, IMGs, and non-US clinicians.

iatroX: the provenance-first knowledge and education bet

iatroX belongs in this comparison, but not by pretending to be an all-purpose rival to every product above.

The stronger and more believable framing is different.

iatroX makes a provenance-first knowledge-and-education bet.

That means its value is strongest where the clinician wants:

  • structured understanding rather than only output
  • knowledge reinforcement rather than only search
  • movement between question-bank logic and clinical reasoning logic
  • a clinically useful educational layer rather than a pure admin layer
  • a guidance-aware explanation environment rather than a generic answer engine

That makes iatroX particularly relevant for:

  • clinicians who still want learning to remain part of workflow
  • trainees and junior doctors
  • IMGs
  • doctors who want clarification plus reinforcement, not only summarisation
  • users who care about source-awareness and practical knowledge structure

This is also why iatroX should not be forced into a “workspace platform” frame. That would weaken the positioning rather than strengthen it. If a workspace tool helps clinicians do, iatroX fits best as the layer that helps them understand, reinforce, and reason more clearly while doing.

That is a narrower but more defensible role.

These are not substitutes in a flat leaderboard sense

This is the key commercial point of the entire article.

These products are not best understood as direct substitutes in a single ranking table.

They are competing for different habitual behaviours.

Rhazes is competing to become the clinician’s main AI workspace

Its habit goal is: stay inside this environment because several jobs can happen here.

Heidi is competing to become the documentation-led workflow companion

Its habit goal is: start here because this is where the admin drag gets reduced, and let the surrounding support features grow from that.

OpenEvidence is competing to become the first-click evidence engine

Its habit goal is: when you are clinically uncertain and want evidence-oriented clarification, open this first.

iatroX is competing to become the explanation-and-reinforcement layer

Its habit goal is: when you want understanding, knowledge structure, and practical reasoning support, use this to turn uncertainty into clearer thinking.

Once you see the products that way, the comparison becomes much more useful.

The clinician deciding between them is not really asking: “Which AI is the best overall?”

They are asking: “Where is my repeated friction, and which product’s centre of gravity actually matches it?”

What kind of user fits each bet best?

A practical way to read the market is by persona.

The clinician overwhelmed by fragmented workflow

This user is the most natural fit for the workspace bet. They may care most about reducing tool sprawl, keeping context together, and having one environment that can carry several adjacent tasks.

That is the Rhazes-type use case.

The clinician whose main pain-point is documentation drag

This user may not want a full workspace platform. They may simply want something that removes note burden well and grows into adjacent support around that workflow.

That is the Heidi-type use case.

The clinician whose main pain-point is evidence retrieval

This user is less interested in workflow orchestration and more interested in rapid, evidence-oriented clarification.

That is the OpenEvidence-type use case.

The clinician who wants understanding, not only output

This user still cares deeply about clinical reasoning, structured knowledge, and learning that remains relevant to real work. They may be a trainee, IMG, junior doctor, or simply a clinician who values reinforcement rather than only answer generation.

That is where iatroX fits most naturally.

Why this framing matters commercially

This article is commercially useful precisely because it avoids the lazy “winner-takes-all” comparison.

If you flatten the market into a single leaderboard, you make every product look weaker than it is, because each gets judged against jobs it is not really trying to own.

But if you map products by habit centre, something much more useful emerges:

  • you can understand why multiple products can win at once
  • you can see where the true overlaps are
  • you can identify which adjacent compares are worth splitting into dedicated pages later
  • you can position iatroX more intelligently without overclaiming

This is also the right bridge into compare expansion.

A blog article can do the category work first:

  • clarify the product shapes
  • explain why these are not flat substitutes
  • teach the reader how to compare by workflow rather than hype

Then later you can split that into more specific compare assets:

  • Rhazes vs Heidi
  • Rhazes vs OpenEvidence
  • Rhazes vs iatroX
  • Heidi vs iatroX
  • OpenEvidence vs iatroX

That sequence is strategically stronger than going straight into fragmented compare pages with no category education.

The hidden competition: who becomes habitual first?

The deeper competitive question is not merely “which product is better”.

It is: where does clinician AI become habitual first?

That question sits underneath every product in this comparison.

  • Does habit form around the workspace?
  • Around the note workflow?
  • Around the evidence question?
  • Around the knowledge-and-learning layer?

That is the real market battle.

Because once a product becomes habitual, monetisation options widen:

  • enterprise rollout
  • team features
  • deeper integrations
  • governance and audit layers
  • premium evidence
  • workflow expansions
  • educational upsells
  • role-specific features

In other words, the first durable habit often matters more than the broadest feature list.

That is why these products are not simply competing on capability. They are competing on where routine behaviour starts.

Where iatroX wins by fit, not by overclaim

This is the section that matters most for your own positioning.

iatroX does not need to win by pretending it is the best workspace, the best scribe, and the best evidence engine all at once.

It wins more credibly by fit.

The stronger framing is:

  • not the workspace platform
  • not the documentation-first care partner
  • not the pure evidence engine
  • but the provenance-first clinician knowledge and education layer

That gives iatroX a more defensible role in a market where many products are trying to sprawl.

It also makes the site architecture stronger:

  • blog content defines categories
  • compare pages sharpen edges
  • Academy deepens the learning layer
  • the Q&A library demonstrates the practical knowledge surface
  • “How it works” explains the product philosophy cleanly

That is a better strategic structure than trying to collapse the entire market into one universal claim.

Conclusion

Rhazes AI, Heidi, OpenEvidence, and iatroX are not really four versions of the same product.

They are four different answers to the clinician AI problem.

Rhazes represents the unified workspace bet.
Heidi represents the documentation-to-workflow bet.
OpenEvidence represents the evidence-engine bet.
iatroX represents the provenance-first knowledge-and-education bet.

That is the most useful way to compare them.

Not as a flat leaderboard.
Not as a “best overall” contest.
But as different strategic bets about where clinician AI becomes habitual.

The right choice depends less on which product sounds smartest, and more on which repeated friction you are actually trying to remove.

If the market keeps moving the way it is moving now, the winners will not simply be the tools with the flashiest demos.

They will be the ones that become routine in the right part of the clinician workflow first.
