Rhazes AI is not just another medical AI scribe: what category is it actually in?

There is a simple way most clinicians still think about clinical AI tools.

Some tools write notes.
Some tools answer questions.

That mental model was broadly accurate a year or two ago. It is increasingly incomplete now.

Rhazes AI is a useful example of why.

Publicly, it positions itself as more than a documentation assistant. It describes a broader system that includes clinical decision support, coding, auditing, EHR integration, and a knowledge layer — in other words, something closer to a unified clinician workspace than a single-purpose tool.

That matters because it reflects a deeper shift in the market.

Clinical AI no longer splits neatly into just two camps:

  • documentation tools (scribes)
  • evidence or search tools

A third category is emerging: the clinical workspace platform — a system that tries to sit across multiple adjacent tasks in a clinician’s workflow.

The interesting question is not whether Rhazes is “good”.

It is: what category does it actually belong to, and how should clinicians think about tools like it?

The old model: scribe vs search

To understand why this matters, it is worth starting with the older mental model.

For a while, clinical AI tools were relatively easy to classify.

Documentation tools (scribes)

These tools focused on:

  • note generation
  • transcription
  • summaries
  • letters
  • admin reduction

Their core value proposition was simple: reduce documentation burden.

Evidence tools

These tools focused on:

  • answering clinical questions
  • surfacing guideline-based information
  • supporting decision-making once the problem was defined

Their core value proposition was clarity and speed of information retrieval.

This split made sense because the jobs were clearly different:

  • writing vs knowing
  • documenting vs deciding

But clinical work itself is not so neatly divided.

That is where the newer category starts to appear.

What Rhazes is claiming to be

Rhazes AI’s positioning is notable because it deliberately stretches beyond a single job.

Instead of saying “we write your notes”, it frames itself around a broader set of functions:

  • documentation
  • clinical decision support (CDS)
  • coding
  • auditing
  • integration into existing systems
  • a knowledge layer

That combination signals something different.

It suggests a product trying to:

  • reduce admin
  • support decision-making
  • capture structured outputs
  • integrate into workflow
  • and sit closer to the centre of clinical activity

That is not just a scribe.

It is closer to a workspace layer.

The emerging category: clinical workspace platforms

This is the key category shift.

A clinical workspace platform is not defined by doing one job extremely well. It is defined by trying to sit across several adjacent jobs inside the same interface.

Typically, that includes some combination of:

  • documentation
  • clinical reasoning support
  • coding and billing logic
  • audit trails
  • workflow integration
  • task management or structured outputs
  • embedded knowledge or guidance

The ambition is clear: reduce fragmentation.

Instead of:

  • one tab for notes
  • one tab for evidence
  • one tab for coding
  • one tab for communication

the platform attempts to collapse multiple layers into one environment.

That is a very different design philosophy from single-purpose tools.

Why this category is emerging now

There are several structural reasons why this shift is happening.

1) Workflow friction is the real bottleneck

Clinicians do not only struggle with knowledge gaps. They struggle with:

  • switching between tools
  • repeating work
  • rephrasing information
  • duplicating documentation
  • translating thinking into notes

A platform that reduces switching cost can be more valuable than one that improves any single step marginally.

2) AI can now operate across adjacent tasks

Earlier tools were narrower because they had to be.

Now, AI systems can:

  • generate text
  • structure information
  • suggest possibilities
  • summarise inputs
  • assist with coding logic
  • produce outputs for different audiences

That makes multi-function products more feasible.

3) Integration is becoming a strategic battleground

Once tools move closer to EHRs and workflow systems, the value shifts from “what does the answer look like?” to:

  • where does this sit in the workflow?
  • how often is it used?
  • how much friction does it remove?

This favours platforms over isolated tools.

Why workspace platforms are attractive

From a clinician perspective, the appeal is obvious.

A well-designed workspace platform could:

  • reduce tab switching
  • reduce duplicated effort
  • keep context in one place
  • streamline documentation and thinking
  • improve consistency of outputs

In theory, this is one of the most compelling directions in clinical AI.

It aligns with how clinicians actually work: not in isolated tasks, but in overlapping responsibilities.

Why workspace platforms are also difficult

The ambition is high. The execution is difficult.

Trying to do multiple jobs introduces trade-offs.

1) Depth vs breadth

A platform that does many things may struggle to match the depth of a specialised tool in each area.

2) Trust and provenance

When multiple functions are bundled together, it becomes harder for users to understand:

  • where the information comes from
  • how outputs are generated
  • what should be trusted and what should be verified

3) Cognitive opacity

If documentation, reasoning, and suggestion layers blend together too tightly, it may become less clear:

  • what the clinician thought
  • what the system suggested
  • what was accepted or modified

That has implications for safety and accountability.

4) Integration complexity

EHR integration is not trivial. It introduces:

  • technical constraints
  • governance requirements
  • procurement barriers
  • institutional decision-making layers

This means the platform strategy is powerful, but harder to execute than a standalone-tool strategy.

Where workspace platforms sit relative to other categories

To make this practical, it helps to separate the main product shapes clearly.

Documentation tools

Primary job: reduce admin
Strength: efficiency
Limitation: limited reasoning or knowledge depth

Evidence tools

Primary job: answer clinical questions
Strength: clarity and sourcing
Limitation: less workflow integration

Differential-diagnosis tools

Primary job: broaden hypotheses
Strength: early uncertainty support
Limitation: not designed for final decision or management

Knowledge / learning platforms

Primary job: reinforce understanding and reasoning
Strength: educational depth and structure
Limitation: not designed to run workflow or admin

Workspace platforms (e.g. Rhazes)

Primary job: integrate multiple adjacent tasks
Strength: workflow consolidation
Limitation: potential trade-offs in depth, transparency, and governance

This is why comparing everything as “AI for doctors” often creates confusion. These tools are solving different problems.

Where iatroX fits in this landscape

This is where positioning matters.

iatroX should not be framed as a workspace platform. It is not trying to replace documentation systems, coding engines, or EHR layers.

A clearer and more defensible position is this:

iatroX is a knowledge and clinician-education layer that sits alongside workflow tools, not instead of them.

That matters because:

  • workflow tools optimise execution
  • knowledge layers optimise understanding and reasoning

iatroX fits best where clinicians need:

  • structured clinical understanding
  • reinforcement of reasoning
  • movement between question-bank logic and real cases
  • guidance-aware explanation
  • learning that remains useful during clinical work

In other words, if a workspace platform helps you do, iatroX helps you think better while doing.

That is a complementary position, not a competing one.

Why this category distinction matters

If clinicians misunderstand categories, they often feel disappointed by tools that are actually behaving exactly as designed.

Examples:

  • expecting deep guideline nuance from a DDx tool
  • expecting full workflow support from an evidence engine
  • expecting perfect reasoning from a documentation assistant
  • expecting a single platform to replace every layer of clinical work

Clear category thinking leads to better tool selection.

A workspace platform is not trying to be the best evidence engine.
An evidence engine is not trying to manage your workflow.
A learning platform is not trying to write your notes.

The most effective setups usually involve complementary layers, not a single universal tool.

The direction of travel

Rhazes is interesting not only as a product, but as a signal.

It suggests that the market is moving towards:

  • more integrated systems
  • more workflow-aware tools
  • less tolerance for fragmentation
  • greater emphasis on placement inside clinical work

At the same time, the need for:

  • clear provenance
  • reasoning transparency
  • educational reinforcement
  • domain-specific depth

is not disappearing.

If anything, it becomes more important as systems become more integrated.

Conclusion

Rhazes AI is not best understood as “another scribe”.

It is better understood as part of a newer category: the clinical workspace platform.

That category is emerging because clinicians do not work in isolated tasks, and because AI can now operate across multiple adjacent layers of work.

But integration comes with trade-offs:

  • depth vs breadth
  • clarity vs convenience
  • transparency vs consolidation

That is why the future of clinical AI is unlikely to be a single winning product.

It is more likely to be a set of layered systems:

  • workflow platforms
  • evidence tools
  • diagnostic support tools
  • and knowledge / education layers

The key is not to ask: “Which tool replaces everything?”

It is to ask: “Which combination of tools reduces friction while preserving clarity, safety, and understanding?”
