Medome, Doctronic, Ada, Buoy, K Health, Akido Scope & OpenEvidence: the rise of “AI doctors” and AI second opinions


The phrase “AI doctor” is now being used to describe several very different things.

Some tools are really symptom checkers. Others are patient-facing second-opinion companions. Others are clinician-facing evidence engines. And a newer category is pushing into AI-assisted care delivery, where AI handles intake, questioning, documentation, and preliminary reasoning before a human clinician reviews or acts.

That distinction matters.

If you put all of these under one headline, you risk comparing tools that were never designed to do the same job. If you separate them properly, the landscape becomes much clearer—and much more useful.

This article reviews a set of widely discussed tools and categories:

  • Medome
  • Doctronic
  • Ada Health
  • Buoy Health
  • K Health
  • Akido Scope AI
  • OpenEvidence

I’ll focus on what each is actually trying to do, where the value is real, where the risks concentrate, and how clinicians (and informed patients) should think about these tools in practice.


First: what kind of “AI doctor” are we talking about?

The fastest way to make sense of this market is to split it into five buckets.

1) Patient-facing “organise and orient” tools

These tools help people:

  • organise records and test results
  • track a health timeline
  • ask better questions at appointments
  • surface possible missed angles for discussion

This is the lane where Medome often appears to sit: less “replace your doctor,” more “help you prepare, track, and challenge assumptions more intelligently.”

2) Symptom checkers and care navigation

These tools collect symptoms through a structured interview and provide:

  • possible causes
  • urgency guidance
  • next-step suggestions (self-care vs clinic vs urgent care)

This is the classic territory of Ada Health, Buoy, and K Health’s symptom checker (although K Health also extends beyond this into care delivery).

3) AI-first care workflows with human clinicians in the loop

These products push beyond “information” and into workflow execution:

  • intake
  • follow-up questioning
  • note generation
  • care-plan drafts
  • prescriptions / renewals (in limited or supervised settings)

This is the category where Doctronic and Akido Scope AI become particularly interesting—and controversial.

4) Clinician-facing evidence synthesis (“second brain” tools)

These tools are not patient “AI doctors.” They are for clinicians who need rapid, cited answers at the point of care.

This is the lane for OpenEvidence.

5) Diagnostic AI in constrained tasks (imaging, pathology, etc.)

This article is not focused on imaging AI, but it is worth remembering that some of the strongest AI performance claims in medicine come from constrained modalities (e.g. radiology), not general-purpose chatbot conversations.


Why this category is exploding now

Three shifts are happening at once:

  1. Patients are increasingly comfortable asking AI first (or at least asking AI before their appointment).
  2. Clinicians are increasingly using AI behind the scenes for notes, drafting, or second-pass reasoning support.
  3. Search behaviour is changing: people are getting direct AI summaries instead of clicking through to carefully sourced pages.

That creates a real opportunity—but also a safety problem.

The best tools reduce friction without hiding uncertainty. The worst tools feel fast, fluent, and reassuring while quietly weakening provenance and escalation.


Tool-by-tool review

Medome

What it appears to be optimising

Medome presents itself as a consumer-facing platform that helps users keep their health information in one place and use AI analysis to complement medical care. It also references physician consultations via a partner.

What is genuinely useful here

This category can be very powerful when used well:

  • timeline building (what happened, when, and what changed)
  • result organisation (labs, scans, medications, symptoms)
  • appointment preparation (clear, focused questions)
  • rechecking assumptions over time when symptoms persist

In practice, many patients are poor historians only because healthcare systems are fragmented. A tool that improves information quality before a consultation can genuinely improve clinical decision-making.

My view (balanced)

Medome’s strongest use case is not “AI diagnosis replacing a doctor.” It is reducing informational chaos and helping people become better participants in their care.

That said, this category has a predictable risk: users may treat “informational” outputs as medical decisions, especially when the interface feels authoritative.


Doctronic

Why Doctronic matters more than a typical symptom checker

Doctronic is one of the clearest examples of an “AI doctor” brand identity. It markets itself directly as an AI doctor and combines AI interactions with access to licensed clinicians.

What makes it especially important is not just the branding—it is the fact that Doctronic has been tied to a high-profile Utah pilot around AI-assisted prescription renewals.

Why this is a big deal

Once AI starts participating in something that looks like a care act (for example prescription renewals, even within a constrained list and under state oversight), the conversation changes from:

  • “Is this a helpful tool?”

to:

  • “What is the regulatory category?”
  • “Who is liable?”
  • “What is the audit trail?”
  • “What failure modes are being monitored?”

My view (balanced)

Doctronic is strategically important because it forces the real policy question into the open:

Where should the boundary sit between AI-supported workflow and clinician decision-making?

I can see the access argument very clearly (especially for repeat renewals, rural care, and admin-heavy systems). I can also see why physician bodies are uneasy. Both reactions are rational.

This is a live governance experiment as much as a product.


Ada Health

What Ada is best known for

Ada is one of the best-known AI symptom assessment / care navigation platforms. It represents the more mature, enterprise-capable end of the symptom checker category, with a stronger quality and regulatory narrative than many consumer-first entrants.

What I think Ada gets right conceptually

The value of a tool like Ada is not that it “diagnoses better than a doctor” in the general sense.

The value is usually in:

  • structured questioning
  • consistent triage logic
  • scalable front-door navigation
  • clean handover into care pathways

That can be highly useful for digital front doors in health systems and insurers, especially when the tool is integrated into a wider care pathway rather than used as a standalone answer engine.

My view (balanced)

Ada is a serious benchmark in the symptom assessment category because it has spent years building around safety, enterprise deployment, and clinical handover—rather than just “chatbot wow factor”.

That does not mean symptom checkers are solved. Edge cases, atypical presentations, and user misunderstanding remain core limitations across the category.


Buoy Health

What Buoy represents

Buoy is another important symptom-checking / triage player and is useful to include because it illustrates the conversational symptom checker model clearly.

Its practical promise is familiar:

  • ask questions dynamically
  • narrow possibilities
  • suggest what kind of care to seek

Why Buoy is worth taking seriously in comparisons

Buoy is not just a marketing story—it has been discussed in peer-reviewed literature in the context of real-world health information seeking behaviour. That matters because it shifts the conversation from “Does the tool sound clever?” to “How do people actually use it, and what happens next?”

My view (balanced)

Buoy highlights an important truth about the whole sector:

Human factors matter almost as much as model performance.

If users misread confidence, ignore uncertainty, or use a symptom checker to delay urgent care, harm can occur even when the underlying algorithm is "reasonable".


K Health

What makes K Health a different beast

K Health is often mentioned as a symptom checker, but it is better understood as a hybrid model:

  • AI symptom checking / intake
  • data-driven questioning
  • pathway into clinician care (including affiliated clinicians)

This is an important strategic model because it can use AI to reduce friction without pretending that AI alone should handle all high-stakes decisions.

Where the real value sits

The biggest operational value is often not “AI diagnosis” in isolation. It is:

  • faster intake
  • better structured history capture
  • better routing
  • reduced admin burden
  • improved continuity into human care

My view (balanced)

Hybrid models like K Health may prove more durable than pure AI-advice tools because they align better with how healthcare actually works: triage, uncertainty, escalation, handover, follow-up.

The main challenge is maintaining quality as scale increases—especially when incentives push toward speed and conversion.


Akido Scope AI

Why Akido Scope AI is strategically important

Akido’s Scope AI is interesting because it sits inside a broader care delivery vision rather than being “just” a chatbot.

Akido frames Scope AI around:

  • AI-assisted interviews/intake
  • clinician feedback loops (including real-time doctor feedback / RLHF claims)
  • safety and quality review layers
  • grounding against documented clinical guidance

What this category could unlock

If tools like Scope AI work as intended, they could significantly expand capacity by improving how unstructured patient stories are turned into:

  • focused summaries
  • relevant follow-up questions
  • cleaner documentation
  • draft plans for clinician review

That is potentially a very large gain—especially in underserved settings and capacity-constrained care models.

My view (balanced)

This category may end up being more transformative than flashy consumer “AI doctor” chatbots, because it targets workflow bottlenecks inside care delivery.

But the evaluation standard must be high:

  • What is automated?
  • What is only suggested?
  • What requires human sign-off?
  • How is error monitoring done?
  • How are edge-case harms detected?

Without clear answers, “AI-assisted care” can become a vague label rather than a trustworthy operational model.


OpenEvidence

Important distinction: this is not a patient AI doctor

OpenEvidence belongs in this discussion because it is part of the broader “AI second opinion” trend—but it is a clinician-facing evidence and decision-support tool, not a consumer symptom checker.

Its core value proposition is speed + citations:

  • clinicians ask a question in natural language
  • the system returns a structured answer
  • evidence is surfaced in a way intended for point-of-care use

Why this matters clinically

The real-world pain point is obvious to any clinician:

  • guidelines change
  • patients ask specific questions
  • reading and appraising papers takes time
  • clinic does not stop while you search

A tool that improves time-to-evidence orientation without removing the clinician from the loop can be genuinely high-value.

My view (balanced)

OpenEvidence is best understood as a second brain rather than a second opinion in the patient sense.

Its promise is compelling—but clinician-facing AI tools still need scrutiny for:

  • evidence selection bias
  • hallucinated inferences
  • overconfident summarisation
  • inappropriate extrapolation to the wrong patient context

Citations help, but citations alone do not eliminate reasoning error.


Comparison: what each tool is actually optimised for

| Tool | Primary user | Core job | Strength | Main risk / limitation |
| --- | --- | --- | --- | --- |
| Medome | Patient | Organise records + risk orientation | Better appointment prep; longitudinal view | Over-trust of "informational" outputs; privacy assumptions |
| Doctronic | Patient (+ clinician backup) | AI consultation + care access; emerging operational use cases | Access, speed, convenience; pushes the frontier | Regulatory, liability, and safety governance questions |
| Ada Health | Patient / enterprise | Symptom assessment + care navigation | Mature symptom-checker architecture; enterprise deployment | Symptom-checker limits remain (edge cases, user input quality) |
| Buoy Health | Patient | Conversational symptom checking / triage | Accessible, dynamic questioning | Human factors: interpretation, reassurance, delay risk |
| K Health | Patient | AI intake + pathway into clinician care | Strong hybrid model logic | Scale pressure can challenge quality and scope boundaries |
| Akido Scope AI | Clinicians / care systems | AI-assisted intake + workflow support | Capacity expansion via workflow optimisation | Requires clear oversight and auditability |
| OpenEvidence | Clinician | Cited evidence synthesis / decision support | Rapid evidence orientation at point of care | Overconfidence, summarisation errors, context mismatch |

What clinicians and founders should notice (the strategic bit)

The next moat in health AI is unlikely to be “the smartest model” alone.

It is more likely to be a combination of:

  • workflow fit (does it save time where time is actually lost?)
  • provenance (can users trace the claim?)
  • escalation design (does it safely hand off when uncertain?)
  • governance (audit, monitoring, accountability)
  • interface honesty (does the UI communicate uncertainty or hide it?)

In other words:

The winning products may look less like magic, and more like reliable systems.


The real safety issue: confidence without provenance

A lot of the current public discussion focuses on whether AI is “accurate”. That matters, but the practical risk in the wild is often something slightly different:

  • authoritative tone
  • low-friction answers
  • weak source visibility
  • hidden disclaimers
  • unclear escalation

That combination creates confidence without context, and that is exactly where users—especially stressed users—can make bad decisions.

This applies to patients using consumer tools and to clinicians using AI evidence tools under time pressure.


A practical framework for evaluating any “AI doctor” / second-opinion tool

Whether you are a clinician, founder, investor, or policy-maker, I’d evaluate tools in this order:

1) What is the claimed job?

Is it:

  • education
  • symptom triage
  • navigation
  • documentation support
  • evidence synthesis
  • prescription workflow
  • diagnosis

If the claim is vague, that is already a red flag.

2) Who is the user, and what is their risk level?

A clinician-facing tool and a consumer-facing tool should not be judged by the same assumptions.

3) What happens when uncertainty is high?

Look for:

  • explicit uncertainty handling
  • red-flag escalation
  • urgent-care prompts
  • handoff to human clinicians
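To make the checklist above concrete, here is a minimal sketch of what explicit uncertainty handling and red-flag escalation look like in code. The symptom names, the uncertainty threshold, and the routing labels are invented for illustration; they are not drawn from any of the products discussed.

```python
# Illustrative sketch only: a red-flag escalation check of the kind a
# symptom-navigation tool might apply before offering self-care guidance.
# All symptom names and thresholds below are hypothetical.

RED_FLAGS = {"chest pain", "shortness of breath", "sudden weakness"}

def triage(symptoms: set[str], uncertainty: float) -> str:
    """Return a routing decision; escalate on red flags or high uncertainty."""
    if symptoms & RED_FLAGS:
        # Red-flag symptom present: bypass self-care advice entirely.
        return "urgent-care"
    if uncertainty > 0.5:
        # The model is unsure: hand off to a human clinician rather than guess.
        return "clinician-handoff"
    return "self-care-guidance"

print(triage({"headache", "chest pain"}, 0.2))  # prints "urgent-care"
```

The point of the sketch is the ordering: escalation checks run before any advice is generated, so a confident-sounding answer can never pre-empt a red flag.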

4) Can the user inspect provenance?

If a tool gives a strong recommendation, can the user trace it back to a credible source (or at least understand the basis of the recommendation)?

5) What is the operational governance?

Especially for AI entering care workflows:

  • logging
  • auditability
  • error review
  • post-deployment monitoring
  • version control / model changes

This is where many products sound strongest in demos and weakest in reality.
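The five questions above can be turned into a structured checklist. The sketch below is one way to do that; the field names and the specific governance items are my own shorthand, not any vendor's schema or a formal assessment standard.

```python
# Illustrative sketch only: the five-question evaluation framework as a
# structured record. Field names and governance items are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ToolAssessment:
    claimed_job: str                      # 1) education, triage, diagnosis, ...
    user: str                             # 2) "patient" or "clinician"
    escalates_on_uncertainty: bool        # 3) red-flag / handoff behaviour
    provenance_visible: bool              # 4) can the user trace the claim?
    governance: set[str] = field(default_factory=set)  # 5) logging, audit, ...

    def red_flags(self) -> list[str]:
        """Collect the warning signs the framework treats as disqualifying."""
        flags = []
        if self.claimed_job in {"", "unclear"}:
            flags.append("vague claimed job")
        if not self.escalates_on_uncertainty:
            flags.append("no uncertainty escalation")
        if not self.provenance_visible:
            flags.append("weak provenance")
        missing = {"logging", "audit", "monitoring"} - self.governance
        if missing:
            flags.append(f"governance gaps: {sorted(missing)}")
        return flags
```

Writing the framework down this way makes the demo-versus-reality gap visible: a tool can score well on questions 1 and 2 in a pitch while the governance set stays empty.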


My bottom line

“AI doctors” are not one category.

  • Medome is best read as a patient organisation / second-opinion preparation layer.
  • Doctronic is pushing into the regulatory frontier of AI-enabled care operations.
  • Ada and Buoy remain important symptom-checker / navigation models.
  • K Health demonstrates why hybrid AI + clinician systems may be the most pragmatic near-term model.
  • Akido Scope AI shows how AI may create the biggest gains inside care workflows rather than public-facing chat.
  • OpenEvidence is a clinician-facing evidence engine and should be judged as a “second brain”, not a consumer AI doctor.

The most useful question is no longer:

“Is AI a doctor?”

It is:

“Where in the care journey does AI create real value, and what safeguards are in place when it fails?”

That is the question that will determine which tools become trusted infrastructure—and which remain clever demos.

