Mira Analytics


Real-time, precise and explainable QA for mental health endpoints in clinical trials.

What we do

01 — Automated scale ratings

From mental health scale audio to scores.

Central raters run their usual questionnaire. Mira ingests the interview audio within our infrastructure and outputs item-level scale scores that track closely with trained raters.

Interview waveform flowing into a Mira engine and bar chart of scale ratings.


02 — Explainability

Scores that point back to the interview.

Each score can be traced back to concrete parts of the conversation and specific items on the scale. Explainability helps clinical teams understand which parts of the dialogue drove the model’s judgement — making reviews faster, more consistent, and easier to trust.

Example: an insomnia interview rated Severe.

Rater (09:14): How have you been sleeping lately?
Participant (09:15): Not terrible, but not great either.
Rater (09:19): How long does it usually take you to fall asleep?
Participant (09:21): On good nights maybe half an hour. Sometimes longer, but nothing extreme.
Rater (09:25): And on the bad nights?
Participant (09:27): I’m awake until three or four almost every night, and when I do sleep I’m up again after an hour.

Mira model output (Short, fragmented sleep): The participant reports frequent nights with delayed sleep onset and repeated awakenings, a pattern typical for severe insomnia.

Rater (09:33): How do these nights affect your day?
Participant (09:35): I can’t focus in meetings and I’m forgetting simple things. I feel wiped out most days.

Mira model output (Daytime cognitive impact): The participant mentions reduced concentration and mistakes with routine tasks, evidence that their insomnia is functionally impairing.

Rater (09:40): How’s your mood and appetite otherwise?
Participant (09:43): My mood’s okay, and I still go for walks. Just wish I wasn’t so tired.

Mira highlights parts of the insomnia dialogue that support a severe rating.

03 — Rater analytics

From single predictions to rater performance.

Individual scores aggregate into intuitive analytics of rater behaviour, highlighting drift, over- or under-scoring tendencies, and how these evolve throughout the study.

Monitor rater scores over time to visualize trends.


04 — Platform

Local, fast and consistent.

The Mira engine runs on secured Google Cloud infrastructure with self-hosted LLMs, delivering stable behaviour across languages, questionnaires and sites.

Deployment

Runs in the Mira VPC. Audio and scores never leave our infrastructure.

Flexibility

Works with interviewer-rated questionnaires and scales: one engine that travels across languages, sites and accents.

Insights

Scores delivered close to real time, with quantified model confidence – an extra layer of assurance on top of human ratings.

Our results

The founders have a long track record of building internal AI tools for the pharma industry — including the oversight platform used at MindMed to monitor HAM-A and MADRS in their programs.

The work “Using Large Language Models for Endpoint Oversight” received the Distinguished Poster Award at ISCTM 2025.

  • 95.2% accuracy on central rater training.
  • 1.57 (± 1.39) point average difference vs. central raters in Phase 2b.
  • Deployed in ongoing Phase 3 to monitor HAM-A & MADRS.

“Using Large Language Models for Endpoint Oversight”, ISCTM 2025 (poster and abstract available).

Scatter plot of Hammy vs. central raters.

Founders

Founders of Mira Analytics: Adam and Miguel.
Adam — Mira Analytics founder

Adam

Placeholder — we’ll update this line with a concise description later.

Another short placeholder paragraph about Adam, to be replaced once we finalize the copy. Keep it to two sentences max for a clean look.

LinkedIn
Miguel — Mira Analytics founder

Miguel

Placeholder — we’ll update this line with a concise description later.

Another short placeholder paragraph about Miguel, to be replaced once we finalize the copy. Again, aim for a maximum of two sentences.

LinkedIn

Reach out

I’m interested in: