We're building expert-curated, recency-first AI for professionals who make critical decisions in unstable evidence environments.

We are not a general knowledge model.

We exist where generic AI becomes dangerous: when evidence is contested, fast-moving, and cannot be responsibly flattened into a single answer.

BACKED BY:
OPENAI STARTUP FUND
TIME VENTURES

WHO WE'RE BUILDING FOR

We're building CuraAI to support specific professional roles, not abstract subject domains.

These are professionals who:

  • Make frequent, high-regret decisions
  • Are accountable (legally, clinically, financially, reputationally)
  • Operate across multiple overlapping sub-domains
  • Work in fields where recency changes conclusions, not just citations
  • Cannot rely safely on frontier LLMs without expert mediation

Examples include clinicians, scientists, engineers, and regulated or policy-adjacent professionals, but only where institutional curation is possible.

WHAT PROBLEM WE'RE SOLVING

Professionals need:

  • Signal over noise
  • Visibility into disagreement
  • Weighted evidence, not averaged consensus
  • Defensible judgement, not confident prose

Modern knowledge work has outgrown both:

  • Static authorities (textbooks, guidelines, encyclopaedias), and
  • Generic AI systems that optimise for fluency over epistemic safety

That gap is why we're building CuraAI.

WHAT THE PRODUCT DOES

CuraAI is a decision-support co-pilot for high-stakes professional judgement.

It sits inside a professional's workflow and helps them:

  • Navigate what the evidence currently says
  • Understand where experts disagree, and why
  • See what has changed recently, and what that invalidates
  • Make decisions that are explainable, defensible, and current

Crucially, CuraAI does not replace professional judgement.

It augments it with structured expert signal.

HOW IT WORKS

CuraAI combines:

  • Councils of recognised experts (not crowds or scraping)
  • Institutionally anchored curation (academies, societies, standards bodies)
  • Recency-first knowledge pipelines
  • Evaluation-driven AI agents that highlight uncertainty and disagreement, and provide provenance

This allows us to answer not just "what is the answer?"

...but "how strong is the evidence, who disagrees, and how confident should I be?"

HOW WE CREATE TRUSTWORTHY AI

CuraAI is deliberately narrow where others are broad.

Our advantage comes from:

  • Enforcing epistemic discipline on what enters the system
  • Building institutional trust that generic models cannot replicate
  • Embedding into daily professional workflows
  • Compounding value as the system learns what matters to each role

We're focussed on roles where LLMs plus citations are not good enough, and are sometimes actively harmful.

WHAT WE ARE NOT

CuraAI is not:

  • Consumer or lifestyle AI
  • An encyclopaedia
  • A generic research assistant
  • A one-off or episodic tool
  • A belief- or ideology-driven system

WHAT OUR LONG-TERM VISION LOOKS LIKE

Over time, CuraAI becomes:

  • A trusted cognitive layer for regulated and high-impact professions
  • The system professionals turn to when the cost of being wrong is real
  • A new standard for how expert knowledge is translated into AI-mediated decisions

When evidence is contested, fast-moving, and cannot be responsibly flattened into a single answer, CuraAI is the system professionals turn to.

TEAM

Neelay Patel

CEO

Technology and media executive. Ex-AOL, BBC, The Economist.

Luca Grulla

CTO

Former CTO of Signal AI.

Dame Julia Black

Advisor

President Emeritus of the British Academy

Sacha Baron Cohen

Founder