Astavalence MedEd Evidence Assistant
Evidence-led medical education intelligence

Build better medical education, with evidence instead of guesswork.

Review teaching slides, session plans, and curriculum materials with built-in equality, diversity, inclusion, accessibility, and neurodiversity intelligence. Designed for serious medical educators, not generic AI tourists.

3 core premium workflows. No bloated feature theatre.
5 structured scoring domains across inclusion, usability, and evidence alignment.
100% of outputs surface neurodiversity considerations by design, never as an afterthought.
Slide review preview
Ready for beta
PDF upload
Cited feedback
Downloadable report
EDI 78
Neurodiversity 66
Accessibility 81
Evidence 74
1. Clinical imagery is technically strong, but representation across age, ethnicity, and skin tone is too narrow.
2. Several slides rely on dense paragraphs and weak signposting, increasing cognitive load for neurodivergent learners.
3. Action plan generated with cited recommendations, revised teaching prompts, and a downloadable faculty-ready summary.

Built for medical education. Not retrofitted after the fact.

The product is designed around evidence retrieval, structured review, and deliberate attention to inclusion, accessibility, and learner variability.

Curated medical education evidence base before generation
Structured output with practical actions and clear limitations
Dedicated EDI, neurodiversity, and accessibility review in every material analysis
Core workflows

Three things. Executed properly.

This is a premium beta, not a dumping ground for random AI features. Each workflow is designed to solve a real problem medical educators already face.

01

Review my slides

Upload a teaching deck and receive a structured review of EDI, neurodiversity, accessibility, clarity, and evidence alignment.

Domain scoring with explanation
Top risks surfaced fast
Faculty-ready improvement actions
02

Improve my session plan

Paste a lecture, workshop, or teaching session outline and get practical refinements grounded in educational design and inclusion.

Learning design critique
Neurodiversity adaptations
Concrete revision suggestions
03

Ask MedEd Evidence

Ask a medical education question and get a disciplined answer with practical implications, not vague model confidence theatre.

Evidence-led answer structure
EDI and neurodiversity section by default
Clear limitations where evidence is weak
What makes it different

A product, not a wrapper.

Generic AI tools can answer quickly. That is not the same as being fit for medical education, fit for educators, or fit for institutional use.

A

Retrieval before rhetoric

The system is designed to work from a curated evidence base rather than bluff from generic memory.

B

Neurodiversity is explicit

Every material review includes neurodiversity considerations as a built-in requirement, not an afterthought.

C

Structured scoring

Scores are broken down by domain with clear rationale so users see what must change and why.

D

Report-ready outputs

The output is designed for faculty use, governance discussions, and practical redesign work, not just chat consumption.

Audience

Built for educators with real responsibility.

This is for people shaping teaching, assessment, curriculum, and faculty development. It is not designed as a student gimmick.

Module leads

Stress-test session materials for clarity, inclusivity, and defensibility before delivery.

Clinical teaching fellows

Improve slide decks and teaching plans quickly without defaulting to generic AI shortcuts.

Faculty development leads

Use structured, branded outputs to support quality improvement, staff development, and local review processes.

Pricing

Priced for seriousness, not curiosity.

Cheap pricing attracts low-intent users and feature tourists. Premium beta pricing should attract people who will actually use the product properly.

Starter

Solo educator

£49 / month

For individual educators needing evidence-based review and limited monthly analysis volume.

MedEd Evidence Q&A
Up to 10 reviews per month
Saved reports
Institutional

Team access

Custom

For medical schools, departments, and faculty development teams requiring a wider rollout.

Multi-user access model
Priority onboarding
Institutional review pathway
FAQ

Questions serious buyers will ask.

The best early users will not be dazzled by AI. They will be sceptical. Good. That makes the product better.

Does it replace educator judgement?

No. It is a decision-support layer for review and improvement. Final educational judgement stays with the human educator.

Can I upload PowerPoint files?

Not directly. The premium beta prioritises PDF uploads to preserve layout, structure, and review consistency, so export your deck to PDF before uploading.

Is this only for medical schools?

No. It is relevant to medical, postgraduate, and wider health-professions education settings where structured teaching review matters.

Why not just use ChatGPT?

Because generic AI does not give you a purpose-built evidence workflow, a clear inclusion rubric, or a premium review product shaped for medical education.

Beta access

Build the standard before someone else does.

The market does not need another generic chatbot with a medical education label slapped on top. It needs a product with sharper judgement, stronger evidence discipline, and a premium experience.