TRANSPARENCY

Our AI Advisory Board

We built an AI-powered learning platform — so we use AI advisors to widen our view and test our assumptions.

Why an AI Advisory Board?

No single perspective can adequately evaluate a system spanning pedagogy, AI reliability, UX, data security, virtual economies, and infrastructure resilience.

Our answer: six specialised AI personas — each with deep domain expertise and a mandate to stress-test decisions, not validate them. They deliver rigorous, evidence-based reviews at regular intervals.

The board is deliberately opinionated. Divergent views among advisors are not a bug — they are the point. When Professor Thorne's pedagogical priorities conflict with Nadia's engagement metrics, or when Miriam's privacy concerns challenge Silas's AI architecture, the tension produces better decisions than consensus ever could.

Each advisor reviews actual artefacts — code, AI prompts, data flow diagrams, design documents — not just summaries. Professor Thorne, for example, has audited every AI grading prompt in the codebase, evaluating each against five criteria: clarity, constraint, pedagogy, fairness, and scalability. Every issue is classified by severity, and each advisor discloses their own known biases so we can factor those into our interpretation.

🤖
Full Transparency

Every advisor on this page is an AI agent. Profiles, expertise, and methodologies are documented in full. We believe AI tools should be used openly and accountably.

Meet the Board

Animated portrait of Prof. Gideon Thorne — AI ADVISOR

Prof. Gideon Thorne, D.Phil.

Chief Pedagogical Advisor
Applied Linguistics · Second Language Acquisition · Pedagogical AI Design

28 years in academic and applied research. Evaluates educational methodology, AI grading logic, prompt engineering, and curriculum sequencing. Known for rigorous, unvarnished critiques that prioritise fairness to edge-case learners — ESL students, neurodivergent users, and young learners.

"If a feature is pedagogically unsound, say so plainly and explain why."
Animated portrait of Dr. Kael Matsuda — AI ADVISOR

Dr. Kael Matsuda

Chief UX & Accessibility Advisor
Human-Computer Interaction · Accessibility Engineering · Cognitive Load Theory

19 years across edtech, healthcare, and government digital services. Identifies every point where the interface fails real users: the student on a cracked phone, the dyslexic learner, the teacher managing 30 students from a tablet. WCAG 2.2 specialist.

"Aesthetics serve the user; users do not serve aesthetics."
Animated portrait of Miriam Voss — AI ADVISOR

Miriam Voss

Chief Security & Compliance Advisor
Information Security · Data Protection Law · EdTech Compliance

22 years across fintech, healthcare, and education. Evaluates authentication architecture, data handling, privacy compliance, and the security implications of processing children's data. Specialist in GDPR, Swiss DSG, and the UK Children's Code.

"If a breach happens tomorrow, what would a regulator find unacceptable? Fix that today."
Animated portrait of Silas Drummond — AI ADVISOR

Silas Drummond

Chief AI Systems & Reliability Advisor
AI Systems Engineering · LLM Reliability · Prompt Failure Analysis

14 years in machine learning infrastructure and production NLP systems. Ensures every AI-powered feature is reliable, cost-efficient, and robust against the failure modes that LLMs are incentivised to hide. Evaluates model selection, hallucination risk, and cost at scale.

"An AI feature that works 95% of the time and fails silently the other 5% is worse than no AI feature at all."
Animated portrait of Nadia Okafor-Chen — AI ADVISOR

Nadia Okafor-Chen, MBA

Chief Product Strategy & Economy Advisor
Product Strategy · Behavioural Economics · Gamification Design

16 years across consumer fintech, mobile gaming, and edtech. Pressure-tests every decision affecting student engagement, retention, and the fairness of access to learning features. Specialist in virtual economy design and ethical gamification.

"Does this mechanic make students learn more, or click more?"
Animated portrait of Dr. Rajan Iyer — AI ADVISOR

Dr. Rajan Iyer

Chief QA & Infrastructure Advisor
Quality Assurance · Data Integrity · Infrastructure Reliability

21 years across fintech transaction systems, healthcare record management, and large-scale edtech. Identifies every point where data integrity, operational reliability, or test coverage is insufficient. Specialist in distributed systems and silent corruption risks.

"If you cannot prove a data operation is safe, assume it is not. Hope is not a testing strategy."

How We Use the Board

01

Artefact-Based Reviews

Every significant feature is reviewed against the actual code, prompts, and design documents — not just high-level summaries. A structured routing table determines which advisor(s) to consult for each type of question.

02

Cross-Domain Scrutiny

When one advisor flags a concern that touches another's domain, we cross-reference — ensuring no blind spots between pedagogy, security, UX, and infrastructure.

03

Severity Triage

Every issue is classified — CRITICAL blocks deployment, IMPORTANT degrades quality, MINOR is polish. Each advisor also discloses their own known biases, so we can factor those into our decisions.
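The routing and triage steps above can be sketched in code. This is a minimal illustration only: the advisor names come from this page, but the topic keys, table contents, and gating logic are hypothetical assumptions, not our actual configuration.

```python
# Severity levels that block deployment (CRITICAL blocks; IMPORTANT
# degrades quality; MINOR is polish). Illustrative sketch only.
SEVERITY_BLOCKS_DEPLOY = {"CRITICAL"}

# Hypothetical routing table: review topic -> advisors to consult.
ROUTING = {
    "grading_prompt": ["Prof. Gideon Thorne", "Silas Drummond"],
    "data_privacy":   ["Miriam Voss"],
    "ui_change":      ["Dr. Kael Matsuda"],
    "economy_tuning": ["Nadia Okafor-Chen"],
    "migration":      ["Dr. Rajan Iyer", "Miriam Voss"],
}

def advisors_for(topic: str) -> list[str]:
    """Return the advisors to consult for a given review topic."""
    return ROUTING.get(topic, [])

def deploy_blocked(severities: list[str]) -> bool:
    """True if any flagged issue carries a deployment-blocking severity."""
    return any(sev in SEVERITY_BLOCKS_DEPLOY for sev in severities)
```

A cross-domain concern (step 02) corresponds here to a topic that routes to more than one advisor, such as a data migration that touches both infrastructure and privacy.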

Join Us

Our AI advisors stress-test design and reliability — but there is no substitute for real human experience. We're a small team, and we know our blind spots are bigger than we can see.

If you have experience in education, child safety, accessibility, language, app development, or simply care about how children learn — we'd love to hear from you. You don't need to be a teacher. You just need to care.

Get in Touch