We built an AI-powered learning platform — so we use AI advisors to widen our view and test our assumptions.
No single perspective can adequately evaluate a system spanning pedagogy, AI reliability, UX, data security, virtual economies, and infrastructure resilience.
The board consists of six specialised AI personas, each with deep domain expertise and a mandate to stress-test decisions, not validate them. They deliver rigorous, evidence-based reviews on a regular basis.
The board is deliberately opinionated. Divergent views among advisors are not a bug — they are the point. When Professor Thorne's pedagogical priorities conflict with Nadia's engagement metrics, or when Miriam's privacy concerns challenge Silas' AI architecture, the tension produces better decisions than consensus ever could.
Each advisor reviews actual artefacts — code, AI prompts, data flow diagrams, design documents — not just summaries. Professor Thorne, for example, has audited every AI grading prompt in the codebase, evaluating each against five criteria: clarity, constraint, pedagogy, fairness, and scalability. Every issue is classified by severity, and each advisor discloses their own known biases so we can factor those into our interpretation.
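The audit process above can be sketched in miniature. This is a hypothetical illustration, not the actual tooling: the five criterion names come from the text, but the 1–5 scoring scale, class names, and structure are assumptions.

```python
# Hypothetical sketch of a five-criterion prompt audit.
# Criterion names are from the text; the scoring scale is an assumption.
from dataclasses import dataclass, field

CRITERIA = ("clarity", "constraint", "pedagogy", "fairness", "scalability")

@dataclass
class PromptAudit:
    prompt_id: str
    scores: dict = field(default_factory=dict)  # criterion -> score (1..5)

    def record(self, criterion: str, score: int) -> None:
        # Reject unknown criteria and out-of-range scores.
        if criterion not in CRITERIA:
            raise ValueError(f"unknown criterion: {criterion}")
        if not 1 <= score <= 5:
            raise ValueError("score must be between 1 and 5")
        self.scores[criterion] = score

    def complete(self) -> bool:
        # An audit is complete only when every criterion has been scored.
        return all(c in self.scores for c in CRITERIA)
```

The point of the structure is that an audit cannot be marked complete with a criterion skipped, mirroring the claim that every grading prompt is evaluated against all five criteria.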
Every advisor on this page is an AI agent. Profiles, expertise, and methodologies are documented in full. We believe AI tools should be used openly and accountably.
AI ADVISOR
28 years in academic and applied research. Evaluates educational methodology, AI grading logic, prompt engineering, and curriculum sequencing. Known for rigorous, unvarnished critiques that prioritise fairness to edge-case learners — ESL students, neurodivergent users, and young learners.
AI ADVISOR
19 years across edtech, healthcare, and government digital services. Identifies every point where the interface fails real users: the student on a cracked phone, the dyslexic learner, the teacher managing 30 students from a tablet. WCAG 2.2 specialist.
AI ADVISOR
22 years across fintech, healthcare, and education. Evaluates authentication architecture, data handling, privacy compliance, and the security implications of processing children's data. Specialist in GDPR, Swiss DSG, and the UK Children's Code.
AI ADVISOR
14 years in machine learning infrastructure and production NLP systems. Ensures every AI-powered feature is reliable, cost-efficient, and robust against the failure modes that LLMs are incentivised to hide. Evaluates model selection, hallucination risk, and cost at scale.
AI ADVISOR
16 years across consumer fintech, mobile gaming, and edtech. Pressure-tests every decision affecting student engagement, retention, and the fairness of access to learning features. Specialist in virtual economy design and ethical gamification.
AI ADVISOR
21 years across fintech transaction systems, healthcare record management, and large-scale edtech. Identifies every point where data integrity, operational reliability, or test coverage is insufficient. Specialist in distributed systems and silent corruption risks.
Every significant feature is reviewed against the actual code, prompts, and design documents — not just high-level summaries. A structured routing table determines which advisor(s) to consult for each type of question.
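A routing table like the one described could look roughly like this. The advisor names match the profiles above, but the topic keys, the table contents, and the fallback behaviour are illustrative assumptions, not the real configuration:

```python
# Hypothetical sketch of an advisor routing table.
# Topic keys and the full-board fallback are assumptions for illustration.
ROUTING = {
    "grading_prompts": ["Professor Thorne", "Silas"],
    "engagement": ["Nadia"],
    "privacy": ["Miriam"],
    "ai_reliability": ["Silas"],
}

def advisors_for(topic: str) -> list[str]:
    # Unknown topics fall through to the full board rather than to nobody,
    # so no question goes unreviewed.
    if topic in ROUTING:
        return ROUTING[topic]
    return sorted({name for names in ROUTING.values() for name in names})
```

Keeping the mapping explicit makes it auditable: anyone can see which domains each question type is checked against.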
When one advisor flags a concern that touches another's domain, we cross-reference — ensuring no blind spots between pedagogy, security, UX, and infrastructure.
Every issue is classified — CRITICAL blocks deployment, IMPORTANT degrades quality, MINOR is polish. Each advisor also discloses their own known biases, so we can factor those into our decisions.
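The severity gate described above can be expressed in a few lines. This is a minimal sketch assuming a simple three-level scale; the names match the text, but the implementation is illustrative:

```python
# Hypothetical sketch of the three-level severity gate.
# Level names come from the text; the deployment rule is as described:
# only CRITICAL issues block deployment.
from enum import IntEnum

class Severity(IntEnum):
    MINOR = 1      # polish; never blocks
    IMPORTANT = 2  # degrades quality; should be fixed soon
    CRITICAL = 3   # blocks deployment

def blocks_deployment(issues: list[Severity]) -> bool:
    # A single CRITICAL finding is enough to hold a release.
    return any(i is Severity.CRITICAL for i in issues)
```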
Our AI advisors stress-test design and reliability — but there is no substitute for real human experience. We're a small team, and we know our blind spots are bigger than we can see.
If you have experience in education, child safety, accessibility, language, app development, or simply care about how children learn — we'd love to hear from you. You don't need to be a teacher. You just need to care.
Get in Touch