1,125 students lost a whole GCSE to AI in 2025. Detection didn't catch them. A conversation did.
JCQ-aligned viva packs in 60 seconds, grounded in the pupil's own essay
Executive Summary
In a nutshell
A UK-specific coursework integrity tool for secondary schools. Pupils submit drafts, the AI flags suspect passages with its reasoning and, crucially, generates a 5-minute, teacher-facing viva-question pack tailored to the marked work. The viva pack is the differentiator. JCQ and gov.uk both already tell teachers to "have a conversation with the student" when authenticity is in doubt, but no commercial tool does the prep work for them. Existing detectors (Turnitin, GPTZero, Originality.ai) are all US-built, all rely on flaky binary classifiers, and the UK schools using them know it. The product solves the false-accusation problem by replacing detection with verification.
The Story
Meet the user

Naz teaches GCSE English Literature at a comprehensive in the Midlands. She has 28 NEA submissions on her desk and a JCQ malpractice procedure that says she has to authenticate every single one. Two of the essays don't sound right. The vocabulary is too even, the structure is too clean, the voice doesn't match what she heard in lessons. She runs them through the school's Turnitin AI Writing Report. One comes back 47% AI, the other 6%. She can't act on that. The 47% pupil is also her best non-native English speaker, and she's read the research about false positives on ESL writing.
She knows the right answer is a conversation. The gov.uk SLT briefing pack told her so. But she has 28 conversations to plan, no time to prepare specific questions for each pupil's actual essay, and one chance to get it right before the malpractice form goes to the exam officer. Then her HoD shares a tool that reads the marked work, generates five tightly-scoped viva questions per pupil grounded in their own content, and produces a printable script that takes 5 minutes per pupil. No accusation, no detection score, just a structured oral check the pupil should be able to answer if the work is theirs.
Scores
How does this idea stack up?
7.6/10
UK EdTech is £9.8B in 2025, with 5.2 million GCSE entries. Every coursework-bearing subject (history, English, D&T, art, computing, MFL) adds potential customers, school by school.
NPR headline: "AI detection tools are unreliable. Teachers are using them anyway." 73% of student-reported AI flags are disputed. 61% false-positive rate on non-native English writers. Teachers are exhausted.
An LLM API, retrieval over the marked work, and a question generator. Buildable in 2 to 4 weeks. Constraints: under-18 data, GDPR, DfE data-handling expectations, UK data residency.
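To make the "retrieval plus question generator" shape concrete, here is a minimal sketch of one plausible pipeline: pull the most distinctive passages from the pupil's essay, then assemble the prompt an LLM would receive to draft the viva questions. All function names are illustrative, and the rare-word-density ranking is a stand-in assumption; a real build would likely use embeddings for retrieval and a hosted LLM API for generation.

```python
from collections import Counter
import re


def split_passages(essay: str) -> list[str]:
    """Split an essay into paragraph-level passages."""
    return [p.strip() for p in essay.split("\n\n") if p.strip()]


def distinctive_passages(essay: str, k: int = 3) -> list[str]:
    """Rank passages by rare-word density as a cheap retrieval proxy.

    Assumption: words that appear rarely in the essay overall mark the
    passages most worth asking about. An embedding model would do better.
    """
    passages = split_passages(essay)
    freq = Counter(re.findall(r"[a-z']+", essay.lower()))

    def score(p: str) -> float:
        toks = re.findall(r"[a-z']+", p.lower())
        if not toks:
            return 0.0
        return sum(1.0 / freq[t] for t in toks) / len(toks)

    return sorted(passages, key=score, reverse=True)[:k]


def build_viva_prompt(essay: str) -> str:
    """Assemble the text that would be sent to an LLM API."""
    excerpts = "\n".join(f"- {p}" for p in distinctive_passages(essay))
    return (
        "You are helping a GCSE teacher authenticate coursework.\n"
        "Write five short viva questions a pupil could answer in 5 minutes\n"
        "ONLY if they wrote the work themselves. Ground each question in\n"
        "one of these excerpts from the pupil's essay:\n"
        f"{excerpts}\n"
        "Do not accuse; ask about process, choices, and meaning."
    )
```

The point of the sketch is that the engineering really is light: the value sits in grounding each question in the pupil's own text, not in any detection model.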
JCQ guidance updated April 2025. Gov.uk SLT briefing pack live. Ofqual chief publicly considering scrapping extended-writing coursework (April 2026). The window is wide open right now.
JCQ authentication rules are structural and will outlast any single guidance refresh. Slight discount because Turnitin or Pearson could absorb the viva-pack feature into their platforms within 18 to 24 months.
Solo-buildable. Procurement into schools is the bottleneck, not engineering. Expect 2 to 3 months from first build to first paying school.
Strongest
Pain
Teachers have a regulator-backed mandate to have a conversation, no budget for the prep time, and an existing toolset they openly distrust.
Watch out
Effort to Build
Engineering is light. The B2B school sales motion is heavier than founders typically expect, and DPIA paperwork is non-trivial.
Pain Point
The problem
“I hate, absolutely hate, how AI has forced me to turn into a punitive detective, rather than, well, a teacher.”
— Reddit r/Professors, surfaced in NPR's December 2025 reporting on US and UK schools
UK secondary schools are stuck between two failures. The first failure is that JCQ and Ofqual treat AI in coursework as a malpractice risk and require teachers to authenticate every submission. The second failure is that the only commercial tools available for that authentication, Turnitin's AI Writing Report and GPTZero, both produce probabilistic scores that are demonstrably unreliable. Independent tests put Turnitin's sentence-level false-positive rate at around 4% (versus the marketed under-1%), and 61% of non-native English writing gets flagged as AI-generated.
The result: teachers either accept the score and risk a wrongful malpractice referral, or ignore the score and risk passing AI-written work, which has its own malpractice consequence. Gov.uk's SLT briefing pack explicitly says "if teachers suspect AI-generated content, they should have a conversation with the student, asking them to explain their thinking and describe their process." That conversation is the regulator-endorsed remediation. Nobody currently sells the tool that prepares it for them.