Why AI detectors keep getting it wrong.
Nearly every popular AI detector — GPTZero, Turnitin's AI writing indicator, ZeroGPT, Copyleaks — works by scanning finished text and estimating how “predictable” it is to a language model. Two metrics do most of the work: perplexity (how surprised a model would be by the text) and burstiness (how much sentence length varies). The premise is simple: AI writes in a steadier, less surprising rhythm than humans do. The problem is that this is only roughly true.
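To make those two metrics concrete, here is a minimal sketch of a burstiness proxy: the coefficient of variation of sentence length. Detector internals are proprietary, and real systems pair a measure like this with model-based perplexity, so treat the function name and the crude sentence splitter as our own illustration rather than any vendor's formula.

```typescript
// Burstiness proxy: how much sentence length varies across a passage.
// Low values mean a flat, even rhythm, the pattern detectors associate with AI.
function burstiness(text: string): number {
  const lengths = text
    .split(/[.!?]+/)                      // crude sentence splitter
    .map((s) => s.trim())
    .filter((s) => s.length > 0)
    .map((s) => s.split(/\s+/).length);   // words per sentence
  if (lengths.length === 0) return 0;
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  const variance =
    lengths.reduce((acc, n) => acc + (n - mean) ** 2, 0) / lengths.length;
  return Math.sqrt(variance) / mean;      // coefficient of variation
}
```

A disciplined essay whose sentences cluster around the same length drives this number down, which is exactly the failure mode described next.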
A careful, structured student essay (the kind produced under tight word counts, with clear topic sentences and measured transitions) reads as low-perplexity, low-burstiness text. In other words, it looks more like AI to a detector, even though a human wrote it. Non-native English writers are hit hardest, because they tend to stick to safer constructions. That isn't cheating. That's how cautious, polished writing behaves, and detectors routinely misread it as machine output.
Detector vendors acknowledge these limits themselves, and many institutions (including several Ivy League and Russell Group universities) have quietly stopped treating detector output as definitive evidence. The news cycle is full of students who were accused, eventually cleared, and left with nothing to show for the stress. A detector flag is a guess, not a verdict. The solution is not to argue perplexity with a reviewer; it's to hand them something fundamentally different: evidence of process.
What actually counts as proof of human writing.
“It doesn't read like AI” is not proof. Neither is “I wrote it, I promise”. The categories of evidence that actually sway a teacher, editor, or administrator fall into three buckets: artifacts, witnesses, and process.
- Artifacts — earlier drafts, planning notes, photos of handwritten outlines, research tabs. These are useful but easy to fabricate.
- Witnesses — a tutor, a peer, a parent who saw you working. Valuable but rarely independent enough for a formal appeal.
- Process — a timestamped record of how the document was built. This is the strongest category, because it captures something AI tools cannot produce: the rhythm of a person actually typing in real time.
Google Docs version history is a weak proxy for process — it records states, not dynamics. It can be faked by anyone willing to paste content in a dozen small chunks. What actually resists forgery is the timing dimension: how long you paused on a tricky sentence, where you hit backspace, how word choice slowed down or sped up. That's the evidence JITTER captures.
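To see what "dynamics" means in practice, here is a hypothetical sketch of the kind of per-session summary that keystroke timing makes possible. It is not JITTER's implementation (which isn't public); every field name and threshold below is an assumption chosen for illustration.

```typescript
// Illustrative per-session typing summary. Field names and the 2-second
// "long pause" threshold are assumptions, not JITTER's actual schema.
interface SessionSummary {
  keystrokes: number;
  typedChars: number;
  pastedChars: number;
  backspaces: number;     // revision density numerator
  longPauses: number;     // gaps over 2s, e.g. thinking through a sentence
  meanIntervalMs: number; // average time between keystrokes
}

const intervals: number[] = [];
let lastKey = 0;
const summary: SessionSummary = {
  keystrokes: 0, typedChars: 0, pastedChars: 0,
  backspaces: 0, longPauses: 0, meanIntervalMs: 0,
};

document.addEventListener("keydown", (e) => {
  const now = performance.now();
  if (lastKey > 0) {
    const gap = now - lastKey;
    intervals.push(gap);
    if (gap > 2000) summary.longPauses += 1;
  }
  lastKey = now;
  summary.keystrokes += 1;
  if (e.key === "Backspace") summary.backspaces += 1;
  else if (e.key.length === 1) summary.typedChars += 1;
});

document.addEventListener("paste", (e) => {
  summary.pastedChars += e.clipboardData?.getData("text").length ?? 0;
});

// Call when the session ends; paste ratio = pastedChars / (typedChars + pastedChars).
function finalizeSummary(): SessionSummary {
  summary.meanIntervalMs =
    intervals.reduce((a, b) => a + b, 0) / Math.max(intervals.length, 1);
  return summary;
}
```

Pasting a finished draft into the doc yields a near-total paste ratio and almost no interval data, which is exactly the signal that separates typing from pasting.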
How JITTER solves this — in one session.
JITTER is a free Chrome extension that attaches to Google Docs. While you draft, it listens locally to keystroke events and builds a Proof-of-Process receipt: a cryptographic, HMAC-signed summary of the typing rhythm, pause patterns, paste ratio, and revision density of your session. The document text itself never leaves your device — only hashes and a humanity score travel over the network. You end the session with a one-click “seal” that anyone can verify, publicly and instantly, at verify.scalisos.com.
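For readers who want to see what "HMAC-signed" and "only hashes leave the device" mean mechanically, here is a hedged sketch using the browser's Web Crypto API. The payload shape, field names, and key handling are assumptions made for illustration, not JITTER's published format.

```typescript
// Illustrative sealing step: hash the document locally, then sign a small
// summary payload with an HMAC key. Only the digest and the signed summary
// would travel off-device; the document text stays local.
async function sealSession(
  summary: object,
  docText: string,
  secretKey: Uint8Array
): Promise<{ payload: string; tag: string }> {
  const enc = new TextEncoder();

  // SHA-256 digest of the text, computed locally; the text itself is never sent.
  const digest = await crypto.subtle.digest("SHA-256", enc.encode(docText));
  const docHash = Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");

  const key = await crypto.subtle.importKey(
    "raw", secretKey, { name: "HMAC", hash: "SHA-256" }, false, ["sign"]
  );

  const payload = JSON.stringify({ summary, docHash, sealedAt: new Date().toISOString() });
  const sig = await crypto.subtle.sign("HMAC", key, enc.encode(payload));
  const tag = btoa(String.fromCharCode(...Array.from(new Uint8Array(sig))));

  return { payload, tag };
}
```

Anyone who alters the payload after the fact, without the key, produces a tag that no longer verifies.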
Because JITTER signs the timeline of your work rather than the text itself, it approaches the detector problem from the opposite direction. A detector asks “does this look like AI?” and guesses. JITTER asks “was this drafted in real time by a human?” and answers with a receipt that is mathematically hard to forge. That receipt is the artifact you show when someone raises a false flag.
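"Hard to forge" comes down to a simple check on the verifying side: recompute the tag over the presented payload and compare it in constant time. Without the signing key there is no practical way to produce a matching tag for a doctored payload. The sketch below is a generic HMAC check in Node-flavored TypeScript, not the actual verify.scalisos.com endpoint, whose internals aren't public.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Generic receipt check: recompute the HMAC over the payload and compare it
// to the presented tag. A forged or edited payload fails this comparison.
function verifyReceipt(payload: string, tagBase64: string, secretKey: Buffer): boolean {
  const expected = createHmac("sha256", secretKey).update(payload).digest();
  const presented = Buffer.from(tagBase64, "base64");
  return presented.length === expected.length && timingSafeEqual(presented, expected);
}
```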
Step-by-step: install JITTER and generate your first receipt.
- Open the install guide and add JITTER to Chrome from the Web Store (takes a few seconds).
- Open the Google Doc you're working on (or start a new one). JITTER attaches automatically.
- Draft the assignment the way you normally would — type, pause, revise. Don't perform for the tool; just work. JITTER is listening to the rhythm, not the words.
- When you're finished, open the JITTER popup and click Copy Seal. You now have a verifiable URL that anyone can open to confirm your proof-of-process.
Presenting the receipt to a teacher, editor, or reviewer.
Keep the response short and factual. A paragraph will usually do. Something like:
“Thanks for raising this. I understand the detector result, but the tool is known to produce false positives on careful student writing. I drafted the assignment myself over a real session and used JITTER to generate a cryptographic proof-of-process receipt. You can verify it here: [paste your verify.scalisos.com link]. Happy to discuss further in office hours.”
No emotional appeals. No arguments about style. Just a verifiable artifact that moves the conversation from vibes to evidence. In our experience with universities and K-12 schools, reviewers respond quickly to affirmative evidence, even when they've never heard of the tool, because a verifiable link is intuitively stronger than a statistical guess.
FAQs about false AI flags.
What should I do first when a detector flags my essay as AI?
Can a teacher legally use an AI detector to fail me?
Why do AI detectors flag non-native English writers more often?
Can I “humanize” my writing to beat a detector?
Is a Google Docs version history enough proof?
What if the work was written before JITTER existed?
Related: read the HVP protocol specification or the journal essay From detection to attestation.