
Falsely flagged as AI? Here's how to prove your writing is human.

How to respond when an AI content detector (GPTZero, Turnitin, ZeroGPT, Copyleaks) produces a false positive on human-written work.

If a teacher, professor, editor, or reviewer has told you your work “looks AI-generated” when you wrote it yourself, you are not alone — and you are not powerless. This guide is the calm, practical playbook: why detectors get it wrong, what actually counts as proof, and how a JITTER cryptographic receipt gives you something stronger than a rebuttal.

Why AI detectors keep getting it wrong.

Nearly every popular AI detector — GPTZero, Turnitin's AI writing indicator, ZeroGPT, Copyleaks — works by scanning finished text and estimating how “predictable” it is to a language model. Two metrics do most of the work: perplexity (how surprised a model would be by the text) and burstiness (how much sentence length varies). The premise is simple: AI writes in a steadier, less surprising rhythm than humans do. The problem is that this is only roughly true.

A careful, structured student essay — the kind produced under tight word counts, with clear topic sentences and measured transitions — reads as low perplexity, low burstiness. In other words, it looks more like AI to a detector, even though a human wrote it. Non-native English writers are hit hardest, because they tend to stick to safer constructions. That isn't cheating. That's how cautious, polished writing behaves, and detectors routinely misread it as synthesis.
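To make the burstiness signal concrete, here is a toy sketch that approximates it as the spread of sentence lengths. This is an illustration only, not any vendor's actual metric; the sample texts and the `burstiness` helper are invented for the example.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths (in words): a rough
    stand-in for the 'burstiness' signal detectors describe."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Careful, measured prose: every sentence lands near the same length.
uniform = ("The essay argues a clear point. Each sentence stays measured. "
           "The structure follows a plan. Transitions remain steady.")

# Looser prose: sentence lengths swing between very short and very long.
varied = ("Wait. The essay meanders, doubles back, and then, after a long "
          "aside about sources, lands its point. Short. Then long again, "
          "sprawling across clauses.")

print(burstiness(uniform) < burstiness(varied))  # True
```

The polished, uniform text scores lower, which is exactly the profile detectors associate with AI output, even though both samples here are human-written.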

Detector vendors themselves publish caveats about false positives and warn against treating scores as sole evidence. Many institutions have quietly stopped treating detector output as definitive, including several major universities in the US and UK, and the news cycle is full of students who were accused, cleared, and left with nothing to show for the stress. A detector flag is a guess, not a verdict. The solution is not to argue perplexity with a reviewer; it's to hand them something fundamentally different: evidence of process.

What actually counts as proof of human writing.

“It doesn't read like AI” is not proof. Neither is “I wrote it, I promise”. The categories of evidence that actually sway a teacher, editor, or administrator fall into three buckets: artifacts, witnesses, and process.

  • Artifacts — earlier drafts, planning notes, photos of handwritten outlines, research tabs. These are useful but easy to fabricate.
  • Witnesses — a tutor, a peer, a parent who saw you working. Valuable but rarely independent enough for a formal appeal.
  • Process — a timestamped record of how the document was built. This is the strongest category, because it captures something AI tools cannot produce: the rhythm of a person actually typing over real time.

Google Docs version history is a weak proxy for process — it records states, not dynamics. It can be faked by anyone willing to paste content in a dozen small chunks. What actually resists forgery is the timing dimension: how long you paused on a tricky sentence, where you hit backspace, how word choice slowed down or sped up. That's the evidence JITTER captures.
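To see why the timing dimension resists forgery, here is a hypothetical sketch of the kind of session metrics a process recorder could derive from keystroke events. The event format and numbers are invented for illustration; this is not JITTER's internal representation.

```python
# Hypothetical session events: (seconds_since_start, kind, char_count).
# 'key' is one typed character; 'paste' is a block inserted at once.
events = [
    (0.0, "key", 1), (0.4, "key", 1), (1.1, "key", 1),
    (4.8, "key", 1),        # a long pause on a tricky sentence
    (5.2, "paste", 120),    # a pasted block stands out immediately
    (5.9, "key", 1),
]

typed = sum(c for _, kind, c in events if kind == "key")
pasted = sum(c for _, kind, c in events if kind == "paste")
paste_ratio = pasted / (typed + pasted)

key_times = [t for t, kind, _ in events if kind == "key"]
gaps = [b - a for a, b in zip(key_times, key_times[1:])]

print(f"paste ratio: {paste_ratio:.2f}")   # paste ratio: 0.96
print(f"longest pause: {max(gaps):.1f}s")  # longest pause: 3.7s
```

Pasting a finished document in "a dozen small chunks" fools version history, but it produces a paste ratio near 1.0 and none of the pause structure of real drafting, which is why a timing record is the harder thing to fake.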

How JITTER solves this — in one session.

JITTER is a free Chrome extension that attaches to Google Docs. While you draft, it listens locally to keystroke events and builds a Proof-of-Process receipt: a cryptographic, HMAC-signed summary of the typing rhythm, pause patterns, paste ratio, and revision density of your session. The document text itself never leaves your device — only hashes and a humanity score travel over the network. You end the session with a one-click “seal” that anyone can verify, publicly and instantly, at verify.scalisos.com.

Because JITTER signs the timeline of your work rather than the text itself, it solves the detector problem in a different direction. A detector asks “does this look AI?” and guesses. JITTER asks “was this drafted in real time by a human?” and answers with a receipt that's mathematically hard to forge. That receipt is the artifact you show when someone raises a false flag.
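As a rough sketch of the signing idea, an HMAC over a session summary plus a document hash can be verified without the text ever leaving the device. This is illustrative only: the field names, key handling, and receipt format here are assumptions for the example, not JITTER's actual protocol.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"server-side-secret"  # hypothetical key held by the verifier

def make_receipt(doc_text: str, session_summary: dict) -> dict:
    """Sign a session summary plus a hash of the text.
    Only the hash travels; the text itself stays local."""
    payload = {
        "doc_sha256": hashlib.sha256(doc_text.encode()).hexdigest(),
        **session_summary,
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["sig"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_receipt(receipt: dict) -> bool:
    """Recompute the signature over everything except 'sig' and compare."""
    body = json.dumps({k: v for k, v in receipt.items() if k != "sig"},
                      sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(receipt["sig"], expected)

receipt = make_receipt("My essay...", {"paste_ratio": 0.03, "minutes": 47})
print(verify_receipt(receipt))  # True

receipt["paste_ratio"] = 0.0    # any tampering breaks the signature
print(verify_receipt(receipt))  # False
```

The design point is that the signature binds the timing metadata to a specific document state, so neither the metrics nor the text can be swapped out after the fact without invalidating the receipt.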

Step-by-step: install JITTER and generate your first receipt.

  1. Open the install guide and add JITTER to Chrome from the Web Store (takes a few seconds).
  2. Open the Google Doc you're working on (or start a new one). JITTER attaches automatically.
  3. Draft the assignment the way you normally would — type, pause, revise. Don't perform for the tool; just work. JITTER is listening to the rhythm, not the words.
  4. When you're finished, open the JITTER popup and click Copy Seal. You now have a verifiable URL that anyone can open to confirm your proof-of-process.

Presenting the receipt to a teacher, editor, or reviewer.

Keep the response short and factual. A paragraph will usually do. Something like:

“Thanks for raising this. I understand the detector result, but the tool is known to produce false positives on careful student writing. I drafted the assignment myself over a real session and used JITTER to generate a cryptographic proof-of-process receipt. You can verify it here: [paste your verify.scalisos.com link]. Happy to discuss further in office hours.”

No emotional load. No appeals to style. Just a verifiable artifact that moves the conversation from vibes to evidence. In our experience with academia and K-12 schools, reviewers respond fast to affirmative evidence — even when they've never heard of the tool — because a verifiable link is intuitively stronger than a second-guess.

FAQs about false AI flags.

What should I do first when a detector flags my essay as AI?
Don't argue style; supply process. Keep calm, save the assignment email, and ask the reviewer for the exact tool and percentage. Then generate a JITTER receipt for the document — it's cryptographic proof the work was drafted by a human and is far stronger than any rebuttal about style.
Can a teacher legally use an AI detector to fail me?
Policies vary by institution. In the US, several large universities have quietly de-emphasized AI detectors because of false-positive rates. Ask your institution for its AI-detection policy in writing, and for the specific tool and threshold used to flag your work. Many schools require corroborating evidence — which is exactly what a JITTER receipt provides.
Why do AI detectors flag non-native English writers more often?
Published research, including a widely cited 2023 Stanford study, shows that AI detectors disproportionately flag text written by non-native English speakers, because tools like GPTZero measure perplexity and burstiness — and fluent-but-careful writing looks similar to AI output on those metrics. This is a well-documented flaw, not a sign of cheating.
Can I “humanize” my writing to beat a detector?
You shouldn't have to, and you shouldn't bother. Adding typos or stylistic noise doesn't prove your writing is human — it just degrades the writing. The right response is to submit affirmative proof of process rather than trying to look less suspicious. JITTER is built for exactly that.
Is a Google Docs version history enough proof?
Version history shows revisions but can be forged by pasting content in small chunks. It also doesn't capture typing rhythm or pause patterns, which is what makes forgery of human drafting hard. A JITTER receipt is cryptographically signed and covers the timing dimension version history misses.
What if the work was written before JITTER existed?
JITTER can only attest to sessions it observed. For past work, the best defenses are: your Docs version history, any draft files from your device, related research notes, and an in-person or live-write follow-up. From this point forward, install JITTER and every new draft will come with a receipt.