How to tell if something was written by AI

You’ve probably read something online and wondered: did a human write this, or did an AI? It’s a fair question, and one that’s getting harder to answer. AI-generated text has become increasingly fluent and natural, while detection methods remain imperfect. Still, there are real ways to spot AI writing, using both specialized tools and your own judgment.

Detecting AI-written content is possible, but it’s not foolproof. The best approach involves using multiple methods together rather than relying on any single solution.

How AI detection tools work

AI detection tools operate on a basic principle: AI-generated text has measurable statistical properties that differ from human writing. When an AI model generates text, it probabilistically selects from a vocabulary in ways that, while coherent, leave mathematical fingerprints.

The most reliable detectors analyze these patterns. They look at the likelihood of word sequences appearing in training data, the distribution of word lengths, the entropy of token choices, and other linguistic markers. A human writing naturally makes unpredictable choices. We use rare words, mix sentence structures, and sometimes contradict ourselves. An AI, constrained by optimization for fluency and coherence, produces more statistically “smooth” text.
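To make the idea concrete, here is a toy sketch of two of the simplest statistical signals mentioned above: the entropy of word choices and the variance of sentence lengths (a crude proxy for "burstiness"). This is an illustration of the general principle, not the method any real detector such as Pangram actually uses; the function names and thresholds are invented for this example.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (in bits) of the token frequency distribution.
    Lower entropy can indicate more repetitive, predictable word choice."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def sentence_length_variance(text):
    """Variance of sentence lengths in words -- a rough 'burstiness' proxy.
    Human writing tends to mix short and long sentences more freely."""
    # Naive sentence split on terminal punctuation; fine for a sketch.
    for mark in ("!", "?"):
        text = text.replace(mark, ".")
    sentences = [s for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    return sum((n - mean) ** 2 for n in lengths) / len(lengths)

sample = ("Short one. Then a much longer, winding sentence that rambles on "
          "with extra clauses and asides. Tiny. A medium sentence to finish.")
tokens = sample.lower().split()
print(f"entropy: {shannon_entropy(tokens):.2f} bits")
print(f"sentence-length variance: {sentence_length_variance(sample):.2f}")
```

Real detectors combine many such signals, typically computed with a language model's own probability estimates rather than raw word counts, but the underlying intuition is the same: human text is statistically "bumpier" than model output.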

Pangram, built by former Stanford researchers, is widely considered the most reliable option. Their tool claims a false positive rate of approximately 1 in 10,000, meaning it almost never flags human writing as AI-generated. This is significantly better than earlier detection tools, which suffered from both too many false alarms and too many misses.

The problem: false positives and false negatives

No AI detector is perfect, and there’s always a tradeoff between catching actual AI content and accidentally flagging human writing.

A false positive occurs when the tool flags human-written text as AI-generated. This is particularly problematic—it can damage someone’s reputation or academic record based on a machine’s mistake. Early AI detectors like Turnitin had major false positive issues, especially with non-native English speakers, whose writing patterns can differ systematically from those of native speakers.

A false negative is the opposite: the tool misses actual AI-generated content. This is harder to catch in practice but equally important. An AI can write naturally enough to fool detectors, especially if the writing has been edited or crafted carefully by a human afterward.

Research shows that even the best current tools have false negative rates between 0% and 2%, depending on the AI model used and how the text was generated. Older detectors were far worse, with some showing false negative rates above 20%.

Signs you can spot yourself

Before running text through a detection tool, your own instincts can be revealing. AI-generated writing often has recognizable patterns. AI tends to write in flowing, connected sentences without the natural choppiness of human thought. Real writing includes fragments, second thoughts, and stylistic quirks.

Look for repeated sentence patterns or reliance on similar transitional phrases. AI can get stuck in loops, using the same structures over and over. The examples often feel generic and textbook-like rather than specific and lived-in.

Human writing has personality. Even formal writing carries hints of the author’s perspective, humor, or skepticism. AI writing often sounds like it could have been written by anyone. And while human writers sometimes contradict themselves or change their minds mid-piece, AI tends to hedge and present everything as carefully balanced.

These aren’t foolproof, but they’re worth noticing. A piece of writing that seems unusually perfect—clear, comprehensive, and never faltering—might warrant a closer look.

Tools beyond Pangram

Pangram isn’t the only option, though it’s widely considered the most reliable. Other detectors include GPTZero, which is free to use online and remains popular. Academic institutions often use Turnitin, which has AI detection built into its plagiarism checker. It’s reliable for longer pieces but less accurate on short text. Winston AI is another option with decent accuracy for educational use.

Each has different strengths. GPTZero is quick and free. Turnitin is widely adopted in academia. Pangram claims the lowest false positive rates.

Why detection will get harder

As AI models improve, they’ll produce writing that’s increasingly difficult to distinguish from human output. Future models may be trained on diverse, less-predictable text in ways that make statistical detection harder. And human editing of AI output muddies the waters further: a human-edited AI draft isn’t clearly “AI-generated” anymore.

Additionally, adversarial attacks are a real concern. Bad actors could deliberately train models to evade detection, similar to how malware is designed to avoid antivirus software.

What this means for you

If you’re trying to verify someone’s work, use a detection tool as one input among several, not as a verdict. Read the text carefully. Ask questions. Tools like Pangram are genuinely useful, but they’re not substitutes for human judgment.

If you’re publishing content yourself, be transparent about AI use. Disclosing whether content was generated with AI assistance has become increasingly important—not because AI-generated content is inherently bad, but because transparency builds trust.

And if a detection tool flags your writing as AI-generated when it isn’t, don’t panic. False positives happen. Push back respectfully and provide context. Good educators and employers understand that these tools are imperfect.

AI-generated content is detectable, but detection is a tool, not truth. The best approach combines automation with skepticism and honest communication.
