
Detecting AI text through perplexity, risks

Theory

When it comes to evaluating AI-generated text, perplexity has been a widely used metric. For those unfamiliar, perplexity measures how well a language model predicts a given sequence of words. Lower perplexity means the model finds the text more likely; intuitively, the text ‘flows well’.

"He went to the store" -> low perplexity, flows well "He avacadoed the shoe" -> high perplexity,...
        

Turing Test: Hype, Not Holy Grail

Theory

We’ve all heard of the Turing Test and its seemingly all-important role in determining whether a machine can mimic human intelligence. But is it really the ultimate yardstick for AI success, or just a piece of hyped-up history that distracts us from what truly matters?

The Turing Test has its place as a thought-provoking concept, but it’s far from a foolproof measure of AI capabilities. In fact, it’s worth noting that the...