
Detecting AI Text Through Perplexity: Risks

Theory

When it comes to evaluating AI-generated text, perplexity is a widely used metric. For those unfamiliar, perplexity measures how well a language model predicts a given sequence of words. Lower perplexity means the model considers the text more likely; you can think of it as text that ‘flows well’.

"He went to the store" -> low perplexity, flows well "He avacadoed the shoe" -> high perplexity,...
        

Turing Test: Hype, not Holy Grail

Theory

We’ve all heard of the Turing Test and its seemingly all-important role in determining whether a machine can mimic human intelligence. But is it really the ultimate yardstick for AI success, or is it just a piece of hyped-up history that distracts us from what truly matters?

The Turing Test has its place as a thought-provoking concept, but it’s far from being a foolproof measure of AI capabilities. In fact, it’s worth noting that the...


Why AI Now?

Theory

Me and Language Models


I’ve been building language models since 2004. Or rather, I built them back in 2004 for grad school work and then took a break.

They’re an interesting component in the whole arsenal of “AI/Machine Learning”. Basically, they tell you how likely a sentence is to occur in a given body of text.
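
As a toy illustration (mine, not the author's), a bigram model with add-one smoothing captures the idea: count word pairs in a corpus, then multiply per-word probabilities to score a sentence. The corpus here is made up:

```python
# Sketch: a bigram language model scoring how likely a sentence is,
# using counts from a toy corpus. (Corpus and names are illustrative.)
from collections import Counter

corpus = "he went to the store . she went to the park .".split()

# Count bigrams and unigrams from the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = len(unigrams)

def sentence_probability(sentence: str) -> float:
    words = sentence.lower().split()
    prob = 1.0
    for prev, cur in zip(words, words[1:]):
        # Add-one (Laplace) smoothing so unseen bigrams don't zero the score.
        prob *= (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab)
    return prob

print(sentence_probability("he went to the store"))  # relatively high
print(sentence_probability("store the to went he"))  # much lower
```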

Why would something like this be useful? Well, if you’re building a text translation system, an easy...