The Turing Test Is an Empiricist Mistake

The Turing test is rooted in the idea that a human can judge whether something is an Artificial Intelligence merely by the behavior it exhibits during the test. In reality, judging whether something is a genuine AI requires an explanation of how it works.

Accepting the Turing test as a valid criterion assumes that, if a program produces enough responses to fool a human judge, knowledge must have been created inside it. It is more likely that no new knowledge was created at all: a passing Turing test is the manifestation of existing knowledge, namely that of the developer who programmed it. Determining whether an AI is genuine means separating its knowledge from its developer's, and that is a separation the Turing test cannot make.
