Does this AI pass the Turing test yet?
I think we may have crested the uncanny valley of Turing tests, but we still have a long, long way to go. We will likely create an equivalent of the Voight-Kampff test for assessing emotional response, and other tests for humanity, that will suss out this complicated pattern-matching device and effectively differentiate it from a psychopath or an autist.

By loose standards, yes, but AI/CS/neuroscience/philosophy people keep moving the goalposts and/or redefining the test requirements. There are some trivial ways to make it fail, like asking it today's date.
No one has really run a Turing test against it, because it doesn't pretend to be a real person.
[Attachment 459774]
Right, that doesn't seem very human.
> Right, that doesn't seem very human.

That's obviously the crudest possible way to get it to do so, and ChatGPT is specifically designed to identify itself. I'm sure you'd get more subtle results if you used the base GPT-3.5 or otherwise instructed it to obfuscate.
> There are some trivial ways to make it fail, like asking it today's date.

Huh. Previously it couldn't give you the date because of the age of the training dataset. I guess they've changed things.

And it fails by always knowing the correct date?
Sounds like ELIZA was more fun. That ho was always trying to get in my pants.
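For anyone who never played with it, ELIZA's whole trick was a short list of keyword-match-and-substitute rules with canned comebacks, no model of anything. A minimal sketch of that style in Python (these rules are invented for illustration, not Weizenbaum's original DOCTOR script):

```python
import random
import re

# Toy ELIZA: match a keyword pattern, then echo the captured text back
# inside a canned template. No state, no understanding -- pure pattern
# matching, which is why it falls apart after a couple of exchanges.
RULES = [
    (r"\bi need (.+)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"\bi am (.+)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\bmy (\w+)", ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def respond(line: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, line, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    while True:  # Ctrl+C to escape your therapist
        print(respond(input("you: ")))
```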
> We will likely create an equivalent of the Voight-Kampff test for assessing emotional response, and other tests for humanity, that will suss out this complicated pattern-matching device and effectively differentiate it from a psychopath or an autist.

Isn't there already an (AI-powered) scoring test that gives you the likelihood that text was generated by AI vs a person?
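There are a few of those floating around. The usual core signal is how statistically "unsurprising" the text looks to a language model, since generated text hugs the model's own probability distribution. A rough sketch of that idea, assuming the Hugging Face transformers library and the public "gpt2" checkpoint (the sample sentence and the low-perplexity-suggests-AI reading are just illustration; real detectors do more than this):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Score the text under the language model. With labels=input_ids the
    # returned loss is the mean cross-entropy per token, so exp(loss) is
    # the perplexity of the text under the model.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Machine-generated text tends to sit close to the model's own
# preferences, so unusually low perplexity is (weak) evidence of AI
# authorship; human prose is normally more "surprising".
sample = "The Turing test measures a machine's ability to imitate a human."
print(f"perplexity: {perplexity(sample):.1f}")
```

The catch is that light paraphrasing shifts the statistics, which is why these tools report a likelihood rather than a verdict.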
That's surreal. So the AI makes mistakes? Or is it a prostitute who tells you what you want to hear?
> Does this AI pass the Turing test yet?

No, but chatbots built for the same purpose as ChatGPT never will, because they don't attempt to act like a human, just provide useful responses. I imagine that a chatbot with the same level of technological backing as ChatGPT could if it were designed to, though, as long as the evaluator wasn't too informed about the current issues with chatbot tech.
They'll be able to easily replicate some 2+2=5 behavior.
> Sure, but it can get things wrong in egregious ways that would be an obvious indicator that it's a chatbot and not a human.

In the context of a Turing test that means nothing. Plenty of humans fuck that up too.

> Sure, but it can get things wrong in egregious ways that would be an obvious indicator that it's a chatbot and not a human.
Even if ChatGPT was trained to pretend to be a human, if it fucks up questions like this it can fail the Turing test.
What's interesting is that the January version of ChatGPT could be bullied into giving very wrong answers, but the early February version of Bing would get extremely upset if you disagreed with it.
It also can't count digits. The final summary was right, but also at odds with all the previous points.

The egg thing might be a better example, because this one looks totally like a thing a human would do. It got the answer right in the end; the opening statement was just backwards.