Chat GPT AI

Captain Suave

Caesar si viveret, ad remum dareris.
4,784
8,096
Does this AI pass the turing test yet?

By loose standards, yes, but AI/CS/neuroscience/philosophy people keep moving the goalposts and/or redefining the test requirements. There are some trivial ways to make it fail, like asking it today's date.
 
  • 1Like
Reactions: 1 user

gremlinz273

<Bronze Donator>
683
785
By loose standards, yes, but AI/CS/neuroscience/philosophy people keep moving the goalposts and/or redefining the test requirements. There are some trivial ways to make it fail, like asking it today's date.
I think we may have crested the uncanny valley of turing tests, but we still have a long long ways to go. We will likely create an equivalent of the voight-kammpf tests for assessing emotional response and other tests for humanity that will suss out this complicated pattern matching device to effectively differentiate it from a psycopath autists.
 
  • 1Like
Reactions: 1 user

Mist

Eeyore Enthusiast
<Gold Donor>
30,414
22,202
No one has really run a Turing Test against it, because it doesn't pretend to be a real person.

1676938281619.png
 
  • 1Like
Reactions: 1 user

iamacynic37

FoH Honkler - HONK HONK MFr
<Banned>
289
-247
A system of cells.

Within cells interlinked.

Within one stem.

And dreadfully distinct.

Against the dark.

A tall white fountain played.
 
  • 1Smuggly
Reactions: 1 user

Captain Suave

Caesar si viveret, ad remum dareris.
4,784
8,096
Right, that doesn't seem very human.
That's obviously the crudest possible way to get it to do so, and ChatGPT is specifically designed to identify itself. I'm sure you'd get more subtle results if you used the base GPT 3.5 or otherwise instructed it to obfuscate.
 

Hoss

Make America's Team Great Again
<Gold Donor>
25,591
12,067
There are some trivial ways to make it fail, like asking it today's date.

and it fails by always knowing the correct date?

Sounds like ELIZA was more fun. That ho was always trying to get in my pants.
 

Captain Suave

Caesar si viveret, ad remum dareris.
4,784
8,096
and it fails by always knowing the correct date?

Sounds like ELIZA was more fun. That ho was always trying to get in my pants.
Huh. Previously it couldn't give you the date because of the age of the training dataset. I guess they've changed things.

1676940447443.png
 
  • 6Worf
  • 1Mother of God
Reactions: 6 users

Hoss

Make America's Team Great Again
<Gold Donor>
25,591
12,067
That's surreal. So the AI makes mistakes? Or is it a prostitute who tells you what you want to hear?
 

Ukerric

Bearded Ape
<Silver Donator>
7,927
9,578
We will likely create an equivalent of the voight-kammpf tests for assessing emotional response and other tests for humanity that will suss out this complicated pattern matching device to effectively differentiate it from a psycopath autists.
Isn't there already an (AI-powered) scoring test that gives you the likelihood that text was generated by AI vs a person?
 

velk

Trakanon Raider
2,542
1,128
That's surreal. So the AI makes mistakes? Or is it a prostitute who tells you what you want to hear?

It doesn't actually understand questions as such, so trying to modify it's behavior is very hit and miss. It's part of the reason it lies so much - telling it not to lie is a lot more complicated than you'd think because it doesn't understand the relationships between things, and truth is completely abstract.
 
  • 1Like
Reactions: 1 user

Tuco

I got Tuco'd!
<Gold Donor>
45,433
73,506
Does this AI pass the turing test yet?
No, but chatbots built for the same purpose as ChatGPT never will because they don't attempt to act like a human, just provide useful responses. I imagine that a chatbot with the same level of technological backing as ChatGPT could if it was designed too though, as long as the evaluator wasn't too informed about the current issues with chatbot tech.

If a rando has a 10 minute conversation with ChatGPT they'll probably be really impressed, but if they read a few articles on how ChatGPT fucks up they'll be able to easily replicate some 2+2=5 behavior. The same could be true if ChatGPT was designed to beat the Turing test.
 

Tuco

I got Tuco'd!
<Gold Donor>
45,433
73,506
In the context of a Turing test that means nothing. Plenty of humans fuck that up too.
Sure, but it can get things wrong in egregious ways that would be an obvious indicator that it's a chatbot and not a human.

Fjjmr6WVQAAE7CV


Even if ChatGPT was trained to pretend to be a human, if it fucks up questions like this it can fail the Turing test.

What's interesting is that the January version of ChatGPT could be bullied into giving very wrong answers, but the early February version of Bing would get extremely upset if you disagreed with it.
 

Tuco

I got Tuco'd!
<Gold Donor>
45,433
73,506
An example of "bullying" is to ask it a sentence that makes no sense. It'll make it fit.

twnpu0fp03fa1.jpg


This kind of reads like the question a protagonist in a movie uses to get out of an impossible robot-prisoner situation.

 

Hoss

Make America's Team Great Again
<Gold Donor>
25,591
12,067
Sure, but it can get things wrong in egregious ways that would be an obvious indicator that it's a chatbot and not a human.

Fjjmr6WVQAAE7CV


Even if ChatGPT was trained to pretend to be a human, if it fucks up questions like this it can fail the Turing test.

What's interesting is that the January version of ChatGPT could be bullied into giving very wrong answers, but the early February version of Bing would get extremely upset if you disagreed with it.


the egg thing might be a better example. Because this one looks totally like a thing a human would do. It got the answer right in the end. The opening statement was just backwards.
 

Captain Suave

Caesar si viveret, ad remum dareris.
4,784
8,096
the egg thing might be a better example. Because this one looks totally like a thing a human would do. It got the answer right in the end. The opening statement was just backwards.
It also can't count digits. The final summary was right but also at odds with all previous points.