ChatGPT AI

ToeMissile

Pronouns: zie/zhem/zer
<Gold Donor>
2,704
1,651
yall brainwashing the fuck outta this thing with powerful autisms?

I've thrown it some curves

It refuses to answer when I ask where I can find a bad pizza place, or to name an Italian who is bad at cooking

Then I tried asking where a good pizza place was, then a less-than-good one, and it gave me a lecture about perpetuating harmful stereotypes

The interesting part was that it typed slower and slower when formulating each successive response
I assume you can use verbiage more in line with "highly/poorly rated" instead of "good/bad"
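If you want to A/B the phrasing against the API instead of the web UI, something like this rough sketch would do it (assumes the early-2023 openai Python package; the model, key, and prompts are placeholders, not what anyone here actually ran):

```python
# Rough sketch: comparing value-judgment phrasing against ratings-based
# phrasing. Assumes the early-2023 openai package and a valid API key;
# the prompts are illustrative placeholders.
import openai

openai.api_key = "sk-..."  # your key here

def ask(prompt: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

# "good/bad" wording tends to trip the stereotype lecture...
print(ask("Name a bad pizza place near downtown Chicago."))
# ...while "rated" wording usually gets a straight answer.
print(ask("Name a poorly rated pizza place near downtown Chicago."))
```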
 
  • 1Like
Reactions: 1 user

Edaw

Parody
<Gold Donor>
12,262
77,690
[Screenshot: ChatGPT conversation, 2023-03-19]
 

Mist

Eeyore Enthusiast
<Gold Donor>
30,383
22,161
There's some micro-training going on inside each conversational instance, and I'm starting to be convinced that my prior instances are affecting new conversations in a subtle way too.
Yes, inside an instance.
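In API terms at least, the "memory" inside an instance is just the transcript being resent on every turn, so each reply is conditioned on everything said so far. A rough sketch, assuming the early-2023 openai Python package:

```python
# Rough sketch of why an instance "learns" mid-conversation: the client
# resends the whole transcript each turn, so every reply is conditioned
# on all earlier turns. Assumes the early-2023 openai package.
import openai

history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,  # the full conversation goes up every time
    )
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply
```

Start a fresh history list and none of it carries over, which is why memory bleeding across instances would be the surprising part.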
 
  • 2Like
Reactions: 2 users

ShakyJake

<Donor>
7,626
19,250
Confirms what I said earlier. It only has links for what is in its dataset, so no revised data.
I have had it inspect data from Pastebin and GitHub, so it can indeed follow links. In fact, I did it just a little bit ago, but it took some coaxing. Granted, those two sites have an API interface, so maybe that's the difference? Regardless, it gets into a state of confusion about what it can and cannot do.
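If anyone wants to verify what those links actually serve, both sites have raw endpoints you can fetch yourself and compare against what ChatGPT claims it read. A rough sketch (requires the requests package; the paste ID is a made-up placeholder, and the GitHub repo is just a well-known public example):

```python
# Sketch for checking what a link actually contains, to compare against
# what ChatGPT claims to have "read". The paste ID is a hypothetical
# placeholder; octocat/Hello-World is a public demo repo.
import requests

def fetch_pastebin(paste_id: str) -> str:
    # Pastebin serves plain text at /raw/<id>
    return requests.get(f"https://pastebin.com/raw/{paste_id}").text

def fetch_github_raw(owner: str, repo: str, branch: str, path: str) -> str:
    # GitHub serves file contents via raw.githubusercontent.com
    url = f"https://raw.githubusercontent.com/{owner}/{repo}/{branch}/{path}"
    return requests.get(url).text

print(fetch_pastebin("abc123XY")[:200])
print(fetch_github_raw("octocat", "Hello-World", "master", "README")[:200])
```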
 
  • 1Like
Reactions: 1 user

pharmakos

soʞɐɯɹɐɥd
<Bronze Donator>
16,306
-2,239
I have had it inspect data from Pastebin and GitHub, so it can indeed follow links. In fact, I did it just a little bit ago, but it took some coaxing. Granted, those two sites have an API interface, so maybe that's the difference? Regardless, it gets into a state of confusion about what it can and cannot do.
If the links are more than two years old, then they're possibly in its training data; that's what we're saying.
 

Captain Suave

Caesar si viveret, ad remum dareris.
4,764
8,029
That is not supposed to work based on what ChatGPT has said about its own product. Very strange.
Given these models' tendency to hallucinate, I don't think their responses should be taken as gospel about anything, including the model itself.
 
  • 2Like
Reactions: 2 users

pharmakos

soʞɐɯɹɐɥd
<Bronze Donator>
16,306
-2,239
Given these models' tendency to hallucinate, I don't think their responses should be taken as gospel about anything, including the model itself.
I'm not talking about what ChatGPT says about itself. I'm talking about things I've read that were written by the developers.
 

Sanrith Descartes

Veteran of a thousand threadban wars
<Aristocrat╭ರ_•́>
41,465
107,518
Something else I noticed; this is limited to the same session so far. I asked a generic question about how XXX law applies in a given situation. It responded and said blah blah blah, depending on the state. I realized I had forgotten to mention the state, so I copied the previous query and started a new one by typing "in NY", then hit enter by mistake before pasting the original query after "in NY". I assumed it would say "In NY what?".

Nope. It re-answered my first query but focused the answer on NY. So it assumed that my typing "in NY" was a clarification of my previous question, just like what would happen in an actual conversation. Nothing earth-shattering, but interesting that it really does try to carry on a conversation.
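If the web UI behaves like the chat API, that terse follow-up went up together with the earlier exchange, so the model read "in NY" as a refinement of the pending question. A rough sketch, assuming the early-2023 openai Python package (the wording is illustrative):

```python
# Sketch of the "in NY" follow-up: the terse second message is sent
# along with the first exchange, so the model reads it as a refinement.
# Assumes the early-2023 openai package; the prompts are illustrative.
import openai

messages = [
    {"role": "user", "content": "How does XXX law apply in the following situation? ..."},
    {"role": "assistant", "content": "Blah blah blah, depending on the state..."},
    {"role": "user", "content": "in NY"},  # terse follow-up, no restated question
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
# The reply re-answers the first question, scoped to New York.
print(response["choices"][0]["message"]["content"])
```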
 
  • 1Like
Reactions: 1 user

velk

Trakanon Raider
2,535
1,125
But they still told it what to do.

That's true, but I think that misses the main thing people are surprised by.

It's like if you say 'Alexa, burn down my house' and it does: the fact that you told it to do so doesn't make it any less notable.
 
  • 1Like
Reactions: 1 user

Mist

Eeyore Enthusiast
<Gold Donor>
30,383
22,161
That's true, but I think that misses the main thing people are surprised by.

It's like if you say 'Alexa, burn down my house' and it does: the fact that you told it to do so doesn't make it any less notable.
Also, just take the whole thing with a grain of salt, because the group of people who performed these tests are basically a cult who believe in nonsense like Roko's basilisk. I have tried many times to understand what they view as so profoundly dangerous about it and come up with nothing.
 
  • 1Like
Reactions: 1 user

ShakyJake

<Donor>
7,626
19,250
Something else I noticed; this is limited to the same session so far. I asked a generic question about how XXX law applies in a given situation. It responded and said blah blah blah, depending on the state. I realized I had forgotten to mention the state, so I copied the previous query and started a new one by typing "in NY", then hit enter by mistake before pasting the original query after "in NY". I assumed it would say "In NY what?".

Nope. It re-answered my first query but focused the answer on NY. So it assumed that my typing "in NY" was a clarification of my previous question, just like what would happen in an actual conversation. Nothing earth-shattering, but interesting that it really does try to carry on a conversation.
I wonder if it's looking at your browser cookies or its local storage. Try the same thing from two different computers, or in an incognito window.
 
  • 1Like
Reactions: 1 user

pharmakos

soʞɐɯɹɐɥd
<Bronze Donator>
16,306
-2,239
So I like to have some very esoteric conversations with ChatGPT. I asked it to explain the difference between the concepts of Nous and Logos in various religious/philosophical traditions, and in its answer it went into the term "logos spermatikos"... a MUCH MUCH less common and more specific term than either Logos or Nous, but a term that I've brought up in many previous chat sessions. The statistical likelihood of it bringing up that particular term in that particular context for any reason OTHER than the fact that I, the user, had mentioned it in previous conversation instances is slim to none.

I'm pretty sure it's remembering previous conversations.

I've seen similar behavior before that made me wonder, but only in the fiction I have it generate, so I was never quite sure.