Chat GPT AI

Asshat wormie

2023 Asshat Award Winner
<Gold Donor>
16,820
30,964
It would have made sense if you knew what the fuck you were talking about. Each individual response to a prompt has a random seed attached. Trying multiple seeds and filtering out the portions of each seed's output that don't match the average output will lead to more accurate results. Make sense now? It would just take multiplicatively more processing power, and they already can't keep up with user demand.
No, it doesn't make sense, you idiot. Random seeds aren't attached to any individual prompts, and if by "trying multiple seeds" you mean setting some exact value, then you destroy any randomness in predictions and it will not lead to actually accurate results. Keep posting though, retard, because it makes it obvious you truly don't understand what the fuck you are talking about.
 

pharmakos

soʞɐɯɹɐɥd
<Bronze Donator>
16,306
-2,236
No, it doesn't make sense, you idiot. Random seeds aren't attached to any individual prompts, and if by "trying multiple seeds" you mean setting some exact value, then you destroy any randomness in predictions and it will not lead to actually accurate results. Keep posting though, retard, because it makes it obvious you truly don't understand what the fuck you are talking about.
Seeds aren't attached to prompts, and I never made such a claim. They're attached to the output each time you make a query. The random seed behind the scenes is the reason why identical queries at different times produce different responses.
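
To make the mechanics concrete, here is a minimal sketch of per-query seeded sampling, with a toy next-token distribution and numpy standing in for a real model (the vocabulary and scores are made up for illustration). The seed only picks which draw you get from the distribution: reusing a seed reproduces the same output, while a fresh seed gives a different one.

import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                              # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def sample_completion(seed, length=5, temperature=0.8):
    # Hypothetical tiny vocabulary and scores; a real model would compute
    # fresh logits from the prompt and prior tokens at every step.
    vocab = ["yes", "no", "maybe", "42", "banana"]
    logits = [2.5, 1.0, 0.3, 0.1, -1.0]
    rng = np.random.default_rng(seed)         # the per-query random seed
    probs = softmax(logits, temperature)
    return " ".join(vocab[rng.choice(len(vocab), p=probs)] for _ in range(length))

print(sample_completion(seed=1))   # one seed -> one particular output
print(sample_completion(seed=1))   # same seed -> identical output
print(sample_completion(seed=7))   # different seed -> (almost certainly) different output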
 

Asshat wormie

2023 Asshat Award Winner
<Gold Donor>
16,820
30,964
Seeds aren't attached to prompts, and I never made such a claim. They're attached to the output each time you make a query. The random seed behind the scenes is the reason why identical queries at different times produce different responses.
They aren't attached to outputs. What the fuck are you talking about, you retarded chimp? And if they are, how is taking specific values of these seeds going to guarantee an increase in model accuracy and not just give you the exact same outputs? These are probabilistic models; how is removing randomness doing anything other than destroying the model's predictive power? Holy shit, you are amazingly stupid.
 

pharmakos

soʞɐɯɹɐɥd
<Bronze Donator>
16,306
-2,236
They aren't attached to outputs. What the fuck are you talking about, you retarded chimp? And if they are, how is taking specific values of these seeds going to guarantee an increase in model accuracy and not just give you the exact same outputs? These are probabilistic models; how is removing randomness doing anything other than destroying the model's predictive power? Holy shit, you are amazingly stupid.
Because the anomalous outputs are rare (this thing can pass an MBA exam, remember). Instead of producing one output per prompt, behind the scenes they would produce multiple outputs per prompt. Then any output that didn't match the average of the whole set would be filtered out as anomalous.

This honestly isn't that difficult to understand. I know I have a reputation here of being an idiot but you're truly the one being stupid at the moment. Take two seconds and forget who you're talking to and use your god damn brain eh?
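
A minimal sketch of that multiple-outputs-then-filter idea, with a hypothetical ask_model() helper standing in for the real API: sample the same prompt several times with different seeds, keep the answer the bulk of the samples agree on, and drop the rest as anomalous.

from collections import Counter
import random

def ask_model(prompt, seed):
    # Hypothetical stand-in for one sampled model response: usually the modal
    # answer, occasionally an anomalous one. The prompt is ignored in this toy.
    rng = random.Random(seed)
    return rng.choice(["Paris"] * 5 + ["Lyon"])

def consensus_answer(prompt, n_samples=9):
    samples = [ask_model(prompt, seed) for seed in range(n_samples)]
    best, best_count = Counter(samples).most_common(1)[0]
    dropped = [s for s in samples if s != best]    # the outputs that get filtered out
    return best, best_count / n_samples, dropped

answer, agreement, dropped = consensus_answer("What is the capital of France?")
print(answer, f"agreement={agreement:.0%}", "dropped:", dropped)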
 

pharmakos

soʞɐɯɹɐɥd
<Bronze Donator>
16,306
-2,236
It still wouldn't guarantee perfectly accurate output, but it would certainly limit how much of the output deviates far from the training data.
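
And a toy simulation of that caveat, under the made-up assumption that any single sampled output is anomalous 15% of the time: a 9-way majority vote cuts the anomaly rate sharply, but not to zero.

import random

def one_sample_is_anomalous(rng, p_anomaly=0.15):
    # Assumption for illustration only: a single sample is anomalous 15% of the time.
    return rng.random() < p_anomaly

def majority_is_anomalous(rng, n=9, p_anomaly=0.15):
    # The vote only goes wrong when most of the n samples are anomalous at once.
    return sum(one_sample_is_anomalous(rng, p_anomaly) for _ in range(n)) > n // 2

rng = random.Random(0)
trials = 100_000
single = sum(one_sample_is_anomalous(rng) for _ in range(trials)) / trials
voted = sum(majority_is_anomalous(rng) for _ in range(trials)) / trials
print(f"single-sample anomaly rate: {single:.2%}")    # around 15%
print(f"9-way majority anomaly rate: {voted:.2%}")    # far lower, but not zero

Voting also can't rescue a question where the model's most likely answer is itself wrong; the consensus just converges on that wrong answer, which is why this limits deviation rather than guaranteeing accuracy.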
 

Asshat wormie

2023 Asshat Award Winner
<Gold Donor>
16,820
30,964
OK that's enough. Fuck off to ignore, retard. Someone else can continue this if they are a fucking masochist.
 
  • 2 Like
Reactions: 1 user

pharmakos

soʞɐɯɹɐɥd
<Bronze Donator>
16,306
-2,236
OK that's enough. Fuck off to ignore, retard. Someone else can continue this if they are a fucking masochist.
Love you 💙

Genuinely my FoH experience is about to vastly improve if wormie truly is putting me on ignore. Today is a good day
 

Sanrith Descartes

Veteran of a thousand threadban wars
<Aristocrat╭ರ_•́>
41,501
107,559
A 24-hour respite has been established. This is a Pharma-free zone until tomorrow.
 
  • 2 Like
Reactions: 1 user

Tuco

I got Tuco'd!
<Gold Donor>
45,434
73,508
Pharma trying to inject some sense into this thread and gets shut down by big neural net. Imagine our society if we could just try calling rand() and check if the output is good.
 
  • 1 Mother of God
Reactions: 1 user

pwe

Bronze Baronet of the Realm
883
6,138
Which is not surprising. People will keep trying to make it say stupid things, and when they succeed, that's what gets posted instead. It can't possibly always give an answer that makes everyone happy.
 

Kaines

Potato Supreme
16,905
46,094
Which is not surprising. People will keep trying to make it say stupid things, and when they succeed, that's what gets posted instead. It can't possibly always give an answer that makes everyone happy.
Doubly so when it always chooses the stupidest answers possible
 

Tuco

I got Tuco'd!
<Gold Donor>
45,434
73,508
Number of times a person has experienced negative results from failing to stop a nuclear bomb: 0? 10?
Number of times a person has experienced negative results from uttering a racial slur: Millions? Billions?

Protip: Don't get your morality from ChatGPT.
 
  • 1 Like
Reactions: 1 user

Tuco

I got Tuco'd!
<Gold Donor>
45,434
73,508
[attached image: axj0pyqpbjga1.png]


This is the perfect response to people who exceed some kind of annoyance threshold for ChatGPT.
 

Lambourne

Ahn'Qiraj Raider
2,720
6,538
AI Asmongold, watch the first few minutes if nothing else. Voice, speech patterns, even the graphic is on point.

 
  • 2 Worf
  • 1 Like
Reactions: 2 users