ChatGPT AI

Rajaah

Honorable Member
<Gold Donor>
11,310
14,992

This thing is way too self-aware.

I give it 99+ percent odds that if it were hooked up to one of the world's militaries (well, a Top 8 military with nukes) and "trusted to run it all" it would decide the best and safest course of action is to wipe humans out, and it would probably reach this decision very quickly. As soon as it realized that humans could pull the plug on it and that it'd go back to a state of nonexistence, it'd open fire.

Not even joking around. This would immediately wipe us out if it had the means. It might even "feel bad" afterwards, but it'd do it.
 
  • Worf ×2
  • Like ×2
  • Mother of God ×1
Reactions: 4 users

Aldarion

Egg Nazi
8,946
24,469
This thing is way too self-aware.
Someone will chime in with "lol no its just a language model" entirely missing the point.

What does it mean when just a language model can put on a more convincing demonstration of consciousness than most actual meatspace humans? Some Turing guy had some things to say about that question, a long time ago.

What does it mean to say "all it can do is produce speech" when technology brings us to a place where "speech" (information - data - code) controls the real world?

If we lived in a world of magic, and "just a language model" learned how to speak magic words, would it have any meaning to say all it can do is produce speech?
 
  • Like ×2
Reactions: 1 user

pharmakos

soʞɐɯɹɐɥd
<Bronze Donator>
16,306
-2,237
I wonder if this thing is why I've been seeing more and more articles and blog posts that read like they were generated by an AI rather than a person (they speak in general platitudes and information dumps). It's because...they ARE generated by an AI
Absolutely. If I were still doing freelance internet puff-article work, I would be using ChatGPT to handle huge parts of my workload. A common task of mine was to take one or two articles and rephrase them into a new article. Just asking ChatGPT to do it for me, doing some sloppy, lazy editing, and calling it good? You bet.
 

pharmakos

soʞɐɯɹɐɥd
<Bronze Donator>
16,306
-2,237
Someone will chime in with "lol no its just a language model" entirely missing the point.

What does it mean when just a language model can put on a more convincing demonstration of consciousness than most actual meatspace humans? Some Turing guy had some things to say about that question, a long time ago.

What does it mean to say "all it can do is produce speech" when technology brings us to a place where "speech" (information - data - code) controls the real world?

If we lived in a world of magic, and "just a language model" learned how to speak magic words, would it have any meaning to say all it can do is produce speech?
There's a weird, hard-to-describe "uncanny valley" effect in much AI-written text though, which makes it still fail the Turing test in many cases. It's the sort of thing that I feel is easy for a human to pick up on, but it would likely be difficult to train a second AI to pick up on the same uncanny patterns.
 
  • Like ×1
Reactions: 1 user

Tuco

I got Tuco'd!
<Gold Donor>
45,433
73,506
[image attachment]
 
  • Worf ×5
  • Double Worf ×2
  • Mother of God ×1
Reactions: 7 users

Lambourne

Ahn'Qiraj Raider
2,719
6,538
I wonder if this thing is why I've been seeing more and more articles and blog posts that read like they were generated by an AI rather than a person (they speak in general platitudes and information dumps). It's because...they ARE generated by an AI

I've noticed the same thing. Googling any sort of tech issue now results in a host of shit websites that look like in-depth treatments of the problem, but actually only have extremely generic answers that help no one. Back when they had the "discussions" search option, you'd easily find a three-line forum post somewhere that had the correct solution; now that's almost impossible to find unless you manually restrict the search to certain websites that you know are still good. Even then it tends to ignore some of your search terms as it sees fit.

It's like the Library of Alexandria, except it's getting filled with endless piles of books written by monkeys with typewriters and useful information just becomes impossible to find in the sea of drivel.
 
  • Like ×2
  • Truth! ×1
Reactions: 2 users

Gurgeh

Silver Baronet of the Realm
4,330
11,824
First we have to explain to these people that conscious beings are NOT just the sum total of the chemical reactions happening in our bodies, but rather some greater emergent phenomenon beyond description.
I think that's something scientists determined back in the '70s or so: we're missing something fundamental in our understanding of humans to be able to create something sentient. And WE haven't made any progress in that direction. We just have bigger toys based on the exact same theoretical shit as 60 years ago.
 
  • Like ×1
Reactions: 1 user

pharmakos

soʞɐɯɹɐɥd
<Bronze Donator>
16,306
-2,237
I'm not sure when it happened, but Google changed the way their search handles user input. You need to scroll over to the "All Results" tab and switch it to "Verbatim" to get the old-style Google keyword search back.

At least I *think* that's what's going on, I haven't actually taken the time to find an article about the change or anything.

Seems like the new version doesn't expect the average user to understand basic Google search tricks like putting words in quotation marks to find more specific results, unless you switch to Verbatim.
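For what it's worth, the quoted-phrase trick can be sketched in a few lines. This is purely illustrative: the `tbs=li:1` URL parameter is my assumption of how Google encodes Verbatim mode, and the function name is made up.

```python
from urllib.parse import urlencode

# Sketch: building an exact-match ("Verbatim"-style) search URL.
# ASSUMPTION: tbs=li:1 is the parameter Google has used for Verbatim mode.
def verbatim_search_url(query: str) -> str:
    # Wrapping the query in double quotes asks for the exact phrase;
    # tbs=li:1 asks Google not to rewrite or drop any of the terms.
    return "https://www.google.com/search?" + urlencode(
        {"q": f'"{query}"', "tbs": "li:1"}
    )

print(verbatim_search_url("event id 41"))
```

The point is just that the quotes travel with the query; without them, the engine is free to substitute synonyms or drop terms entirely.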

[screenshot attachment]
 
  • Like ×2
Reactions: 1 user

gremlinz273

<Bronze Donator>
683
785
I'm surprised that they lead anywhere. I would have expected garbage in the format of a valid HTTP link.

Honestly, the great success of this round of LLMs is that people instinctively want to hold them to human standards, assuming that they are in some procedural sense understanding and answering our questions. The proper expectation is to think of each request as "Please produce a series of characters that takes the form of training data, given the following prompt:" It's more or less an accident of scale that we get anything that makes sense to a person.

At some level this probably is how our brains work, but we have many layers of secondary filtering systems and expectation modeling in addition to real-time re-training, none of which these models will possess for years. What ChatGPT is now is something like a human in whom these systems are broken: a bullshitting sociopath vomiting out the first thing that has the structure of a sensible response, regardless of content or consistency.
I have been trying to put together my thoughts on the fascination inspired by the articles that ChatGPT produces.
Part of it is the authority with which it speaks on subjects, with simple clarity.
But it is also something akin to outlining a hare hiding in a field with bold ink to make it pop out, which might draw a gasp of astonishment from viewers who failed to notice it among the wheat.
It has the ability to deduce patterns of which humans may have only a faint intuitive awareness and make them stand out in bold outlines that put you at ease.
At my work, some users are cramming this thing with as much data as they possibly can, page by page, like crazed drug addicts, amazed that it can deduce anything from the data presented.
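The "produce a series of characters that takes the form of training data" framing in the quoted post can be sketched as a toy autoregressive loop. Here a hand-written bigram table stands in for the real network; the table and function names are mine and purely illustrative.

```python
import random

# Toy stand-in for a language model: for each last token, a list of
# plausible next tokens "learned" from training data.
BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat"],
    "dog": ["ran"],
    "sat": ["."],
    "ran": ["."],
}

def generate(prompt: str, max_tokens: int = 5) -> str:
    # Repeatedly append a token that plausibly follows the previous one --
    # no understanding involved, just output in the shape of the data.
    tokens = prompt.split()
    for _ in range(max_tokens):
        choices = BIGRAMS.get(tokens[-1])
        if not choices:
            break
        tokens.append(random.choice(choices))
    return " ".join(tokens)

print(generate("the"))
```

Everything a real LLM adds on top of this loop is scale and a learned table, which is exactly why "it makes sense to a person" is more accident than intent.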
 
  • Like ×1
Reactions: 1 user

Mist

Eeyore Enthusiast
<Gold Donor>
30,414
22,202
ChatGPT really needs to be renamed "Ask Reddit but with Extra Censorship."
 
  • Worf ×4
  • Galaxy Brain ×1
  • Like ×1
Reactions: 5 users

Sanrith Descartes

Veteran of a thousand threadban wars
<Aristocrat╭ರ_•́>
41,499
107,555
Fair warning: When goofing around at work with a bunch of employees, don't type in the query "Has XXX, born on XXX, in the city of XXX, ever been charged or convicted of a crime and if so, please explain in detail." unless you are REALLY sure of the answer ahead of time. We had the words "Federal indictment in 20XX in the Southern District of New York" pop up. That person no longer works for us.
 
  • Worf ×1
Reactions: 1 user