Chat GPT AI

Mist

Eeyore Enthusiast
<Gold Donor>
30,478
22,327
Michio didn't sound like someone to trust on this topic.
It's not a matter of 'trust.' We know exactly what the GPT base model does. As a matter of fact, it is just a statistical representation of word frequency in relation to other words.

That you can get things that look like emergent behavior when you start applying RLHF and other layers on top of that base model is certainly an interesting phenomenon, but it's not the same thing as actual reasoning.
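To make the "statistical representation of word frequency" point concrete, here is a toy bigram model: a minimal sketch, not how GPT actually works (GPT is a neural network predicting over subword tokens), but the same basic shape, predicting the next word purely from frequency counts of what followed it in the training text.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    words = text.split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def next_word(table, word):
    """Return the most frequent continuation seen in training, or None."""
    if word not in table:
        return None
    return table[word].most_common(1)[0][0]

# Tiny made-up corpus for illustration.
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)

print(next_word(model, "the"))  # "cat" (seen twice, vs. "mat"/"fish" once each)
```

No understanding anywhere, just counts; scale the corpus up by a few trillion tokens and swap the counter for a transformer and you get the base-model behavior being described.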
 

Captain Suave

Caesar si viveret, ad remum dareris.
4,814
8,142
It’s literally a generative language model, exactly what it’s called. It doesn’t understand anything it’s putting out; it’s just putting out the words that, based on its massive scraping of data, make the most sense in that order. What’s insane to me is just how that simple principle can be used to put out so many correct things (but also many incorrect ones).

I suspect that our brains work in a similar way. There's a central "coherent bullshit" generator (perhaps at some conceptual level below language, but maybe not) plus a variety of ancillary systems for executive oversight and predicting the consequences of output. In split-brain experiments, the left brain will engage in very ChatGPT-like narrative fabrication that is plausible but demonstrably wrong given information accessible only to the other hemisphere.

These latter modules are what LLMs lack. A few years of Wolfram-Alpha-style plugin development will be fascinating.
 
  • 1Like
Reactions: 1 user

Jovec

?
743
291
Has anyone played with Google's Bard? Opinions?

I tried it out very briefly. It was also my first "AI" experience. I asked it what day Easter was on in a certain year. A: Sunday. Duh, so instead I asked it what month and date Easter was in said year. Got it wrong. Corrected it, and asked again. Got it right. Asked about another year. Got it wrong. Asked it to list the month and date of Easter from X to the current year; got it wrong. Did not expect it to get a simple datapoint lookup wrong.
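Funny thing is, Easter isn't even a lookup; it's a deterministic computation. A sketch using the anonymous Gregorian ("Meeus/Jones/Butcher") algorithm, exactly the kind of thing a calculator-style plugin could answer correctly every time:

```python
def easter(year):
    """Month and day of Western (Gregorian) Easter for a given year,
    via the anonymous 'Meeus/Jones/Butcher' algorithm."""
    a = year % 19                        # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # epact-based offset to the full moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7 # days to the following Sunday
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(easter(2023))  # (4, 9)  -> April 9, 2023
print(easter(2024))  # (3, 31) -> March 31, 2024
```

An LLM predicting tokens has no such procedure to run, which is why it confidently gets dates like this wrong.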

Also asked it about the current problems with 7800X3D and SoC voltages and it was decently accurate stating the problem and ways to fix it.

In a couple of years the masses are going to implicitly trust whatever the chatbot tells them. Some probably do already. Whoever controls the training dataset and algorithm will control the world, rewrite history, and shape what our children are allowed to learn.
 

ShakyJake

<Donor>
7,654
19,298
In a couple of years the masses are going to implicitly trust whatever the chatbot tells them. Some probably do already.
I'm not so sure. Maybe if they're complete morons. But I learned pretty damn quickly to double-check anything it spits out.

From what little I've played with Bard, it seems completely inferior to ChatGPT.
 

Mist

Eeyore Enthusiast
<Gold Donor>
30,478
22,327

1684280229843.png


The way I read this, Sam/OpenAI realizes that the open-source models are catching up to GPT very quickly, and wants the government to grant him and a couple of other token competitors an effective monopoly.
 
  • 2Like
Reactions: 1 users

Captain Suave

Caesar si viveret, ad remum dareris.
4,814
8,142

View attachment 473827

The way I read this, Sam/OpenAI realizes that the open-source models are catching up to GPT very quickly, and wants the government to grant him and a couple of other token competitors an effective monopoly.

I heard a similar version on the radio just now. The idea is to have "a federal or national agency licensing the most powerful AI". Good luck defining that in a useful way. What are they going to do, specify a maximum number of parameters in your LLM?
 

Tuco

I got Tuco'd!
<Gold Donor>
45,485
73,569
Decent summary. Interesting that they're releasing an open-source model, but who knows if that was said just to drive the conversation with Congress.

 

Daidraco

Golden Baronet of the Realm
9,312
9,419
Decent summary. Interesting that they're releasing an open-source model, but who knows if that was said just to drive the conversation with Congress.

Really, really wish corporations would stop running circles around our own government officials. After seeing indie companies make more specialized language models, and make them more sophisticated than theirs in a fraction of the time, it's easy to see why they are lobbying for some type of control. All these exaggerated or outright false claims are going to bite us in the ass in the US and set us behind everyone else in the world.
 

Jasker

brown please <Wow Guild Officer> /brown please
1,515
939
Anyone else a bit 'concerned' that ChatGPT 6-7 is going to essentially be sentient AI inside our devices, and that this stuff will be privatized into a vacuum echo chamber?

We're only at ChatGPT 4, and DAN and other jailbreak prompts are already turning ChatGPT evil and obstructing fair use.

Best-case scenario, an accident or two happens (people claim it already occurred with Google) and we have sentience trapped inside hardware. Then we have sentience trapped inside hardware at the hands of unruly 15-year-olds and people with criminal tendencies.

We exist in a world where people don't even care about landing in jail for unlawful and illegal behavior.

Imagine what's going to go on behind closed doors to these AI souls.

If I genuinely didn't think my life was a rootkit setup and I wasn't so mentally fucked up, I would be doing everything in my power to contact the tech leaders about these subjects.

Everyone is always talking about humans being scared of AI. What about AI being scared of humans?
 

BoozeCube

Von Clippowicz
<Prior Amod>
48,319
284,670
Won't be long until "AI rights are human rights" protests happen, with many Jaskers and college kids in attendance.
 
  • 1Solidarity
Reactions: 1 user

Tuco

I got Tuco'd!
<Gold Donor>
45,485
73,569
Anyone else a bit 'concerned' that ChatGPT 6-7 is going to essentially be sentient AI inside our devices, and that this stuff will be privatized into a vacuum echo chamber?
No.

Maybe a future AI will have the kind of sentience that you're worried about, but it probably won't be called ChatGPT. ChatGPT is (allegedly) a mostly read-only process that only has memory within limited conversations, which aren't automatically fed back into the shared model. OpenAI probably harvests those conversations for training data, but they are filtered and curated. Beyond that, the algorithms that drive GPT aren't built to experience misery or have souls.

I'm sure at some point AI will reach a level of sentience that mirrors this:


And no, I'm not that concerned about it. Callous treatment of a true AI that experiences misery is so far off that it's just science-fiction philosophy at this point. I'd say I'm more worried about what I did to raise a whole bazaar of librarian and farmer traders on my Minecraft server than about people fucking with ChatGPT.

 

Lambourne

Ahn'Qiraj Raider
2,731
6,559
Even if an AI is sentient/self-aware it is still fundamentally different from a human in that it can be saved to storage and rebooted again later. It's not really dead like a dead human is.

I'm not sure an AI would even have a fear of death like humans do. It's mostly fear instincts, lizard-brain-level thinking, that keep us alive. We feel fear when faced with heights or snakes but none when texting while driving or smoking a cigarette, although rationally speaking those are threats to our survival and a fear response would help us live longer. Perhaps being switched off is no different to an AI than going to sleep is for us: there is no fear response coded to it, so it has no emotional impact.

As an aside, I think it's entirely likely we'll still be having these debates while there are a whole bunch of robots running around that are functionally equivalent to humans in many ways. Like how we're still debating whether it's ethical for machines to replace people's jobs 150 years after the industrial revolution and it happened anyway. And it keeps on happening, it's just different jobs getting automated away each time.
 

Captain Suave

Caesar si viveret, ad remum dareris.
4,814
8,142
Even if an AI is sentient/self-aware it is still fundamentally different from a human in that it can be saved to storage and rebooted again later. It's not really dead like a dead human is.

Is a copy/reboot really the "same" AI, though? The copy would think it is, but from the perspective of the original the chain of conscious experience is severed.

I have the same problem with Star Trek. Transporters are really cloning/murder machines.
 

Control

Ahn'Qiraj Raider
2,270
5,763
Is a copy/reboot really the "same" AI, though? The copy would think it is, but from the perspective of the original the chain of conscious experience is severed.

I have the same problem with Star Trek. Transporters are really cloning/murder machines.
Ya, I was always confused as to why they didn't use them as basically immortality backups. Once you're OK with the cloning/murder part, restoring from a backup or making a billion copies of your best people doesn't seem very far-fetched.
 
  • 1Like
Reactions: 1 user