Investing General Discussion

Mist

Eeyore Enthusiast
<Gold Donor>
30,455
22,286
The dude responsible for Worldcoin was never going to be a good guy. That project is legitimately evil. I suspect everything else he does will be awful as well.
Me and Flobee are in 100% agreement; things have gotta be fucking fucked.

We're doomed.
 

Mist

Eeyore Enthusiast
<Gold Donor>
30,455
22,286
So they've been trying to oust him, and their previous attempt failed.
[attached screenshot]
 

Burns

Golden Baronet of the Realm
6,173
12,417
Bloomberg is printing this rumor today, so it may have some legs. The rumor is that he was trying to start a chip company to rival Nvidia:
[screenshot: bloomberg.com headline]

First 3 paragraphs:
[screenshot: first three paragraphs of the Bloomberg article]

 

Captain Suave

Caesar si viveret, ad remum dareris.
4,801
8,133
The board stands firm and Altman is still out. Murati is out as interim CEO now, replaced by Emmett Shear, co-founder of Twitch. Such a great choice. I know when I need real guidance on something I go right to fucking Twitch...

 
  • 5Worf
Reactions: 4 users

Mist

Eeyore Enthusiast
<Gold Donor>
30,455
22,286
The board stands firm and Altman is still out. Murati is out as interim CEO now, replaced by Emmett Shear, co-founder of Twitch. Such a great choice. I know when I need real guidance on something I go right to fucking Twitch...



[attached screenshot]


OpenAI pivoting to Thot GPTs, confirmed?


The drama continues!
 
  • 1Worf
  • 1Like
Reactions: 1 users

Mist

Eeyore Enthusiast
<Gold Donor>
30,455
22,286
Also, Meta trying to bury this in the shuffle:

[attached screenshot]


So the people handing out open-source models to the world have no AI safety team anymore. Cool.

Humanity, going just great.

OH SHIT, new news:


Shit moves fast.
 
  • 2Worf
  • 1Like
Reactions: 2 users

Aldarion

Egg Nazi
8,949
24,477
Weren't those "safety" teams primarily responsible for making sure LLMs don't say anything that goes against woke orthodoxy?
 
  • 3Like
Reactions: 2 users

Mist

Eeyore Enthusiast
<Gold Donor>
30,455
22,286
Weren't those "safety" teams primarily responsible for making sure LLMs don't say anything that goes against woke orthodoxy?
Only in the minds of crazy people obsessed with anti-woke. They mostly exist to make sure the models aren't used as cyberweapons. I saw a demo just last week that could automatically generate exploit payloads for over a decade's worth of legacy vulnerabilities, the kind often found on unpatched internet-facing systems. And even if a system isn't internet-facing, once an attacker gets inside the network by some other means, boom.
 
  • 1Truth!
Reactions: 1 user

Aldarion

Egg Nazi
8,949
24,477
"mostly" is doing a hell of a heavy lift in that sentence. So you are not disputing that they were there to root out Thought Crime, just saying they were "mostly" responsible for other stuff?
 
  • 3Like
Reactions: 2 users

Mist

Eeyore Enthusiast
<Gold Donor>
30,455
22,286
"mostly" is doing a hell of a heavy lift in that sentence. So you are not disputing that they were there to root out Thought Crime, just saying they were "mostly" responsible for other stuff?
These are products, mainly built for inclusion in deployable enterprise applications. No one wants a customer-facing chatbot that starts yelling "HITLER WAS RIGHT" in the middle of a conversation about returning the menorah grandma bought from HomeGoods. It's not "woke", it's basic business functionality. Given the history of internet-facing bots, this is a serious business risk.

[attached screenshot]


Another thing the safety teams do is make sure LLMs don't leak internal or proprietary data, or their initial prompts, during interactions.
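
To make that concrete, here's roughly the shape of the output check those teams bolt onto a customer-facing bot. This is a toy sketch with made-up names (AcmeCorp, guard_output), not anyone's actual pipeline; real deployments layer trained classifiers and moderation endpoints on top of the model itself.

[CODE]
# Toy guardrail sketch: refuse to pass along model replies that echo the
# system prompt or obviously internal material. All names are illustrative.

SYSTEM_PROMPT = "You are AcmeCorp's returns assistant. Internal policy doc: ..."
BLOCKED_MARKERS = ["internal policy doc", "api_key", "acmecorp confidential"]
REFUSAL = "Sorry, I can't share that."

def guard_output(model_reply: str) -> str:
    """Screen a model reply before it reaches the customer."""
    lowered = model_reply.lower()
    if SYSTEM_PROMPT.lower() in lowered:
        return REFUSAL
    if any(marker in lowered for marker in BLOCKED_MARKERS):
        return REFUSAL
    return model_reply

# Example: a reply that parrots internal material gets swallowed.
print(guard_output("Sure! The internal policy doc says refunds take 3 days."))
# -> "Sorry, I can't share that."
[/CODE]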
 
  • 1Picard
  • 1Moron
Reactions: 1 users

Palum

what Suineg set it to
23,534
33,945
Is it known whether MS has some sort of right of first refusal on OpenAI assets? Seems like: take all the employees, tank the company, buy it at a discount, provided you can keep competitors out?
 

Mist

Eeyore Enthusiast
<Gold Donor>
30,455
22,286
Is it known whether MS has some sort of right of first refusal on OpenAI assets? Seems like: take all the employees, tank the company, buy it at a discount, provided you can keep competitors out?
They already have the model weights for the models that have already been trained. They probably have the training sets too.

There are very few other assets they might even want.

The code that generates new foundational models from a training set is actually quite small, laughably small compared to something like Windows. The key employees could bang it out in an afternoon over a few pizzas.
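
For a sense of scale, the core of that code is basically the shape below. This is a toy PyTorch sketch with made-up sizes, nowhere near the actual GPT-4 stack; real runs add tokenization, distributed sharding, checkpointing and so on, but the training loop itself really is about this small.

[CODE]
# Toy next-token-prediction training loop (PyTorch). Sizes and data are
# placeholders; a real foundation-model run swaps in a tokenized corpus,
# a much bigger model, and distributed training, but keeps this structure.
import torch
import torch.nn as nn

vocab_size, d_model, n_layers, seq_len, batch = 50_000, 512, 8, 256, 8

embed = nn.Embedding(vocab_size, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=n_layers,
)
head = nn.Linear(d_model, vocab_size)
params = list(embed.parameters()) + list(encoder.parameters()) + list(head.parameters())
opt = torch.optim.AdamW(params, lr=3e-4)
loss_fn = nn.CrossEntropyLoss()
causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)

for step in range(1_000):  # real runs: millions of steps over trillions of tokens
    tokens = torch.randint(0, vocab_size, (batch, seq_len + 1))  # stand-in batch
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    logits = head(encoder(embed(inputs), mask=causal_mask))  # predict next token
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
[/CODE]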
 

Palum

what Suineg set it to
23,534
33,945
They already have the model weights for the models that have already been trained. They probably have the training sets too.

There are very few other assets they might even want.

The code that generates new foundational models from a training set is actually quite small, laughably small compared to something like Windows. The key employees could bang it out in an afternoon over a few pizzas.
Yes, but it's all under license, right? So they would want to acquire it so their competitors don't get the same.
 

Mist

Eeyore Enthusiast
<Gold Donor>
30,455
22,286
Yes, but it's all under license, right? So they would want to acquire it so their competitors don't get the same.
Sure, though I'm not sure their competitors want two-year-old model weights at this point; some might. The pruned training sets are probably valuable, but not billions-of-dollars valuable.

for anyone that wants to read what the former Twitch CEO has to say about this.

My outside read on all of this is that the technology, in its current state, is already near maxed out, and really only good for making consumer products and enterprise subscription assistant thingies. GPT-5 is going to cost a lot to train and not be that much better than GPT-4. All signs point to GPTs being unable to generalize outside of their training sets, i.e. unable to think of things humans haven't thought of before. It will be productized in a lot of ways, but not world-changing ones.
 
  • 2Like
Reactions: 1 users

Zog

Blackwing Lair Raider
1,729
2,250
Sure, though I'm not sure their competitors want two-year-old model weights at this point; some might. The pruned training sets are probably valuable, but not billions-of-dollars valuable.

for anyone that wants to read what the former Twitch CEO has to say about this.

My outside read on all of this is that the technology, in its current state, is already near maxed out, and really only good for making consumer products and enterprise subscription assistant thingies. GPT-5 is going to cost a lot to train and not be that much better than GPT-4. All signs point to GPTs being unable to generalize outside of their training sets, i.e. unable to think of things humans haven't thought of before. It will be productized in a lot of ways, but not world-changing ones.

Of course, it's just a more robust script for NPC dialogue. The negatives far outweigh the positives IMO: yes, you can save money on customer service labor, to an extent, and I'm sure the "good" scripts won't be cheap and will probably require annual licensing, but the majority of people buying your shit won't have jobs anymore.

It's an awful lot like the current theory on our supply chains: zero inventory stock to save money, with same-day or next-day delivery. If anything fucks up, the entire model collapses and you can't sell anything.

Unfortunately for the market, AI is the only current thing to look forward to for "growth."
 

Tuco

I got Tuco'd!
<Gold Donor>
45,451
73,541
Sure, though I'm not sure their competitors want two-year-old model weights at this point; some might. The pruned training sets are probably valuable, but not billions-of-dollars valuable.

for anyone that wants to read what the former Twitch CEO has to say about this.

My outside read on all of this is that the technology, in its current state, is already near maxed out, and really only good for making consumer products and enterprise subscription assistant thingies. GPT-5 is going to cost a lot to train and not be that much better than GPT-4. All signs point to GPTs being unable to generalize outside of their training sets, i.e. unable to think of things humans haven't thought of before. It will be productized in a lot of ways, but not world-changing ones.
Feels like anyone willing to take this offer isn't a good fit. Like asking who wants to be the next captain while the ship is on fire and sinking.
 
  • 1Like
Reactions: 1 user