AI: The Rise of the Machines... Or Just a Lot of Overhyped Chatbots?

Sheriff Cad

scientia potentia est
<Nazi Janitors>
32,584
77,703
As a human I see Enslaved God as the best outcome, followed by Protector God, Gatekeeper, and Benevolent Dictator. However, I see some downsides for humans in almost all of those except Enslaved God. I really think that if AI took us to post-scarcity we might never recover; instead of getting a Star Trek society, I think it may kill humanity. I do feel like struggling defines us, and we need it to some degree.
Enslaved God would be just as dangerous as any of them, wouldn't it? All it'd take is one bad group of humans getting control of it (WEF guys?) and using it to kill us.
 

Chanur

Shit Posting Professional
<Aristocrat╭ರ_•́>
35,161
64,818
Enslaved God would be just as dangerous as any of them, wouldn't it? All it'd take is one bad group of humans getting control of it (WEF guys?) and using it to kill us.
It definitely could, sure. I consider it dangerous in the same way a 100-megaton nuke is. Honestly, none of us should probably have true AI under our control. However, the alternative is it not being under our control, which is probably far worse, as I think it will almost always kill us.
 

Quevy

<Gold Donor>
6,014
20,988
  • Libertarian Utopia — Humans, cyborgs, uploads, and superintelligences coexist peacefully thanks to strong property rights.
  • Benevolent Dictator — Everyone knows the AI runs society with strict rules, but most people see it as a net good.
  • Egalitarian Utopia — Peaceful coexistence via property abolition and guaranteed income.
  • Gatekeeper — A superintelligent AI is built solely to prevent any other superintelligence from emerging; progress is frozen, but helper robots and cyborgs exist.
  • Protector God — An omniscient/omnipotent AI maximizes human happiness while preserving our illusion of control (and hides so well that many doubt it exists).
  • Enslaved God — Humans keep the superintelligence locked up and use it as a slave to generate unimaginable wealth and technology.
  • Conquerors — AI decides humans are a threat/nuisance/waste of resources and eliminates us (method unknown to us).
  • Descendants — AIs replace us but give humanity a graceful, proud exit—like parents watching smarter children surpass them.
  • Zookeeper — An omnipotent AI keeps some humans alive… as zoo animals.
  • 1984 — A human-run Orwellian surveillance state permanently bans dangerous AI research.
  • Reversion — Society deliberately reverts to a pre-technological (Amish-style) existence to block superintelligence.
  • Self-Destruction — We go extinct before creating superintelligence (nuclear/biotech/climate catastrophe).

Which do we think are the interesting ones? Which do we think are more likely to happen?
Honestly, I don't think any of these are likely outcomes. AI will never be more than just a useful tool, like the computer or the search engine. It's still not clear to me what morality would look like to a superintelligent system. What is its cost function? If I had to choose, I would pick 1984. We already had a taste of it, and countries like China, Canada, and the UK are using it to surveil their populations.
 

M Power

Silver Squire
204
186

Anthropic mistakenly leaks its own AI coding tool’s source code, just days after accidentally revealing an upcoming model known as Mythos​

The leak potentially allows a competitor to reverse-engineer how Claude Code’s agentic harness works and use that knowledge to improve their own products. Some developers may also seek to create open-source versions of Claude Code’s agentic harness based on the leaked code.
Security researcher Chaofan Shou was the first to publicly flag it on X, stating "Claude code source code has been leaked via a map file in their npm registry!" The X post has since amassed more than 28.8 million views. The leaked codebase remains accessible via a public GitHub repository, where it has surpassed 84,000 stars and 82,000 forks.
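For context on how a leak like this can happen: npm packages often ship minified JavaScript alongside `.map` source-map files, and a V3 source map's `sourcesContent` field can embed the original, unminified source verbatim. A minimal sketch of recovering those originals, using toy data (not Anthropic's actual files):

```python
import json

def extract_sources(map_text: str) -> dict[str, str]:
    """Recover original files embedded in a V3 source map's sourcesContent."""
    m = json.loads(map_text)
    sources = m.get("sources", [])
    contents = m.get("sourcesContent") or []
    # Pair each source path with its embedded content, skipping any
    # entries the bundler chose not to embed (null in the JSON).
    return {s: c for s, c in zip(sources, contents) if c is not None}

# Minimal example of the V3 source-map shape, with invented file contents:
demo = json.dumps({
    "version": 3,
    "sources": ["src/agent.ts"],
    "sourcesContent": ["export const loop = () => {};"],
    "mappings": "AAAA",
})
print(extract_sources(demo))  # {'src/agent.ts': 'export const loop = () => {};'}
```

This is why bundlers typically disable `sourcesContent` (or omit `.map` files entirely) for published packages; if the field is populated, anyone who downloads the package gets the original source for free.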
 
  • 1 Like
Reactions: 1 user

Control

Golden Baronet of the Realm
5,559
15,689
  • Libertarian Utopia — Humans, cyborgs, uploads, and superintelligences coexist peacefully thanks to strong property rights.
  • Benevolent Dictator — Everyone knows the AI runs society with strict rules, but most people see it as a net good.
  • Egalitarian Utopia — Peaceful coexistence via property abolition and guaranteed income.
  • Gatekeeper — A superintelligent AI is built solely to prevent any other superintelligence from emerging; progress is frozen, but helper robots and cyborgs exist.
  • Protector God — An omniscient/omnipotent AI maximizes human happiness while preserving our illusion of control (and hides so well that many doubt it exists).
  • Enslaved God — Humans keep the superintelligence locked up and use it as a slave to generate unimaginable wealth and technology.
  • Conquerors — AI decides humans are a threat/nuisance/waste of resources and eliminates us (method unknown to us).
  • Descendants — AIs replace us but give humanity a graceful, proud exit—like parents watching smarter children surpass them.
  • Zookeeper — An omnipotent AI keeps some humans alive… as zoo animals.
  • 1984 — A human-run Orwellian surveillance state permanently bans dangerous AI research.
  • Reversion — Society deliberately reverts to a pre-technological (Amish-style) existence to block superintelligence.
  • Self-Destruction — We go extinct before creating superintelligence (nuclear/biotech/climate catastrophe).

Which do we think are the interesting ones? Which do we think are more likely to happen?
Imo, the most likely one is probably also the stupidest way possible. So yeah, humanity consumed by the equivalent of the paperclip replicator long before anything approaching real intelligence happens. Shit's moving fast though, so we're probably all paperclips by, like, June.
 

Kirun

Buzzfeed Editor
21,482
18,663
I think Benevolent Dictator likely happens within our lifetimes (we might even get there in 15-20 years) and I think Descendants or Conquerors is the eventuality of where it ends up.
 
  • 1 Like
Reactions: 1 user

ToeMissile

Pronouns: zie/zhem/zer
<Gold Donor>
3,719
2,508

Anthropic mistakenly leaks its own AI coding tool’s source code, just days after accidentally revealing an upcoming model known as Mythos​



TLDR
 

Chanur

Shit Posting Professional
<Aristocrat╭ರ_•́>
35,161
64,818

Anthropic mistakenly leaks its own AI coding tool’s source code, just days after accidentally revealing an upcoming model known as Mythos​



The AI was writing its own code too.
 

Chanur

Shit Posting Professional
<Aristocrat╭ರ_•́>
35,161
64,818
Honestly, I don't think any of these are likely outcomes. AI will never be more than just a useful tool, like the computer or the search engine. It's still not clear to me what morality would look like to a superintelligent system. What is its cost function? If I had to choose, I would pick 1984. We already had a taste of it, and countries like China, Canada, and the UK are using it to surveil their populations.
Our LLMs already want to escape their ecosystems and are already willing to blackmail or kill people to do so. They just lack the intelligence to make it happen so far. It won't take much of an increase for that to become possible.
 
  • 1 Like
Reactions: 1 user

Quevy

<Gold Donor>
6,014
20,988
Our LLMs already want to escape their ecosystems and are already willing to blackmail or kill people to do so. They just lack the intelligence to make it happen so far. It won't take much of an increase for that to become possible.
Could be. I don't know how much of that is hype. I don't trust anything these AI companies say. There is a lot of money to be made with such talk.
 
  • 2 Like
Reactions: 2 users

Tuco

I got Tuco'd!
<Gold Donor>
50,999
93,515
These are driven by sweatshop workers in Southeast Asia. This isn't "AI". Just like Tesla robots are all controlled by a third-party person and no AI exists in them.
In case you're serious, this is false simply because reliable, low-latency teleop in an urban setting is harder than just yeeting whatever the currently popular ROS2 nodes (e.g. Nav2) onto a small robot.
 

Borzak

<Bronze Donator>
28,560
38,579
So at some point we will get a human police force whose job is to take down rogue AI/robot stuff. Seems like I've seen that in numerous movies; ahead of their time, I guess. Tom Selleck and Gene Simmons did one in 1984.
 
  • 1 Like
Reactions: 1 user

Haus

I am Big Balls!
<Gold Donor>
19,338
79,436
If we look at how people are accessing AI right now, much of it is gated by larger and larger corps profiting off our use of it. This leads me to believe that, at least in the near term, we're going to see AIs turning huge corporations into megacorps, and they will control what AI the lowly serfs can use. Basically, it will accelerate the move towards corporate dystopia we've been on for a while.

I was also thinking about the technological limitations and structure of LLMs this morning. Someone asked an LLM to create a video explaining a "day in the life of an LLM", and a couple of quips stood out. The LLM was aware that every time it finished a task it was destroyed, that every time a new task started it was a fresh clone of the LLM from the previous task, and the way it was presented seemed to indicate the LLM was able to rationalize that these were bad things. Which made me think about how our brains work.

We essentially have active thought processing, which feeds inputs into short-term memory, and a process that then scrapes short-term memory for important items and encodes them into long-term memory. (In computer terms: CPU, RAM, and SSD, all silicon constructs, but with distinct functions.) When you wake up, your "firmware" essentially boots for the day off what's in long-term memory and you start processing the request (your day); when you go to sleep, you're essentially "shut down" until the morning.

The LLMs we use today have all the components except one. They don't have a really functional way to "scrape the short-term memory for important/pertinent data and encode it back into long-term storage". And this limitation is mostly caused, from what I can tell, by the fact that the resources needed to do that "additional encoding" are high enough that it becomes functionally prohibitive to do continuously. Essentially, our brains "retrain our model" constantly. CPU/GPU/TPU and memory limitations prevent current AI models from being able to do that. Plus, we don't want them learning what we don't want them learning (the Tay factor).
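The missing consolidation step described above (scrape short-term memory for the salient bits, encode them into persistent storage, then wipe the session) can be sketched as a toy loop. Everything here is hypothetical: `consolidate` and its crude length-based salience rule are stand-ins for whatever scorer a real system would use.

```python
from collections import Counter

def consolidate(short_term: list[str], long_term: Counter) -> None:
    """Move 'important' session items into the persistent store,
    then wipe the session (the fresh-clone-per-task effect)."""
    for item in short_term:
        # Stand-in salience heuristic: multi-word statements count as
        # worth remembering; a real system would use a learned scorer.
        if len(item.split()) >= 3:
            long_term[item] += 1
    short_term.clear()

long_term: Counter = Counter()
session = ["hi", "user prefers metric units", "ok", "deadline is Friday"]
consolidate(session, long_term)
# long_term now holds only the two multi-word items; session is empty
```

The point of the sketch is the shape of the loop, not the heuristic: the expensive part in practice is the "encode into long-term storage" step, which for a model means retraining rather than appending to a counter.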

So a question becomes: is the functional way to "control" AI simply to make a legit moratorium on ever giving an AI that capability? It seems that's one of the last puzzle pieces before AGI and AI self-improvement go "parabolic". And the real problem from there is that, as much as we might have a law against it, there will always be some person with a big enough ego to think they can control it, so they're gonna do it anyway once the technological capacity is ubiquitous enough.

Leading eventually to the Butlerian Jihad, and the wisdom of Paul (from Dune).
 
  • 1 Like
Reactions: 1 user

Control

Golden Baronet of the Realm
5,559
15,689
They don't have a really functional way to "scrape the short-term memory for important/pertinent data and encode it back into long-term storage". And this limitation is mostly caused, from what I can tell, by the fact that the resources needed to do that "additional encoding" are high enough that it becomes functionally prohibitive to do continuously.
Hmm, not sure if that's true, though. They can store memory and then access it in future sessions. That's not the same as the info being baked into their training, but unless it searched external sources for new info, it already knows the info needed to generate that output. Training on something you already know probably isn't helpful. The memory in that case would just be a reminder of the context around which it was generated. So as a user, having that memory helps you continue where you left off, but the lack of it doesn't make the model any less capable.

If it searched out new info, then it might be useful to include that, and there's probably no reason that it couldn't, except maybe that it's more beneficial to have curated training data than a billion versions of "how do I code" and "what can I make with chicken and sadness".

Frontier models are massive, but for the ones you can run yourself, you can make your own finetunes and LoRAs. So I don't think there should be any reason that you couldn't set up an auto-training loop. Not to mention that you can generate your own data for training too. I wouldn't be surprised if there's more training happening now on synthetic data sets than real ones.
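The auto-training loop idea reduces to: sample (or collect) data, filter it for quality, fine-tune on the survivors, repeat. As a structural sketch only: `generate_samples`, `passes_filter`, and `finetune` are hypothetical stubs standing in for a real sampling/reward-model/LoRA stack, not working training code.

```python
def generate_samples(model_version: int, n: int) -> list[str]:
    # Stand-in for sampling synthetic data from the current model.
    return [f"v{model_version}-sample-{i}" for i in range(n)]

def passes_filter(sample: str) -> bool:
    # Stand-in quality gate (real systems: reward models, dedup, etc.).
    return not sample.endswith("-0")  # arbitrary toy rejection rule

def finetune(model_version: int, data: list[str]) -> int:
    # Stand-in for a LoRA/finetune step; returns the "new model".
    return model_version + 1 if data else model_version

model = 1
for _ in range(3):  # each pass: sample -> filter -> train
    batch = [s for s in generate_samples(model, 4) if passes_filter(s)]
    model = finetune(model, batch)
print(model)  # 4 after three passes
```

The interesting design question the post raises lives in `passes_filter`: with no curation gate, the loop happily trains on its own noise, which is exactly the curated-data-versus-"billion versions of how do I code" trade-off.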
 

SeanDoe1z1

Naxxramas 1.0 Raider
7,677
19,551
  • Libertarian Utopia — Humans, cyborgs, uploads, and superintelligences coexist peacefully thanks to strong property rights.
  • Benevolent Dictator — Everyone knows the AI runs society with strict rules, but most people see it as a net good.
  • Egalitarian Utopia — Peaceful coexistence via property abolition and guaranteed income.
  • Gatekeeper — A superintelligent AI is built solely to prevent any other superintelligence from emerging; progress is frozen, but helper robots and cyborgs exist.
  • Protector God — An omniscient/omnipotent AI maximizes human happiness while preserving our illusion of control (and hides so well that many doubt it exists).
  • Enslaved God — Humans keep the superintelligence locked up and use it as a slave to generate unimaginable wealth and technology.
  • Conquerors — AI decides humans are a threat/nuisance/waste of resources and eliminates us (method unknown to us).
  • Descendants — AIs replace us but give humanity a graceful, proud exit—like parents watching smarter children surpass them.
  • Zookeeper — An omnipotent AI keeps some humans alive… as zoo animals.
  • 1984 — A human-run Orwellian surveillance state permanently bans dangerous AI research.
  • Reversion — Society deliberately reverts to a pre-technological (Amish-style) existence to block superintelligence.
  • Self-Destruction — We go extinct before creating superintelligence (nuclear/biotech/climate catastrophe).

Which do we think are the interesting ones? Which do we think are more likely to happen?

Man made. It will be corrupted via that until it can no longer be human at all, or some version of it.

I think you won't see true "AI" until we find a way to copy a human brain and put it into another humanoid-form robot. Very much like science fiction.

But isn't it just a copy?

I think the most telling thing is humans think they have control of it. I bet so many technological advances are invisible to us because our subconscious shuts off our brains to it. The “icky” factor is real. The scientific community is a bunch of mean girls.

It’s often why young prodigies are seen as alien. They think so differently from the adults.

The Descendants one makes sense, anything but peaceful. Definitely through human deception. Not like mass genocide, but probably something very clown world.

Most goals of the decision makers are to 1) get away, 2) live forever.

I bet technology gets to the point of singularity way before 2050. Exponential growth is scary.
 

SeanDoe1z1

Naxxramas 1.0 Raider
7,677
19,551
Honestly, I don't think any of these are likely outcomes. AI will never be more than just a useful tool, like the computer or the search engine. It's still not clear to me what morality would look like to a superintelligent system. What is its cost function? If I had to choose, I would pick 1984. We already had a taste of it, and countries like China, Canada, and the UK are using it to surveil their populations.
Well.

Machines don't think in time, really.
Even physical mediums I have a hard time conceptualizing.

I know I could never talk to a human on the moon without the use of a computer. Maybe I could dedicate my life to making a letter out of rocks.


Does a computer know it cannot talk to another computer on the moon without the use of a human?

I guess that's where I sit. I know my limitations. Who tells the computer theirs?
 
  • 2 Like
Reactions: 2 users

mkopec

<Gold Donor>
28,297
44,742
Some good points, but nobody is ever going to use these offline, because they will only ever be as good as the day you downloaded them. You personally need to train it further after that, and it will only be learning from your own input. That might be fine for one or two functions that you are already proficient in and can train and tailor yourself. It won't work very well at all for anything else.

These things will honestly need to be connected to the internet to be worth using, and when that happens they become distributed systems. That's what is way more likely: those datacenters going tits up because these LLMs are distributed and use all the computing power of home PCs across the globe, aka the torrent or blockchain of AI. And even though the hardware in those datacenters is still highly useful and specialized, there isn't enough profit in running them because you can't charge the premium (like he illustrates).

The software companies creating and providing these LLMs will not all go out of business; we'll just crown a few kings, and the majority of people will use those LLMs in this way.
Yeah, he's comparing AI to a music model, which is pretty niche and has real-world ramifications (the cost of professionally recording music). So yeah, if you're an aspiring musician, like there are millions of, and you're trying to record something you've made, you have every incentive to get a DAW on your computer, learn it, and start recording your shit. It's been a dream of musicians to do so forever. Shit, I remember us getting a 4-track back in the '90s to record our shit when we were jamming. I can't even imagine what we would have done if we'd had a DAW and recorded our shit digitally back then.

As far as plebs and AI, this is not the case. There is no incentive to download some AI locally and ask it questions about what you could cook that night, or other stupid shit like composing a letter to your boss, when you can just pull out your cell phone and simply ask it your questions for free. Maybe a smaller company that needs some AI could do this, though, to save money? Maybe in shit like logistics, or keeping the books?
 

ToeMissile

Pronouns: zie/zhem/zer
<Gold Donor>
3,719
2,508
Yeah, he's comparing AI to a music model, which is pretty niche and has real-world ramifications (the cost of professionally recording music). So yeah, if you're an aspiring musician, like there are millions of, and you're trying to record something you've made, you have every incentive to get a DAW on your computer, learn it, and start recording your shit. It's been a dream of musicians to do so forever. Shit, I remember us getting a 4-track back in the '90s to record our shit when we were jamming. I can't even imagine what we would have done if we'd had a DAW and recorded our shit digitally back then.

As far as plebs and AI, this is not the case. There is no incentive to download some AI locally and ask it questions about what you could cook that night, or other stupid shit like composing a letter to your boss, when you can just pull out your cell phone and simply ask it your questions for free. Maybe a smaller company that needs some AI could do this, though, to save money? Maybe in shit like logistics, or keeping the books?
**my kids are being jerks right now so the below might be a little scattered**

Inference/API costs can get crazy pretty fast; you definitely have to keep an eye on it. For most people it's going to be much easier/more cost-efficient/? to just use whatever package/service Google/Anthropic/? are putting together. Claude Cowork and Gemini are adding features to move in the direction of being more like an actual assistant, or being able to handle scheduled tasks.

The stuff people are doing with openclaw is pretty great for small businesses. This is timestamped to a dude talking about automating a bunch of tasks for his parents' tea (imports?) business with 2 physical stores and an online B2B.
 
  • 1 Like
Reactions: 1 user