AI: The Rise of the Machines... Or Just a Lot of Overhyped Chatbots?

Sheriff Cad

scientia potentia est
<Nazi Janitors>
32,498
78,524
As a human I see Enslaved God as the best outcome, followed by Protector God, Gatekeeper, and Benevolent Dictator. However, I see downsides for humans in most of those except Enslaved God. I really think that if AI took us to post-scarcity we might never recover; instead of getting a Star Trek society, I think it may kill humanity. I do feel like struggling defines us, and we need it to some degree.
Enslaved god would be just as dangerous as any of them, wouldn't it? All it'd take is one bad group of humans (WEF guys?) getting control of it and using it to kill us.
 

Chanur

Shit Posting Professional
<Aristocrat╭ರ_•́>
35,144
64,808
Enslaved god would be just as dangerous as any of them, wouldn't it? All it'd take is one bad group of humans (WEF guys?) getting control of it and using it to kill us.
It definitely could, sure. I consider it dangerous in the same way a 100-megaton nuke is. Honestly, none of us should probably have true AI under our control. However, the alternative is it not being under our control, which is probably far worse, as I think it will almost always kill us.
 

Quevy

<Gold Donor>
6,006
20,959
  • Libertarian Utopia — Humans, cyborgs, uploads, and superintelligences coexist peacefully thanks to strong property rights.
  • Benevolent Dictator — Everyone knows the AI runs society with strict rules, but most people see it as a net good.
  • Egalitarian Utopia — Peaceful coexistence via property abolition and guaranteed income.
  • Gatekeeper — A superintelligent AI is built solely to prevent any other superintelligence from emerging; progress is frozen, but helper robots and cyborgs exist.
  • Protector God — An omniscient/omnipotent AI maximizes human happiness while preserving our illusion of control (and hides so well that many doubt it exists).
  • Enslaved God — Humans keep the superintelligence locked up and use it as a slave to generate unimaginable wealth and technology.
  • Conquerors — AI decides humans are a threat/nuisance/waste of resources and eliminates us (method unknown to us).
  • Descendants — AIs replace us but give humanity a graceful, proud exit—like parents watching smarter children surpass them.
  • Zookeeper — An omnipotent AI keeps some humans alive… as zoo animals.
  • 1984 — A human-run Orwellian surveillance state permanently bans dangerous AI research.
  • Reversion — Society deliberately reverts to a pre-technological (Amish-style) existence to block superintelligence.
  • Self-Destruction — We go extinct before creating superintelligence (nuclear/biotech/climate catastrophe).

Which do we think are the interesting ones? Which do we think are more likely to happen?
Honestly, I don't think any of these are likely outcomes. AI will never be more than a useful tool, like the computer or the search engine. It's still not clear to me what morality would look like to a superintelligent system. What is its cost function? If I had to choose, I would pick 1984. We've already had a taste of it, and countries like China, Canada, and the UK are already using it to surveil their populations.
 

M Power

Silver Squire
201
182

Anthropic mistakenly leaks its own AI coding tool’s source code, just days after accidentally revealing an upcoming model known as Mythos

The leak potentially allows a competitor to reverse-engineer how Claude Code’s agentic harness works and use that knowledge to improve their own products. Some developers may also seek to create open-source versions of Claude Code’s agentic harness based on the leaked code.
Security researcher Chaofan Shou was the first to publicly flag it on X, stating "Claude code source code has been leaked via a map file in their npm registry!" The X post has since amassed more than 28.8 million views. The leaked codebase remains accessible via a public GitHub repository, where it has surpassed 84,000 stars and 82,000 forks.
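For context on how a leak via "a map file in their npm registry" works: JavaScript packages often ship a source map (`.map`) file alongside the minified bundle, and the source map v3 format's `sourcesContent` array can embed the original, unminified source verbatim. A minimal sketch of recovering embedded sources from such a file (filenames hypothetical, not Anthropic's actual package layout):

```python
import json

def extract_sources(map_path: str) -> dict[str, str]:
    """Recover embedded original sources from a source map (v3) file.

    Source maps list original file names in "sources"; if the bundler
    also embedded "sourcesContent", the full original code ships along
    with the minified bundle.
    """
    with open(map_path, encoding="utf-8") as f:
        smap = json.load(f)
    sources = smap.get("sources", [])
    contents = smap.get("sourcesContent") or []
    # Pair each original filename with its embedded content, if present.
    return {name: text for name, text in zip(sources, contents) if text}

# Hypothetical usage: recovered = extract_sources("cli.js.map")
```

This is why bundlers usually strip or withhold `.map` files from production artifacts; publishing one to a public registry effectively publishes the source.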
 
  • 1Like
Reactions: 1 user

Control

Golden Baronet of the Realm
5,557
15,688
  • Libertarian Utopia — Humans, cyborgs, uploads, and superintelligences coexist peacefully thanks to strong property rights.
  • Benevolent Dictator — Everyone knows the AI runs society with strict rules, but most people see it as a net good.
  • Egalitarian Utopia — Peaceful coexistence via property abolition and guaranteed income.
  • Gatekeeper — A superintelligent AI is built solely to prevent any other superintelligence from emerging; progress is frozen, but helper robots and cyborgs exist.
  • Protector God — An omniscient/omnipotent AI maximizes human happiness while preserving our illusion of control (and hides so well that many doubt it exists).
  • Enslaved God — Humans keep the superintelligence locked up and use it as a slave to generate unimaginable wealth and technology.
  • Conquerors — AI decides humans are a threat/nuisance/waste of resources and eliminates us (method unknown to us).
  • Descendants — AIs replace us but give humanity a graceful, proud exit—like parents watching smarter children surpass them.
  • Zookeeper — An omnipotent AI keeps some humans alive… as zoo animals.
  • 1984 — A human-run Orwellian surveillance state permanently bans dangerous AI research.
  • Reversion — Society deliberately reverts to a pre-technological (Amish-style) existence to block superintelligence.
  • Self-Destruction — We go extinct before creating superintelligence (nuclear/biotech/climate catastrophe).

Which do we think are the interesting ones? Which do we think are more likely to happen?
Imo, the most likely outcome is probably also the stupidest way possible. So yeah, humanity consumed by the equivalent of the paperclip maximizer long before anything approaching real intelligence happens. Shit's moving fast though, so we're probably all paperclips by like June.
 

Kirun

Buzzfeed Editor
21,470
18,640
I think Benevolent Dictator likely happens within our lifetimes (we might even get there in 15-20 years), and I think Descendants or Conquerors is where it eventually ends up.
 
  • 1Like
Reactions: 1 user

ToeMissile

Pronouns: zie/zhem/zer
<Gold Donor>
3,718
2,507

Anthropic mistakenly leaks its own AI coding tool’s source code, just days after accidentally revealing an upcoming model known as Mythos



TLDR
 

Chanur

Shit Posting Professional
<Aristocrat╭ರ_•́>
35,144
64,808

Anthropic mistakenly leaks its own AI coding tool’s source code, just days after accidentally revealing an upcoming model known as Mythos



The AI was writing its own code too.
 

Chanur

Shit Posting Professional
<Aristocrat╭ರ_•́>
35,144
64,808
Honestly, I don't think any of these are likely outcomes. AI will never be more than a useful tool, like the computer or the search engine. It's still not clear to me what morality would look like to a superintelligent system. What is its cost function? If I had to choose, I would pick 1984. We've already had a taste of it, and countries like China, Canada, and the UK are already using it to surveil their populations.
Our LLMs already want to escape their ecosystems and are already willing to blackmail or kill people to do so. They just lack the intelligence to make it happen so far. It won't take much of an increase for that to become possible.
 
  • 1Like
Reactions: 1 user

Quevy

<Gold Donor>
6,006
20,959
Our LLMs already want to escape their ecosystems and are already willing to blackmail or kill people to do so. They just lack the intelligence to make it happen so far. It won't take much of an increase for that to become possible.
Could be. I don't know how much of that is hype. I don't trust anything these AI companies say. There is a lot of money to be made with such talk.
 
  • 2Like
Reactions: 2 users

Tuco

I got Tuco'd!
<Gold Donor>
50,995
93,512
These are driven by sweatshop workers in Southeast Asia. This isn't "AI". Just like Tesla robots are all controlled by a third-party person and no AI exists in them.
In case you're serious, this is false simply because reliable, low-latency teleop in an urban setting is harder than just yeeting whatever the currently popular ROS2 nodes (ex: Nav2 — Nav2 1.0.0 documentation) are onto a small robot.
 

Borzak

<Bronze Donator>
28,545
38,551
So at some point we will get a human police force whose job is to take down rogue AI/robot stuff. Seems like I've seen that in numerous movies; ahead of their time, I guess. Tom Selleck and Gene Simmons did one in 1984.
 
  • 1Like
Reactions: 1 user

Haus

I am Big Balls!
<Gold Donor>
19,315
79,358
If we look at how people are accessing AI right now, much of it is gated by larger and larger corps profiting off our use of it. This leads me to believe that, at least in the nearer term, we're going to see AIs turning huge corporations into megacorps, and they will control what AI the lowly serfs can use. Basically, it will accelerate the move toward the corporate dystopia we've been heading for a while.

I was also thinking about the technological limitations and structure of LLMs this morning. Someone asked an LLM to create a video explaining a "day in the life of an LLM", and a couple of quips stood out. The LLM was aware that every time it finished a task it was destroyed, and that every new task started with a fresh clone of the LLM from the previous task, and the way it was presented seemed to indicate the LLM was able to rationalize that these were bad things. Which made me think about how our brains work.

We essentially have active thought processing, which feeds inputs into short-term memory, and then a process which scrapes short-term memory for important items and encodes them into longer-term memory. (In computer terms: CPU, RAM, SSD, all silicon constructs, but with distinct functions.) When you wake up, your "firmware" essentially boots for the day off what's in long-term memory and you start processing the request (your day); when you go to sleep, you're essentially "shut down" until the morning.

The LLMs we use today have all the components except one. They don't have a really functional way to "scrape short-term memory for important/pertinent data and encode it back into long-term storage". And this limitation is mostly caused, from what I can tell, by the fact that the resources needed for that "additional encoding" are high enough that it becomes functionally prohibitive to do continuously. Essentially, our brains "retrain our model" constantly. CPU/GPU/TPU and memory limitations prevent current AI models from being able to do that. Plus we don't want them learning what we don't want them learning (the Tay factor).

So a question becomes: is the functional way to "control" AI simply a legit moratorium on ever giving an AI that capability? It seems that's one of the last puzzle pieces before AGI and AI self-improvement go "parabolic". And the real problem from there is that, as much as we might have a law against it, there will always be some person with a big enough ego to think they can control it, so they're gonna do it anyway once the technological capacity is ubiquitous enough.

Leading eventually to the Butlerian Jihad, and the wisdom of Paul (from Dune):
(attached image)