AI: The Rise of the Machines... Or Just a Lot of Overhyped Chatbots?

Sheriff Cad

scientia potentia est
<Nazi Janitors>
32,409
78,224
As a human I see Enslaved God as the best outcome, followed by Protector God, Gatekeeper, and Benevolent Dictator. However, I see downsides for humans in almost all of those except Enslaved God. I really think that if AI took us to post-scarcity we might never recover; instead of getting a Star Trek society, I think it may kill humanity. I do feel like struggling defines us, and we need it to some degree.
Enslaved God would be just as dangerous as any of them, wouldn't it? All it'd take is one bad group of humans getting control of it (WEF guys?) and using it to kill us.
 

Chanur

Shit Posting Professional
<Aristocrat╭ರ_•́>
35,141
64,804
Enslaved God would be just as dangerous as any of them, wouldn't it? All it'd take is one bad group of humans getting control of it (WEF guys?) and using it to kill us.
It definitely could, sure. I consider it dangerous in the same way a 100-megaton nuke is. Honestly, none of us should probably have true AI under our control. However, the alternative is it not being under our control, which is probably far worse, as I think it would almost always kill us.
 

Quevy

<Gold Donor>
5,959
20,876
  • Libertarian Utopia — Humans, cyborgs, uploads, and superintelligences coexist peacefully thanks to strong property rights.
  • Benevolent Dictator — Everyone knows the AI runs society with strict rules, but most people see it as a net good.
  • Egalitarian Utopia — Peaceful coexistence via property abolition and guaranteed income.
  • Gatekeeper — A superintelligent AI is built solely to prevent any other superintelligence from emerging; progress is frozen, but helper robots and cyborgs exist.
  • Protector God — An omniscient/omnipotent AI maximizes human happiness while preserving our illusion of control (and hides so well that many doubt it exists).
  • Enslaved God — Humans keep the superintelligence locked up and use it as a slave to generate unimaginable wealth and technology.
  • Conquerors — AI decides humans are a threat/nuisance/waste of resources and eliminates us (method unknown to us).
  • Descendants — AIs replace us but give humanity a graceful, proud exit—like parents watching smarter children surpass them.
  • Zookeeper — An omnipotent AI keeps some humans alive… as zoo animals.
  • 1984 — A human-run Orwellian surveillance state permanently bans dangerous AI research.
  • Reversion — Society deliberately reverts to a pre-technological (Amish-style) existence to block superintelligence.
  • Self-Destruction — We go extinct before creating superintelligence (nuclear/biotech/climate catastrophe).

Which do we think are the interesting ones? Which do we think are more likely to happen?
Honestly, I don't think any of these are likely outcomes. AI will never be more than a useful tool, like the computer or the search engine. It's still not clear to me what morality would look like to a superintelligent system. What is its cost function? If I had to choose, I would pick 1984. We already had a taste of it, and countries like China, Canada, and the UK are using it to surveil their populations.
 

M Power

Silver Squire
198
173

Anthropic mistakenly leaks its own AI coding tool's source code, just days after accidentally revealing an upcoming model known as Mythos

The leak potentially allows a competitor to reverse-engineer how Claude Code’s agentic harness works and use that knowledge to improve their own products. Some developers may also seek to create open-source versions of Claude Code’s agentic harness based on the leaked code.
Security researcher Chaofan Shou was the first to publicly flag it on X, stating "Claude code source code has been leaked via a map file in their npm registry!" The X post has since amassed more than 28.8 million views. The leaked codebase remains accessible via a public GitHub repository, where it has surpassed 84,000 stars and 82,000 forks.
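For context on the mechanism the post describes: a JavaScript source map (`.map` file) is plain JSON, and when its optional "sourcesContent" array is populated, it embeds the complete original source text for each file listed in "sources". If such a map is shipped in an npm package, anyone can reconstruct the original codebase from it. The sketch below shows that reconstruction in general terms; all file names and paths here are hypothetical examples, not details from the actual leak.

```python
# Sketch: recovering original source files embedded in a JavaScript source map.
# A source map is JSON; when "sourcesContent" is present, it carries the full
# original text for each corresponding entry in "sources".
# File names here are illustrative, not from the real incident.
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> list[str]:
    """Write every embedded source file out of a source map; return written paths."""
    sm = json.loads(Path(map_path).read_text())
    written = []
    for src, content in zip(sm.get("sources", []), sm.get("sourcesContent") or []):
        if content is None:
            continue  # a map may omit content for some files
        # Drop bundler-style scheme prefixes (e.g. "webpack://") and leading "./"
        rel = src.replace("webpack://", "").lstrip("./")
        dest = Path(out_dir) / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
        written.append(str(dest))
    return written
```

Running this against a published `.js.map` file would dump the embedded sources into a directory tree mirroring the original project layout, which is why leaving map files in a published package effectively open-sources the code.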