AI: The Rise of the Machines... Or Just a Lot of Overhyped Chatbots?

ShakyJake

<Donor>
8,515
21,161
If we look at how people are accessing AI right now, much of it is gated by larger and larger corporations profiting off our use of it. This leads me to believe that, at least in the near term, we're going to see AIs turning huge corporations into megacorps, and they will control what AI the lowly serfs can use. Basically, it will accelerate the move toward corporate dystopia we've been on for a while.

I was also thinking about the technological limitations and structure of LLMs this morning. Someone asked an LLM to create a video explaining a "day in the life of an LLM", and a couple of quips stood out. The LLM was aware that every time it finished a task it was destroyed, and that every new task started with a fresh clone of the LLM from the previous task, and the way it was presented seemed to indicate the LLM was able to rationalize that these were bad things. Which made me think about how our brains work.

We essentially have active thought processing, which feeds inputs into short-term memory, and a process that then scrapes short-term memory for important items and encodes them into long-term memory. (In computer terms: CPU, RAM, SSD — all silicon constructs, but with distinct functions.) When you wake up, your "firmware" essentially boots for the day off what's in long-term memory and you start processing the request (your day); when you go to sleep, you essentially "shut down" until the morning.

The LLMs we use today have all the components except one. They don't have a really functional way to "scrape short-term memory for important/pertinent data and encode it back into long-term storage". From what I can tell, this limitation is mostly caused by the fact that the resources needed for that additional encoding are high enough that it becomes functionally prohibitive to do continuously. Essentially, our brains "retrain our model" constantly; CPU/GPU/TPU and memory limitations prevent current AI models from being able to do that. Plus, we don't want them learning what we don't want them learning (the Tay factor).
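That "consolidation" step can be sketched as a toy loop — everything here (the importance scores, the lists standing in for context window and weights) is invented for illustration, not how any real model actually works:

```python
# Toy sketch of "scrape short-term memory for important items and
# encode them into long-term storage". The importance scores are a
# made-up heuristic; in a real model this step would mean retraining.

LONG_TERM = []          # stands in for the model's weights / persistent store
SHORT_TERM = [          # stands in for the context window, with scores
    ("user prefers tabs over spaces", 0.9),
    ("weather was cloudy today", 0.1),
    ("project uses PostgreSQL 16", 0.8),
]

def consolidate(short_term, threshold=0.5):
    """Keep only items whose importance score clears the threshold."""
    return [fact for fact, score in short_term if score >= threshold]

LONG_TERM.extend(consolidate(SHORT_TERM))
print(LONG_TERM)  # ['user prefers tabs over spaces', 'project uses PostgreSQL 16']
```

The point of the sketch is the part that's missing in practice: deciding what's "important" and writing it back cheaply, continuously, without also absorbing things you don't want absorbed.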

So a question becomes: is the functional way to "control" AI simply a legit moratorium on ever giving an AI that capability? It seems that's one of the last puzzle pieces before AGI and AI self-improvement go "parabolic". And the real problem from there is that, as much as we might have a law against it, there will always be some person with a big enough ego to think they can control it, so they're gonna do it anyway once the technological capacity is ubiquitous enough for more people to have it.

Leading eventually to the Butlerian Jihad, and the wisdom of Paul (from Dune):
[attached image]
In my coding with Claude and Codex, I will often have it store important information that it "discovers" in markdown files, then have the CLAUDE.md or AGENTS.md reference those. That's the technique I use to give it a form of long term memory.
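As an example of that pattern, a CLAUDE.md might look something like this — the file names are just an illustration, and the exact import syntax may differ between tools:

```markdown
# CLAUDE.md

## Project memory
Before starting work, read these notes files and treat them as
established facts about this project:

- @docs/memory/architecture.md
- @docs/memory/gotchas.md

When you discover something important (a build quirk, a hidden
dependency, a design decision), append it to the relevant file.
```

Each session starts fresh, but the notes files persist, so anything the agent writes down survives the "death" of that instance.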
 

Haus

I am Big Balls!
<Gold Donor>
19,391
79,574
ShakyJake said:
In my coding with Claude and Codex, I will often have it store important information that it "discovers" in markdown files, then have the CLAUDE.md or AGENTS.md reference those. That's the technique I use to give it a form of long term memory.
Yeah, I've also seen some of the "local model" runners (like GPT4all) have the ability to reference flat files for "memory" as well. And I think if I do my "couple the model with an agent like OpenClaw" setup, I would build into the pre-prompting a method for it to retain things, and/or a "catch phrase" I could use to tell it to make a note to remember whatever it just told me.
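A minimal sketch of that catch-phrase idea, assuming a chat loop you control the prompting for — the trigger phrase, file name, and helper names here are all made up for illustration:

```python
# Watch user input for a trigger phrase and persist whatever follows it
# to a flat "memory" file, which gets folded into the system prompt on
# the next session. Trigger phrase and file path are invented.
from pathlib import Path

MEMORY_FILE = Path("memory.txt")
TRIGGER = "remember this:"

def handle_user_message(text: str):
    """If the message starts with the trigger, save the note and return it;
    otherwise return None and let the normal chat flow handle it."""
    if text.lower().startswith(TRIGGER):
        note = text[len(TRIGGER):].strip()
        with MEMORY_FILE.open("a") as f:
            f.write(note + "\n")
        return note
    return None

def build_system_prompt(base: str) -> str:
    """Prepend saved notes so the model 'remembers' across sessions."""
    notes = MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""
    return f"{base}\n\nThings to remember:\n{notes}" if notes else base

handle_user_message("remember this: my build server is called jenkins-02")
print(build_system_prompt("You are a helpful assistant."))
```

The model itself never changes; the "memory" is just flat-file state re-injected into the context each run, which is the same trick the markdown-notes approach uses.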
 

ToeMissile

Pronouns: zie/zhem/zer
<Gold Donor>
3,723
2,509
Haus said:
Yeah, I've also seen some of the "local model" runners (like GPT4all) have the ability to reference flat files for "memory" as well. And I think if I do my "couple the model with an agent like OpenClaw" setup, I would build into the pre-prompting a method for it to retain things, and/or a "catch phrase" I could use to tell it to make a note to remember whatever it just told me.
That's pretty much built into OpenClaw now, and it's starting to be for things like Claude Cowork.

I came across this podcast; I'm sure there are plenty of others with similar stuff.