If we look at how people access AI right now, much of it is gated by ever-larger corporations profiting from our use of it. That leads me to believe that, at least in the near term, AI will turn huge corporations into megacorps, and they will control which AIs the lowly serfs can use. Basically, it will accelerate the drift toward corporate dystopia we've been on for a while.
I was also thinking about the technological limitations and structure of LLMs this morning. Someone asked an LLM to create a video explaining a "day in the life of an LLM", and a couple of moments stood out. The LLM was aware that every time it finished a task it was destroyed, and that every new task started with a fresh clone of the LLM from the previous task; the way it was presented suggested the LLM could rationalize that these were bad things. Which made me think about how our brains work.
We essentially have active thought processing, which feeds inputs into short-term memory, and a process that then scrapes short-term memory for important items and encodes them into long-term memory (in computer terms: CPU, RAM, and SSD, all silicon constructs but with distinct functions). When you wake up, your "firmware" essentially boots for the day off what's in long-term memory and you start processing the request (your day); when you go to sleep, you essentially "shut down" until morning.
The LLMs we use today have all the components except one: they don't have a truly functional way to scrape short-term memory for important or pertinent data and encode it back into long-term storage. From what I can tell, this limitation exists mostly because the resources needed for that additional encoding are high enough that doing it continuously becomes functionally prohibitive. Our brains essentially "retrain our model" constantly; CPU/GPU/TPU and memory limitations prevent current AI models from doing the same. Plus, we don't want them learning what we don't want them learning (the Tay factor).
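To make the analogy concrete, here's a minimal sketch of that missing consolidation loop. None of this is a real API; `complete()` is a hypothetical stand-in for any LLM call, and the class just illustrates the boot / process / consolidate cycle described above:

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; swap in a real client."""
    return f"summary of: {prompt[:40]}"

class Agent:
    def __init__(self) -> None:
        self.long_term: list[str] = []   # persists across tasks (the "SSD")
        self.short_term: list[str] = []  # the context window (the "RAM")

    def boot(self) -> None:
        # "Waking up": the fresh clone loads long-term memory into context.
        self.short_term = list(self.long_term)

    def handle(self, task: str) -> str:
        self.short_term.append(task)
        return complete("\n".join(self.short_term))

    def consolidate(self) -> None:
        # The step today's LLMs mostly lack: scrape short-term memory for
        # what mattered and encode it back into long-term storage. Doing
        # this for real (fine-tuning, or writing to a memory store, after
        # every task) is exactly the part that's computationally expensive.
        important = complete("What should be remembered?\n" + "\n".join(self.short_term))
        self.long_term.append(important)
        self.short_term = []  # the "clone" dies; only long-term memory survives
```

The point of the sketch is just that everything except `consolidate()` already exists in today's systems; that one method is the puzzle piece the rest of this post is about.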
So one question becomes: is the functional way to "control" AI simply a genuine moratorium on ever giving an AI that capability? It seems to be one of the last puzzle pieces before AGI and AI self-improvement go "parabolic". And the real problem is that, however strong a law we pass against it, there will always be someone with a big enough ego to think they can control it, so they're gonna do it anyway once the technological capacity becomes ubiquitous enough.
Leading, eventually, to the Butlerian Jihad and the wisdom of Paul (from Dune):