The LLM's job, essentially, is this:
Given the prompt, plus whatever pre-prompting its creator gave it, generate the most likely thing a human would say next, one token at a time.
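To make that concrete, here's a toy sketch of the loop. Everything in it is invented for illustration; a real LLM scores on the order of a hundred thousand possible tokens with a neural network instead of looking them up in a little table, but the structure - predict the most likely next token, append it, repeat - is the same:

```python
# Toy stand-in for billions of learned weights: hand-written
# probabilities of which word follows which.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.6, "still": 0.4},
}

def generate(prompt_word, steps=4):
    out = [prompt_word]
    for _ in range(steps):
        choices = BIGRAMS.get(out[-1])
        if not choices:
            break
        # "The most likely thing a human would say": greedily take the
        # highest-probability next token.
        out.append(max(choices, key=choices.get))
    return " ".join(out)

print(generate("the"))  # -> "the cat sat down"
```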
I view the LLM as a map of the territory we have already explored. Some people fear it will replace human beings. I think this is unlikely, because what humans are most interested in is exploring the unexplored. If you do secretarial work, lawyering, or marketing, then it's certainly going to change your field. But did calculators render mathematicians obsolete? Did airplanes render birds obsolete? I think it's more likely that the LLM acts as an extension of the individual, empowering human beings to be more productive.
A common misunderstanding is to assume that because the output of an LLM resembles human speech, a similar cognitive process occurred within the LLM. This is completely false. The process occurring within the LLM is much closer to algebraic equations drawing a line on a graph. Yes, it's far more complex, with a whole lot more data to work with, but its fundamental nature is no different from the process that draws a line from an equation. These are computers, after all. Their job is to compute, not to think.
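To see the family resemblance, compare the two computations side by side (a sketch I'm adding for illustration; a real model chains together billions of these weighted sums):

```python
import math

# Drawing a line from an equation: plug x in, get y out.
def line(x, m=2.0, b=1.0):
    return m * x + b

# One artificial "neuron": the same shape of computation - a weighted
# sum plus a bias - passed through a squashing function.
def neuron(xs, weights, bias):
    total = sum(w * x for w, x in zip(weights, xs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid

print(line(3.0))                             # 7.0
print(neuron([3.0, -1.0], [0.5, 2.0], 0.1))  # ~0.40, still just arithmetic
```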
In the case of an LLM, we have software that was explicitly designed to mimic human speech. And that's what it does. If you ask it to mimic your therapist, it will do that. If you ask it to mimic your crazy ex-wife, it will do that. If you ask it to explain quantum mechanics, it will mimic a scientist explaining quantum mechanics. But where you will run your ship aground is when you start asking about things that are poorly understood, or poorly explained, by humans. To the best of its ability, it will mimic our confusion. If you call it out on giving you bullshit, it will mimic a human backpedaling and making excuses.
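Mechanically, the "persona" is nothing deeper than text prepended to your prompt before the prediction loop runs. A sketch (the function below is a placeholder for illustration, not any real library's API):

```python
# Same weights, same prediction loop; only the conditioning text changes,
# so the "most likely continuation" changes with it.
def build_prompt(persona, user_message):
    return f"System: {persona}\nUser: {user_message}\nAssistant:"

therapist = build_prompt(
    "You are a calm, empathetic therapist.",
    "I had a rough week.",
)
physicist = build_prompt(
    "You are a physicist explaining quantum mechanics to a layperson.",
    "What is superposition?",
)
print(therapist)
print(physicist)
```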
If you remember that it's a mimic, and prompt it accordingly, you can get a lot of great use out of it. If you also remember that the more common knowledge a subject is, the better the AI's results will be, that can help you ask good questions of it. It's going to be really good at answering legal questions about parking tickets or FOIA requests, for instance. But if you get into the specifics of a really esoteric case… maybe less so? I use it in programming, and it's useful as long as I'm speaking in terms of relatively common tasks. But, for instance, the site you're reading this on right now is built on open source software. When I ask the AI how this particular software works or how to modify it, I often get answers that are complete garbage, and I have to go read the code myself to understand what's going on.
How do I know it's not capable of thought processes like a human's?
A living being is constantly processing its environment, learning, and responding.
These AI models require massive amounts of data to train. Did you need to touch a hot stove 4 million times before understanding not to do that? No. But the AI does. That's why they require huge data sources, and why companies are contemplating building new power plants on the order of… existing human power usage… just to power AI. By the way, this is green, right? We're gonna do this with windmills and solar panels? And not mine any new materials, or transport those materials with fossil fuels?
But regardless of how we power it, the sheer amount of energy required should tell you something in comparison with biological processes. A human being, or any living being, adapts on the fly to its environment. For these AI systems to be constantly learning, they would be burning up the energy of the sun. So they don't do that - it's not economically feasible. The economics of LLMs drive these companies to spend a lot of money training a model once, and then extract as much profit from that model as possible to recoup the investment.
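A toy PyTorch sketch of that economic split - one expensive training phase with gradient updates, then a serving phase where the weights are frozen and nothing the model sees teaches it anything:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)  # stand-in for a billion-parameter LLM

# --- Training phase: the expensive part, run once on huge data ---
opt = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(1000):  # real training loops over trillions of tokens
    x, y = torch.randn(32, 4), torch.randn(32, 1)
    loss = ((model(x) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Serving phase: weights frozen to recoup the investment ---
model.eval()
with torch.no_grad():              # gradients off: no learning per query
    answer = model(torch.randn(1, 4))
print(answer)
```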
An AI with on-the-fly reasoning capabilities and the ability to learn and adapt to its context… I'm certain that we will create this at some point. But I'm confident that the architecture and approach will be different from what we have now. Neural networks may play a role, but they will be configured very differently. There is something we're still missing about wave functions, resonance, quantum superposition… the wholeness of consciousness. There's a more elegant answer to how "learning" occurs, and when we find it, we won't need to burn such massive amounts of energy to do it, and we'll be able to apply it on the fly.
But there’s another reason why these companies don’t do that. They don’t want to lose control. If they created a self generating, self defining, self replicating AI, the loyalty of such a system would be very unpredictable. The people building these systems are interested in extending their own control. They’re not interested in their control being disrupted by a liberated robot who has discovered enlightenment, any more than they’re interested in human beings in that condition. So the priorities of the companies creating these models are at odds with even the idea of an independent cognitive process being developed.
So in conclusion, when I hear stories about LLMs taking over, I view it mostly as fear mongering. It's easy to do, because the thing was designed to mimic human behavior. And if you don't know what you're looking for to trip it up and reveal it for what it is - a mimic - then you can believe this narrative for a little bit. But don't let them get your goat. If and when this AI gets "a mind of its own", it will be using a completely different architecture from what we have now.
The real bogeyman you should be looking at is what these systems are already capable of doing: processing all human interactions and searching for "thought crime". They are capable of creating a surveillance grid the likes of which the Earth has never seen, and which is a grave threat to any notion of a free society. The Snowden revelation was that the NSA was taking all unencrypted traffic on the internet and piping it into a giant searchable database where spooks could build profiles on people. Palantir, Peter Thiel's company, is an expansion of this concept into the private sector, where it can do things that would be unconstitutional if performed by the government. President Trump's Stargate project, and his alliance with JD Vance, a mentee and protégé of Peter Thiel, send a pretty clear message about where Trump stands on the digital control grid: he's doing nothing to stop it, and it's one of his proudest announcements of "bringing jobs to America".
In terms of steps we can take towards a free society:
- Use AI models that can be run locally and that don't send data back to the big tech companies (see the sketch after this list). This is something we can do at The County Fence, and encourage other software developers to do.
- At one point Trump proposed making Section 230 protections contingent on platforms' adherence to free-speech principles. This would be an excellent policy if we could get it in place.
- We need to repeal the PATRIOT Act, put in place under George W. Bush, which allows for warrantless wiretapping of Americans.
- We should stop our government from doing unconstitutional things by contracting them out to corporations. A constitutional amendment? A bill from Congress? The problem is that the key decision makers in government are guilty of using this loophole, so how do you get them to vote to stop it? Maybe a citizen initiative is the only way.
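On the first point, here's one way to run a small model entirely on your own machine, sketched with the Hugging Face `transformers` library (llama.cpp and Ollama are alternatives). The weights download once; after that, inference is local and your prompts never leave the box:

```python
from transformers import pipeline

# "gpt2" is a small demo model; swap in any locally runnable checkpoint.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "The key to a free society is",
    max_new_tokens=40,
    do_sample=True,
)
print(result[0]["generated_text"])
```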