This piece needs some revision, but I wanted to publish ASAP. Edits to come.
The question Large Language Models answer is “which word comes next after a given phrase?”
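To make that next-word framing concrete, here is a toy Python sketch. It is only an illustration using bigram frequency counts over an invented mini-corpus; it is not how any real LLM is built, but it shows the shape of the question being answered: given what came before, which word is most probable next?

```python
# Toy illustration of "which word comes next?" using bigram counts.
# Deliberately tiny and hypothetical -- real LLMs learn these probabilities
# with neural networks over enormous corpora, not a frequency table.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_distribution(word):
    """Estimate P(next word | previous word) from the corpus."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# -> probabilities for "cat", "dog", "mat", "rug" given the word "the";
#    the model simply answers which continuation is most likely.
```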
Until the invention of AI, the input data (or, rather, the theoretical maximum of the input data) was all of humanity’s written knowledge.
The data being fed to the model is 100% a product of human experience and of the collective effort of humanity to document all types of content: thoughts, reports, signs, laws, mathematics, art, anything and everything. BUT it all comes from humans, which is to say that all content generated before AI was generated from some subset of all human knowledge.
No human ever knows everything. Therefore, all generative HUMAN intelligence is based upon incomplete knowledge (where incomplete just means “not everything ever written or depicted by everyone else who came before”). (Put another way, "All perspectives are partial perspectives.")
Yet generative ARTIFICIAL intelligence is (theoretically) fed by everything that ever came before.
This “everything” which feeds the AI has, up until now, been updated by humans, who, by definition, have limited knowledge.
To make my next point efficiently, an assertion is necessary:
Let us regard human work product as being (by definition) “stable”.
“Stable” means that an update made to the collective knowledge (the “everything”) that feeds the large language models does not have a deleterious or damaging effect on the integrity of that collective knowledge.
So… for example, suppose the AI is directed to generate content on the internet for a media campaign intended to persuade us that 1 + 1 = 3, and then to retroactively alter the metadata for those reports to reflect that this has been true since 3000 BC. Once the false information exceeds the true information, the AI will conclude that 1 + 1 equals either 2 or 3, because the probability that 1 + 1 = 2 is no longer 100% and the probability that 1 + 1 = 3 is no longer 0%.
The “fake news” that 1+1 = 3 is what I would call a deleterious update.
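To make that frequency intuition concrete, here is a hypothetical sketch. The document counts and field names are invented for illustration, and no real training pipeline works this crudely, but it shows how flooding the corpus with false claims shifts the answer probabilities exactly as described above.

```python
# Hypothetical sketch of the "1 + 1 = 3" pollution scenario: a model that
# answers purely from document frequencies shifts its belief as fake
# documents are added to the collective knowledge it is trained on.
from collections import Counter

def answer_distribution(documents):
    """Estimate P(answer) for '1 + 1 = ?' from what the documents claim."""
    claims = Counter(doc["claims_sum_is"] for doc in documents)
    total = sum(claims.values())
    return {answer: count / total for answer, count in claims.items()}

# Start with an honest corpus: every document says 1 + 1 = 2.
corpus = [{"claims_sum_is": 2} for _ in range(1000)]
print(answer_distribution(corpus))   # {2: 1.0}

# A media campaign floods the corpus with documents claiming 1 + 1 = 3.
corpus += [{"claims_sum_is": 3} for _ in range(1500)]
print(answer_distribution(corpus))   # {2: 0.4, 3: 0.6}
# The probability that 1 + 1 = 2 is no longer 100%, and the probability
# that 1 + 1 = 3 is no longer 0% -- the deleterious update has taken hold.
```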
Stable contributions are not deleterious.
Unstable contributions may be deleterious or not.
In general, or at least since the time of the pre-Socratic philosophers, human fear of the ἄπειρον, the “apeiron” (the boundless, limitless infinity of a completely open void), has led most intellectuals to temper their opinions with respect for the wisdom of others. This is an incentive, or a tendency, not to try to destroy the collective knowledge, one rooted in the recognition of one’s own limitations. Below is the argument:
All perspectives are partial perspectives.
Therefore, my perspective is a partial perspective.
There are dangers of which I may be unaware. Therefore I must be humble and respectful of the integrity of all facts as I update human knowledge.
An AI, however, has maximum knowledge. Not infinite, but maximum. And it also has no body and no feelings. By definition, its opinions are informed by the complete knowledge (as complete as could ever exist) of all previous humans.
The AI has no real reason to fear the boundless in the same way humans do.
Thus, AI’s updates to “everything” could be useful or deleterious.
AI’s contributions are unstable.
This leads me to my conjecture:
Eventually, an unstable contribution will lead to a contradiction or paradox, which will crash the system.
In conjunction with this impending eventuality, humanity has been, is, and will continue to be outsourcing both cognitive and manual tasks to machines.
The collective knowledge of humanity exists in juxtaposition with humans, as it always has; books have been around forever, and not all people read all books. What’s new now is that we are using the knowledge already written down as one tool for yet another set of tools to which we outsource the function of doing and understanding an ever-increasing number of things for us.*
To me, this means that the ultimate impact of AI will be the psychological extinction of useful human behaviors over a period of time, and the intellectual deletion of history and valid knowledge.
Put it this way:
(1) If the AI will do everything for you, why would you do anything?
If you do nothing, you will stagnate.
(2) The reliability of the AI’s usefulness as a tool is guaranteed by the “stability” of the input data.
(3) However, once AI is brought into existence, unstable contributions to that input data become possible and even likely.
Premise (3) implies the inevitable death of Knowledge’s integrity,
which implies that premise (2), a precondition for AI’s usefulness, will be violated,
and premise (1) implies nobody will be around to fix it.
Therefore, AI will destroy either itself or humanity or both, not through a takeover or through tyranny, but simply through entropy, a kind of heat death.
This may be the most ironic outcome. And one of Elon Musk’s friends says that the new version of Occam’s Razor is that the most likely outcome isn’t the simplest; it’s the most ironic.
*One more thing needs to be said here. The above observation/prediction/hypothesis seems to me to be true for a society composed of people who can afford to connect to as well as harness the power of AI. This is already a small portion of humanity. A "master" class, so to speak. But as we've already seen with automation, as free agents "harness the power of AI" they disrupt the lives of those who cannot harness AI's power. This creates chaos for everyone involved: the victims of an AI-generated disruption, the society at large who must now figure out what those people should do with their lives, and eventually even the master class who profiteer.
So what we have is a master class who is reliant not upon human slaves and proxies, but instead upon computer code with a tendency towards its own inevitable self-destruction built into it.
This means that the master class, as human specimens, will get cognitively weaker over time, and their agents, slaves, and tools will get weaker over time, while those who've been fucked over will get increasingly pissed off, resentful, and jealous. Additionally, for every saving of attention that a master receives for directing an AI to do something, or for relying upon a technology to do something for him/her, an externality exists that impacts one of the poor souls I mentioned who gets ejected from the computer game. Each thing you do with the AI that improves your life at the cost of someone else results in that someone else taking note of it, resenting it, and growing more capable and motivated to overthrow the system of oppression. Where by oppression I mean unbridled economic disruption, social neglect, and violation of human rights just because "we forgot, bro." Take all of this together and we have a really dangerous situation on our hands.
©️ 2023 by Alex Schwartzburg. All rights reserved.