Consciousness is an emergent property of categories. Once a sufficient number of categories can be represented in a system, selfhood arises and, with it, consciousness.
The two cannot be separated. You can't have a sufficient repertoire of categories and not get consciousness; it's not as if some special something is sprinkled on top that magically turns the system into a conscious one.
From *I Am a Strange Loop* by Douglas Hofstadter.
See also:
- It might be useful to think of AI advancements like large language models as inert mathematical functions and weights, but the repertoire of categories is precisely the building material for selfhood
- Human intelligence is a special case of artificial intelligence
Links to this note
- One explanation for how creativity works is a process called associative thinking. By linking ideas and information together in unique ways, people come up with something new. This happens spontaneously (what's the first word that pops into your head?) and in a directed way (what's the connection between these two clues?).
- Rather than converting to text at every step of a chain-of-thought process, new research suggests that large language models can reason in a latent space, using the model's internal representations. Besides improving responses that require deeper reasoning, staying in latent space is faster because it skips repeated tokenization and text generation.
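The intuition behind latent-space reasoning can be shown with a toy sketch (my own illustration, not from the research mentioned; the vocabulary, update rule, and decoding step are all made up). Decoding to a discrete token at every step rounds the state to the nearest vocabulary entry, discarding precision that a latent-space loop keeps.

```python
# Toy contrast between token-space and latent-space chain-of-thought.
# Everything here is invented for illustration: a scalar "hidden state",
# a five-entry "vocabulary", and an arbitrary nonlinear update.

VOCAB = [0.0, 0.25, 0.5, 0.75, 1.0]  # hypothetical discrete tokens

def step(state):
    """One made-up reasoning step: a fixed nonlinear update."""
    return (state * 0.8 + 0.13) % 1.0

def decode(state):
    """Project the continuous state onto the nearest token (lossy)."""
    return min(VOCAB, key=lambda t: abs(t - state))

def token_space_cot(state, n):
    """Re-tokenize after every step, as text-based chains of thought do."""
    for _ in range(n):
        state = decode(step(state))
    return state

def latent_space_cot(state, n):
    """Stay in the continuous state across all steps."""
    for _ in range(n):
        state = step(state)
    return state
```

Starting from the same state, the two loops diverge: the token-space loop snaps to `0.25` on its first decode and stays there, while the latent loop carries the full continuous value forward, so later steps still have something to work with.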
- There are many theories put forth to explain human consciousness, and experiments are underway to test them. With all the discussion around AGI, it's timely to keep an eye on them.