some silly happy folk this time, a “llaço” from the north east
we recently saw how information exists on several levels, and how we, suspiciously, translate the outside world into a structure similar to that of our own nerve cells. perhaps it’s time to clarify.
a thing that exists in itself, prior to others, can itself be made of things. if we consider these things properties, for an atom, for example, we could see it as mass, or dive deeper and see a specific arrangement of subatomic particles, each with its own mass, with the global “mass” as an emergent property, and so on. this is zooming in. information itself can then be encoded in several ways. the simplest and most basic alphabet would be the one comprised of the most fundamental quanta and their laws. since laws are patterns of things, they would only be available to the layer above the simple quanta.
if things are not randomly arranged, but in specific, less likely arrangements, information is higher. i don’t want to bring in many equations, but basically the more specific (or surprising) an arrangement of things is, the more information it carries.
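for those who do want a single equation, the standard formalization here (shannon’s self-information, nothing i’m inventing) says that an arrangement $x$ occurring with probability $p(x)$ carries

$$
I(x) = -\log_2 p(x)
$$

bits of information. a certain arrangement, $p(x) = 1$, carries zero bits; the rarer (more surprising) the arrangement, the more bits it carries.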
so the first layer of information is physical information: the information that exists for the simple fact that things are non-randomly arranged in space and time. this is the lowest level of information possible (unless, of course, we dive deeper into what time and space are, in which case we could find a deeper one, which i wouldn’t mind considering, obviously).
and as these arrangements become more and more complex, so do their coherent patterns. one kind of information exists through location, and another exists through work. work “creates” information (we saw how this is possible a long time ago), but it does so using available energy and rearranging “lower level” information. so we have another kind of information, the computational agent information: the information that encodes the activities done by said arrangements on other arrangements. this can be deduced from the “lower level” information, but it is very hard to do so. like explaining how a protein will fold based on its components: for now, our mathematics can’t model this properly, so it gets abstracted away for simplification. it’s not hard to imagine, though, some alien culture having no such issue, thanks to more advanced minds and science.
so information exists as part of things, and then in how things interact. this would be the case of DNA + cells. they are information-creating machines: they create arrangements of things, being things themselves. this is the second layer of information. not DNA itself, DNA is the first layer; the machinery is the second layer. but these machinery-like processes exist in simpler forms, like when gravity and magma interact to create different crystallizations, in effect synthesizing new shapes (or new information). in one case we call the machinery “life”, in the other we call it a “law of nature”. in both, information is synthesized. and both can be encoded as “transformation machines” that take information and create more information.
the same thing happens with the neuron. it has a coherent “work” function (one that in fact consumes calories), which can be seen as a function that applies to a set of conditions {a, b, c, …} and gives a “yes” or “no” answer: the name. when a connection is active, it becomes stronger; when a connection is useless, it tends to be discarded. like evolution, we have a system where neuron classifications evolve through death and selection. but what is making this selection?
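a minimal sketch of this in python (the numbers and the strengthen-or-discard rule are my own simplification, not a claim about how real synapses work):

```python
# a toy neuron: answers "yes" (the name) when all of its remaining
# input conditions hold. active connections are strengthened; unused
# ones decay and are eventually discarded, loosely mimicking pruning.

class ToyNeuron:
    def __init__(self, conditions):
        # every condition starts with the same connection strength
        self.weights = {c: 1.0 for c in conditions}

    def fire(self, active):
        # "yes" only if every remaining connection is satisfied
        return all(c in active for c in self.weights)

    def update(self, active):
        for c in list(self.weights):
            if c in active:
                self.weights[c] += 0.1   # strengthen active links
            else:
                self.weights[c] -= 0.2   # decay unused links
                if self.weights[c] <= 0:
                    del self.weights[c]  # discard a useless connection

n = ToyNeuron({"a", "b", "c"})
print(n.fire({"a", "b", "c"}))  # True: the name fires
print(n.fire({"a", "b"}))       # False
```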
this is where objectivity and subjectivity begin to split. so far, all our systems were objective (plus or minus scientific uncertainty). but once we have a system that classifies and creates names, what assures us that the “name” in one neuron is the same as the “name” in another neuron? let’s not confuse these names with human names; human names will come later.
the way to know if name a and name b are the same is to see if the function that generates the names is the same, i.e., by reverse engineering the neuronal function. how can we do this?
consider the set of conditions {a, b, c} for neuron 1, and conditions {d, e, f} for neuron 2. they represent the same information when (a and b and c) = (d and e and f) for every possible input. we have no way of comparing the components of a name unless we have separate access to {a, b, c, d, e, f}. this is the case with black box minds like animal minds: we have no way of “tingling” one connection at a time in each subject.
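with white-box access, though, the check becomes mechanical: enumerate every possible input and see whether the two condition functions agree everywhere. a sketch of that (the neurons here are arbitrary boolean functions i made up):

```python
from itertools import product

# two "names" are the same iff their generating functions agree on
# every possible input. for black box minds, this enumeration is
# exactly what we cannot do.
def same_name(f, g, n):
    return all(f(*bits) == g(*bits)
               for bits in product([False, True], repeat=n))

neuron1 = lambda a, b, c: a and b and c  # name built from {a, b, c}
neuron2 = lambda a, b, c: c and b and a  # same function, different wiring
neuron3 = lambda a, b, c: a and b        # ignores c: another name entirely

print(same_name(neuron1, neuron2, 3))    # True
print(same_name(neuron1, neuron3, 3))    # False (fails at a=b=True, c=False)
```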
so let’s see an example of how we can have equal classifications with different conditions, i.e., subjectivity, in the case of a rudimentary classification system. as i said, i postulate that a lot of our way of thinking comes from essential characteristics of the building blocks of minds.
let’s say the “yellow t-shirt” pattern has been seen by both neurons in different situations. they both reply “yes” when they see a yellow t-shirt. so we can say, from the outside, that they both classify “yellow t-shirts” correctly. but how can we know whether they “know” the same yellow t-shirt? or, better put, how can we know they accurately contain the information for a t-shirt? my opinion is that we can’t. structurally, having a valid classification does not imply valid inputs. let’s see it in practice in the example below.
neuron 1 takes {a and b and c} as inputs. when analyzed, we learn that {a} is triggered by the color yellow, {b} is triggered by the t shape, and {c} is triggered by the sight of clothes. if we only had a posteriori knowledge, we could understand a t-shirt by looking up what {a}, {b} and {c} were, and in this case end up with a working definition: “something that you can wear that is yellow and has a t-shape”. rather crude, but for a full definition we needed no more than one neuron and 3 inputs.
but what would happen if neuron 2’s conditions were different, yet, by coincidence, always true in the same situations as the classifier above? for example, {d and e and f} could be: {d} a human is standing, {e} a yellow blob is over the human, {f} the human is not cold.
in the overwhelming majority of cases, the answers of both classifiers are correct, because in every tested situation they answer “yellow t-shirt” correctly. but if we were to break down neuron 2, we would be surprised: a “t-shirt” is a “warm, standing human with a yellow blob”. hardly a definition of a t-shirt. but unless we painted a human yellow and tested both neurons, or took them to a clothing store, we would never encounter a situation in which a contradiction occurs, and therefore would accept the word “yellow t-shirt” as true, even though each neuron had a different “definition” of what it is. this would let us operate with both until a contradiction, or conflict, occurred; if we never encounter one, we never expose the flaws in one of the classifiers. this allows two classifiers to coexist with incoherent definitions of the same thing while being functionally equivalent, i.e., since they agree on the “name”, there is no feedback loop to check that {a,b,c} = {d,e,f}. this is subjectivity: two machines that accept a concept (the name) even though they have different definitions of it.
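here is a toy version of this scenario (the feature names are just illustration):

```python
# a world state is a dict of observable features. neuron 1 builds the
# name from {yellow, t_shape, clothing}; neuron 2, by coincidence,
# from {human_standing, yellow_blob, not_cold}.

def neuron1(w):
    return w["yellow"] and w["t_shape"] and w["clothing"]

def neuron2(w):
    return w["human_standing"] and w["yellow_blob"] and w["not_cold"]

everyday = {  # a person wearing a yellow t-shirt
    "yellow": True, "t_shape": True, "clothing": True,
    "human_standing": True, "yellow_blob": True, "not_cold": True,
}
painted = {  # a human painted yellow: no t-shirt anywhere
    "yellow": True, "t_shape": False, "clothing": False,
    "human_standing": True, "yellow_blob": True, "not_cold": True,
}

print(neuron1(everyday), neuron2(everyday))  # True True: they agree
print(neuron1(painted), neuron2(painted))    # False True: the conflict
```

until someone shows them the painted human, the two classifiers are indistinguishable from the outside.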
it might not be very clear by now how these classifications were constructed. in a perfect world, we would always have {a,b,c} = {d,e,f}. again, through educated speculation, allow me to provide another learning algorithm. to map the function {a,b,c} = name, we need some kind of feedback that says “ok” or “not ok” to a definition, i.e., something that connects the name “t-shirt” to a neuron that just fired. this, obviously, is somewhat tricky, so let’s reward the cell with chemicals instead. we feed it {a,b,c} for a t-shirt and it fires: we keep it. we feed it {a,b,c} for a t-shirt and it doesn’t fire: we don’t. it is easy to imagine the “t-shirt organism” as a simple one with a sensory side (3 sensors for the features described), the classifier, and an utterer (it says “t-shirt” whenever it sees one). then we define the evolutionary fitness of this concept as “when the t-shirt is correctly classified, the organism survives; otherwise, it dies”. over time, we can guarantee that the organism will classify the t-shirt correctly. how does it know it is correct? because our evolutionary law, which in our case is reality, is there to feed back correct and incorrect classifications.
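a sketch of this “t-shirt organism” loop, with reality as the selection rule (the population size, mutation rate and encoding are all arbitrary choices of mine):

```python
import random
from collections import Counter

def reality(s):
    # the "true" name: a t-shirt is all three features present
    return all(s)

def mutate(mask, rate=0.05):
    return tuple((not b) if random.random() < rate else b for b in mask)

def evolve(generations=300, pop_size=50):
    # an organism is a mask over the 3 sensors its classifier reads
    pop = [tuple(random.random() < 0.5 for _ in range(3))
           for _ in range(pop_size)]
    for _ in range(generations):
        survivors = []
        for mask in pop:
            s = tuple(random.random() < 0.5 for _ in range(3))
            guess = all(v for v, used in zip(s, mask) if used)
            if guess == reality(s):   # classified correctly: survives
                survivors.append(mask)
        if not survivors:             # rare extinction: keep the old pool
            survivors = pop
        pop = [mutate(random.choice(survivors)) for _ in range(pop_size)]
    return pop

print(Counter(evolve()).most_common(1))  # dominant mask, tends to all-True
```

no organism ever “knows” the definition; reality kills the wrong ones, and the correct classifier is what remains.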
for example, let’s say i classify a predator incorrectly: i’m dead. if i do so correctly, i’m not. and, again, my definition of predator comes straight from my sensory input. the very symbols used are exactly the symbols that my sensory system can provide. not only are they subjective, as we saw above, they are biased by their own physical constituents.
we have seen how we can demonstrate subjectivity objectively in small organisms. i will slowly approach human minds. but to recap the information levels: we have the information of things, the information of things acting on things, and now the information of things acting on themselves. this last one is the role played by evolution (when it is “mindless”) or by a trainer when we deal with some kind of classifier.
whenever we feed the “classification” back to the organism as “correct” or “incorrect”, we have a new kind of information system, one that can “recode” itself, i.e., the work done is no longer just a consequence of its physical properties, but also a consequence of its own work on itself. i would consider all intelligent systems that exist today, animals, humans, and so on, as part of this latter category, thanks to evolution. our machines themselves are, again, mindless algorithms tuned by another thing: us. we provide the “fitness” of machines, technology and algorithms, just like evolution did for us as animals. the principle is the same.
i hope to implement this algorithm properly and provide actual classification data to validate it. even though this is more of a philosophical text, what i postulated is testable. i predict that a system composed of a classifier with n inputs and one output, plus a feedback loop through an environment defining fitness, will, after training, provide information about the pattern it was trained to detect, but this information may or may not agree with that of other classifiers, hence an objective proof of subjectivity.
this is visible in neural networks as different local minima of the cost function. i don’t have a good formulation for this problem yet, but i imagine it will end up being similar, if not equivalent, to a regular neural network.
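a quick way to see the prediction without any framework, using perceptrons instead of full networks (the point about diverging internal encodings survives the simplification):

```python
import random

# train two perceptrons on the same data from different random starts.
# both end up answering the "name" identically, yet their internal
# weights -- their "definitions" -- come out different.
def train(seed, epochs=200, lr=0.1):
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(3)]
    b = rng.uniform(-1, 1)
    data = [((x1, x2, x3), int(x1 and x2 and x3))
            for x1 in (0, 1) for x2 in (0, 1) for x3 in (0, 1)]
    for _ in range(epochs):
        for x, y in data:
            pred = int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)
            w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, x)]
            b += lr * (y - pred)
    return w, b

def classify(w, b, x):
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

(w1, b1), (w2, b2) = train(seed=1), train(seed=2)
inputs = [(x1, x2, x3) for x1 in (0, 1) for x2 in (0, 1) for x3 in (0, 1)]
print(all(classify(w1, b1, x) == classify(w2, b2, x) for x in inputs))  # True
print(w1, b1)   # one internal definition...
print(w2, b2)   # ...and a different one, same name
```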
i will soon move on to bigger minds, brains, and so on, standing on this idea that information exists as structural connections in the conditions for a name, and that these can differ between individuals. if a neuron is already subjective, how can realities agree? this is the challenge for the following posts.