more gaita sanabresa. previously we summarized the last few months of theoretical development. now i will begin exploring some of the consequences of the model i described. note that i am not claiming *truth*, i am merely describing the framework behind every argument i make. so let’s first explore the consequences of accepting that minds are a generative space, or at least have equivalent properties.

### the more you know, the less you generalize

knowledge, in a generative space, is quantified not by its expanded values, but by how many dimensions are present in the *mindspace* (the mindspace is the space made of all base concepts we have in our mind). for example, consider bagpipes. one individual only knows about scottish bagpipes. in his mindspace, the “bagpipe” line is every kind of scottish bagpipe: 3 drones, a mixolydian chanter and kilts. whenever anyone talks to him about bagpipes, he will imagine scottish music, the army, kilts and so on. this is the space expanded from the limited knowledge of a single type of pipes. now consider someone who has been exposed to more kinds of bagpipe. whenever someone speaks about bagpipes, this second individual won’t be able to imagine them without *more information* on what kind of bagpipes we are talking about. in his *mindspace*, it is insufficient to say “bagpipes” because he knows that “bagpipes” is a broad term applied to several different types of pipes, each with its own set of drones (or even no drone at all), one or more chanters, and dozens if not hundreds of different musical modes and tonalities. the second individual’s resolution when speaking about pipes is higher than the first’s, but this also means that the second individual has a bigger bagpipe mindspace: he can expand the reality of bagpipes in more dimensions than the scottish one. so, to classify a new set of pipes, the first individual will just say “scottish bagpipes”, because that’s all he knows, while the second will say “that seems like the tunisian mezoued but with an eastern european drone”. less generalization and more refinement imply more knowledge: he projected the new set of pipes onto a big mindspace of many different pipes.
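the two mindspaces can be sketched as a toy nearest-base classifier. everything below is invented for illustration: the feature dimensions (drones, chanters, minor modes) and their values are not real organology, just stand-ins for base concepts.

```python
# toy sketch of the bagpipe example: a new instrument is classified by
# projecting it onto the known "base" pipe types. all feature values here
# are hypothetical, chosen only to illustrate the argument.

# features: (number_of_drones, number_of_chanters, plays_minor_modes)
KNOWN_PIPES_NARROW = {
    "scottish great highland": (3, 1, 0),
}

KNOWN_PIPES_BROAD = {
    "scottish great highland": (3, 1, 0),
    "tunisian mezoued": (0, 2, 1),
    "galician gaita": (1, 1, 0),
}

def classify(pipes, known):
    """label a new set of pipes by its nearest known type (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(known, key=lambda name: dist(known[name], pipes))

new_pipes = (1, 2, 1)  # hypothetical instrument: one drone, two chanters, minor modes

print(classify(new_pipes, KNOWN_PIPES_NARROW))  # scottish great highland
print(classify(new_pipes, KNOWN_PIPES_BROAD))   # tunisian mezoued
```

with a single base, every observation collapses onto “scottish”; with more bases, the very same observation lands on a finer label.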

this is somewhat counterintuitive, as we tend to imagine that people who know the *laws* of nature really *know* nature very well. in fact, i’m saying that knowing *reality* in one dimension only actually restricts one’s knowledge of it. generalization grows in inverse proportion to knowledge of the topic being generalized.

this only makes sense in a generative space. if, instead, we were dealing with brains and minds made of a quantified collection of independent concepts, assorted and fragmented, then the individual with the most pipe knowledge would be able to generalize the most, because he would have the largest amount of information about pipes and a bigger vocabulary to express his generalization. what we see is the opposite: the richer the vocabulary, the more specific, and therefore less generalizing, the comment. this implies a *mindspace* that is generative and not simply a collection of information.

### it is impossible to fully understand topics of which we have no prior knowledge

consider that extra dimensions are added via new information that the system receives, as we saw above. for example, if we have the red and the green neuron as before, we cannot add “yellow” until we see it, both our “bases” are active at the same time, and we “grow” a new abstraction for it (as i put before, another base or neuron). without such an observation, we will never encounter a case where both are active. if we cannot “listen” to both red and green at the same time, each of them alone will call its data red and green respectively. it is only when they are put together via a higher-level abstraction (a “yellow” neuron that collects replies from “red” and “green”) that we understand that, in fact, both red and green are correct and together they can be identified as “yellow”.
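a minimal sketch of this growth process, with deliberately toy “cone” responses (the wavelength cutoffs below are rough illustrations i made up, not real photoreceptor curves):

```python
# toy model: two base detectors, and a new abstraction that is "grown"
# the first time both fire on the same observation.

def red_base(wavelength_nm):
    # invented response: the "red" base fires for long wavelengths
    return wavelength_nm >= 550

def green_base(wavelength_nm):
    # invented response: the "green" base fires for mid wavelengths
    return 500 <= wavelength_nm <= 600

abstractions = {}  # higher-level concepts grown so far

def observe(wavelength_nm):
    """report what the mindspace sees, growing "yellow" on first co-activation."""
    r, g = red_base(wavelength_nm), green_base(wavelength_nm)
    if r and g and "yellow" not in abstractions:
        # a new neuron that collects replies from "red" and "green"
        abstractions["yellow"] = ("red", "green")
    if r and g:
        return "yellow"
    return "red" if r else "green" if g else "unknown"

print(observe(640))  # red
print(observe(520))  # green
print(observe(580))  # yellow (both bases fire, so the abstraction is grown)
```

until a ~580 nm data point is actually observed, the “yellow” abstraction never exists, even though the structure was capable of it all along.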

the fact that structurally we are capable of seeing yellow (thanks to having green and red neurons) means nothing if we never *observe* a yellow data point. even though today we can *induce* yellow’s existence from green and red, that induction comes from our *prior knowledge* that green and red together make yellow, which in turn comes from, you guessed it, the fact that we *observed* that red and green make yellow. this apparently circular argument serves only as proof that *reality* is the primary source of information, and that even induced effects use symbols that were themselves extracted from reality. we are capable of *recombining* and *expanding* existing values (like the two colors we can detect), but we cannot conceive, imagine, or even work with concepts outside our *observed dialect*. we were born without any capacity to see infrared, so we couldn’t imagine it until an artifact (the thermometer) could transform that information into a form we can see (it turned heat into the growing length of a mercury column). in this sense, our conceptual capacity is permanently limited by how much we can project into our own sensory system, and then induce from it its patterns.

an easier example to understand is being given a book about the life of bees in chinese when you don’t speak the language but are an expert on the life of bees. since you have no *prior understanding* of chinese, you cannot *project* its information onto your own *mindspace*. this means that even though this book is part of the *reality* you know via your *subjective projection*, it is impossible for you to access this new set of data because you possess no equivalence between your subjective symbols and the chinese symbols. you cannot understand the book, even though the subject is familiar to you, because it is represented in a *mindspace* that is orthogonal to yours, i.e., one that shares not a single common base concept.
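the orthogonality claim can be sketched with a plain dot product; the two dimensions and their values are, of course, hypothetical:

```python
# sketch: projecting a message onto a receiver's mindspace as a dot product.
# if the spaces share no base concept (orthogonal vectors), the projection
# is zero and nothing is transferred. vectors are purely illustrative.

def project(message, mindspace_base):
    """dot product of the message with the receiver's base."""
    return sum(m * b for m, b in zip(message, mindspace_base))

# dimensions: (bee_biology, chinese_script)
book_in_chinese = (0.0, 1.0)  # all information encoded in chinese symbols
bee_expert_base = (1.0, 0.0)  # a mindspace with no chinese base concept

print(project(book_in_chinese, bee_expert_base))  # 0.0, i.e. no transfer
```

the bee expertise dimension never even comes into play: the book’s content only exists along an axis the reader does not have.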

one of my favorite examples of this was a particularly fun IQ test i took online. everything was working out great until i reached a basic mathematics section about counting. for some reason, the script miscalculated my location and gave me the numbers in hebrew. i couldn’t understand the numbers, so i scored far below the lowest level for mathematics, even though i had done the same type of test before and had a normal score. the system *assumed* that i had a non-orthogonal mindspace onto which it could project the test, and with it, assess my knowledge. in practice, this demonstrates that it is impossible to understand a test if we don’t understand the symbols it is written in, implying that prior knowledge is required for any activity. by prior i don’t mean we’re born with it; i merely mean that it is present in the system when the event being analyzed occurs.
i will deal more with this when we reach *communication*.

### it causes generalizations that are powerful and false

if minds are generative spaces, then they can generate hypothetical *observations* beyond their original *observations*. take, for example, the basic understanding of quantity: i saw one stone, then two, then three, so i can generalize and say i can count *any* amount of stones. this is a valid mathematical theory that fits very well with *reality*, but it is flawed because it *induces* that counting can go on forever: there is no *reality* check on the generalization. this means that we can imagine infinite rocks even though we have a finite supply of rocks (see our axioms). generalizing is only possible in a generative space. if we based our thoughts exclusively on data, it would be impossible to imagine beyond the three rocks we had seen before. induction is a side effect of a generative space, and inductions must be fed back through reality to check their validity. this also means that *mindspaces* can contain *infinite* quantities, not because they can represent infinite quantities by enumerating them (1, 2, 3, …), but by just representing the *base* for that space: number *x* is between 1 and infinity. this requires only the *base* of numbers and a capacity for computation. a lot of infinities have very low computational complexity. as i said above, *mindspaces* can generalize very well thanks to their *reduction* of reality to a few key projections.
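the point about infinities with low computational complexity can be sketched in a few lines: store the base and a rule, never the elements (the function names below are my own, invented for the sketch):

```python
# sketch: an infinite quantity held as a base plus a rule, not an enumeration.
import itertools

def naturals():
    """the infinite set of counts, represented only by a successor rule."""
    n = 1
    while True:
        yield n
        n += 1

# the rule is a few bytes, yet it generalizes beyond any finite pile of rocks
first_five = list(itertools.islice(naturals(), 5))
print(first_five)  # [1, 2, 3, 4, 5]

def is_valid_count(x):
    """membership test: constant time, no enumeration needed."""
    return isinstance(x, int) and x >= 1

print(is_valid_count(10**100))  # True, far beyond any observed pile of rocks
```

note that nothing here ever touches *reality*: the rule happily validates quantities no finite supply of rocks could ever produce, which is exactly the flaw described above.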

it is very common to see these generalizations in our everyday life. in fact, the whole capitalist notion of exponential growth seems to be based on this generative idea that infinity is real. even if the richest man in the world makes all the money there is to make, the matter used to quantify his wealth will still be a limited number, limited by the amount of information that can physically fit in *reality*. there is no infinity on our planet, only the *concept* of infinity, which exists in brains and minds. whether *infinity* exists in general is a very interesting topic. i feel inclined to say that if the question is asked locally, the answer is no; globally, i don’t know.

### communication is possible, but only efficient with a balance between mutual information (shared base vectors) and information entropy between the systems

if mindspaces weren’t projective spaces, communication would merely be the *transmission* of information from one point to another (much like we saw for *mindless observers*), so it would be possible for me to read a chinese book. but what we see is that this is not the case. for *mindful observers*, it is a requirement that there is some kind of internal *mindspace* that receives the new data. as in the chinese book example above, the information carried has a certain *mutual information* and a certain *entropy* connecting the sender and the receiver. the issue is that, in a projective space, if there is no *mutual information*, the projection will be zero, effectively causing no transfer whatsoever. this is very counterintuitive: efficient communication between *minds* requires that they already share some *prior information* (this sharing is the mutual information between the two minds). while it is true that i could learn chinese, i would do so by *growing* a new section of my *mindspace* that guarantees a non-zero projection: it guarantees that i have some mutual information with which to read the book. if minds weren’t projective, learning would be just “feeding” the symbols directly. we know this is not the case, and that minds need to be *trained* to be able to *grow* that new chunk of *mindspace*.
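a toy version of this gating, where sets of named base concepts stand in for mutual information (the base names and message are invented for the sketch):

```python
# sketch: transfer between minds gated by shared base concepts.
# shared bases stand in for the mutual information between the two minds.

def transfer(message, receiver_bases):
    """only the parts of a message expressed in bases the receiver has get through."""
    return {base: content for base, content in message.items()
            if base in receiver_bases}

receiver_bases = {"bees"}                           # a bee expert with no chinese
message = {"chinese_script": "bee lifecycle data"}  # everything encoded in chinese

print(transfer(message, receiver_bases))  # {}  zero projection, nothing transfers

receiver_bases.add("chinese_script")      # learning: grow a new section of mindspace
print(transfer(message, receiver_bases))  # now the same message gets through
```

the message itself never changed; what changed is that the receiver grew a base guaranteeing a non-zero projection.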

this means that communication between *minds* is worse than communication between *reality* and *observers*. it requires of both minds that they have some mutual information, which, by definition, is not necessary to transmit. this also means that communication will be most efficient within the *mind* itself, i.e., we are best understood by ourselves, even if we are permanently contradictory. my opinion is that this explains why we gravitate towards *minds* we share something with: communication is more efficient, and it is less likely that we’ll have a misunderstanding or a conflict. it also explains why we have so much trouble dealing with data that conflicts with our prior beliefs: if something has high mutual information, we do not need to *grow* new sections of *mindspace* to accommodate the knowledge, we can just reinforce what we already have. a projective space is more efficient at transferring information it already knows than at transferring information it doesn’t know. this is incredibly different from a *mindless observer* that indiscriminately gathers information.

now, why does a projective space make evolutionary sense versus other spaces? because it can reduce the complexity of reality, which holds an immense amount of data, to the few simple dimensions it can understand, no matter the size of the data being fed to it. in fact, the simpler the brain, the more powerful the generalization. a mind that can only tell light from dark will have no problem dealing with light from a light bulb, a firefly, a supernova or a star. this is both incredibly powerful and incredibly lazy at the same time: the simpler the mind, the fewer resources it will need, and still it will be capable of powerful generalizations about reality. a very nice trick indeed. for the mathematically inclined, i shall give a full mathematical formulation of this soon.
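the light/dark mind fits in one line; the luminance figures below are rough, chosen purely for show:

```python
# sketch: the simplest possible mind projects all of reality onto a single
# light/dark dimension, and so generalizes over any light source whatsoever.

def simplest_mind(luminance):
    """one threshold, one dimension, zero trouble with any input size."""
    return "light" if luminance > 0.0 else "dark"

# wildly different scales, all handled by the same tiny mind
for source, lum in [("light bulb", 1.5e3), ("firefly", 1e-3),
                    ("supernova", 1e38), ("darkness", 0.0)]:
    print(source, "->", simplest_mind(lum))
```

the supernova carries some thirty-odd orders of magnitude more data than the firefly, and the one-dimensional projection does not care.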

with this comes a sobering consequence for complex minds: they are valued not for their capacity for generalization, but for their capacity to exquisitely represent abstract realities. not for their capacity to *know* reality (since their skin knows more about reality than their brain does), but instead for their capacity to *imagine*, *conceive* and *expand* their interpretation of *reality* beyond *observation*. it is not that we see *reality*, but that we use it so we can see beyond it, and with it, visit infinite impossible landscapes just by closing our eyes.