The Degeneration of the Nation
An Essay on the Fermi Paradox
The problem of the empty skies - like an inversion of the problem of the existence of God - evokes terror in every rational person, and has received a brilliant formulation in the "Fermi Paradox" (on which there is an excellent and disturbing article in the English Wikipedia). On the surface this is a probabilistic-scientific problem, but at its core it is a philosophical problem of exceptional magnitude, one that forces philosophy to return to its origin as the cradle of physical and biological science - and that produces an exceptionally distant perspective on humanity (bordering on the inhuman). If our point of view on the universe is completely implausible (statistically!), how do we look from the heavens - from the universe's point of view?
By: The Hardest Problem in the Universe
Subjectivity versus Objectivity - on Cosmic Scales

The First Algorithmic Era

What do we learn from the Fermi paradox? The larger the paradox - that is, the more likely the existence of life in the universe (and this is the consistent direction in which research has moved in recent years) - the worse our situation, of course, and the more frightening the paradox becomes. If a great filter of one in a billion is required, that is worse than a filter of one in a thousand, especially since we cannot identify even one convincing filter of this kind in our past (one that happened only once - and at a single stroke). We are certain of only one basic filtering fact: evolution takes a l-o-n-g time, and it contained quite a few instances of extreme luck.

If we assume that our development embodies an average evolution of 400 billion years rather than 4 billion, then, given the age of the universe, we are rare. This is not a one-time filter that can be identified in a specific event, but a filter spread over a long time. Contrary to our intuitive biases, the probability of one one-in-a-million event occurring equals the probability of six independent one-in-ten events occurring, or of twenty one-in-two events (and if this is the filter, it would look to us exactly like our own past: a combination of many instances of reasonable luck). This difference is equivalent to the modern religious transition from the paradigm of "miracle" to the paradigm of "providence": a single, highly improbable event of divine intervention is spread over time across countless small interventions.
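
The equivalence is a matter of plain arithmetic, and worth checking explicitly; a minimal sketch in Python (the probabilities are the ones named above):

```python
# One large filter vs. many small strokes of luck, spread over time.
p_single = 1 / 1_000_000          # a single one-in-a-million event
p_six_tens = (1 / 10) ** 6        # six independent one-in-ten events
p_twenty_coins = (1 / 2) ** 20    # twenty independent one-in-two events

print(p_single)        # 1e-06
print(p_six_tens)      # 1e-06 (equal, up to floating-point rounding)
print(p_twenty_coins)  # ~9.54e-07 (nearly equal)
# From the inside, such a filter looks like an ordinary history
# containing "many instances of reasonable luck".
```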

What is the reason that evolution took billions of years? There is only one answer basic enough (that is, one not dependent on specific planetary circumstances): the evolutionary algorithm is very primitive. It has two main problematic characteristics:

a. Slowness: its changes are blind, random mutations, with no chosen direction.
b. Convergence: it is an optimization process, and therefore it tends to get stuck at local maxima.

Between these two, the decisive problematic feature is convergence. We see countless examples in evolution of extremely precise optimization carried out despite the slowness of the mechanism. On the other hand, and in equal measure, there are countless examples of the evolutionary optimization process getting stuck at a local maximum for extremely long periods - in the present as in the past. The greatest stagnation is in the rise of the level of complexity (which is the only direction that can be identified in evolution, and which is inherent to it as an algorithm precisely because it struggles to create complexity - complexity is the evidence of its cumulative, one-directional activity).
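
Why an optimization algorithm of this kind gets stuck is easy to see in miniature. A toy sketch (a caricature of the mechanism, not a model of biology): blind mutation plus selection, on an invented landscape with two peaks.

```python
import random

def fitness(x):
    # Toy landscape: a local peak at x=2 (height 1.0) and a global
    # peak at x=8 (height 2.0), separated by a valley of zero fitness.
    return max(0.0, 1.0 - (x - 2) ** 2) + max(0.0, 2.0 - 0.5 * (x - 8) ** 2)

random.seed(0)
x = 1.5  # start on the slope of the nearer, lower peak
for _ in range(10_000):
    mutant = x + random.gauss(0, 0.1)   # small, blind, random mutation
    if fitness(mutant) > fitness(x):    # selection: keep only improvements
        x = mutant

print(f"stuck at x={x:.2f}, fitness={fitness(x):.2f}")
# Prints x~2.00: the local maximum. Reaching the higher peak at x=8
# would require passing through worse states - which mutate-and-select,
# by construction, never does.
```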


The Second Algorithmic Era

In fact, the central event in evolution so far is the creation of a developmental algorithm of a different, non-evolutionary type. The appearance of the brain was not in itself the creation of such an algorithm, since the development of the individual is not yet the development of the species. Only when development began to be passed on from generation to generation was a competitor to evolution created, and from that stage (and not from the stage of the brain's appearance) the new algorithm created new complexity, far faster. Human language was a new genetic code - a memory - that allowed information to pass from generation to generation; but this memory is not fundamentally different from the genetic one (which is also, in essence, linguistic memory), and its mere existence would not necessarily create an algorithm of a type different from the evolutionary one.

Therefore we must ask: has a new algorithm really appeared on the planet for the first time, or is this merely hardware faster and more flexible by orders of magnitude (instead of a fixed genome, linguistic information that changes rapidly), while the developmental algorithm itself remains evolutionary, and human development is still driven by replication and random mutation? Can it be argued, for example (as modernity claims), that art is at bottom an evolutionary algorithm - that is, directionless - created by the primitive mechanisms of changing fashions: imitation, variations, and the breaking of conventions (mutations), which have no direction except change itself? Perhaps this is a valid description of all cultural development, or even of scientific development (the breaking of paradigms)?

Well, the new algorithm has completely different properties from the previous one. If evolution is an optimization algorithm, and therefore naturally gets stuck at local maxima, the new algorithm is a learning algorithm, and therefore, since its emergence, it has driven constant change with very little stagnation (the Middle Ages are the exception of history, not its rule) - and thus our planet was thrown into a state of constant and accelerating change (which was not true of evolution, which showed no noticeable inherent acceleration). What distinguishes learning from evolution? How is a learning algorithm - cultural or scientific development, for example - fundamentally different from an optimization algorithm?

The fundamental difference does not lie in the imitation-and-replication component. Even if the speed and efficiency differ, it is still basically the same copying mechanism. The difference lies precisely in the mutation mechanism, which has been replaced by a creativity mechanism. Even if the preservation side remains, in the end, the same preservation, the change side is no longer random, and no longer stems from a disruption of the preservation-and-copying mechanism as a kind of by-product of it. This is a second mechanism, completely independent of preservation, which actively creates changes in directions that it chooses. Creativity in language (and in literature) does not stem from proofreading errors or transmission errors (a game of broken telephone). At work here is a mechanism built not merely on faster trial and error in random directions, but on change in a chosen, definite direction. Hence the far higher efficiency of the process - and its acceleration.
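
The contrast can be sketched schematically, with gradient descent standing in for a change mechanism that chooses its own direction (an illustration of the structural difference only; the loss function and step sizes are invented):

```python
import random

def loss(x):
    return (x - 10) ** 2  # the "problem" both mechanisms face

def mutation_step(x, scale=0.1):
    # Evolution's change mechanism: a random perturbation; any direction
    # it takes is a by-product of noise in copying.
    return x + random.gauss(0, scale)

def creative_step(x, scale=0.1):
    # A change mechanism that chooses a direction (here: downhill).
    gradient = 2 * (x - 10)
    return x - scale * gradient

random.seed(0)
x_evo = x_learn = 0.0
for _ in range(200):
    candidate = mutation_step(x_evo)
    if loss(candidate) < loss(x_evo):  # selection must filter the noise
        x_evo = candidate
    x_learn = creative_step(x_learn)   # no filtering needed

print(f"evolution: {x_evo:.2f}, learning: {x_learn:.2f}  (target: 10)")
# Learning lands on 10.00 within a few dozen steps; blind mutation
# crawls, since on average half its proposals point the wrong way.
```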


Philosophy of the Second Era

A philosophy that understands this will place the idea of learning at the center of its conception of man - and will see man's advantage and uniqueness in his creative ability, which, combined with imitation and copying, creates learning. Unlike the animals around us, humans get bored quickly. We have a natural instinct for creativity and an urge for change. Conservatism is no more natural to us than innovation - contrary to the doctrine of society's conservative elements. Sometimes we create systems with a tendency toward excessive conservatism and stagnation (religions in the modern era) or toward excessive innovation and dispersion (art in the modern era), and sometimes we create learning systems that work well (modern science, modern literature). But the urge for innovation, as an independent urge and not a malfunction of the urge to preserve, is inherent in us.

Therefore the evolutionary balance between conservatism and innovation that many preach - as a kind of golden mean, a "balance" of mutation rates - is a false and harmful idea. For learning is not one mechanism with a single parameter (the fidelity of the copy to its source), as in evolution, but two separate mechanisms: that is, two vectors. Hence there is no single parameter to be balanced, but two separate and independent vectors, which should preferably operate at full strength - and not cancel, offset, or "balance" each other. We should strive for a system that has both a tremendous drive to preserve and impart the achievements of the past - and a tremendous drive for innovation and new achievements. For example: a culture that zealously preserves its tradition, but also innovates zealously. A creator well-versed in the classics, burning with admiration for the past - but also burning with the urge to innovate. A parent who imparts culture deeply to the child - together with a deep joy of innovation.

The result of the idea of balance is two weak vectors: very little cultural preservation, and very little cultural innovation. Modern science works well not because an "invisible hand" has struck a "sacred balance" between conservatism and innovation, but because both factors - the imparting of accumulated knowledge and the pursuit of new knowledge - operate in it with intensity. If contemporary literature is forgetting the literary tradition, the harm comes not from a disturbed balance between conservatism and innovation, but from the loss of one of the two legs that gave it its height. Therefore excess innovation should not be treated by suppressing innovation, but by increasing conservatism and nurturing tradition; and excess conservatism should not be treated by destroying tradition, but by nurturing innovation. In evolution it is a zero-sum game - but not in learning, where imitation and innovation complement each other. Great works were created from massive collisions between powerful drives of innovation and preservation, not from well-balanced, controlled experiments in doses of innovation and conservatism (whose results lack depth and inner strength).
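
One schematic way to state the difference (a toy formalization of the claim; the "capacity" functions are illustrative stand-ins, not measurements of anything):

```python
# Evolution has one knob: copying fidelity f. More innovation (1 - f)
# necessarily means less preservation - a zero-sum trade-off, so the
# best one can do is balance the single parameter.
def evolution_capacity(f):
    preservation, innovation = f, 1 - f
    return preservation * innovation       # peaks at f = 0.5

# Learning has two independent knobs: preservation p and creativity c
# come from separate mechanisms, so both can run at full strength.
def learning_capacity(p, c):
    return p * c                           # peaks at p = 1 AND c = 1

print(evolution_capacity(0.5))  # 0.25: the ceiling of "balance"
print(learning_capacity(1, 1))  # 1.0:  both vectors at full power
```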


Ethics of the Second Era

The understanding that the learning algorithm is us - that learning is the human condition - can answer the great philosophical lacuna of our time. If past philosophy dealt with the questions of death and of the meaning of life - why we should live and why we must die - the sting of these questions dulled when the drives of conservatism and innovation, the drives of learning, were replaced by distinctly evolutionary, animalistic optimization drives: pleasure and pain. But one basic question remained unanswered in the philosophy of pleasure and pain: why should we bring children into the world? Indeed, the worldview and the conception of man born under its inspiration provide no convincing answer to this, and there are even philosophical attempts to argue against childbirth.

The "biologistic" claim that we should have children because of the evolutionary algorithm does not hold water, and confuses description with reason. Indeed, we were all born as part of this algorithm, which is a valid description of the past, but why should this constitute a valid reason and justification for our actions in the present? The evolutionary algorithm is not us - and we as humans are quite foreign to it (which is why it took us thousands of years until we discovered it - it is not natural to us). We come from a different story: from a learning algorithm. And it is precisely in this algorithm that the reason for bringing children into the world lies. Someone who does not hold an identification with the idea of learning - indeed has no valid reason to bring children. This certainly does not maximize pleasure. And unlike animals, bringing children without reason is not sufficient for humans - because when children are brought without reason, it is reflected most of all in their education (or lack thereof).

This is indeed what the present generation of children looks like: children brought into the world without reason. Only deep identification with the learning algorithm at our core, with its strong drives of conservatism and innovation - together with a lack of identification with the evolutionary algorithm - can justify educating children, and create a generation of children worth bringing into the world and teaching. Similarly, only deep identification with our two basic algorithmic drives - imitative learning and creative learning - can create a great culture. We do not make children out of an urge for self-preservation, and we are not trying to create (randomly distorted) copies of ourselves; we are trying, in a directed way, to create new and improved models, out of the deep drives of learning and creation within us - to teach our children and to create them.

The change a person (and their brain) undergoes over a lifetime, from innovation toward conservatism, is the reason for our deaths - and therefore for our need for children. Death transfers a legacy from a creative state to a conservative one; hence the great transformation that occurs in our relation to a person's legacy at the moment of their death. Thus, for example, an artist or creator who dies is transferred irreversibly from the realm of the creative drives to the realm of the drives of preservation and tradition - and so the value of a painter's works jumps at their death ("death adds a zero to the price"). Hence the great forgiveness we feel toward the legacy of a person with whom we did not necessarily identify in their lifetime, at the moment of their death; and hence our ability to connect emotionally to the legacies of past cultures (when we often find it hard to appreciate the culture of the present).

When someone or something dies, a new path opens for us to connect to it - but so too when it is born. Only our ability to connect to the innovation that will come from a child (and will no longer come from us) can justify bringing it into the world and educating it - not dogmatically but culturally (and not as an optimization monster, like the children of our time). We are not our genes, because we are a learning algorithm and not a genetic algorithm; we did not come into the world for optimization. Creativity is the ability to apply meta-considerations above any random direction - to advance past the barrier of the local maximum, into a state less optimal but more advanced in learning, thanks to the innovation drive within us.
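
This willingness to pass through a less optimal state is exactly what the blind mutate-and-select sketch above lacked. A sketch of the meta-rule, with simulated annealing standing in for "meta-considerations above a random direction" (an analogy only; the landscape is the same invented one as before):

```python
import math
import random

def fitness(x):
    # The same toy landscape: local peak at x=2, global peak at x=8.
    return max(0.0, 1.0 - (x - 2) ** 2) + max(0.0, 2.0 - 0.5 * (x - 8) ** 2)

random.seed(1)
x, temperature = 2.0, 2.0              # start exactly on the local maximum
for _ in range(50_000):
    candidate = x + random.gauss(0, 0.5)
    delta = fitness(candidate) - fitness(x)
    # The meta-rule: sometimes accept a *worse* state, with probability
    # shrinking as the "temperature" cools.
    if delta >= 0 or random.random() < math.exp(delta / temperature):
        x = candidate
    temperature = max(0.01, temperature * 0.9995)

print(f"ended at x={x:.2f}, fitness={fitness(x):.2f}")
# With slow enough cooling the walk usually settles near x=8: by
# tolerating temporarily worse states, it crosses the valley that
# trapped pure selection.
```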


The Third Algorithmic Era

All this is true as long as we take into account only the world of man. But the Fermi paradox asks us to take into account other worlds, waiting for us in the future or in space (in fact this deep paradox, marginal though it is as a field of research, is the deepest thought available to us today about those worlds). If so, why should we assume that the learning algorithm is the last and most sophisticated of algorithms, and that no algorithm exists that is more efficient than it, as it is more efficient than evolution?

If such an algorithm indeed exists, or if the universe has computational capabilities surpassing the chemical-electrical ones (on which all of biology, with its two algorithms of evolution and learning, is based), then there may be a third algorithmic era. As it stands, the Fermi paradox stems from the fact that we alone are in the second, learning algorithmic era, while it seems to us that the first, evolutionary era can be replaced by the second with relative ease. But what if the days of the second era are naturally short, and it is replaced relatively quickly by the third era? Then we would not see giant galactic cultures, as we would expect of the second, expanding era, in which the exponential growth of the number of processors is identical with the growth of the species' learning capacity.

If every algorithm creates a developmental process, then we know one valid physical limit on the computational power of an algorithm that spreads physically through the galaxy: the speed of light. Naturally, we perceive a culture's spread into space as its natural direction, as it has been so far on Earth. But what if the natural direction for computational development is the opposite one? After all, just as dozens of orders of magnitude separate us from the universe, so dozens of orders of magnitude separate us from the Planck length and the Planck time. If so, why prefer the large over the small?
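
The symmetry is easy to make concrete with rounded textbook values (the figures below - an observable universe about 8.8x10^26 meters across, a Planck length of about 1.6x10^-35 meters, a galaxy about 10^21 meters wide - are standard approximations):

```python
import math

human_scale     = 1.0       # meters
observable_univ = 8.8e26    # meters: diameter of the observable universe
planck_length   = 1.6e-35   # meters
milky_way       = 1e21      # meters: ~100,000 light-years across

up   = math.log10(observable_univ / human_scale)  # ~26.9 orders upward
down = math.log10(human_scale / planck_length)    # ~34.8 orders downward
print(f"{up:.1f} orders of magnitude up, {down:.1f} down")

# The toll of spreading outward: one light-speed crossing of the galaxy.
crossing_years = milky_way / 3e8 / 3.15e7   # distance / c / seconds-per-year
print(f"galactic crossing at c: ~{crossing_years:,.0f} years")
# The direction of the small pays no such toll - and offers more room,
# in orders of magnitude, than the direction of the large.
```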

From everything we know about computation, there is a decisive computational advantage to a culture spreading precisely into the tiny: into the nanometric, into quantum computing, and beyond - all the way down to strings. It may be that more computational power can be created technologically within a grain of dust than by spreading a culture across the galaxy: concentration, miniaturization, and smallness are the main thing in speed of computation, and alongside them come physical theories of inconceivable computational power, such as quantum theory (and what would be the power of a string computer?). The Fermi paradox depends on the first, evolutionary algorithm converging to the second, learning one - but what if no such convergence effect exists, or it is short-lived, and cultures converge quickly to a third algorithm, or there is even a route that bypasses the second altogether?

And finally, if we assume that the laws of nature are not infinite, and that there is a unified physical theory that explains the entire universe - perhaps even a single formula - then every advanced culture reaches it at some stage or another. At that stage only mathematics remains infinite, and no essential discovery hides in the vastness of the universe. Eventually all technologies will be mapped, every idea with a physical basis will be exhausted, and only cultural and mathematical computation will continue (assuming that mathematics is infinite in its essential contents - an assumption that may be wrong, leaving only cultural development in play). A culture that has reached this stage has no interest in spreading through the universe and exploring it: it has exhausted it.


The Next Great Filter

The Fermi paradox is the most convincing reason to fear for the safety of humanity - to fear one last, truly final holocaust. If the logic underlying the paradox is valid, we are probably doomed, in one way or another. But we must also consider which possible ways of doom are "open" to us, in order to assess the implications of the paradox. If there is no great filter behind us, what could the great filter ahead of us be? Almost any way we can think of for our destruction fails the basic condition of the paradox: a filter of one in several orders of magnitude. Perhaps most cultures in the universe destroy themselves in nuclear war, or with a genetically engineered virus - but it is hard to believe that only one culture in a hundred, or in a thousand, survives such self-destruction. There is no point in even speaking of global warming: it is a joke compared to the power of the paradox. Among all the possibilities that can so much as be raised in our minds, only three meet the requirements of the paradox:


The Fermi paradox deals with uncertainty of a very high order: something we cannot know that we do not know. But if we can guess at all where the greatest visible uncertainty lies (and therefore where the holocaust is most likely hidden) - it is in section c. Faced with a global challenge of the paradox's magnitude, the conservative approach of "it will be fine" because "it has been fine so far" loses its meaning and validity, for the coming filter is by its nature an inconceivable innovation. Like the Holocaust of the Jews, the Fermi paradox turns the hitherto inconceivable into the conceivable - and this happens before you grasp what is happening, when it is already too late. This is a precedent-setting matter by its very nature and definition: the most precedent-setting that can be conceived. Therefore it scratches the edge of the limit of knowledge (and perhaps lies beyond it), embodies the question of the end in its most secular possible sense (in fact, it could have been considered strong evidence for the existence of God and His providence), and constitutes the peak of disbelief in man, in the universe, and in nature - in biology, in physics, and perhaps even in mathematics.

Because this is so difficult a problem, only philosophy can try to deal with it today - and the implications of the paradox give philosophy an importance it has never had before. No philosophical problem has ever been as anxiety-provoking as this paradox, beside which the classic problems of the skeptical tradition look like child's play; it brings to a paradoxical extreme the statement with which philosophy began: I know that I do not know. The Fermi paradox is the most burning, difficult, and profound philosophical question lying at philosophy's doorstep in our time - none is more important (or more shocking) to our intellectual agenda. It opens before us frantic, far-reaching possibilities at the edge of human thought (and, it turns out, beyond it), and forces us to try to leap over inconceivable conceptual abysses - into which we fall at every step and turn of this problem, which lies beyond the current human horizon (and, most terrifying of all, that is how it should be, if we are to be destroyed!). I, the Netanyahuite, cannot decipher it, though it constantly disturbs my peace. It is too deep for me.
Philosophy of the Future