
The Thinking Theory Unifying Grandly

Bin Li [1]

Abstract: This paper introduces an original theory of thinking, which can be used to solve abundant basic philosophical puzzles and then unify the various branches and schools of philosophy into a new whole system. An innate, universal thinking tool processes finite pieces of information to form the minimum or unit thinking activity. Thinking activities use the breadth of space to form memory stocks, which is the roundabout production method. Thinking activities proceed continuously in the time dimension, leading to dynamics, innovation, and development. The combination of the finite thinking tools and tremendous information leads to a "combinatorial explosion", which indicates that both the qualitative and quantitative development of knowledge is infinite; thus, the traditional "great convergence" paradigm of philosophy is replaced with the "big bang" model. As a manifestation of this bounded and "concrete" rationality, the processes of convergence and divergence are intertwined. However, the result of the processing of specific information by a particular thinking tool is always the same, which suggests that logical certainty and absoluteness can be contained in this pluralistic, relative, and mixed framework. In this way, the mind is materialized or substantialized, thinking processes become the interactive processes between different mental entities, and the entities of thought or knowledge are generated and exist as an "independent third party" between humans and the world. Finally, it is pointed out that the above thinking tools are what computer science calls "instructions"; instructions belonged to humans originally, and were later simulated with computer technology and artificial intelligence engineering. This first-ever software-based theory of thinking is named the "Algorithmic Thinking Theory". The author suggests that philosophy, the humanities and the social sciences generally adopt this thinking theory and its inferences as the fundamental theoretical framework.

[1] Bin Li, a visiting scholar of the Center for Urban & Regional Studies, University of North Carolina at Chapel Hill, used to be an independent economist and a columnist in Shanghai, China. Websites: https://unc.academia.edu/BinLi https://www.researchgate.net/profile/Bin-Li-121 Emails: [email protected] [email protected] This paper summarizes the author's new book "The Algorithmic Philosophy: A Synthetic and Social Philosophy", forthcoming.

Contents:
I. Introduction
II. The Basic Ideas
  2.1 The Reform of "Being"
  2.2 Mental Entities (1)
  2.3 Mental Entities (2)
  2.4 Mental Entities (3)
III. Philosophical Applications
  3.1 Logic
  3.2 Ontology and Psychology
  3.3 Social Sciences
IV. The Algorithmic Thinking Theory
  4.1 Instructions
  4.2 Formation and Development of the Computing Mechanism (i)
  4.3 Formation and Development of the Computing Mechanism (ii)
  4.4 Algorithmic Thinking Theory
  4.5 Methodological Issues
V. Conclusions
References

I. Introduction

Western philosophy since ancient Greece has achieved a great deal. Science, for example, can be seen as a product of Western philosophy. Ancient Greek philosophers believed that there was a single, static truth behind complex and changing phenomena, a notion reinforced by modern science such as Newtonian mechanics. This has inspired humanity's spirit of exploration of the world. However, this concept has simultaneously created conflicts, causing philosophers to struggle between the static and the dynamic, the one and the many, the absolute and the relative, the objective and the subjective, the consistent and the contradictory, the natural and the social, the scientific and the humanistic, and between many other categories, concepts, doctrines or fields. Of course, we cannot demand an immediate, uniform explanation of everything, but philosophical theories lack a basic unity. No one can guarantee us that this unity exists; but neither can anyone guarantee that this division, and even chaos, will last forever. Or, to put it another way, we should try to integrate existing philosophies as much as possible, as long as we can find some kind of solution.

Now, what I would like to report in this article is that I think I have found a theory that can achieve philosophical integration on a very broad scale. In fact, it can be used to complete the integration between the major philosophical doctrines of more than 2,000 years. Even, as a whole, it offers more than what philosophical integration requires, and can be used to open up new directions for philosophy in the new century. This is a theory of mind or thinking. Philosophy hitherto can be seen at large as a continuous pursuit of this theory. It is based on computer science. It was latent in computer science and has long been overlooked. But it is a theory about the human mind, not a theory about computers or artificial intelligence (AI). It can be used primarily by scholars in the humanities and social sciences independently of computers or AI, and can also be used for reference by computer and AI experts, and any researchers engaged in computer applications. This new approach to research can enable humanistic and social scholars to use its results without understanding the details of information technology, elevating the humanities and social sciences to a new level of logical coherence.
It does not harm existing concepts in the humanities and social sciences, but rather demonstrates and supplements them in a scientific manner. Here, for example, democracy and freedom are given strict theoretical arguments for the first time, and social engineering is also given a clear theoretical status. Based on the new set of principles, humanistic and social studies are likely to become as useful as science and engineering. Meanwhile, it can also be used to achieve a logical integration of philosophy, the humanities, and the social sciences with the natural sciences and engineering, and then to construct the basic principles of a unified human knowledge system.

The basic contents of this theory of thinking are simple. Corresponding to information, it constructs the concept of the "thinking tool". Thinking tools represent the innate thinking abilities of human beings, which are common or universal to everybody. Thinking tools process information, just as physical tools process physical materials, which constitutes the basic pattern of thinking activities. A tool may seem like a low-level thing, but this article will show that all sorts of complex, mysterious, and "noble" mental activities can be explained from such tools. Such "technical" progress tends to arouse the fewest controversies, and hence it has been the common route to scientific or philosophical progress in history. This dualism breaks away from the traditional information-centric narrative, and it is Kantian. But it corrects the shortcomings of Kant's philosophy, because "thinking tool + information" constitutes only a minimal unit of mental activity, and this atomic mental activity must be carried out continuously and in large quantities in the spatio-temporal environment in order to produce those meaningful human ideas we have been discussing on a conventional scale. This dynamic, "serial" approach naturally introduces time and space into the core of philosophy. Now, the Kantian structure can finally be saved from its static predicament and move into motion.

As a result, a human thought is naturally materialized like a physical object, and we can "see" the existence, change, combination, decomposition, expansion or loss of the "atoms" and "molecules" of thought in the space-time environment. Moreover, they can coexist with physical objects and interact (with the help of human actions). Thus, mind and matter are no longer opposites that can only be located in different spaces; they can now form polygonal relationships with each other. The discrete nature of the mental entities directly leads to the feasibility of individual self-reflection, i.e., one of one's thoughts working on another of one's thoughts, and of mutual objectification between people, i.e., one's thoughts working on others' thoughts. Especially, the researcher's thoughts can be distinguished from the actor's thoughts. Different combinations of specific thinking tools and specific information produce different thoughts, which are located within the different brains of different individuals, so that people's thoughts can be identical or different in certain aspects. However, because their innate thinking tools are the same, it is possible for people to communicate and collaborate with each other, and it is also possible to reach agreement between them – just as one "communicates with oneself" through reflection.
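The "thinking tool + information" unit can be sketched schematically. The following toy code is a deliberately simplified sketch: the three "tools" and all the names in it are merely hypothetical stand-ins for illustration, not the actual instruction set discussed later in Section IV. It only shows the pattern described above: a finite, fixed set of tools serially processes discrete pieces of information, and each product is deposited into a growing stock for later reuse.

```python
# A toy sketch, not the paper's formal model: "thinking tools" are a fixed,
# finite set of reusable operations; "information" arrives as discrete pieces;
# each application of one tool to a few pieces is one unit thinking activity,
# and each product is deposited into a growing knowledge stock.

THINKING_TOOLS = {                       # hypothetical innate tools
    "compare": lambda a, b: a == b,      # judge sameness of two pieces
    "combine": lambda a, b: (a, b),      # join two pieces into a compound piece
    "negate":  lambda a: ("not", a),     # form the negation of a piece
}

def unit_thinking_activity(tool_name, *pieces):
    """One minimal thinking activity: a single tool applied to a few pieces."""
    return THINKING_TOOLS[tool_name](*pieces)

def think_serially(plan, knowledge_stock):
    """Carry out unit activities one after another (the 'serial' manner),
    storing every product for later reuse (the 'roundabout' method)."""
    for tool_name, pieces in plan:
        product = unit_thinking_activity(tool_name, *pieces)
        knowledge_stock.append(product)
    return knowledge_stock

# The same finite tools applied serially to different pieces of information:
stock = think_serially(
    [("compare", ("red", "red")),
     ("combine", ("red", "apple")),
     ("negate",  (("red", "apple"),))],
    knowledge_stock=[],
)
print(stock)   # [True, ('red', 'apple'), ('not', ('red', 'apple'))]
```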
Thinking action now has the same economic meaning as physical action, and the individual must now pursue the economy of thinking activity [2] in the same way that he pursues the economy of external action. This economic consideration, in turn, affects the conception and choice of thinking activities. The reader will find that we will be able to establish a concrete manner of bounded rationality (Viale, 2021), and thus be able to concretize rationality (the "concrete rationality" or "concrete reason") without merely emphasizing that it is "bounded". The states of human life and the social world can be seen fundamentally as concrete manifestations of this concrete rationality. Ideas consume energy and resources, grow upwards from the mental "atoms", and are necessarily finite in quality and quantity at any given point in time. This "backward-looking" approach easily shows the inevitability of knowledge development. A mental activity and a piece of information are very microscopic, so the cognition or processing of specific external objects in principle requires a huge, even infinite, amount of work by the human brain. The processed information, i.e., knowledge, can be processed again, and the resultant "combinatorial explosion" [3] effect can also illustrate the infinite potential of knowledge development. The combination of the above finite and infinite can be used to revise the Kantian bounded rationality, which has drawn an absolute line between the knowable and the unknowable. [4]

[2] William of Occam (Russell, 1984) and Ernst Mach (Mach, 1893) are well-known for their ideas about the economy of thought. However, it is to be significantly deepened and broadened here.

[3] A mathematical concept referring to the fact that the number of combinations that can be formed among finite elements grows extremely rapidly with the increase in the number of the elements, the expansion of the size of a combination, the change of the order within a combination, and so on, as if an explosion has occurred, to the point that the combinations become uncountable. For example, chess pieces and squares are both finite, but the chess games they form are rarely identical, so people have been enjoying the game for thousands of years. There are only a hundred or so chemical elements, but the sorts of substances composed of these elements are vastly numerous, and they are still increasing. And there are only dozens of notes used in musical composition, but musical works have been endless and inexhaustible.

[4] According to this Kantian tradition, when one talks about concepts such as "bounded rationality" or "uncertainty", he or she usually must refer to cases such as "Schrödinger's cat" or the "Heisenberg Uncertainty Principle", so as to prove that there are indeed some things that humans cannot fully understand, overcome or grasp in the final analysis, or that really have the inherent attribute of "uncertainty". Rationality need not go that far to be limited.
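As a rough numerical illustration of the combinatorial explosion described in footnote [3], consider the following small computation. It is a mere sketch: the choice of 26 elements and of the sequence lengths is arbitrary, and it only conveys the order of magnitude involved.

```python
# Illustrative arithmetic only: with n distinct elements and ordered sequences
# of length k (repetition allowed), the count is n ** k, which explodes even
# for modest n and k.

def ordered_sequences(n_elements: int, length: int) -> int:
    """Number of ordered sequences of the given length over n_elements."""
    return n_elements ** length

for k in (5, 10, 20, 40):
    print(k, ordered_sequences(26, k))   # 26 elements, e.g. the letters of an alphabet

# The count rises from about 1.2e7 at length 5 to about 4e56 at length 40 --
# far beyond anything that could be enumerated, although the element set is tiny.
```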
To carry out the operation of small thinking activities over a large informational base, it is necessary to utilize the breadth of the site, just as in manual work, where "items" are picked up and put down, put down and picked up again, roundaboutly, thus forming a considerable inventory of finished or semi-finished products in the surrounding area. This inventory is the stock of knowledge. In economics, this productive method is called the "roundabout method of production" (Böhm-Bawerk, 1923). In this way, we distinguish between the flow and the stock of thought. The stocks provide ready-made answers for the current, temporary, and weak thinking activities. In most cases, the use of stocked knowledge is simpler, easier and faster than the development of knowledge, so this method can increase thinking speed [5], reduce thinking time, and improve the resultant quality. The existing knowledge base is continuously enriched, screened, collated, and expanded, and has become a key resource supporting current thinking activities. However, a knowledge stock is rigid and fixed (the "solidification" of thoughts), and the current thinking activity can only choose to use it or not. If it is to be revised, the thinker has to put down the current thinking job at hand and go to work on the stock, and it can only be revised gradually and historically. Since the current workload is only a fraction of the total workload in history, and the knowledge stocks can be seen as the sediment of the total results of all historic thinking activities, it is not possible for the current work to completely reform the knowledge stocks at once (the "endogenous impossibility"). This further means that the results of thinking in any historical process are limited, and that they cannot cover the whole world or the whole of history. However, people with bounded or concrete reason, through observation and practice, will gradually realize the existence of the whole, and they will be eager to understand and grasp the whole and integrate it into current decisions and behaviors (the "factor completeness"). So, what to do? One way is that people will engage in imprecise but fast thinking activities such as generalizing, guessing, and imagining, rather than simply processing the given information mechanically and sequaciously, or just using reliable but slow methods such as deduction, or the positivist methods. This article calls this the "mental distortion" (or "bend of mind"), i.e., a move away from what is considered correct, precise or reliable, in pursuit of quick-cut, fast-food conclusions. Such distortions can be used to make various kinds of knowledge such as philosophy, religion, common sense, ideology, etc. All knowledge is to be updated from time to time, leading to the phenomenon of "versioning", i.e., the replacement of old versions with new ones.

[5] This was called and elaborated as the "speed-reliability tradeoff" by Cherniak (Cherniak, 1986).

The distortions exacerbate the subjectivity, conflict, heterogeneity, and plurality of the thinking system. The thinking system now has problems both qualitatively and quantitatively. There may be differences in the "forms" or the contents of different ideas, and differences in their value -- because the actors do not have enough time to think or act so as to completely eliminate the value differences. However, such plurality is not necessarily absolute now, as it is possible that such a difference can be eliminated through future actions. It is as if it is not necessary for us to utterly reject Parmenides's "Being", because the destination of the thinking processes is something we cannot know with certainty. It can be speculated that the thinking process on a particular object is convergent, or differently active over time. However, the expansion of the range of objects, the emergence of new objects, the introduction of new reference knowledge, and innovation in information processing may reinvigorate the thinking processes. This is called the "divergence".
The divergent processes and the convergent processes are intertwined, resulting in the high or low activity of human thinking; nevertheless, empirically, the total amount of knowledge has always been growing. In this way, we can establish a system that is diametrically opposed to traditional philosophy. Traditional philosophy basically boils down to a model of "great convergence", namely, it implies that knowledge development has an end, and knowledge will eventually be condensed into something simple -- Being, Idea, substance, God, the absolute, science, etc., and communism in the social sphere. Now, this order has been reversed, that is, it starts from the simplest, atomic thinking, and expands towards an infinite future through the historic accumulation of knowledge. This is quite similar to what physicists describe as the "Big Bang" of the universe. However, this does not entail that the system is totally relativistic. The result of processing specific pieces of information by a thinking tool is always the same; this means that the line between logical right and wrong is certain and does not vary from person to person, from time to time, or from place to place. Innovation, as the negation of existing knowledge, could come from the discovery of new information, or from the use of different thinking tools, or from different sequences of the tools. Therefore, not only is correct knowledge "objective", but any other "mistaken" or "less correct" knowledge is also objective. We can imagine a super-sized "human knowledge thesaurus" that includes all the results of all the thinking tools processing and reprocessing all information. We can assume that this general knowledge base has been "predestined" since the beginning of human beings (the absoluteness and the definiteness of knowledge); however, it is realistically impossible for both actors and scholars to know its full content, due to their limited speed and time of thinking. Each person, as the producer or depositor of a specific amount of knowledge, occupies only one part of the thesaurus.

The integration or unification that this theory offers for philosophy also lies in the fact that a certain process of thinking includes both the process itself and the end, so that epistemology and ontology are fused and, through the discreteness and the re-objectification, intertwined. At the same time, since it emphasizes the processuality of thinking, it does not exclude the thinking processes from finally producing any unary or pluralistic conclusions, because any extreme conclusion, e.g., idealism or materialism, is only a particular item in this vast body of knowledge. It is as if we use the concept of "number" to generalize positive numbers, negative numbers, and zero. Thus, it is first and foremost a neutral framework within which a wide variety of ideas, as the results of thinking, can find their places. For example, science can now be read as a relatively reliable kind of knowledge, distinct from common sense and other knowledge, established by professional scientists in pursuit of an exchange of benefits with the secular world. However, under the constraints of the thinking economy, in order to pursue reliability, it has to give up answering all questions and focus only on what it is good at, and refrain from drawing conclusions when they cannot be satisfactory. Therefore, science is the result of applying a conservative strategy. Religion uses a radical strategy to try to answer questions that science cannot.
In such ways, a diversity of ideas develops, competing with or complementing each other to form the system of human knowledge.

Now only the last question remains: what exactly are the tools of thinking? In this regard, I would like to leave the question aside and not answer it in this introductory part. In this way I intend to show that even if we do not know the specific contents of the concept "thinking tool", the philosophy established along this approach would still be quite satisfactory. The benefit of clarifying the contents of thinking tools is, first, that it will strengthen the credibility of this theoretical idea by borrowing from the successful paradigms of computer science and artificial intelligence engineering. Second, it is not enough to just establish the philosophical principles (which can also be proved by the new principles themselves); the specific, applied research is also important. The latter requires clarification or detailing of the thinking theory, which will eventually support the wide application of computerized simulative methods in the fields of philosophy, the humanities and the social sciences, as well as in formal research.

This article introduces a relatively complete research program, which can be seen as a variant of computationalism, or an upgraded version of it. In my opinion, the computationalist approach has not had the right structural framework from the beginning, and as a result, although it has attracted enormous attention so far, it has not achieved much. This extremely concise theory is a purely software-based theory of mind, and I think it might be the first satisfying theory of mind ever that can be used to break through a large number of fundamental philosophical questions of mind. For example, it can critically and appropriately accommodate the theory of innate knowledge while adhering to the theory of innate tools. By showing that the thinking system is actually similar to the emotional system, it can, in a logically consistent way, achieve a fusion of the rational and the "irrational". Many of the effects or inferences that arise from this could be surprising. With the help of the "discrete thinking tool" approach, a large number of spiritual phenomena can be explained or dissolved, so much so that we can reasonably doubt whether there are any spiritual phenomena left unexplained.

When this theory is applied to humanistic and social issues, its effect can also be magical. In fact, this thinking theory was first developed by me in order to solve some critical basic problems of economic theory. Its simultaneous solution of the basic problems of philosophy and the basic problems of social philosophy provides mutual evidence that this scheme is effective in achieving the philosophical synthesis. In terms of the humanities and social sciences, one of the basic problems to be solved is to establish that the human mind is a relatively independent, equal, and interconnected existence relative to physical entities, which is a premise for the humanities and social sciences to go hand in hand with the natural sciences. Thoughts should be entities in this way, even if they "correctly reflect" external objects. This is especially true if the thoughts are not so correct, or have little to do with external objects (e.g., fantasies). In this regard, computer science and technology provide significant insights into how mind and matter can coexist and interact within the comprehensive theoretical framework (Sections 4.1 and 4.2).
Thinking activity now has a wide range of meanings and even becomes constructive; combined with external physical behavior, it produces a variety of "behaviors" and in turn creates all kinds of humanistic and social phenomena. This philosophical stance makes many specific puzzles easily solvable. For example, ethics can now be seen as the product of the distortion and solidification of thinking, and the same holds for the nature of law. Organization, power, freedom, democracy, the market, etc., can all be realistically and satisfactorily explained. The logical alignment of rationality and democracy can be seen as a major breakthrough. An unprecedented explanation of money in particular highlights the significance of the materialization of ideas.

This introductory part is the pivot of this article. Below I will explain stepwise the basic meanings of this theory of mind and its philosophical applications, and finally introduce the thinking theory itself. The task of this article is arduous, which leads to a high level of difficulty in writing. This article also presents a high degree of difficulty in reading, because much of the discussion is fresh and distinct from tradition. However, I believe that patient readers will eventually realize that the Algorithmic Philosophy (the philosophy based on this thinking theory) can be fully understood by virtue of common-sense logic; there is no essential difference between the "Algorithmic" argumentation and traditional reasoning. And, although it is interdisciplinary, it is in fact tightly attached to the existing literature; it is only a marginal development of existing academic output. Although it is dedicated to critiquing and reforming existing doctrines, it is really only a catalyst or glue, a small addition to the existing literature. This extremely simple, efficient, and maneuverable route has unfortunately been overlooked by scholars in various fields. My series of writings (see References) is intended simply to point this out.

II. The Basic Ideas

2.1 The Reform of "Being"

A great problem of philosophy begins with Parmenides's concept of "Being", which resulted from the speculation that behind everything there is a unified, single truth that generates and controls everything. This is clearly the counterpart of correct or truthful knowledge. Since this knowledge is obviously only obtained through human efforts and the sifting of ideas, the leftovers, namely, the other, less correct ideas, are called "opinions". Like the semi-finished products, byproducts or waste products in a factory, opinions are passers-by in a hurry, so they are deemed ultimately "non-existent". This notion, which sounds quite reasonable in common sense, emphasizes the distinction between different outcomes of human thinking, and thus reminds people to select and treat them discriminatively, rather than to take all the results of thinking as equally valid. That is the rational side of it. An underlying problem, however, is that it places the simple and singular "truth" above all, and thus it is in fact, apparently unconsciously, too obsessed with the economy of thought. Indeed, the discovery of a simple "substance" or law behind numerous and changing phenomena often marks a major success in intelligent exploration; however, not all worthwhile explorations end up in this simplicity. Obviously, such a discovery with a simple form is cherished and admired in large part because of the convenience it brings to people.
This article will gradually reveal that this convenience or economy is very significant, far beyond what people usually consciously think of it. The pursuit of the economy of thought has largely, quietly and unconsciously, dominated the thinking activities of human beings. It not only affects the screening of knowledge, but also permeates all aspects of thinking activities, profoundly changing the content of thought. "Being" is actually an example of this, which means that what is convenient for us and beneficial to us is alleged by us to be, just coincidentally, objectively correct. Moreover, Parmenides expanded it to the point that the whole world was dominated by this single Being; thus, he presupposed the final outcome of the mental activities of humankind. Afterwards, the mind was assumed to rest, once and for all. Today, we know that this law of economy is an important methodological rule that scientific research deliberately applies, but it is after all a strategy and cannot be confused with cognitive correctness about external objects.

Another inadequacy of this ontology is that, in fact, it first determines the "final result" of all cognitive activities, and then uses it as a basis to guide current thinking activities. Since people do not know enough about things in the near world, how can they be so sure about the final outcome of all history? According to common sense, people's knowledge of distant things is usually less reliable than that of near things, so this notion implicitly puts less reliable knowledge above the formation of more reliable knowledge. This inversion has caused some great conflicts within the human knowledge system. In addition, knowledge that is believed to be the truth, and therefore already part of "Being", has sometimes been found erroneous and hence in need of correction, which further exacerbates the suspicion of this ontology. Finally, the development of knowledge causes the quantity of knowledge to grow rather than contract, hence the convergence of the overall knowledge system implied by this ontology is apparently inconsistent with this fact.

The criticism of metaphysics caused by this ontology is one of the main themes of modern philosophy, and it is not necessary to enumerate the criticisms here. However, metaphysics emphasizes a certain truthfulness, absoluteness or definiteness of knowledge and rejects relativism, which we need to draw on. It seems difficult to maintain our belief in truthfulness while acknowledging and embracing changes in existing knowledge, i.e., the innovations. In this case, Kant came into play (Rohlf, 2024). The meaning of Kant's philosophy is understood divergently, but I agree that Kant meant that human thinking is simply to combine what is inherent in the mind with empirical materials. This inherent thing is not necessarily entirely suitable for processing all sensory materials; it is just specific and fixed, i.e., concrete. Moreover, since it is specific, fixed and concrete, it is clear that it may not be entirely suitable for understanding the world. It is just something with its own individuality, and after it works, it has some characteristic results. Then, correct knowledge and incorrect knowledge will naturally coexist, and thus we are not in a state of possessing only the complete truth of the world. Kant's discourse contains a lot of inappropriate and even misleading content, but I think this is one of its reasonable kernels. This notion is a blow to traditional metaphysics.
However, Kant's philosophy has a less conspicuous but fatal weakness, which is that it fails to put forward the concept of "discrete thinking tools". In his concepts of category types, judgment types, etc., the tools of thinking were almost ready to emerge, and they could also have been discrete, but this breakthrough was not achieved after all. If thinking is atomic and discrete, and works one step after another, namely, in a serial manner, then the "combinatorial explosion" will occur, and the thinking activities will run forever and endlessly, and Kant would have avoided the stasis into which his philosophy finally falls and for which he was criticized. In this stasis, there is an absolute gap between knowledge and the thing-in-itself, and correspondingly, the later concept of "bounded rationality" is interpreted as some absolute limit of knowledge. Traditional metaphysics has therefore quietly continued in this "discounted" way.

This drawback also afflicted many later philosophies. For example, after entering modern times, the understanding of Being became the explanation of the nature of science, and Nietzsche made a pioneering contribution to this. However, Nietzsche's philosophy ended up in the quagmire of eternal recurrence due to the lack of the perspective of the combinatorial explosion (Copleston, Vol. VII, pp. 407-420). Husserl's doctrine of intentionality is disconnected from his life-world, and thus his system, which is quite close to the Algorithmic Philosophy we propose here, was not completed (Kearney, Vol. VIII, pp. 8-25). Heidegger rather convincingly dissolved "Being" and even implicitly described the roundabout production of thoughts, but he made his philosophy anticlimactic by breaking his promise of continuing his "Being and Time" (Kearney, Vol. VIII, pp. 33-56). Russell, unlike Kant, offered an atomic theory of thought (Russell, 2010) that can be dynamic, but he was so obsessed with the intention of completely denying metaphysics that he failed to recognize that subjectively summarizing the whole world is important and that metaphysics is necessary in any era, so that his philosophy eventually stayed in a rudimentary state. Now, on the basis of the work of these predecessors, the "Algorithmic" work of this article is an increment.

We start by pointing out that, based on the inspiration of computer science, we now know exactly what a thinking tool is. A number of thinking tools process information in a serial manner, alternately and selectively, to constitute real thinking activities. Thinking can be seen as an encounter between two types of "strangers", and it produces a large number of mixed results, i.e., ideas or knowledge. These pieces of knowledge are formed into different modules according to their different goals, properties, functions and relations, and together form the human knowledge system. In the modern knowledge system, the metaphysics of ancient Greece is divided into several parts such as logic, metaphysics, science, and religion. Science is a representative of "reliable knowledge". However, in order to achieve this reliability, science has narrowed its scope and become a limited, bounded type of knowledge distinct from other knowledge. Moreover, its reliability is only relative to other knowledge, and it is formed, identified and respected primarily because it can distinguish itself from them.
What modern rationalists or positivists do not understand, however, is this: since the scope of reliable knowledge like science is limited, what can be done by actors involved in practical affairs? How do they deal with the practical questions that science cannot answer and hence avoids? Can the questions be put on the shelf and left alone? In fact, decision-making issues are often urgent and cannot be avoided. Therefore, the actors must try to answer or deal with them comprehensively, even if their answers are imperfect because they are in the midst of the cognitive processes. Therefore, by clarifying the contents or types of thinking tools and analyzing thinking processes and consequences, this theory of mind, known as the "Algorithmic Thinking Theory", aims to explain how a person is forced to turn to subjective thinking processes under the pressure of the factor-completeness requirement and the thinking economy, and then to hastily close the thinking processes (the "forced closure of thinking"). With this "subjective turn", or these "mental distortions", he or she engages in various common, less reliable but colorful mental activities other than deduction, such as induction, speculation, assumption, analogy, estimation, imagination, lottery, experimentation, etc., resulting in the fact that the everyday world we live in is diverse, dynamic, and seemingly less "rational". However, that mechanical, cold and sequacious thinking shall not be taken as the general appearance of rationality, but only as a special facet of it. The true workings of human reason must be re-revealed in the light of this "concrete rationality" and the economy of thought. In fact, scientists also frequently engage in imagination, assumption, and unreliable reasoning. The difference between them and ordinary people is only detailed and technical.

Now, philosophy has also found its own place and role in this system: it both explains the nature of science and works on the periphery of science. Since science has boundaries, it needs to have a foundation, including basic hypotheses, positions, opinions, methods, values, principles, and so on. To a large extent, this work can only be carried out abstractly and subjectively, and cannot be rigorously argued like science itself. Sometimes scientific research requires an estimate of the extremely distant, or ultimate, state of the world, and philosophers must therefore work like traditional metaphysicians in these aspects. Even if they come to the conclusion that the future is unknowable, it is a metaphysical conclusion that can be helpful to science. However, metaphysics now requires a conscious awareness that it, like all other knowledge, is finite, and hence requires self-restraint and should appear only when it is necessary or compelled (the "metaphysical minimization" [6]). Moreover, metaphysics shall also be understood as a stopgap measure that is inevitably to be updated from time to time. Following similar logic, other types of knowledge can also find their places in this proposed new system, so that the ancient ontology can now be replaced with this new epistemology (and ontology) about a unified human knowledge system. Here, all knowledge, including science, philosophy and religion, should step down from the altar, get along with each other equally, and compete fairly.

[6] This is well known as "Ockham's Razor", or "ontological parsimony"; see Spade, 2024, Section 4.1.

Analytic philosophers can now realize why it is not enough to simply conduct their analytic work. Logical atomism implicitly involves a doctrine about the quantity of knowledge.
Under monism, the question of the quantity of knowledge is unimportant. Now we need to introduce a linear view of knowledge, that is, we need to recognize that new knowledge often needs to be represented by new information or data, and therefore the progress of knowledge means, in principle, an increase in its quantity. Pluralism is, first and foremost, a theory that advocates a linear view of knowledge and thus an expansion of the quantity of knowledge. Of course, the reality lies somewhere between monism and pluralism, i.e., knowledge development is not strictly linear, and sometimes the amount of knowledge decreases or remains unchanged – because old pieces of knowledge are replaced by new ones. Also important, however, is that this irregularity is another manifestation of bounded rationality (or concrete reason), because the limited thinking capacity cannot ensure that the knowledge system will, at all times and places, show the regularity that we like; thus the various irregularities, including this one, not only do not need to be eliminated now, but their presence proves that the new system is logically self-consistent. On the other hand, the strategy adopted by analytic philosophy also bore fruit, e.g., the development of mathematical logic eventually led to the creation of computers. We can "Algorithmically" discover how the various philosophical schools converge here.

2.2 Mental Entities (1)

This way of micronizing, atomizing, and objectifying thoughts means that we have to look at them as if they were ordinary things. Although thoughts are invisible, we can "feel" them, and thus we can imagine their concrete existences. They can be the objects of our study just like physical objects, and can interact with physical objects in specific ways (ways that we are actually familiar with). Literally, this shall be a commonsense understanding. However, before reaching this point, philosophy went through long and tortuous explorations; hence, in order to arrive at this position, we need to overcome many obstacles.

Early philosophies focused on the external world. Later, when Descartes reminded us that philosophy was actually about whether and how subjectivity and objectivity corresponded, it turned to the human mind. However, under the framework of mind-matter dualism, either the mind wants to dominate matter, or matter wants to annex the mind. In short, it is always difficult to establish a situation of equal coexistence between mind and matter. Meanwhile, simply retreating to pluralism or nihilism is something philosophers are unwilling to accept or are dissatisfied with. In my view, this is the aphasic mentality of contemporary philosophy.

In order for thought to be one type among many things, existences or beings, as mentioned above, thought shall be treated as a type of reality, fact, entity, or "substance" [7], and thinking activity shall also be treated as a type of real action. It was obviously difficult to achieve this before the information technology revolution, because in those days manual labor was much more arduous than thinking activity, and thinking activity was generally considered able to proceed quickly. Because of the slow and sporadic pace of innovation, innovations were often neglected, and it was assumed that thinking activity could easily reach its end.
This was one of the reasons for the popularity of metaphysics, whose mission is to summarize the whole world. Thinking activity comes and goes capriciously, and it is not easy to divide it into small components for analysis. It is also not easy to distinguish between the flow and the stock of thought. Thus, whether the thoughts within the mind of an individual, or the thoughts of many individuals or even the entire human race, they have been viewed roughly as a whole, and the complexity of individual self-communication or interpersonal communication has been ignored or relegated to a secondary position. In short, this understanding allows the thoughts to be seen either as an absolute whole or as a highly unstable whole that is constantly in flux, and then how can "one thought" among the many thoughts be juxtaposed equally with concrete, tangible physical objects? In particular, since the mind of the researcher is active at this time, and, from the above perspective, it shall be tightly, instantly and constantly interrelated and integrated with the mind of the person under study, whose mind is also active, then how can the latter be separated again and thus concurrently become the object of the former?

[7] In such a context, the meanings of words such as "entity", "substance", "existence", "being", and "fact" have subtle differences, and some of them (such as "fact") were used to avoid existing disputes. In modern times, the word "entity" has become increasingly popular, apparently with the intention of avoiding certain philosophical disputes. This article follows this habit and mainly uses "entity"; however, this is not to avoid the controversies, but to clarify its meanings and usage, like never before.

It can thus be recognized that the micronization and atomization of thought, as well as the independence and discreteness arising therefrom, are the key to the substantialization or materialization of thought. In other words, only when thought has these characteristics is it feasible for it to be identified as entities, substances, things, beings, materials, existences, facts, and so on, and does thinking become a real action. Now, we need to philosophically clarify and realize this. First of all, both thinking tools and information must be understood as existing in discrete ways. For the thinking tools of the human brain to carry out operations, namely, to process pieces of information, is to objectify information as rudimentary thought. Processing is a finite, concrete, resource- and time-consuming act. After the processing operations are completed and the specific ideas (or pieces of knowledge) as "products" are generated, the existence of these ideas is objective, separated from the thinking tools, the information and the thinking activities, and thus they can be processed again as new objects. Second, these "products" can be stored permanently, perhaps differing from each other, in some "solid" and definite forms – unless they are interfered with, changed, decayed, forgotten, or destroyed over time, intentionally or unintentionally. According to common sense, we know that these stocks of ideas can be expressed and disseminated into the outside world, and then be objectified by others. In short, the micronization or atomization of ideas provides an important foundation and conditions for the materialization of ideas. People can now re-objectify both their own or others' thoughts and their own or others' mental activities.
2.3 Mental Entities (2)

However, the establishment of mental entities is not so simple, because many theories throughout the ages have hindered or explicitly opposed this position, such as reflectionism, mind-matter identity, mind-matter parallelism, or behaviorism, especially Gilbert Ryle's concept of the "category mistake". Ryle argues that if we do this, we would be double-counting and hence making a fundamental mistake of category (Ryle, 1963). This explicit argument reveals a philosophical secret, namely, that philosophers generally regard thought as something very special; in other words, although they attach great importance to thought, they do not consider it a "thing", do not consider it to be one of the ordinary objects, but place it on a completely different logical level from ordinary physical objects—so that ideas cannot be considered equal to physical objects and hence able to interact with them.

With respect to the "correct ideas" that "correctly reflect" external objects, Ryle's position seems to be justified. However, in addition to these, there are a large number of less correct ideas, and their proportion in the ideological system seems not to have decreased significantly. Are they also not independent, and therefore unable to "exist" on their own? Obviously, the "category mistake" concept shall at least allow the temporary existence of this "less correct" knowledge; further, if the existence of the latter is not temporary, but generally permanent, then this position is hardly convincing. "Less correct knowledge" is just a temporary wording of mine, and its content is actually rich and colorful. It is true that some parts of it are only used temporarily in the process of deriving correct ideas, and they are to be eliminated after the correct ideas are produced. However, this "elimination" is generally relative, and these temporarily discarded ideas are not necessarily useless; their meanings may be rediscovered in the future. And a right idea is not necessarily always right: it may be improved upon and discarded again. Moreover, in order to arrive at new correct ideas, one must constantly resort to "mistakes", and therefore must constantly produce new "less correct ideas", in this spatio-temporal context.

Secondly, in the tradition, cognitive scientific knowledge has been placed at the center of the knowledge system, while applied, practical, engineering, and executive knowledge has been subordinated. However, as everyone knows, only when it enters the decision-making and practical stages is a thinking process able to accomplish its relatively complete journey. In the practical stage, the coexistence, collaboration, interaction and competition between mental processes, physical processes and body movements are particularly vividly manifested. For example, human upright walking is closely related to the human brain's strong ability to think and control (Eccles, 1989). Body movements are inseparable from the control and command of the human brain at all times; they are carried out concurrently. The mind transforms specific decisions (thoughts) into actions, and then into specific states of the physical world. In turn, other specific states of the physical world are transmitted to the brain through the sensory organs to form thoughts. Isn't this cycle of thought and action the interaction between mind and matter? Here, they alternate between the upstream and downstream of the behavioral chain, each with different time and energy consumption.
People can draw certain conclusions through practice and experiments, and can also draw conclusions through thinking activities or "thought experiments" (Mach, 1893); isn't this the competition and substitution between thinking activities and physical activities? How untenable the concept of mind-matter parallelism or the accusation of "double-counting" is here!

The substantiality or materiality of thought is also closely related to the question of whether human spirits, wills, goals, emotions, sentiments, feelings, experiences, histories, relationships, stories, etc., have meanings independent of the physical world. In addition to science, are literature, art, and culture valuable? Is there any value in a single, ordinary life? Traditionally, these issues have been treated separately from intellectual activity. The theory presented in this article will show that they are intimately linked to the activity of the mind and are inseparable from it; the concepts of "spiritual activity" and "mental activity" can almost be equated. Traditional philosophies sometimes value the spiritual activity of humans as particularly and absolutely noble. Arguably, this is another way for spirit to avoid being treated as an ordinary kind of entity. It is now necessary for us to pull spirit down from the altar and remove its mysterious veil.

This brings us to another underlying attitude towards mental activity: that mental activity is our human own affair and is something that we humans can completely control, and therefore we cannot regard it as a completely objective fact like an external object. This view reminds us that we must not be arrogant and must remain humble in front of the physical world, and that we must always be ready to change our minds to adapt to external objects, not simply enshrine our thoughts as faits accomplis. This view could make sense, but it is ultimately incorrect. This view is still more or less associated with the neglect of the economy of thought. For example, it ignores the side of the human mind that is not easily changed. It may not be easy for you to change your mind. For example, homesickness is a thought, and it is not easy to change. New words are also not easy to remember. One is even less likely to change the minds of others; this can be seen in the difficulty with which we try to convince others. In today's era of big data, when the heat emitted from a data center can heat a mountain range, and AI experts predict that the power consumption of AI devices in the future will be comparable to the total electricity consumption of the whole society (IEA, 2024), even I was shocked. A computer not only provides a model of the mind, but also helps us recognize the tip of the iceberg of the thinking economy. Now, if we change the angle and look at it again, we will find that although we often modify our thinking to adapt to external objects, we also often modify external objects to adapt to our thoughts. For example, in the face of evil regimes, civilized countries have to be ready to fight at any time, because not all disputes can be solved at the negotiating table. The reality of the mind sometimes oppresses us like a mountain, forcing us to use the physical resources we can control to deal with it. This relativity is particularly useful in illustrating the meanings of "mental entity" in this article.

2.4 Mental Entities (3)

With regard to some of the existing misconceptions about mental entities, it is necessary for us to continue to refute them.
These arguments are all related to the nature of correct thoughts. For example, there is the idea that when thoughts are correct, they do not exist and can be replaced by the external objects they reflect. In that case, it is as if the external objects ran directly into the brain, and the brain dealt directly with the external objects, without the need for sensory materials, information or knowledge as intermediaries (Taylor, Vol. 1, 1997). Another view is that since thoughts are the product of the human brain as a material entity, they cannot be identified as entities again outside of that material entity. At the other end of the spectrum, but in fact related, is the argument that to identify thoughts as entities is to admit that all entities have the same nature, and that they can therefore simply be equated and added up. For example, the three tables that Plato mentioned: the real table, the table in the painting, and the table in the human mind would add up to make three tables. The objective effect of such an argument is actually to lead, once again, to a general rejection of the identification of thought entities.

As mentioned earlier, the existence of thought has the dimensions of time and space. Then, does a correct thought also have the dimensions of time and space? Of course it does. Correct thinking is often the result of eliminating or correcting wrong thinking, so it often takes more time, space, and resources. The main purpose of being a student is to learn the right thoughts, to "move" them from the outside and "store" them in one's brain. If traditionally the understanding of these physical features of thought has remained specious, in computer science they have been accurately portrayed. The problems of big data, storage space, data transmission, computing speed, and communication are so prominently manifested in computer and information technology that they are enough for us to understand, from multiple angles, the physical features of thought clearly. A "correct thought", even if it is really correct, must be considered independent of the external object to which it corresponds, and not directly the external object itself. This is because thoughts are indeed thoughts, not external objects; what run into the mind are the informational pieces about the external objects, not the external objects themselves. Considering that thoughts often develop unexpectedly, it is prudent to assume that even a "correct thought" should be set as an intermediate or processual variable and not be radically cancelled out by external objects. This setting should be permanent, because thinking activity is permanent in principle – remember the perspective of the "combinatorial explosion".

Then, how can minds, which exist as independent entities, discover the truth of the objective world? According to some modern notions [8], one way of thinking about and understanding external objects is to establish correspondence rather than directly to "know"; that is, based on the information received, the brain first conceives multiple cognitive schemes, then compares and selects among them, in reference to other knowledge or practical results, and finally identifies the relatively best scheme as the "correct" one and the others as "wrong". This is like how people with color blindness recognize colors. In this way, although we cannot directly go beyond the curtain of cognition to know the truth about external objects behind it, we will never learn nothing.

[8] Relevant ideas seem to be fragmented in the literature, and hence still need to be systematically examined and sorted out under the "Algorithmic Thinking Theory".
Whether or not our thinking tools are “perfect”, we will always be able to get some relatively high-quality knowledge, e.g. science, as well as relatively low-quality knowledge. Conversely, with such an approach, the importance of the metaphysical question of “whether our thinking tools are perfect” also declines accordingly. Looking at the mixed state of real mental activities, doesn’t it show the legitimacy of this view? The stubbornness of traditional philosophy also lies in the following argument, which can arise from Ryle’s behaviorism: since the activity of the mind is generally presumed to be a function of the biological tissues of the human brain, perhaps it can be further attributed to a certain structure and activity of the relevant physical atoms or biomolecules; therefore, it would be better for us just to wait for scientific research to finally prove these hypotheses. However, the next question is: after the day when the hypotheses are proved, will we humans be able to learn the content of our mental activity through observation or detection of the microscopic particles of matter in the brain? Is it possible to replace mental activity by manipulating these particles? Obviously, even if the answers to both questions were “yes”, we generally would not have to do so because, as Descartes stressed, “we” are first and foremost our own thoughts, not anything else; we do not have to pretend that we are more familiar with physical things but alien to “ourselves”; any existence and attributes of physical objects (including the brain of which these particles are constituted as a “thing”) are only arguments, or the contents of our opinions in the mind, and their truthfulness does not in any way exceed the mind’s perception of its own existence. Therefore, in many cases, it is necessary or economical to directly identify the thought entities and directly reveal and discuss their contents. Even if thought is a kind of accessory characteristic or phenomenon of some specific physical objects, those physical objects must be treated as an exception and cannot simply be treated as common physical objects. Of course, it makes sense for Ryle’s argument to be a reminder against some double counting. For example, we cannot demand compensation from an employer for wages just because the mind is now recognized as a “new” entity (which is certainly not entirely new). [Footnote 8: Relevant ideas seem to be fragmented in the literature, and hence still need to be systematically examined and sorted out under the Algorithmic Thinking Theory.] Second, we need to distinguish between the content and the form of a thought. Sometimes we just discuss the existence, movement, and change of a thought, without revealing what its content is. The question of the criteria for identifying entities shall also be discussed here. Does an entity have to meet standards such as length, width, height, mass, etc.? Not necessarily. This problem was already solved when physics established the concept of energy. Looking at concrete examples, it is found that no specific attribute of an object is sufficient to be a general criterion for identifying entities -- although we have invoked the physical properties of thought to account for the materiality or substantiality of thought. So, where are the real criteria for identifying entities? As we all know, phenomenology has provided an answer: everything can be an entity, including a thought, without any limitations.9 But why is there “everything”, or a thing?
This requires our Algorithmic Thinking Theory to answer: the existence of the “minimum or unit thinking activity” requires an object often to be revamped or “tailored”, the large one may need to be divided or reduced, the small one may need to be enlarged, and the qualitative complexity shall also be considered. The object world is divided into tremendous units by the mind according to the mind’s own needs, thereby the units as “entities” are produced at the convenience of analysis. Many phenomena are attached to an entity as its characteristics or attributes, which can reduce the number of entities and bring about the economy of thinking. Entities are then allowed to move, change, develop, and interact with each other, giving rise to the concept of “relationship”, and thus the concept of entity is supplemented and extended. Separating statics from dynamics and then linking them together is a stepwise, serial approach; Together, they form a conceptual group whose purpose is just to meet the needs of atomized mental activity, as much as possible. This is to say, the juxtaposition of certain objects as entities only indicates that they are equal to each other as objects of thinking. This “as an object” nature is the key (and perhaps the only) criterion for identifying an entity. This does not necessarily mean that they can simply be equated or added up in their specific properties, and therefore the assertion of “three tables” is incorrect. They are only three different objects of mental activity, and because they cannot completely contain or replace each other, they are identified separately. This identification does not preclude the links among them. It can be recognized that even in the phenomenological literature this point is still somehow obscure and controversial. For a summary of this issue, see Rettler, 2024. However, it would be better to put aside these disputes for now until the Algorithmical concise and clear position has been fully explained and understood, and then, at an appropriate time, to return to its historical review again. 9 19 It can be further pointed out here that when we humans adopt a posture of “knowing”, “explaining” or “transforming” the world, this in itself implies a distinction, that is, a distinction between subject and object. This distinction shows that our space, sight, and intellect are all finite, and that our objects are also finite, and often fragmented, divided into many. Therefore, theories about the limitations of vision and reason are the bases of problem occurrence and the basis of philosophy. When we ask the truth and the cause behind a phenomenon, we are trying to go from one place to another, from the seen to the unseen, or from this time to that time. This posture shows that we will never be satisfied with a phenomenon itself; Even if we are lucky enough to come across the “ultimate truth” and put it on the table, we will continue to treat it as a “phenomenon” and continue to ask for the “causes” behind it – as we will not be able to confirm its “ultimacy”. Thus, the activity of the mind will never end, as if those enormous particles are always trying to interact with each other. At the same time, one of the advantages of reducing a phenomenon to some invisible causes is that because it is no longer visible, the inquiry into a deeper cause that follows will naturally and be weakened or attenuated, as an economical effect. III. Philosophical Applications Such a theory of thinking can be said to be unprecedented and totally new. 
Subjectobject dualism is not new, but the concept of “‘thinking tool + information’ constitutes a minimum unit of thinking activity” is new, so thinking tools, information, ideas (or knowledge), thinking activities, etc. must exist as entities or entities’ activities, and they use the breadth of space to build up the stocks of knowledge that support and promote the historical and infinite development of thinking. While many of the elements of this overall scheme existed before, this wholeness has not been completed until now. This led to a philosophical breakthrough and synthesis, which covers a wide range of fields that are difficult to exhaust in one article. Below, we will give just a few examples to show how revolutionary it could be. 3.1 Logic Logic to date has shown processing of the given knowledge. However, after knowledge has been micronized and discretized, knowledge first needs to be searched. Limited thinking abilities make the search for knowledge incomplete and therefore requires the use of strategies. Thus, “association” becomes a way to search for knowledge. The existing logic is really just a demonstration of the processing of the knowledge that has already been searched. 20 Atomic thinking activities indicate that individual thinking activity is free, and after one thinking action is completed, the person concerned is free to conceive and choose the next action. Consequently, any thinking process that has traditionally been understood continuous can now be interrupted, suspended, or re-structured at any step. This is because the action of thinking, as a real action, now needs to be examined in terms of its economy. Its appropriation of resources, including the attention of the actors, may have crowded out or delayed the conduct of other, more important actions. Even if the thinking action that was originally intended to be carried out is logically rigorous, coherent, correct, or sufficiently reliable, it may be set aside. Conversely, even some less reliable mental actions, such as imagining or guessing, are now likely to be rationally arranged and prioritized. In this way, the chain of thinking actions will not be as mechanical and monotonous as traditional logic shows, and its components will not be limited to reasoning, but will include various types of non-deductive thinking activities such as searching, associating, imagining, guessing, analogizing, assuming, etc. These mental activities have traditionally been described by natural language and are actually known well to us, even emphasized in different contexts10. They are concocted and arranged chronologically and constitute the flow of a person’s colorful mental action. This “kaleidoscopic”, and seemingly “chaotic” arrangement is not in fact chaotic, but is subject to a cost-benefit analysis that the actor is making alternately at any time – although this analysis, like any other mental activity, is generally imperfect and even subjective. That is, there is now another new logic that governs the formation of this chain of thinking; We call this the “Algorithmic Logic” (Li, 2022*). Under the condition of the discretized thinking, Algorithmic logic asks the question of “what thinking action(s) should be carried out economically in the next step(s)”, then launches and organizes the cost-benefit analysis, and finally conceives the sequence according to the analytic results, and arranges the thinking actions along the chronological order. 
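As a rough illustration of how such a cost-benefit arrangement of thinking actions might be expressed, the following sketch chooses the “next step” by a crude benefit-minus-cost comparison. The action names and the numerical figures are illustrative assumptions, not values supplied by Algorithmic logic itself.

```python
# A toy sketch of "Algorithmic logic" as a scheduler over candidate thinking
# actions. The actions and their cost/benefit figures are illustrative.

from dataclasses import dataclass

@dataclass
class ThinkingAction:
    name: str        # e.g., "deduce", "search memory", "guess"
    benefit: float   # expected contribution to the current purpose
    cost: float      # time, attention, and other resources it would consume

def next_action(candidates):
    """Pick the next action by a rough cost-benefit comparison.

    The analysis itself is a thinking action, so it is kept deliberately
    crude (a single subtraction) rather than exhaustive.
    """
    return max(candidates, key=lambda a: a.benefit - a.cost)

def plan(candidates, steps):
    """Arrange a short chronological chain of actions, re-evaluating at each step."""
    chain, pool = [], list(candidates)
    for _ in range(min(steps, len(pool))):
        chosen = next_action(pool)
        chain.append(chosen.name)
        pool.remove(chosen)      # a completed action frees the actor to choose anew
    return chain

candidates = [
    ThinkingAction("deduce from known premises", benefit=5.0, cost=4.0),
    ThinkingAction("search memory by association", benefit=3.0, cost=1.0),
    ThinkingAction("guess and check later", benefit=2.0, cost=0.5),
]
# A less reliable but cheap action (guessing) may legitimately be scheduled
# ahead of a rigorous but expensive deduction.
print(plan(candidates, steps=3))
```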
Therefore, Algorithmic logic is a kind of “logic about logic”; in other words, it synthesizes various logical operations into a unified system, different kinds of logic are linked together through this system, and a specific logic can be a component or a particular case of it. In this way, logic is now extended into a theory of mind. Again, the above process illustrates how different thinking tools or methods can be combined and used in the “hodgepodge”. The credibility of this conclusion can be further enhanced by identifying the specific types of thinking tools and methods, and by dissecting their specific steps. For example, the classic syllogism shows how two propositions that match each other can reliably produce a third proposition, but these two matching propositions obviously first need to be searched for and carefully sifted in order to get there (a small sketch of this searching step is given at the end of this passage). [Footnote 10: For example, almost all of the behaviorist research can be encapsulated as an enumeration of the various phenomena of “mental distortions” that will be explained below, since they have not been convincingly explained by the behaviorists.] And, tracing back to their origins, how did these two propositions come about? Obviously, the original propositions could only come from methods other than deduction, such as the inductive method. In this way, we find a clear connection between the deductive and inductive methods, and the way in which they can be integrated – that is, they are linked together at the micro level with the help of the logic of the thinking economy. Then, a methodological synthesis between Plato and Aristotle can be achieved. The activity of the mind is like a zigzag chain of links made of different materials, rather than a straight line running deductively from the raw material to the conclusion. This “bending” of thinking is one of the main conclusions that can be drawn from Algorithmic Thinking Theory. As another example, the following illustrates how this Algorithmic logic can be used to critically synthesize dialectics. When a specific thinking tool encounters any information or knowledge, or when two specific pieces of information or knowledge (under the working of a thinking tool) encounter, “collide”, or combine with each other, then, because they are concrete, just as two physical objects have their concrete shapes, it is normal for them to have some “matching” or “conflicting” effects. In other words, if there is only one best way to match them, then the other ways must be more or less contradictory. Thus, consistency and conflict (or contradiction) must be common phenomena in this “Algorithmic world” (i.e., the world in which the actors think in the way described by Algorithmic Thinking Theory), and they must be fairly evenly mixed. A contradiction, however, refers to “complete opposites”, which is a logically neat and regular construct. The actual situations may not be exactly in line with this logical construct: some things may not have their opposites, some opposites may not be fully equivalent, and some are just “conflicts”, as in a traffic accident where two cars collide only at an angle rather than from exactly opposite directions. Now, with the help of the thinking theory and Algorithmic logic, we can reasonably arrive at this infinite diversity and irregularity. In this context, we can further realize that it must be relatively rare for the real physical world to move in perfect and regular contradictions.
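Returning to the syllogism discussed above, the point that its matching premises must first be searched out of the knowledge stock can be sketched in a few lines; the propositions and their encoding below are illustrative assumptions.

```python
# A minimal sketch: the non-deductive searching and comparing that precede the
# deductive step of a syllogism.

# Each rule is read as "all X are Y"; each fact as "s is a member of X".
rules = [("humans", "mortal"), ("birds", "feathered")]
facts = [("Socrates", "humans"), ("Tweety", "birds")]

def syllogize(facts, rules):
    """Search the stock for a fact and a rule that match, then deduce."""
    conclusions = []
    for subject, category in facts:            # searching the stock
        for antecedent, consequent in rules:   # comparing, one by one
            if category == antecedent:         # the "matching" test
                conclusions.append((subject, consequent))  # the deductive step
    return conclusions

print(syllogize(facts, rules))   # [('Socrates', 'mortal'), ('Tweety', 'feathered')]
```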
This kind of regular “dialectical movement” often occurs just within the thinking system, that is, due to the limited ability, our thinking can often only move in one specific direction first; After a certain point, the marginal return decreases significantly; In order to efficiently compensate for the losses caused by this one-sidedness, we often simply choose to go in the opposite direction. This kind of “rushing left and right” is nothing more than a simplification of the way to deal with problems, or an attempt by the boundedly rational individuals. In politics of the United States, this approach has led to the bipartisan rotation and the policy swings. 22 The contradictions, conflicts, and consistencies here are generally not absolute, but would be transformed into each other under certain conditions. And, the transformation is generally not regular, but logically diverse and pluralistic, including elements of innovation and development. Moreover, as long as the opposites here are in different spaces, they can generally live in peace. It’s like that a firewood doesn’t burn as long as it is separated from a fire. We now refer to this possibility of peaceful coexistence and mutual transformation as the “High-Order Consistency”, a consistency existing only in this Algorithmic world, and the abundant diversities and differences that exist between the opposites as the “Softness”. Softness, in particular, refers to the mixedness of small quantitative and small qualitative (or structural) relationships. Traditional philosophy has often valued extreme states, even pursuing simple states. Now, we need to introduce the soft, diverse, and comprehensive state into the core of philosophy. The traditional notion of “rationality” has also changed, as we find that there is more rationalities in what would otherwise be called “irrational”, and that traditional “rationality” is not really rational enough. Now let’s illustrate again how Algorithmic logic can be used to solve some logical paradoxes. Some logical paradoxes are related to the difficulty of “self-containment”, such as “I’m lying” (Copleston, Vol. VIII, p. 430). Traditionally, a proposition is naturally defaulted to be applicable universally, including to itself. However, with this new thinking theory, we now know that no proposition can be tested entirely and unexceptionally. In particular, a proposition, when it is made in the first place, should in principle be considered not to include itself. The mental activity of making a single proposition is a single mental action, which not only cannot include all situations, but is in fact only concerned with the individual situation at hand—even if it is deliberately establishing a general proposition for all situations, this “globality” is only a concept built up by memory, the stock of knowledge, and/or guesswork. Under the above thinking theory, when a mental action is ongoing and not yet complete, how can it know for sure its condition and thus include itself? This is technically impossible in any way. The current mental action can only objectify itself with the help of the mechanism of prediction or memory. Strictly speaking, no mental activity can objectify itself in real time. The last example is “the gold mountain does not exist” (Copleston, Vol. VIII, p. 431). This traditionally brain-burning paradox is primarily a reflection of a false philosophy that ideas are not real or substantial. 
This rigid principle leads to the inability of “gold mountain” as an idea to “exist” simultaneously as a reality or entity. Now, after the materialization of thought, the “gold mountain” can first exist in people’s minds as a “thought entity”, and “the gold mountain does not exist” simply means that this mental entity that exists in the mind does not correctly reflect or predict a physical entity in the outside world, since there is no gold mountain in the physical world. In the context of the existence of thoughts as a 23 relatively independent kind of “third party” between human and the physical world, the mind can both talk to itself and be busy with its own business, as well as predict external objects and assume entities. It can make predictions and assumptions before testing them. The raw materials of this hypothetical entity come from the sensory information about the external world (e.g., “mountain” and “gold”); The human brain just uses thinking tools to process them. It’s like that in a computer, the numbers processed are still numbers, and the images processed are still images (although they may no longer be entirely the images of foreign objects). 3.2 Ontology and Psychology The existence of thinking time is common sense, however, its importance has not been sufficiently revealed until the framework of “thinking tool + information” is adopted. Now, under this dualistic, discrete theory of thinking, the situation has become different: thinking activity is like mining with a machine; Between the machine and the ore deposit, a working face is formed, which advances gradually. It is clear at a glance for the specific knowledge produced in any specific spatio-temporal environment to be finite. The object of mental activity is divided or made into a “size” or “specification” suitable for its processing. Processing activities can only be carried out with a certain intensity and rhythm. For the sake of convenience, the results obtained can be collectively referred to as “thoughts”, “knowledge”, “ideas”, and so on, without always having to distinguish between how correct they are. In a particular context, we can discuss only the forms of knowledge, or only the contents of knowledge. The distinction between the form and content is also a consequence of adapting to the limitations of individual mental activities, which allows us to take a stepwise approach to narrative or analysis. In the face of complex phenomena and the pressure of the thinking economy, the actors will have a strong incentive to simplify. One of the strategies is to merge the same or similar situations and then process them in batches. This gives rise to the “universal”. However, the atomization of mental activity can help us realize that any “universal” abstracted from concrete objects is only a local feature of the concrete objects, and its scope of application is limited, so that a concrete object is in principle a combination of universals and particulars. It shall not be economical to extract any feature of a concrete object and define it as a universal. The activity of extracting universals shall be refrained. Another situation is to discover the “substance” or “essence” of the object, with the intention of discriminatively treating the different characteristics of the object, emphasizing some while relegating or neglecting others. These are among the strategies for simplifying and for cost-effectively grasping objects. 
24 Also for the economy of thought, an individual entity or object can be fragmented or invisible, such as a “system” or “structure”. Although it is not intuitively easy to grasp, the recognition of it can be beneficial for subsequent analysis and processing. A universal or an essence can also be materialized, and it can be conceived that it exists within a concrete object (the “universal in things”) and governs the movement and performance of the object in the same way that the brain governs the human body. This anthropomorphization can bring convenience to thinking and reasoning. Moreover, under the dynamic and discrete conditions, it is easy to understand both the “universal before things” and the “universal after things”. People can also assume that some substantial universals exist somewhere outside of the concrete objects. The location may be undecided, but that doesn’t prevent it from being able to manipulate the concrete objects from a distance. The examples include the aforementioned “human knowledge thesaurus” and Parmenides’s “Being”. However, they are all nouns, sometimes not enough, so as a variant of “Being”, we can find that “God” is not only a noun, but also something that speaks and acts, like a man, and beyond a man, which has different descriptions and interpretations in different theological philosophers. This plurality reflects the limitations of any single approach to catering to our way of thinking. Mathematics and logic are like exercises or “rehearsals” for thinking tools, which show the nature of the thinking tools themselves and their stipulation of what is logically right and wrong, but they say little about the outside world. That being the case, when these thinking tools are used to process external information, the value or reliability of the conclusions drawn shall be highly questionable. That is to say, it is not easy to draw highquality, general conclusions at this time, unless the objects processed are particularly abundant, and luck is sufficient. This is the bifurcation between theory and experience, which Hume emphasized (Copleston, Vol. V, pp. 273-277). Empirical knowledge is often probabilistic, or reliable at varied degrees, and rarely as absolute as logic or mathematics. Probability and uncertainty are some forms of limited knowledge, which are between “complete certainty” and “absolute ignorance”, and are closely related to concepts such as softness and irregularity. “Uncertainty”, in particular, is sometimes objectified as a property of an object. This objectification has been criticized as a conceptual confusion. However, just like the principle of “universal in things”, its rationale lies in the economy of thinking. Besides the above "Hume's Fork", this theory of mind can also be used again to deconstruct the bifurcation between subjectivity and objectivity, especially by the “Algorithmic psychology”. 25 A single, atomic mental activity is like drawing a line on a plane. The line is both skinny and has a beginning and an end. This reveals a hidden, crucial feature of the mental activity as an act or behavior, that is, its ability is limited. “Finiteness” (or “concreteness”) is an essential feature of a behavior. “Infinite behavior” shall be inconceivable (but it has unfortunately become an implicit assumption of traditional philosophy). This finite behavior moves in time and space, giving rise to its beginning and end. 
This is one of the meanings of the term “intentionality” (Pierre, 2023), which emphasizes not only the concept of “object”, but also the “purpose”. A purpose is like the dot on the plane that this thin line is to reach, and the thinness matches the smallness of the dot. Obviously, purpose is the product of finite behavior. Heidegger emphasizes the acquired and endogenous nature of purposes.11 Purposes require behaviors, and behaviors actually also require purposes. Certain purposes, such as desire, are innate, and others are born in practice. Because, without experience, you often don’t know what you would like. Some purposes are derived: goal B has to be pursued in order to pursue goal A, and the initial goal A may change during the pursuit of goal B. This is, from the perspective of perfect rationality, ironically called “alienation”, but it is in fact a phenomenon that often occurs in the world of individuals with bounded or concrete rationality. However, goals don’t always exist, and individuals will have times of emptiness, lingering, and no direction. For the sake of frugality, individuals should not set goals capriciously. Emotions have traditionally been viewed as the antithesis of reason. However, any rational thinking activity will inevitably be bent in one way or another. How is it to be “bent”? It becomes arbitrary, capricious, willful, and brutal, and its tendency is to get closer to the goal as quickly as possible. In these cases, obviously, it is similar to emotions. Following this logic, we can find the rationality of emotion: it is like a child who is in a hurry to enjoy the meal at a party, keeping on judging whether a matter is beneficial for him to start the enjoyment as soon as possible. If it is yes, he will be happy, and if it is no, he will become impatient or sad. The affective system reminds us that in the face of the urgency of the purpose and the reality of the matter, thinking activity must not always be calm and sequacious, but must from time to time consider the connection between the matter and the purpose; In order to pursue brevity or timeliness, only the most important factors shall be considered currently, and the relatively minor factors shall be ignored, and some shortcuts must be found out. Actions that forget purposes and values are meaningless. 11 According to Heidegger, Dasein is thrown into the world. See Heidegger, 2010, pp. 131-143. 26 Therefore, emotions are conducive to decision-making, and a lack of emotion leads to a lack of the ability for decision-making.12 Such an emotional system exists inherently in the mind, which is much equivalent to an innate knowledge. It is as innate as the thinking tools, but it is different in nature. Is this binary innateness justified? This requires some analysis and speculation. A key question is: in the continuous generation of individuals and in the intergenerational inheritance, is it sufficient for an infant to inherit only the tools of thinking from the parents? The answer is obviously no, because it would have to take a long time to use the thinking tools to produce knowledge that helps the infant survive. Thus, a reasonable and necessary arrangement for a newborn child is that certain critical, minimum knowledge must be “equipped” at the time of birth so that the time for the infant’s own knowledge development can be made. According to this logic, we can find that the innate knowledge provided includes not only emotions, but also desires, instincts, and so on. 
Desires, as the messenger of the physiological system, transmit the requirements of the body to the thinking system, but this transmission, like thought itself, is crude and economical, not entirely precise. On the basis of these simple signals, the thinking system can continue to develop more precise knowledge of health care. Instincts direct the newborn baby to behave in certain ways to satisfy desires. We can see that this binary genetic structure is similar to that of a new computer. Computers that have just left the factory not only carry a basic instruction system, but also come with a lot of application software. Some software is even embedded in specific hardware to secure its safety and reliability. The latter is called “hard software” (or “firmware”; Ralston, 1983, p. 373). Desires, emotions, instincts and so on could be regarded as the hard software of the human body, which is inherited and exists in a biological, “hardware” way, but plays the role of “knowledge”. Since the individual has been separated from the maternal body, this “hard software” cannot be changed or updated, so it stays all along one’s lifetime, and the gap between it and the rapidly developing system of thought, namely, “pure software”, is getting wider and wider, so that it is relegated to the “irrational”. Traditionally they are all the objects of psychological research. Now, under this new perspective, it is clear that the psychological objects can exist as a subsystem of the thinking system, and psychology will thus become a subfield of the “science of thinking”. 12 For a clinical study between apathy and inability to making decisions, see Fahed, 2021. 27 3.3 Social Sciences If thought is merely a reflection of external objects, especially when it is correct, should people act according to this correct thought? Are the actions of people changing the world? Is this change a human product or the development of the world itself? In the face of such problems, it has traditionally been held that the human mind is different from external objects, and therefore the branches of philosophy (such as ethics) that study the spiritual life and value system of human beings are also very different from other philosophies. This understanding completely separates the minds from the objects, the subjectivity from the objectivity, the thinking activities from other spiritual activities – although it contradicts the realism that has equated minds with objects. This approach seems to be justified in terms of acknowledging the distinction between them, but it has failed to establish a certain link between the two, with the result that the social sciences have not yet been able to establish a connection with the natural sciences, and by far the question of the principles and methods of the “general social science” has not yet been properly addressed. Materializing, concretizing and discretizing ideas is a panacea for all these problems at once. “Concretization” means that if it is A, it is not B, and if it is this, it is not that, so that the question of equating ideas with external objects no longer exists, regardless of whether an idea is “correct” or not. Since thought is a relatively independent and “strange” thing, there is nothing surprising about the merger of thought and spirit into one type. Since thought grows as an “independent third party” between human and external objects, it is only natural that human actions and their consequences driven by thought grow as some new things in the world. 
The consequences can be intellectual, physical, or a combination of them, for it is now true that the minds and the physical objects can, at any time and place, be combined according to their concrete characteristics. We need to note that the core part of social phenomena is actually ideological, the product of enormous interactions between thought and thought; now, whether thought interacts with other thought or with external objects, that is all fine; They can all interact with each other freely and legitimately. The natural objects can be discretionarily separated from or combined with the social objects. This concreteness also shows the stepwiseness and modularity of mental activity. Cognitive activities have only led to certain cognitions. However, purposeful individuals also need to act. Retrospectively from the purposes, the combinatorial explosion perspective can help us recognize that there is a complex, tight or loose network of relationships among specific actions, specific cognitions, and the raw information on which they are based. This network of relationships provides both a variety of options and 28 uncertainty. Hume’s question of the relationship between experiences and social norms (the “Is-ought problem”; Hume,1960) can now be clearly answered in it. At the same time, the question of the relationship between engineering and science can also be answered, and the importance and relative independence of engineering in the human knowledge system, as well as those of enormous fragmented practical knowledge, can be highlighted. Does “law” refer to certain ideas in the human mind, or does it refer to the tall and majestic buildings of courts? Does “president” refer to a specific person, or does it refer to a specific position made up of specific conventions and relationships? From questions such as these, we can realize that the objects of social science are not only human thoughts, but especially certain conventions or behavioral guidelines, not just some epistemic thoughts. Moreover, these conventions or guidelines for conduct must exist as a given fact13, not as something that has been integrated with the ideas of their current researchers, nor as something undetermined and thus to be determined in this ongoing research activity. In other words, thought must exist really and discretely, and therefore can be distinguished into subject and object, ex ante and ex post facto, and so on. Social phenomena are a class of phenomena in which the entities of “thoughts” in the human brains grow according to their own characteristics, and this growth is obtained by consuming various physical resources. This is as if biological phenomena are another type of phenomenon in which organisms grow according to their characteristics, consuming resources as well. In this way, whether you are a social scientist or not, you have to acknowledge and confront the existence and growth of social entities. Furthermore, as long as you acknowledge this specific ontology about the mental entities as social entities, it is possible for you to, naturally and logically, raise social questions and develop the principles and methods of the general social science. The natural and social sciences now share the common or similar logic and methods, and their differences are primarily only in their objects. Interaction between people implies interaction between thought and thought. 
Just like interaction between thought and external objects, interaction between thought and thought is by no means a simple reflection of one thought by another, but must contain a two-way process of distortion, strategy and construction, and therefore must have its own characteristics. Barriers of communication lead to close connection and strong consistency between different ideas within an individual’s mind, while the connection between the thoughts of different individuals shall be weaker and looser. Each person carries a specific version of the knowledge system, in which they answer the same or similar questions about the world and about life, in the same or different ways, so they must agree or disagree with each other, and cooperate or confront each other, at similar levels. [Footnote 13: For example, Emile Durkheim especially emphasized the “coercive” nature of social facts for the actors; see Durkheim, 1982, pp. 45, 51, 70.] Game theory thus becomes a specialized branch of general social science. The other branch is political science on democracy and freedom. As long as we recognize the extensive, permanent, confrontational, and inevitable nature of subjectivity according to the above logic, then the voting mechanism is necessary and reasonable as a method of reconciling different opinions among people; it simply averages people’s opinions. I think this is the first time ever that we have found the place of democracy along a rational, scientific track, and strictly clarified its meanings. The conflict between rationality and democracy has existed since Socrates. However, irregularities lead to the fact that certain parts of people’s knowledge are relatively superior or inferior, so education and management are necessary in some places and for certain people. Another meaning of management is the coordinated execution of decisions. Only when we recognize that the thinking activities required to carry out a decision are also real and time-consuming activities will we find that, even if the knowledge levels of the actors involved are identical, the establishment of a command center and a hierarchical management system is often necessary, and that the advantages of decentralization and centralization are only relative -- for example, an individual head gives commands quickly and quite consistently, while a collective leadership is slow and wavering, and so on. Therefore, society needs to weigh carefully the circumstances under which to vote and those under which to command. In organizational management, laws, bylaws and regulations are formulated as solid knowledge to give fixed outputs and to straighten out behaviors. In terms of solidity, these rules are no different from other kinds of stocked knowledge, and their differences are just technical. For example, ethics and morality as “informal rules”14 are necessary for the dynamic interactions among boundedly rational people, but they run only on the basis of voluntariness rather than coercion, and hence are “informal”. Moreover, bounded rationality, or concrete reason, makes it impossible to anticipate all organizational situations in advance and to institutionalize all behaviors; therefore, even all the rules together can only help to address a part of the managerial issues, while the others have to be weighed on the spot, in real time, by leaders, whose orders are then issued and executed immediately. This is the distinction between the legal and administrative systems.15 It is also an irregularity.
Individuals’ decision-making is largely dominated by the stock of knowledge grasped by the individual. However, there is still a great deal of freedom on the spot for current thinking activities in order to solve specific problems. [Footnote 14: Douglass C. North (1991), “Institutions”, Journal of Economic Perspectives, 5 (1): 97–112.] [Footnote 15: In this way the governmental branches can be logically defined, unprecedentedly.] To a certain extent, the stock of knowledge is mixed and discrete, and the actors can quite freely choose among and combine its pieces. A person is free to process any information with any thinking tool of his/her choice, and he/she is free to doubt or reject any guidance from the stock of knowledge. The physical independence of the human body, in which the mind resides, is a strong guarantee of individual freedom. This is because this independence has made many collectivist ideas uneconomical and therefore unviable. The limitations of thinking ability hinder the efforts of any individual or center to control a group or the whole society, and the resulting difficulties in the exchange and negotiation of ideas become a major obstacle to collectivism (Mises, 1962). On the other hand, the individual’s thinking tools and knowledge stocks directly serve the individual, and this “self-serving” arrangement is necessarily a primary, economical arrangement of society (the “homo economicus”). Therefore, in principle, the market economic system has superiority over the planned economic system. An important and hidden reason for commodity transactions is the limited ability of individuals to make or use commodities, both physical and intellectual. An important reason for the division of labor likewise lies in the thinking aspect. The economies of scale and scope that exist in this heterogeneous environment result in a certain level of complexity in the work of individuals, which must be coordinated with communicational, transportational and managerial factors (Coase, 1988). As a result, the transaction of goods and services can only be carried out within a certain scope and frequency, and can only have a certain impact on the allocation of resources. Resource allocation needs to be supplemented by other technical, managerial, political and social means. Money is also produced in this sense. The various reasons for the emergence of money (calculating costs, trust problems, hoarding for the future, etc.) all need to be rooted in the theory of thinking. The materialization of ideas directly leads to the materialization of money as a physical symbol of thought, and the problem of the quantity of money can thereby be vividly and intuitively highlighted. However, the help of money is still limited. Thinking of commodity trading as a panacea is, after all, a kind of mental laziness. As long as we recognize this criticism, the creation of public goods, and hence governmental intervention, do not need to be explained any further here. It is just another irregularity. From the above logic, society must be a combination of the market and the government. The market is by no means perfect, but it contains tremendous diversity and asynchronicity. An idea that one has just generated may be one that many other people already hold, and it may have been tested repeatedly by others. As a result, the market is pregnant with the future at every moment. Moreover, the direction toward the future is continuously tested and “voted” on through big data, so this market-oriented developmental mode is relatively stable and low-risk.
In extreme cases, individuals in a free society develop independently of each 31 other, and macro statistics are only the sum of the results of their independent actions. This can be called the “pluralistic growth”. While there is much room for improvement in such a free society, the weight of governmental action in the society depends largely on the government’s ability to administer it. Government power is often centralized and reflects the wisdom of a few, so it does not always improve society. This approach of bounded rationality, or concrete reason, eventually leads to relativity and competition. To a large extent, it can be used to rationalize, scientificize, technicalize and detail the existing ideological debates. Although general theoretical analysis cannot directly answer the question of whether a particular specific policy is the best or not, the relevant principles, methods and framework can be established from this. We can then try to answer the specific questions in the following specific analyses. Freedom and democracy refer to “consciously accommodating the views of others while knowing they are different from mine”, so this is a higher-order attitude, a thought about thought; It can only be established with the help of a theory of the generation, interaction and development of thought. In this way, it can avoid both being a rigid dogma and merely being empirical. From here we can clearly see why by far the proper general social science has not been established, and how it can be constructed now. IV. The Algorithmic Thinking Theory 4.1 Instructions For more than half a century, a computer has been treated as a mental model of the human brain, and many scholars have interpreted it from different angles in an attempt to discover general principles of human thinking. Computer-based AI engineering has caused fierce philosophical debates. However, in the midst of many approaches, there is one route that has been missed by all, and from a philosophical point of view, it shall be the most concise and effective. This’s the software-only approach. Many scholars have tried to reveal the mystery of thinking from the perspective of brain anatomy, physiological performance, and mind-body relationship, but they have ignored that if we put aside the “hardware” and simply use software language to explain thinking phenomena, that is, to explain thinking with thinking, it may effect better. As Descartes said, the mind is what we are most familiar with, and if the problem of the mind can be solved within the scope of the mind itself, why resort to other methods, deviously? This shall be a method that is unique to humanities and social sciences and should have long been practiced, and it can help us to answer many 32 questions of the humanities and social sciences independently without having to wait for the expected scientific breakthroughs about life and thinking from the natural sciences. After opening any textbook on computer principles, the first few pages of it are impressively written the basic principle of “computation = instruction + information”. An “instruction” refers to a command of a user to a computer to perform a specific operation on specific information, which is later translated into the most basic operation that a computer can perform. There are only a few dozen core instructions, which could be grouped together in a list. The user can only select the instructions in this list and “ask” the computer to execute them. 
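A minimal sketch can make the “computation = instruction + information” mechanism concrete: a short list of named instructions, a store of information, and a loop that executes one instruction at a time. The instruction names and the three-field format below are assumptions chosen for brevity, not a real machine’s instruction set.

```python
# A toy serial machine: fetch one instruction, execute it, move to the next.

def run(program, memory):
    """Execute instructions one at a time (serial processing)."""
    pc = 0  # program counter: which instruction to fetch next
    while pc < len(program):
        op, a, b = program[pc]
        if op == "HALT":                  # administrative instruction: no data processed
            break
        elif op == "ADD":                 # two data in, one result out
            memory[b] = memory[a] + memory[b]
        elif op == "COPY":                # "transmit" a datum to another place
            memory[b] = memory[a]
        elif op == "COMPARE":             # basis of searching: a yes/no result
            memory[b] = int(memory[a] == memory[b])
        pc += 1                           # one unit act finished; fetch the next
    return memory

memory = {"x": 2, "y": 3, "z": 0}
program = [("COPY", "x", "z"), ("ADD", "y", "z"), ("HALT", None, None)]
print(run(program, memory))   # {'x': 2, 'y': 3, 'z': 5}
```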
A single instruction can usually process no more than two pieces of data (information) to arrive at no more than one result. An instruction is “run” or executed once, constituting a minimum unit of computational activity. Other instructions (e.g., “Return” or “Halt”) do not directly process information, but are used to perform some service or administrative functions. There are also instructions that communicate with the hardware or direct the hardware to work (e.g., displaying a dot on the screen, or operating a printer).16 [Footnote 16: For a brief introduction to instructions and the relevant principles, see G. Frieder, “Machine Instruction Set”, in Ralston, 1983, pp. 899-904.] “Instructions” can be the thinking tools of the human brain, or the expression of the thinking tools of the human brain in the language of software or of thought.17 [Footnote 17: It is an interesting fact that Jerry Fodor authored both “The Language of Thought” and “The Modularity of Mind”; the two books could have been combined into one, Algorithmically.] We can imagine that there are multiple thinking tools inside the human brain, each named as a different instruction. We can also imagine that the human brain has only one thinking organ, but that it can perform all those computational actions that the instructions refer to. In the former case, multiple tools can work at the same time, i.e., “parallel processing”; in the latter case, one action can only be performed after another action has been completed, i.e., “serial processing”. However, even in the case of parallelism, the computational operations that can be performed simultaneously are still limited in number, so processing still requires significant time. In order to simplify, and to match intuitive experience (e.g., “one mind cannot be used for two purposes”), we choose to follow the manner of classical computers, which are generally considered only as serial processors. In fact, the contents of instructions mostly refer to various mental activities that we are familiar with, such as “comparing”, “searching”, and “transmitting” data, and so on. When we recall something, we search in our memory, and searching is related to “comparing”, which can be interpreted as a process of comparing the data stored in various “places” one by one until the targeted datum is found; then, it can be “transmitted” (or copied) to a place like a “workshop” (e.g., the “central processing unit” in a computer) for processing. The processing of data, at the earliest stage, was only to carry out mathematical and quantitative calculations such as addition, subtraction, multiplication, and division. However, the development of mathematical logic has made it possible for deductive logical reasoning (e.g., “Or”, “And”, “Not”) to be performed in a similar way to mathematical operations, and it is therefore classified as a specific type of “computation”. Henceforth, the concept of computation was expanded to include some qualitative analytic activities, and began to resemble ordinary human thinking activities. These different kinds of computations are embodied in different instructions. Each instruction has a certain format, including certain requirements for the type of data to be processed. However, studies have shown that the core instructions used to process information in classical computers can be further simplified, that is, the permutations and combinations of some instructions can be used to achieve the functions of other instructions.
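As a hedged illustration of where this simplification can end up, the sketch below implements a machine whose only instruction is “subtract and branch if the result is not positive” (the scheme commonly called SUBLEQ, one example of the single-instruction results discussed next). The memory layout and the halting convention (branching to a negative address) are assumptions made for brevity.

```python
# A toy one-instruction ("SUBLEQ") machine: everything is done with one operation.

def subleq(mem):
    """SUBLEQ: subtract, then branch if the result is less than or equal to zero."""
    pc = 0
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]                     # the single operation
        pc = c if mem[b] <= 0 else pc + 3    # conditional jump, else next instruction
    return mem

# Program: add the value at cell 9 into the value at cell 11, using cell 10 as a
# scratch cell initialized to zero, then halt. More steps, same effect as "ADD".
mem = [9, 10, 3,     # scratch -= cell 9
       10, 11, 6,    # cell 11 -= scratch  (i.e., cell 11 += cell 9)
       10, 10, -1,   # scratch -= scratch, then branch to -1: halt
       7, 0, 5]      # cell 9 = 7, cell 10 = 0 (scratch), cell 11 = 5
subleq(mem)
print(mem[11])       # 12
```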
For example, since logical operations can be used for deductive reasoning, and the quantitative relationship between numbers is a deductive relationship, then after this quantitative relationship is defined, the instructions for performing logical operations can also be used to perform mathematical operations—just with more steps. In this way, all the core instructions are eventually reduced to a single instruction, and there is more than one scheme for this single instruction (Gilreath, 2003; Nürnberg, 2004). This shocking fact shows how concrete, microscopic, and simple the basic thinking mechanism of human beings is, if a computer can really be regarded as a model of the human brain! The great philosophical significance of instructions cannot be overemphasized! The purpose of this article is to illustrate that it can be an epoch-making concept in the history of philosophy. The contents and specific kinds of instructions are actually very familiar to us, except that computer science treats them in a different way from traditional perspectives. That’s one reason why we keep this particular term. However, the philosophical subversiveness of this concept was not fully realized even by computer scientists themselves, until it is revealed in my Algorithmic writings. This is to treat human minds as some concrete material entities and concrete real actions. The mind coexists and interacts with external objects, and knowledge (or thoughts) is the product of its continuous operation in the environment of time and space. Thoughts can be represented as certain states of physical materials, and mental activity can be expressed as changes in the states of these physical materials. The information-processing instructions coexist in the instructional system with the instructions responsible for dealing with material organs (then external objects), and thus the mind-matter relationship is demonstrated in a very clear and figurative way, taking a big step towards solving this philosophical problem that have existed for thousands of years. The operation of a single instruction constitutes a single, minimal unit of thinking activity, the generation, storage, accumulation, elimination and expansion of data or information very delicately depict and illustrate knowledge’s 34 existence and development, and the communication and network between computers can also be used to understand human society analogously. The analogies made in the past have been not effective enough. Now, after rediscovering the instructions and the thinking economy, as mentioned earlier, we can make, including principles such as mind-bending, many new discoveries. In this way, computationalism can gain a new vision and momentum. The importance of instructions can also be reflected in the enormous malleability of the computer system that takes the instructional subsystem as its core. Let’s take a brief look at this. 4.2 Formation and Development of the Computing Mechanism (i) In order to explain the development of the computer system, it is first necessary to explain its formation. Once we understand how all of this is achieved physically, the mystery surrounding computers and artificial intelligence will disappear, and the philosophical questions and debates associated with it will be answered or eliminated. Traditionally, humans have used physical materials primarily to perform certain actions in order to replace physical labor—such as using water vapor to propel a train forward. 
And when physical materials were used to assist humans to think, it did not arouse a sense of mystery at first, which was created later. For example, in the case of knotted-rope counting, I’m afraid no one thought of the rope as a model of the human brain and held that it was “thinking”. However, a computational system is not essentially different from the knot counting: a physical state of a physical object “coincidentally” can be used to represent a numerical value, a change in the physical state “coincidentally” can be used to represent a certain computational action, and the new physical state naturally shown by the physical object after the action can be used to represent the result of the computation. Whoever finds such a physical mechanism, or invents such a physical device, then he/she has built a computing mechanism - the next step is just to compare the efficiencies of different computing mechanisms. It should be emphasized here that an important reason why rope can be used as the computing mechanism is that it is imperishable, so the knots tied on different dates will remain and not disappear because of rotting. Different knots should be on the same rope, close to each other, similar in shape, so that misidentification or errors could be avoided when counting. A number of hidden structural features come together to make up this computing mechanism. However, this mechanism is relative to people who use it, and it is interpreted by the users themselves, and it is none of the “business” of the physical materials. Strictly speaking, you can’t just say that the physical materials are “calculating” or “computing” themselves; the knot is not 35 rotten, but you can’t say that the rope is “remembering” or “displaying” something because of this. “Memory” and “display” are only their meanings to the users, and these meanings are attached by the users themselves. Without users, these meanings would not have arisen. To a large extent, the mystique surrounding computers in later generations comes from the terminology. The use of a large number of anthropomorphic words such as “instruction”, “transmission”, “reading”, and “output” makes readers who are not familiar with computer principles mistakenly believe that computers are really autonomously “thinking” like humans. The basic mechanism of modern computers is not substantially different from knot counting. It’s just that it can “remember” more data, and the “access” to and “operation” on data are faster and more accurate. The data access mechanism and the computing mechanism are actually the same mechanism, both of which change the original value and generate a new value. The differences between them are only logical, i.e. artificially defined. What are stored in the computer are only the different states of magnetism or electrical potential of the physical components, and what are running in the computer are the electric currents; They themselves do not “know” anything; What they mean to human beings are defined and realized outside of computers. The computational structure of “instruction + information” already existed before the creation of computers. For example, in a mathematical formula, we usually indicate what data to be processed and what operation to be performed. These two parts are generally represented by different types of symbols, and there is a clear distinction between them. The job of a computer scientist is how to use a machine to simulate this process. 
These concepts, distinctions, and processes are not inherent in machines. They exist in people’s minds before they are “copied” into machines. Of course, due to the limitations of physical means, not all thinking operations have been successfully “copied” and simulated from the beginning, only some simple and basic parts have succeeded. In computers, instructions are actually represented in the same way as data, i.e., they are both represented as some sequences of high and low potentials of the electronic components, the sequences of 0s and 1s. The reason why they are different is that they adopt an artificial convention, that is, in a certain sequence of 0 and 1, specific and different positions represent the instruction and data respectively. Different types of information are also artificially stipulated. Thus, even a series of numbers that appear identical on the surface may represent numbers in one case and graphics or sounds in another. This depends on the serial approach. It is not possible for it to represent different information at the same time and place. It’s like a venue that can be used for a market during the day and for entertaining performances at night. This takes advantage of the 36 length of time. Similarly, the width of the space is also very prominent in the computing mechanism. For example, a transistor can only be used to represent a 1 or a 0, but a sequence of 0s or 1s can be made by a large number of transistors arranged together. Thus, in a computer, a minimal unit of information is usually represented by a sequence of numbers. That’s the basic idea of binary. But that’s still not enough. The width of the space must be combined with the length of time to represent more information. For example, add some kind of note or indicator between different sequences to convert the type of information. It’s like in the entrance ceremony of the Olympics, where each country’s team is led by an athlete holding a sign stating the team’s nationality. The information on the sign needs to be “read” and “understood” before the following is to be treated, and “reading” generally refers to copying that information into a specific location in another relevant computing process. If the location is incorrect, the information will not be “read” and “understood”. The so-called “understanding” refers to, for example, comparing the information with some pre-stored information, or triggering some automatic physical mechanism to make an action of “judgment”. Therefore, the abilities of computers to “understand” and “judge” are limited. It is impossible to understand or judge each and every information. Supposedly, such a machine can run continuously and freely as long as it is set up, perform any “operation” on any “information”, and produce some meaningful or nonmeaningful results that the user can or cannot understand. It’s as if we humans can be free to think in our free time. Cranky thinking is not necessarily valuable. Therefore, in order to improve the productivity of a computer and ensure that its results are more conducive to users, it has not been allowed to run freely. Then, programs, i.e., fixed sequences of instructions in chronological order, are compiled in advance of operations, which are executed stepwise by the computer that is prohibited to perform any unplanned operations -- Obviously, programs are an example of the solidification of knowledge. The computer executes an instruction, reads the next instruction in a designated location, and then executes it again. 
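The convention-dependence of the 0/1 sequences described above can be shown with a tiny example: the same sixteen bits read as one number, as two text characters, or as four 4-bit “pixels”, depending entirely on the artificial stipulation applied to them. The particular widths and encodings below are illustrative assumptions.

```python
# One bit pattern, several meanings -- each meaning is imposed by a convention
# outside the pattern itself.

bits = "0100000101000010"          # sixteen bits, i.e., two bytes

as_number = int(bits, 2)                                      # one unsigned integer
as_text = bytes(int(bits[i:i+8], 2) for i in range(0, 16, 8)).decode("ascii")
as_pixels = [bits[i:i+4] for i in range(0, 16, 4)]            # four 4-bit "pixels"

print(as_number)   # 16706
print(as_text)     # 'AB'
print(as_pixels)   # ['0100', '0001', '0100', '0010']
```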
Interestingly, since computers can store information, it is also possible to store programs as a specific type of information that can then be read and executed. In this way, programs that have once been compiled by hand do not have to be compiled again; they can be stored and used repeatedly, and they can be spread and shared among different users. Computational efficiency is thereby further improved. All of these processes are mechanical, similar to other mechanical devices with varying degrees of automation. There is no mystery to them; anyone interested can learn their details. Moreover, in order to take advantage of such a mechanism, certain processes are deliberately elongated and hence become more cumbersome. However, since the electronic components run extremely fast, this cumbersomeness or “clumsiness” of computers is covered up. The computing results are displayed on the screen for users to read, much as one reads a book. Readers are able to understand books and gain meaning because they have mastered the same language as the author beforehand; hermeneutical philosophy has elaborated at length on texts and their understanding, holding, for example, that “reading is the concrete act in which the destiny of the text is fulfilled” (Ricoeur, 1981, p. 164). In the same way, the “intelligence” of a computer is achieved in cooperation with its users, and only for its human users. Therefore, humans need not be hostile to, or jealous of, computers. Computers are just mirrors and tools of humans.

4.3 Formation and Development of the Computing Mechanism (ii)

When a computer behaves like a human, we need to analyze carefully what the resemblance is and how it is formed. This intelligence manifests itself first and foremost in the ability to do some of the computations that we humans undertake, and to do them even better than humans (owing to the superior properties of physical materials). When computers evolve from quantitative to qualitative computing, they become more human-like. When they combine various types of computations and make a few more roundabout turns, they behave almost magically. As mentioned earlier, computers can also mimic human thinking whimsically, or randomly, and produce low-value or meaningless results. Since computer engineers have forbidden or restricted this functionality, observers have been somewhat hindered from using computers to understand how human minds are structured and how they operate. People who are new to computers are often puzzled by the question of how a computer “knows” what to do next. Although this problem is largely solved by following man-made programs, the programs do not originally exist in the human brain either; they are acquired by human programmers through long-term knowledge accumulation and learning. In other words, human beings face the same problem themselves. Since the function of many types of knowledge is simply to tell us what to do next, then, if we always know what to do next, the problem of knowledge or truth has been largely solved. The answer to this question is therefore actually very simple: computation can be extended only as far as knowledge has been accumulated. Although there is no limit in principle to the way and scope of computation, it can only develop concretely and historically. The function of “logic” is, to a large extent, to answer the “next step” question: what answer should be derived from a computational action, what to do next, and so on.
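Read this behaviorally, the “next step” question can itself be treated as part of the computation: after each action, a rule decides which instruction comes next. The following sketch is purely illustrative; the rule table and the action names are invented here and stand for nothing more than the idea of a next-step function.

```python
# An illustrative next-step function: a small rule table (invented for this
# sketch) decides, after each action, what to do next. "Logic", read
# behaviorally and temporally, is just such a function.

def next_step(last_action: str, outcome: bool) -> str:
    """Given what was just done and how it turned out, say what to do next."""
    rules = {
        ("observe", True): "hypothesize",
        ("observe", False): "observe",       # nothing noticed: keep observing
        ("hypothesize", True): "test",
        ("test", True): "record",
        ("test", False): "hypothesize",      # test failed: revise the guess
    }
    return rules.get((last_action, outcome), "stop")

action, outcome = "observe", True
trace = [action]
while action != "stop" and len(trace) < 6:
    action = next_step(action, outcome)
    trace.append(action)

print(" -> ".join(trace))   # observe -> hypothesize -> test -> record -> stop
```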
Philosophers could consider redefining logic from this behavioral, temporal perspective. Computers are sometimes dull and not as witty as humans because computer scientists have not yet clearly and adequately viewed computing as behavior, even though they are often concerned with the economy of computing and have developed a branch of the discipline from it. According to the Algorithmic logic described above, it is now possible to establish logical connections between seemingly unrelated computational behaviors. If the sequence of computations were redeployed according to this logic, the computer could become more human-like. For example, when a computation in progress reaches 12:30 p.m., the machine might suddenly interrupt it and remind the user that it is time to think about lunch; or you ask the computer a question, and the computer answers, “I don’t like you, so I don’t want to answer your question.” Wouldn’t such a computer be more human-like? Even in a classical computer these features are not impossible: such jumps between tasks, and even the establishment of certain logical relationships between them, can be set up artificially. The problem, however, is that AI has long suffered from the dilemma of whether “to help people” or “to imitate people”. Many people’s lives are uneventful, even mediocre; imitating these people and these matters does not do much good. A goal of scientists is to make computers humans’ right-hand assistants, so that they actually help human users. In other words, their goal is to make computers capable of accumulating concrete knowledge and thus of outputting decisions as good as those of real people. To that end, it is worth taking a closer look at the recent tremendous advances in “indeterminate reasoning” in computers. Traditionally, computers have dealt primarily with deductive, or deterministic, reasoning and computation. Where uncertainty is involved, they have relied primarily on probabilistic operations. Random numbers are supplied either by functions or by physical “random number generators”. This means that, strictly speaking, computers lack true “free will”. A computer is not like a human being, who can generate a will without any evidence. With the support of probabilistic calculation, however, this difference is not large in practice, since it is no bad thing for the generation of a will to have more or less some rational basis. Moreover, our real free wills may not be entirely unfounded either: the freedom of the will can largely be interpreted as the freedom to find different bases for decision-making in the pluralistic and discrete ocean of knowledge. Is it not a contradiction and an absurdity that determinate computations are relatively easy while indeterminate ones become a difficult task? The same question arises when mathematicians hold that deductive reasoning is not creative, and that only the less reliable non-deductive thinking activities are “creative”. This obscures the truth of the world of minds. The great contribution of science to humankind is due to the fact that scientists have discovered or established many reliable chains of reasoning, thus forming knowledge modules that are more tightly knit internally than other knowledge. These deterministic modules are like islands in the ocean, rare and precious. Indeterminate reasoning, by comparison, is much easier, and uncertain, low-quality knowledge has always been more abundant.
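The contrast drawn here can be sketched in a few lines. Everything in the snippet is invented for illustration: a deterministic instruction always returns the same result for the same information, while an “indeterminate” choice of a decision basis is supported by probability, with a pseudo-random function standing in for the physical random-number generators mentioned above.

```python
# A minimal sketch of deterministic versus probabilistically supported,
# indeterminate computation. Names and weights are hypothetical.

import random

def deduce(premise: int) -> int:
    """Deterministic reasoning: same information, same result, every time."""
    return premise * 2

def choose_basis(bases: list, weights: list) -> str:
    """Indeterminate 'will': pick a basis for the decision, but not blindly;
    the probabilities give the choice more or less some rational grounding."""
    return random.choices(bases, weights=weights, k=1)[0]

print(deduce(21), deduce(21))            # -> 42 42 (always identical)
print(choose_basis(["habit", "analogy", "recent evidence"],
                   [0.2, 0.3, 0.5]))     # may differ from run to run
```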
When we see a certain phenomenon and thereby obtain certain information, we can use various methods such as induction, analogy, assumption, and association to draw many tentative conclusions. These are readily identified as (indeterminate, uncertain, and unreliable) “reasonings”. The problem is just that high-quality reasoning is rare and especially worth pursuing. Such indeterminate reasonings, as forms of the bending of the mind, can often be used to fill the gaps between fragmented deductive chains (where a consistent, complete deductive chain would be costly or arduous to the point of diseconomy or impossibility), and thus play a role in scientific research and daily life. The same is true for computer and AI research. It is easy to construct a small amount of indeterminate reasoning, but very difficult to draw conclusions that are competitive with those of human users. More than half a century passed from the birth of the idea of “connectionism” (Boden, 1990) to the success of today’s large language models (Radford et al., 2019). The length of this period demonstrates a hidden truth about human knowledge: knowledge as a whole is a mixed, pluralistic, and expansive system in which uncertain reasoning and uncertain knowledge account for a significant proportion. Although we can gradually condense and refine existing knowledge, the divergent effect always breaks through the convergent effect, so that the knowledge system keeps growing in total size and keeps its internal structure loose. A single indeterminate inference is easy, but its result needs to be improved by repeating a large number of uncertain inferences. For this reason, classical serial processing is not enough, and parallel processing needs to be adopted within a certain range in order to save time. First-order parallel processing is still not enough, and multi-layered, higher-order processing (deep learning) has to be developed on top of it. Even so, the results did not satisfy the researchers. Finally, they found that the raw data themselves had to be improved: only by using a huge amount of real online data from real human life as the basis for reasoning, and by continuously using these real data to correct the previous results at each step, can the machine produce desirable, human-friendly results. This fact is enough to show that the real knowledge of human beings is not simply deduced from existing information or knowledge, but is also bluntly added to it, much as a student’s knowledge is not only self-developed but also instilled from outside. This is an inevitable consequence of the introduction of subjectivity into computation. Meanwhile, additional, current reasoning activities remain essential, because the current problem is always more or less unique or novel, or because the enormous cost of searching for a ready-made answer drives the actor to perform new computations instead.

4.4 Algorithmic Thinking Theory

At this point, the article is nearing its end. Its message is that if we assume that human thinking activities are carried out in a spatio-temporal environment in the form of “tool + object”, like bodily movements, we obtain a series of new principles and insights that can be used to integrate existing philosophies into a whole while resolving many of their puzzles and shortcomings.
This possibility has always been hidden; computer science provides a concrete mirror image of the human mind, allowing us to grasp it precisely and vividly. With its developmental history, including the failures and achievements of more than half a century, computer and artificial intelligence engineering has shown us that it is indeed appropriate to establish such a thinking theory. Again, instructions belong originally to humans, not to computers; the human brain is the source and home of instructions and of the operational structure of “instruction + information”. The remaining question is whether the instructions in the computer fully reflect the basic thinking tools in the human brain. The answer depends on how mature one judges today’s AI engineering to be. If you think it is mature enough, or that its differences from the human brain are insignificant, then you can equate computer instructions with those of the human brain. Conversely, if you think the human brain has some unique functions that computers cannot perform, then we can adopt a remedy: assume that the human brain has some “Manual Instructions” that computers do not have and cannot simulate, but which also work within the “instruction + information” structure. (In line with the earlier discussion, “Randomize” can be regarded as such a manual Instruction. Its execution will in principle bring a different result on each run, but this need not be regarded as a violation of the hypothesis of “Instructional constancy”, i.e., that an Instruction processing certain information always yields the same result.) Even so, it is safe to believe that these human instructions can usually be expressed in natural language, so it remains possible for us to speak them out. Alternatively, we can move away from the computer and define all the verbs in natural language that refer to mental actions as “instructions”. These verbs should number somewhere between tens and hundreds. Given what AI is doing today, we can believe that most of the mental actions referred to by these verbs can be “realized” on a computer. In extreme cases, even if a thinking action cannot be accurately expressed in natural language, a term can be invented or a symbol used to refer to it. No matter how many instructions a person has, they must be finite. In the face of an extremely large amount of data, a number of instructions in the hundreds or thousands is still relatively small. The astonishing fact that computer instructions can be reduced to a single one, while important, should not be overemphasized: it only shows that there may be only one tool for human thinking, not that philosophy and human knowledge can be unified into one. What the synthesis of philosophy requires is the concreteness (including the finiteness) of instructions. Second, specific instructions work in a space-time environment and in a specific roundabout structure, which can lead to other instructions (and even to the “commands” of high-level languages), and so unfold the magnificent course of human history. This roundabout operational structure is also significant; it indicates the extreme importance of memory. To sum up, the first purely software-based thinking theory in human history can be proposed as follows: human thinking is the use of the innate, finite, universal, and constant instructions in the human brain to process information serially, selectively, roundaboutly, and repeatedly.
It can be formulated as follows:

Thinking = computation = (Instruction + information) × speed × time

Here, the method of selecting Instructions and information and compiling them into programs to solve a problem is called an “Algorithm”. We capitalize the first letters of “Instruction” and “Algorithm” to indicate that they refer to a human’s Instructions and Algorithms rather than a computer’s. A particular Algorithm is relative to a specific list of Instructions. When the “Instructional List” (or the “Instruction Set”, or the “repertoire”) is given, Algorithms become the core of intelligence; this is why I chose this word to name the thinking theory the “Algorithmic Thinking Theory”, the “Algorithmic Theory”, or the “Algorithm Framework Theory”, to distinguish it from the traditional approach, which one-sidedly emphasizes the importance and limitations of the information supply while taking Algorithms for granted. In the current professional and public context, the term “algorithm” increasingly refers mainly to subjective and indeterminate algorithms, and is therefore all the more suitable for borrowing by philosophy, the humanities, and the social sciences. Algorithmic Theory and its many inferences, stated in this article as a series of new knowledge and new principles, can be collectively referred to as the “Algorithmic Principles”, which can further be used to frame humanistic and social study as the “Algorithmic Framework”. Correspondingly, the word “Algorithmic(al)” can carry multiple meanings, such as “of Algorithm”, “of Algorithmic Theory”, “of Algorithmic Principles”, “of the Algorithmic world”, “relating to Algorithmic Theory”, and so on. The philosophy based on Algorithmic Theory is therefore called the “Algorithmic Philosophy”. These are the minimum new principles and terms that this paper suggests philosophy, the humanities, and the social sciences generally adopt.

4.5 Methodological Issues

The “Algorithmic Approach” refers to the approach formed on the basis of the Algorithmic Thinking Theory. Since people’s Instruction systems are assumed to be the same, the Instruction systems of researchers and actors must also be the same; coupled with the inevitable mixture of objectivity and subjectivity in knowledge stocks (including both “hard software” and software) and in current computations, it is therefore feasible for researchers to infer the behaviors of actors and the states of the world, though the inference will never be completely successful. The study must therefore rest on a combination of theory and experience. The shift between theoretical and empirical methods should be decided by comparing their efficiencies or marginal benefits, a comparison that can be made quite clear under the Algorithmic framework. The actors themselves must do something similar when they study other actors. In the final analysis, the methods of professional researchers in studying the world and society cannot be essentially different from those of real actors; generally, they differ only technically. Algorithmic Theory can now clearly support computerized simulation as a new method of theoretical inference, and even as the standard platform for formalized research. Social researchers may now consider building giant simulative systems of human society, even through global collaboration. Once the problem of the framing principles of society is addressed, the role of detailed, technical, and applied research will become prominent.
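A hedged, minimal sketch of what such a simulation might look like under these assumptions is given below. The Instruction repertoire, the agents, and all names are invented for illustration: every agent, researcher included, shares the same finite Instruction set (with “Randomize” as the indeterminate exception discussed above); agents differ only in their knowledge stocks and in the information they encounter, so the researcher’s inference succeeds only insofar as the stocks overlap.

```python
# A hedged, illustrative sketch of the simulative method suggested here.
# The Instruction repertoire and the agents are invented for illustration.

import random

INSTRUCTIONS = {
    "RECALL":    lambda stock, info: stock.get(info, None),   # look in memory
    "ASSOCIATE": lambda stock, info: f"{info}-like case",     # indeterminate guess
    "RANDOMIZE": lambda stock, info: random.choice(list(stock) or [info]),
}

class Agent:
    def __init__(self, knowledge: dict):
        self.knowledge = knowledge        # the agent's accumulated stock

    def think(self, instruction: str, information: str):
        """One unit thinking activity: Instruction + information -> result."""
        return INSTRUCTIONS[instruction](self.knowledge, information)

actor = Agent({"rain": "take umbrella"})
researcher = Agent({"rain": "take umbrella"})   # same repertoire, similar stock

# The researcher infers the actor's behavior by running the same Instruction
# on the same information; because stocks never overlap completely, theory
# must still be combined with observation.
print(actor.think("RECALL", "rain"))        # -> take umbrella
print(researcher.think("RECALL", "rain"))   # -> take umbrella (inference matches)
print(actor.think("ASSOCIATE", "snow"))     # -> snow-like case (tentative)
```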
Prospectively, the unified social science and social engineering will have the potential to become much more useful than ever before. The scale and depth of the Algorithmic simulative and experimental methods made possible by specialized investment will be unattainable by ordinary actors. Private investment may then be attracted, and specific research products will ultimately be used by practitioners, which will eventually involve ordinary actors deeply. At the same time, ordinary people also conduct their own various practical experiments, and the advantage of this kind of real experiment is that its scale and content are especially large. This competitive relationship means that real society has multiple meanings for scholars in the humanities and social sciences. On the one hand, the real world is the source of materials for research. On the other hand, the actors concerned have prepared their own ready-made research results about real society, and researchers can refer to these results, try to differentiate their own results from the actors’, and develop the actors’ results at the margin. Thereafter, the researchers’ results are disseminated to society. This is the purpose of researchers, and it is how they obtain the resources for their own survival. In this sense, researchers and scholars are themselves a type of actor. However, no study can include all of these effects at once: for one thing, computing power does not allow it; for another, this explosive re-objectifying process is infinite. Any research activity and its results are a finite, realistic force that targets only specific, finite objects and moves forward historically in interaction with the objects of study. The barrier of time and the economy of thought ensure that every existence and every process are meaningful but limited. Just as the innumerable particles of matter have interacted and combined into different stars and planets in the long-term evolution of the universe, so the economy of thought gives “gravity” to the “particles of thought”, making them interact and combine into different types of knowledge and expand constantly like the universe. Since the methodological issues have already been discussed, explicitly or implicitly, in many respects in the preceding text, and some of the conclusions are obvious, I will not go into further detail here.

V. Conclusions

The purpose of this paper is to propose a concise, software-based theory of thinking to illustrate the concreteness and development of thought. A thinking tool processes finite pieces of information to form a minimum unit of thinking activity. Thinking activities use the breadth of space to store the results as a solid stock of knowledge, and use the length of time to innovate and develop. The resulting knowledge can be reprocessed again and again, leading to a combinatorial explosion and an infinite expansion of knowledge. Prudence requires that the overall picture be considered as much as possible at each point in time, which leads to a subjective turn or “bend” (or distortion) of thinking, and thus historically forms different versions of the body of knowledge. As a manifestation of bounded rationality or concrete reason, the processes of convergence and divergence are intertwined. Computer science and artificial intelligence engineering tell us that such thinking tools are “instructions”.
Depending on the needs of a particular study, we can equate the “Instructions” in the human brain with computer instructions, or with the verbs in natural language that refer to mental activities. In addition, psychological objects such as emotions and instincts can be regarded as “hard software”, similar in nature to ordinary knowledge and attached to the thinking system. The thinking activities that exist and run in the spatio-temporal environment need to be conceived, designed, and sequenced according to their economic effects; the resulting Algorithmic logic connects different logical operations and functional activities, and then expands these mixed logics into the thinking theory. Such a thinking theory implies the materialization or substantialization of thought. The coexistence and interaction of mental and physical entities imply the synthesis of epistemology and ontology: epistemology studies thinking processes, while ontology states the contents of particular thinking outcomes. Metaphysics, as a conjecture about the source or “end” of the world in the context of time and space, is both necessary and inevitably imperfect. Metaphysics requires self-awareness of this imperfection and should consciously minimize, localize, and relativize itself. The ontology of “thought” as a specific kind of entity inevitably produces social issues and social sciences, because the discrete nature of thinking activity ensures that the self-objectification of thought and the mutual objectification of people are fully feasible as substantive activities. Religion, morality, law, organization, power, money, culture, and all other humanistic and social phenomena can now be regarded as results of the materialization of ideas. The various branches of philosophy are thus united. In my opinion, it is necessary and even urgent for philosophy, the humanities, and the social sciences to adopt this thinking theory in order to repair the many fundamental problems that have existed since their beginnings. The split between reason and democracy that occurred in ancient Greece gave rise both to Western philosophy in its specific form and to its great philosophical problems. Parmenides’s “Being” evolved into Ideas, substance, God, the absolute, and science (and communism), but it has remained separated from the rest of knowledge. Traditional philosophy is a paradigm of “great convergence” that holds that the former will eventually engulf the latter. Now, with the help of this theory of thinking, we arrive at a “big bang” model consistent with the facts of knowledge development, in which convergence is only a partial phenomenon and forms of knowledge outside science have their own roles and uses. The high quality of scientific knowledge is only relative. Such a new, unified view of knowledge can be used to bridge these divisions, within which democracy, freedom, common sense, and secular life all have their clear and relatively independent meanings. In modern philosophy, the analytic school eventually contributed to the emergence of computer science, while other schools (especially continental philosophy) have put forward, from different perspectives, arguments that call for Algorithmic Theory to carry the inference through. These two branches converge in Algorithmic Theory.
The various Algorithmic inferences and conclusions are consistent with the facts in principle and in the most important respects, and they explain the largest number of phenomena with the fewest assumptions; Algorithmic Theory should therefore be among the theories most worthy of acceptance. The moment a question is answered is also the moment when it is disenchanted. The sanctification or absolutization of spiritual life is in fact the product of our ignorance of the thinking mechanism; it is a kind of mental distortion. We now know that the underlying mechanism is quite simple, a conclusion that can be seen as the collective result of thousands of years of continuous exploration. However, this is by no means to say that mental activity is merely mechanical, or that human beings are no different from machinery. Since it can lead to such a brilliant civilization, this thinking mechanism cannot be that “simple”. Our ability to understand and decipher it does not entail that it is bland, because we ourselves are “it”. Such a mechanism is not present in other entities or species; even if it exists in animals, its specific functions clearly differ significantly from those of humans. Therefore, in their “view”, the human mind must always have remained mysterious and unattainable.

References

Boden, Margaret A. (1990). Philosophy of Artificial Intelligence, Oxford University Press, pp. 14-18.
Böhm-Bawerk, Eugen von (1923). The Positive Theory of Capital. Translated by William Smart. London: Macmillan, pp. 17-23.
Cherniak, Christopher (1986). Minimal Rationality. Cambridge, MA: MIT Press.
Coase, Ronald H. (1988). The Firm, the Market, and the Law, Chicago and London: The University of Chicago Press.
Copleston, Frederick (1994). A History of Philosophy, Vol. I-VIII, New York: Image Books (Doubleday).
Durkheim, Emile (1982). The Rules of Sociological Method, translated by W. D. Halls, New York: The Free Press.
Eccles, John C. (1989). Evolution of the Brain: Creation of the Self, London and New York: Routledge, pp. 48-70.
Fahed, Mario and David C. Steffens (2021). Apathy: Neurobiology, Assessment and Treatment. Clinical Psychopharmacology and Neuroscience, 19(2), 181-189. doi: 10.9758/cpn.2021.19.2.181.
Fodor, Jerry A. (1975). The Language of Thought, Thomas Y. Crowell Company Inc.
Fodor, Jerry A. (1983). The Modularity of Mind, The MIT Press.
Gilreath, William F. and Phillip A. Laplante (2003). Computer Architecture: A Minimalist Perspective, Kluwer Academic Publishers.
Heidegger, Martin (2010). Being and Time, translated by Joan Stambaugh, Albany: State University of New York Press.
Hume, David (1960). A Treatise of Human Nature, Oxford: Oxford University Press, pp. 469-470.
International Energy Agency (2024). “Electricity 2024: Analysis and forecast to 2026”, p. 8, https://iea.blob.core.windows.net/assets/6b2fd954-2017-408e-bf08-952fdd62118a/Electricity2024-Analysisandforecastto2026.pdf
Jacob, Pierre (2023). Intentionality, The Stanford Encyclopedia of Philosophy (Spring 2023 Edition), Edward N. Zalta & Uri Nodelman (eds.), URL = <https://plato.stanford.edu/archives/spr2023/entries/intentionality/>.
Kearney, Richard (ed.) (1994). Routledge History of Philosophy Volume VIII: Twentieth-Century Continental Philosophy, London and New York: Routledge.
Li, Bin (2006). The Algorithmic Economics: A General Theory on Bounded Rationality (in Chinese), Economic Research Information, No. 6, 3-14.
Li, Bin (2009). Algorithm Framework Theory: A Theory for Unification of Social Sciences (in Chinese). Beijing: China Renmin University Press. English draft translation downloadable at https://binli.academia.edu/
Li, Bin (2012). A Preliminary Exploration of Principles of General Social Science: The Algorithmic Approach (in Chinese). Beijing: China Renmin University Press. English draft translation downloadable at https://binli.academia.edu/
Li, Bin (2019). Foundations of Algorithmic Economics: The Cognitive Revolution and the Grand Synthesis of Economics (in Chinese). Beijing: Economic Daily Press.
Li, Bin (2019). How Could the Cognitive Revolution Happen to Economics? An Introduction to the Algorithm Framework Theory. World Economics Association (WEA) online conference “Going Digital”. https://goingdigital2019.weaconferences.net/papers/how-could-the-cognitive-revolution-happen-to-economics-an-introduction-to-the-algorithm-framework-theory/
Li, Bin (2020). The Birth of a Unified Economics. MPRA paper, downloadable at https://mpra.ub.uni-muenchen.de/110581/
Li, Bin (2020). Why is Algorithmic Theory a Necessary Basis of Economics? MPRA paper, downloadable at https://mpra.ub.uni-muenchen.de/110581/
Li, Bin (2022). How Various “Irrationalities” Proven to be Rational. Academia Letters, Article 4579. https://doi.org/10.20935/AL4579
Li, Bin (2022). Algorithmic Economics as an Economics of Thought. The International Journal of Pluralism and Economics Education, Vol. 13, No. 2, pp. 176-191.
Li, Bin (2022*). The “Algorithmic Logic” as a Synthetic or General Logic. Academia Letters, Article 4936. https://doi.org/10.20935/AL4936
Li, Bin (2022). The Scientific Meanings of Spirituality and Humanity: How can a Human be Modeled “Alive”? In “Human Rights, Religious Freedom and Spirituality”, Pune, India: Bhishma Prakashan, 2023. Downloadable at https://binli.academia.edu/
Li, Bin (2022). Algorithmic Economics. MPRA paper, downloadable at https://mpra.ub.uni-muenchen.de/113563/
Li, Bin (2023). A Unified Psychology as Part of a General Social Science. Qeios. doi:10.32388/GGSOLK.2.
Li, Bin (2024). The Algorithmic Philosophy: A Synthetic and Social Philosophy. Qeios. doi:10.32388/S0AQEE.3.
Li, Bin (2025). The Algorithmic Philosophy: A Synthetic and Social Philosophy, forthcoming.
Mach, Ernst (1893). The Science of Mechanics, translated by Thomas J. McCormack, Chicago: The Open Court Publishing Company, pp. 481-494.
Mises, Ludwig von (1962). Socialism: An Economic and Sociological Analysis, translated by J. Kahane, New Haven: Yale University Press.
Nürnberg, Peter J.; Wiil, Uffe K.; Hicks, David L. (2004). A Grand Unified Theory for Structural Computing, in Hicks, David L. (ed.) “Metainformatics: International Symposium, MIS 2003”, Springer, pp. 1-16.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language Models are Unsupervised Multitask Learners. https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
Ralston, Anthony et al. (eds.) (1983). Encyclopedia of Computer Science and Engineering, Van Nostrand Reinhold Co.
Rettler, Bradley and Andrew M. Bailey (2024). Object, The Stanford Encyclopedia of Philosophy (Summer 2024 Edition), Edward N. Zalta & Uri Nodelman (eds.), URL = <https://plato.stanford.edu/archives/sum2024/entries/object/>.
Ricoeur, Paul (1981). Hermeneutics and the Human Sciences, translated by John B. Thompson, Cambridge University Press.
Rohlf, Michael (2024). Immanuel Kant, The Stanford Encyclopedia of Philosophy (Fall 2024 Edition), Edward N. Zalta & Uri Nodelman (eds.), URL = <https://plato.stanford.edu/archives/fall2024/entries/kant/>.
Russell, Bertrand (1984). A History of Western Philosophy, London: Unwin Paperbacks, pp. 462-463.
Russell, Bertrand (2010). The Philosophy of Logical Atomism, London and New York: Routledge.
Ryle, Gilbert (1963). The Concept of Mind, Penguin, pp. 13-25.
Spade, Paul Vincent, Claude Panaccio, and Jenny Pelletier (2024). William of Ockham, The Stanford Encyclopedia of Philosophy (Fall 2024 Edition), Edward N. Zalta & Uri Nodelman (eds.), URL = <https://plato.stanford.edu/archives/fall2024/entries/ockham/>.
Taylor, C. C. W. (ed.) (1997). Routledge History of Philosophy, vol. 1, London & New York: Routledge, pp. 374-376.
Viale, Riccardo (ed.) (2021). Routledge Handbook of Bounded Rationality, London & New York: Routledge.