2021, Salon Magazine
What if reason and logic are not the source of intelligence, but its product? What if the source of intelligence is more akin to dreaming and play? Recent research into the "neuroscience of spontaneous fluctuations" points in this direction. If true, it would be a paradigm shift in our understanding of human consciousness. It would also mean that just about all artificial intelligence research is heading in the wrong direction.
AI & Society, 2023
The possibility of AI consciousness depends greatly on the correct answer to the mind-body problem: how does our material brain generate subjective consciousness? If a materialistic answer is valid, machine consciousness must be possible, at least in principle, though the actual instantiation of consciousness may still take a very long time. If a non-materialistic answer (either mentalist or dualist) is valid, machine consciousness is much less likely, perhaps impossible, as some mental element may also be required. Some recent advances in neurology (the finding that, despite the separation of the two hemispheres, the brain as a whole still produces only one conscious agent; the rebuttal of the denial of free will previously thought to be established by the Libet experiments) and many results of parapsychology (on medium communications, memories of past lives, and near-death experiences) suggestive of survival after biological death strongly support the non-materialistic position, and hence the much lower likelihood of AI consciousness. Instead of worrying about AI turning conscious, debating machine ethics, and trying to instantiate AI consciousness soon, we should perhaps focus more on making AI less costly and more useful to society.
Encyclopedia of Consciousness, 2009
Consciousness is only marginally relevant to artificial intelligence (AI), because to most researchers in the field other problems seem more pressing. However, there have been proposals for how consciousness would be accounted for in a complete computational theory of the mind, from theorists such as Dennett, Hofstadter, McCarthy, McDermott, Minsky, Perlis, Sloman, and Smith. One can extract from these speculations a sketch of a theoretical synthesis, according to which consciousness is the property a system has by virtue of modeling itself as having sensations and making free decisions. Critics such as Harnad and Searle have not succeeded in demolishing a priori this or any other computational theory, but no such theory can be verified or refuted until and unless AI is successful in finding computational solutions of difficult problems such as vision, language, and locomotion.
2007
This paper presents the authors' suggestive, hypothetical, and sometimes speculative attempts to answer questions about the interplay between consciousness and AI. We explore the theoretical foundations of consciousness in AI systems and provide examples that demonstrate the potential utility of incorporating functional consciousness into cognitive AI systems.
SeCuReDmE, 2024
The emergence of Artificial Intelligence (AI) signifies a paradigm shift in technological evolution, radically transforming our conceptions of intelligence and consciousness. This pivotal phase in AI evolution introduces both exhilarating prospects and intricate challenges, necessitating astute navigation through the complexities inherent in amalgamating advanced AI with diverse facets of human existence.
AI and the Human Brain, 2023
It is quite conceivable that human thinking could be taken over entirely by devices. A future stage is imaginable in which our thinking and doing are completely regulated by machines, or by those behind this development, or by those who programmed the devices. Just as the "stupid" have ended up in prisons and institutions, it could easily be arranged that people whose intelligence falls far below that of AI are completely monitored and controlled. For now this remains speculation, but experiments in that direction have already been set in motion. Yet what, I wonder, counts as "stupid"? Measuring the risk of clash and crash with my algorithm is quite different from an IQ test. My message: most scientific research is superfluous because it is impracticable and unsuitable in a non-technical field.
Humans are active agents in the design of artificial intelligence (AI), and our input into its development is critical. A case is made for recognizing the importance of including non-ordinary functional capacities of human consciousness in the development of synthetic life, in order for the latter to capture a wider range of the spectrum of neurobiological capabilities. These capacities can be revealed by studying self-cultivation practices designed by humans since prehistoric times for developing non-ordinary functionalities of consciousness. A neurophenomenological praxis is proposed as a model for self-cultivation by an agent in an entropic world. It is proposed that this approach will promote a more complete self-understanding in humans and enable a more thoroughly mutually beneficial relationship between life in vivo and life in silico.

1 Introduction

Insofar as humans remain agents in the design of AI, our input to its design matters greatly. Human self-consciousness and self-knowledge are cornerstone elements of instrumental cognition, the signature selective feature of bipedal prehensile hominins [1], which has given rise to creative objective thought, the scientific method, and the design of complex machine intelligence. They allow for self-reflection, problem-solving, and knowledge-seeking; they are the receptacle for the productive rewards of instrumental cognition and the inputs for further inspiration. It is possible to actively seek new forms of knowledge, which can in turn amplify and modify the procedure of instrumental cognition, and hence what subsequently becomes input for AI. Actively seeking self-knowledge may then be the only human-centered lever for influencing or modifying the development of autonomous artificial agents. First-person phenomenal experience is the model from which we work when we operationalize our instrumental cognition and instantiate the production of synthetic artefacts. Advances in the computational processes of AI may be refined through new developments in engineering and discoveries of mechanisms in biological, in natura, systems. However, this angle remains silent regarding reconsiderations of the pivotal fact that human intelligence and consciousness are the starting point for AI development. The most immediately available lever for human interjection into the biomimetic process is our view of our own human consciousness and cognition. If this can be manipulated or enhanced, then the starting point for biomimesis is altered, driving self-replication in new directions. The present paper makes a case for utilizing non-ordinary states of consciousness in humans, such as those experienced in deep meditation, 'flow states', trance, and high-entropy psychedelic states [2], in the design and development of AI. The overwhelming majority of AI efforts concentrate on representing the rational intelligence of humans in AI. Even current conceptions of 'superintelligence' extrapolate the capabilities of AI from an unwittingly logico-rational interpretation of human cognition [3, 4]. This is obviously sufficient for logico-mathematical calculations, which adequately represent the predictive algorithmic functionality of the neural architecture found in the human neocortex [5, 6]. Some researchers in the field of biomimetics acknowledge that it is crucial not only to mimic but to understand nature and life, and then to use this understanding as a basis for designing biomimetic technology.
Our fashioning of AI continues to be modeled on the linear, mechanistic, rational view of life and consciousness, the outputs of which are accordingly limited in scope and application. Non-ordinary states of consciousness present a novel and hitherto unexamined opportunity for new developments in AI.
scip Labs, 2018
Cognition comprises the mental processes of the human brain. Artificial Intelligence tries to mimic these processes to solve complex problems and handle massive amounts of data. Serial vs. parallel and controlled vs. automated processes form the basis of the cognitive sciences. Logic, probability theory, and heuristics represent the pillars of theory formation for the human-machine comparison. Artificial neural networks (which mimic the human brain) are popular because they have achieved great advances in specific fields. Systemic problem solving is still visionary, but a number of research projects are promising. Reciprocal, positive influences across disciplines might lead to rapid transformation. Polemics, as well as idealism, are out of place. Analysis must adhere to scientific and ethical standards, with a long-term orientation toward the public good. After the physiological review, the next chapter will focus on consciousness (as a part of what gives life to physiology).
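To make the serial/controlled versus parallel/automated contrast above concrete, here is a minimal, hypothetical Python sketch (not from the source; the names serial_rule_based and TinyNet are illustrative only): an explicit rule evaluated step by step next to a tiny untrained neural network whose units transform the input in parallel.

# Illustrative sketch only: a serial, rule-based check next to the parallel,
# automated style of a small artificial neural network. The network is
# untrained and exists purely to illustrate the two processing styles.
import numpy as np

rng = np.random.default_rng(0)

def serial_rule_based(x):
    # Serial, controlled processing: explicit rules applied one step at a time.
    if x[0] > 0.5 and x[1] < 0.5:
        return 1
    return 0

class TinyNet:
    # Parallel, automated processing: all units transform the input at once.
    def __init__(self, n_in=2, n_hidden=4):
        self.w1 = rng.normal(size=(n_in, n_hidden))   # input-to-hidden weights
        self.w2 = rng.normal(size=(n_hidden, 1))      # hidden-to-output weights

    def forward(self, x):
        h = np.tanh(x @ self.w1)                      # hidden layer, elementwise nonlinearity
        out = 1.0 / (1.0 + np.exp(-(h @ self.w2)))    # sigmoid output in (0, 1)
        return out.item()

x = np.array([0.8, 0.2])
print(serial_rule_based(x))   # 1, from an explicit hand-written rule
print(TinyNet().forward(x))   # an untrained score, from distributed weights

The contrast is only a toy: the rule is transparent and sequential, while the network's answer emerges from many weighted connections evaluated together, which is the property the abstract points to when it calls such networks brain-mimicking.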
1983
It seems to be a common belief that in the future, if not in the present, digital computers will be capable of cognitive states, experiences, and consciousness equal in every respect to those of human beings. Not everyone, however, is so optimistic. One such skeptic is John Searle, and his "Minds, Brains, and Programs" represents a direct confrontation between the skeptic and the proponents of machine intelligence.
Many of the leading theories in the philosophy of mind either implicitly or explicitly admit that the creation of artificial consciousness is possible. This constitutes an important explanation of the existing enthusiasm at the interstice of consciousness studies and artificial intelligence research, and of the current optimism surrounding the emergence of artificial consciousness. In this paper, I seek to articulate and evaluate some of these leading theories in order to show that this optimism rests on some philosophically unfounded assumptions and that the emergence of artificial consciousness is implausible and worthy of skepticism.
LinkedIn, 2024
As artificial intelligence (AI) becomes increasingly integrated into various sectors, the debate surrounding its potential to achieve consciousness grows more pressing. This paper explores the distinction between computational intelligence and conscious intelligence, drawing on insights from key thought leaders such as Sir Roger Penrose, Federico Faggin, and Bernardo Kastrup. The argument presented aligns with Penrose’s assertion that while AI can excel in algorithmic tasks, it lacks the intrinsic awareness that characterizes human consciousness. The work emphasizes the risk of anthropomorphizing AI systems, warning against the societal implications of attributing consciousness to machines that operate purely through computation. Additionally, the paper discusses the advances in AI-driven biological models, such as protein language models, which push the boundaries of technology without crossing into the realm of conscious experience. Through a rigorous, evidence-based approach, this paper challenges the prevailing AI hype and advocates for a careful distinction between computational prowess and genuine awareness, to safeguard both technological innovation and societal welfare.