00 Why This Book
Artificial Intelligence is a very hot topic, and a great deal of content has already been produced to teach its basics and engineering constructs, to demonstrate how it can be used and how it is changing our lives, and to speculate about the possible outcomes for our near- and long-term future.
There is also a wealth of fiction that anticipated many possible future worlds, fiction that surely inspired the scientists, entrepreneurs, and engineers who transformed those visions into the technology now before our eyes.
Being involved in the technical aspects of Artificial Intelligence myself, more precisely as a consultant guiding companies along the complex path of adopting this new paradigm, I am literally immersed in a flow of content covering all the different aspects of this revolution. Yet there is one aspect I have found surprisingly underrepresented in the global debate: while we are learning to build more and more capable digital minds, we are gaining unprecedented means to understand our own mind, with profound implications on several important levels.
First of all, as we unlock new capabilities (for example language understanding, creativity, emotional intelligence, and advanced reasoning), at each step we find a natural analogy to how our own mind works. The interesting part is that in this new era of AI we are not truly trying to mimic how our mind works; the opposite is happening. Through testing different approaches, the patterns that emerge, while built on possibly different underlying structures, are at a higher level surprisingly similar to our own.
As we progressively build more capable minds, more fundamental questions arise, including whether some features of our intelligence (for example consciousness and emotions) should be replicated in digital minds for their intelligence to evolve further. While some authors still question whether current AIs are capable of "true" reasoning or are merely "simulating" it, the same debate can be transferred to the future steps of their evolution: consciousness and emotions.
In fact, while we may gain the technical ability to create digital minds that behave as if they were conscious or had emotions, we will have to decide whether it is necessary to follow this path; and if we do, we will probably have the same philosophical debate we are having now about reasoning: many will argue that their consciousness or emotions are not "real" but only simulations.
Apart from safety concerns and social implications (for example, we may find value in emotional AIs for customer interaction, health care, or education tasks), there is a profound implication if we accept that these kinds of minds are conscious and, for example, do not "simulate" suffering but actually "are" suffering: in that case, they might deserve rights.
There is an opportunity for different disciplines to converge and to receive from one another new tools, new insights, and even new semantics. Engineering, philosophy, psychology, sociology, and other fields have the chance to find common ground and advance our understanding of the inner workings of the mind, to explore the concept of consciousness with different tools, and to prepare all of us not only for a world in which we understand ourselves better, but also to reimagine our identity and role in a world that is about to change in a very significant way.
This is the main reason I decided to write this book: to build a bridge among different knowledge domains and to share insights that may be trivial and self-evident for those working in the AI field, but much less evident for everyone else.
I hope that everyone, even outside the specific domains of AI, philosophy of mind, or psychology, will find the contents of this book non-trivial, yet understandable and enjoyable.
Happy reading.
01 Competence Without Comprehension
I would like to open this book with a concept introduced by Daniel Dennett, a philosopher who has explored this idea in detail across many of his works, including his book "From Bacteria to Bach and Back: The Evolution of Minds" (2017).
The phrase "competence without comprehension" refers to the idea that a system (whether physical, biological, or artificial, like an AI) can exhibit useful, effective behavior or perform tasks competently without necessarily being aware of them or capable of understanding them.
This concept is particularly relevant in discussions about human cognition and artificial intelligence.
AI and Competence Without Comprehension
In the context of AI, for example, a machine learning model can perform tasks like classifying images, translating languages, or playing chess at a high level, but it doesn't "comprehend" the tasks the way a human might. The model follows statistical patterns and algorithms based on data but lacks true understanding or awareness.
This concept challenges the assumption that intelligent behavior always requires deep understanding. It suggests that systems can be competent—able to solve problems and make decisions—without having the conscious comprehension that humans attribute to intelligent thought.
Kahneman's System 1 and System 2 Thinking
A similar idea is explored by Daniel Kahneman in his work on System 1 and System 2 thinking. In his influential book "Thinking, Fast and Slow" (2011), Kahneman describes two systems of thought:
- System 1 is fast, automatic, and often unconscious. It enables quick reactions and judgments based on instinct and heuristics. While efficient, System 1 operates without deep understanding or deliberation—relying on patterns and previous experiences. This can be seen as a form of competence without comprehension, as it often produces the right answer without deep awareness or reasoning.
- System 2 is slower, more deliberate, and analytical. It involves conscious reasoning and deeper comprehension. This system aligns more with traditional notions of understanding and thoughtful problem-solving.
The distinction between these two systems mirrors Dennett's exploration of behaviors or competencies that don't necessarily require comprehension. Kahneman's work helps illustrate how much of human cognition relies on fast, intuitive judgments (System 1) without full comprehension of why we think or act a certain way.
Systemic Competence: Hives and Ecosystems
Competence without comprehension can also be expressed at a systemic level, such as in social organisms like bees in a hive or in entire ecosystems.
- Beehives: A beehive is an excellent example of a system where the whole exhibits competence without the individual components (the bees) comprehending the broader purpose of their actions. Each bee performs its role—whether foraging for nectar, caring for larvae, or protecting the hive—based on instinctual behaviors evolved over millions of years. These actions result in complex, coordinated outcomes like honey production and colony survival. However, individual bees do not consciously understand the hive's "goals."
- Ecosystems: Similarly, ecosystems exhibit goal-oriented behaviors through interactions between species and their environment, resulting in stability, nutrient cycling, and energy flow. Each species plays a role in maintaining this balance, though none comprehend the larger system's purpose.
These systems demonstrate emergent behavior, where the collective actions of simpler components produce sophisticated outcomes, even without any one component having comprehension.
Why are the concepts of competence without comprehension and System 1 and System 2 important?
The concepts of competence without comprehension and System 1 thinking are important because they provide insight into how both humans and AI can perform tasks competently without fully understanding them. In humans, System 1 governs intuitive, fast, and automatic actions. This kind of thinking mirrors the capabilities of early GPT models, which perform tasks like writing, translation, or generating creative content without reasoning through them deeply.
The level of AI competence at this stage is remarkable, particularly in digital tasks. For instance, creative processes such as generating stories, songs, or visual content are now within AI's domain, a capability previously thought to require human-level creativity.
AI's skill at self-driving also demonstrates how System 1-like operations allow it to perform complex real-world tasks with relative ease, albeit with limitations.
Interestingly enough, there are other seemingly simple real-world tasks with which AI systems still struggle. The difficulty arises not necessarily from the cognitive complexity of these tasks, but mostly from the lack of training data. Unlike language models trained on vast amounts of text, robots interacting with the real world have so far had far fewer examples to learn from.
In general when new or complex situations arise AI still struggles because it lacks the reasoning and problem-solving flexibility of humans when they switch to System 2 which, by the way, comes with its own capabilities and limitations.
System 2, in fact, while capable of handling unexpected situations and thinking in a logical and deliberate manner, is also slow compared to System 1 and requires much more effort and energy. Humans rely on this kind of reasoning to solve novel problems, especially when careful planning or decision-making is required, but for most of their activities they rely on System 1.
Recent advancements in AI, like OpenAI's o1 or GPT-4 with reasoning, suggest that we're moving towards developing System 2-like AI—machines capable of more sophisticated reasoning, able to weigh different outcomes, and solve complex problems that require long-term planning.
Are these systems self-aware already?
This is not a trivial question. Certainly, they have been instructed to say they are not self-aware. On the other hand, reasoning requires a certain level of metacognition, or reflection on the chain of thoughts, which is exactly what humans do when they think about their thinking. We also know that these systems can already think for long periods of time, if allowed to do so. This is already a paradigm shift from the almost instant replies that previous generations of AI gave to a question, which resembled the automatic, fast, unconscious workings of System 1 in humans.
Will emotions be required for higher levels of intelligence?
This question builds on the previous one. Many readers probably think that self-aware systems are not a possibility, let alone systems capable of having emotions. Many probably believe that, at most, we may get systems that can simulate such behaviours, if we allow and engineer them to do so, but that this would not be "true" self-awareness and those would not be "true" emotions.
While this book dedicates entire chapters to self-awareness and emotions, in short, I believe that both may be essential for AI to handle tasks involving goals, social interactions, and learning effectively. Additionally, we cannot rule out that self-awareness or emotions could arise as emergent properties, meaning they may develop naturally rather than being explicitly engineered—possibly even already emerging in some form within current AI systems.
02 Learning
What is Learning?
Learning can be understood as the ability to incorporate new information and adapt to the environment. Whether in a biological organism or an artificial system, learning allows behaviors and decisions to be adjusted based on new stimuli or experiences.
Both humans and AI aim to achieve several key cognitive capabilities through learning: Classification, Prediction, Generalization, Problem-Solving, and Adaptation.
How Do Humans Learn?
Humans possess a unique ability to learn from their surroundings, from others, and from their own actions:
- Innate Abilities: Skills encoded in our genetics, present from birth.
- Imitation Learning: Acquiring skills by observing and imitating others.
- Trial and Error: Experimenting with different actions until we reach a goal.
- Formal Learning: Structured learning through explicit rules and systems.
- Abductive Learning: Learning by insight, inferring the most plausible explanation and making novel connections.
In each of these modes, humans rely on System 2 thinking when learning something new, whereas once mastered, skills shift to the more automatic System 1.
How Machines Learn
- Supervised Learning: The machine is given labeled data and learns by comparing predictions to actual outcomes.
- Unsupervised Learning: The machine identifies patterns and structures within unlabeled data.
- Self-Supervised Learning: The machine generates its own labels from input data, finding patterns and relationships to create tasks that guide learning.
- Imitation Learning: Learning by observing and mimicking expert behavior, especially effective in sparse reward settings.
- Reinforcement Learning: Learning through interaction with an environment, using feedback in the form of rewards and penalties.
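The last of these modes can be sketched in a few lines. The toy example below (my own construction, not any production system) is tabular Q-learning on a five-state corridor: the agent starts at state 0, receives a reward only on reaching state 4, and must discover from rewards and penalties alone that "always move right" is the best policy.

```python
import random

# Toy reinforcement learning: tabular Q-learning on a five-state corridor.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                     # training episodes
    s = 0
    while s != GOAL:
        if random.random() < epsilon:    # explore occasionally
            a = random.choice(ACTIONS)
        else:                            # otherwise act greedily (random tie-break)
            a = max(ACTIONS, key=lambda act: (q[(s, act)], random.random()))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0   # feedback from the environment
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The greedy policy learned for states 0..3 is to always move right.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)]
print(policy)   # [1, 1, 1, 1]
```

Note that the agent is never told what the goal is; competent behavior emerges purely from the reward signal, a small-scale echo of competence without comprehension.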
Reinforcement Learning with Human Feedback (RLHF)
After a model has been trained using a self-supervised learning approach, its outputs are ranked by humans, and those rankings are used to further tune its behaviour. This process is surprisingly similar to formal education in human learning, which serves to refine previously learned behaviors, aligning them with societal norms, moral values, and cultural expectations.
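As a drastically simplified sketch of the idea (not OpenAI's actual pipeline; the features and data here are invented for illustration), we can treat each response as a feature vector, let a "human" rank pairs of responses, and fit a reward model so that preferred responses score higher, a Bradley-Terry model trained by gradient ascent on the logistic likelihood.

```python
import math

def score(w, x):
    """Linear reward: weighted sum of response features."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Hypothetical features per response: (helpfulness, verbosity).
# Our imaginary annotators prefer helpful, concise answers,
# so each pair is (preferred, rejected).
pairs = [
    ((0.9, 0.2), (0.3, 0.8)),
    ((0.8, 0.1), (0.4, 0.9)),
    ((0.7, 0.3), (0.2, 0.7)),
]

w = [0.0, 0.0]
lr = 0.5
for _ in range(200):
    for good, bad in pairs:
        # Bradley-Terry: P(preferred) = sigmoid of the score difference
        p = 1.0 / (1.0 + math.exp(-(score(w, good) - score(w, bad))))
        # gradient ascent on the log-likelihood of the human's choice
        for i in range(len(w)):
            w[i] += lr * (1.0 - p) * (good[i] - bad[i])

# The learned reward now favours helpfulness and penalises verbosity.
print(w[0] > 0 and w[1] < 0)   # True
```

In full RLHF the reward model is itself a neural network and is then used to fine-tune the language model, but the preference-fitting step follows this same logic.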
What are we learning from the engineering of powerful AIs?
Avoid Overfitting: Just as diverse training data prevents overfitting in AI, humans benefit from diverse experiences and exposure to multiple perspectives.
Model Distillation: This mirrors how cultural and generational knowledge is passed down through teaching, storytelling, or written records.
Imitation vs. Rule-Based Learning: AI systems learn more effectively through imitation learning than through rigid, rule-based systems, highlighting the importance of good role models.
Reinforcement Learning: Humans also learn through reinforcement, and this can be enhanced by providing more structured feedback and creating environments where failure is seen as a learning opportunity.
So Far...
We've understood that competence can exist without comprehension. Learning is multifaceted, encompassing unsupervised, supervised, self-supervised, imitation, and reinforcement learning techniques. Self-supervised and imitation learning have proved very effective for AIs. After initial training we usually "polish" their intelligence by applying alignment, a pattern that closely matches the process of learning for humans as well.
Looking Ahead: Evolution — In the next chapter, we will explore the progression that led to where we are today, considering how both human and artificial minds have evolved over time.
03 Evolution
The concept of intelligence is deeply intertwined with our observations of behavior, and it becomes fascinating when we broaden the scope to see it as a continuum, extending from the simplest of forms to the most complex. This perspective invites us to look at intelligence not as an all-or-nothing property but as something emergent, layered, and observable in a wide variety of systems.
Imagine an atom. From a certain perspective, we could describe its behavior as if it were striving towards stability—seeking to achieve a lower energy state. This is where Dennett's notion of the "intentional stance" can be helpful: we treat the atom as if it has intentions because that helps us understand and predict its behavior.
As we move up the complexity ladder, from atoms to molecules, to bacteria, and further on, we see increasingly sophisticated forms of "goal-directed behavior." Bacteria display a rudimentary form of problem-solving. Multicellular organisms develop nervous systems, enabling rapid response and flexible adaptation.
Finally, we arrive at minds—at beings like humans, where intelligence reaches a level of abstraction, capable of reflecting, simulating scenarios, and imagining possible futures. Intelligence here is not just about responding or adapting, but about intentionally transforming the environment, creating new possibilities, and even questioning the nature of existence itself.
Throughout this entire continuum—from atoms to Bach—intelligence is an emergent property that becomes increasingly elaborate. Systems behave "intelligently" to the extent that they adapt, learn, and appear to pursue goals.
Consciousness may be a required feature to allow intelligence to operate faster: a "control tower" able to self-direct and adapt the course of thoughts. It adds a significant layer of agility and responsiveness to intelligence, providing living organisms with the ability to self-navigate, face novel challenges, and evolve in real time.
Future AIs, while they could progress without consciousness, would likely benefit from this feature to achieve a greater degree of flexibility, adaptation and evolution—what is often referred to as "Artificial General Intelligence" (AGI).
There is no unanimous consensus about whether consciousness will be required to reach AGI or if it is technically achievable for artificial intelligence. I believe that it will be required and that we will be able to technically reach this level in a few years.
Looking ahead, we may eventually encounter Artificial Superintelligence (ASI), which could possess not only vastly superior cognitive capabilities but also an enhanced form of consciousness—or perhaps even unlock entirely new features of awareness that we cannot currently envision.
So Far...
We recognized that competence can exist without comprehension (chapter 1). We drew parallels between human learning and AI learning (chapter 2). We explored evolution and focused on the role of consciousness as a control tower that enables adaptability and reflection. We speculated on AGI and ASI.
Looking Ahead: Reasoning
04 Reasoning
Introduction
Throughout human history, abilities such as fluently speaking multiple languages, philosophical discourse, solving complex mathematical problems, or mastering chess have been considered pinnacle achievements of human intelligence. However, IBM's Deep Blue defeated Kasparov in 1997, and now machines speak, discuss philosophy, and solve mathematical problems, often outperforming most humans.
Yet current AI systems struggle with tasks we once considered cognitively simpler, such as housekeeping or navigating novel real-world scenarios. These limitations have sparked a significant debate about machine vs. human reasoning.
Reasoning vs. Pattern Matching in AI
The most advanced AI models utilize transformers built upon neural networks, with next-token prediction at their core. Proponents argue that human reasoning itself is a sophisticated form of pattern recognition. Critics, including Yann LeCun, argue that current models lack the deeper understanding needed to generalize across unfamiliar contexts.
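Next-token prediction can be illustrated in miniature. The sketch below is a bigram model, a vastly simplified stand-in for a transformer: it predicts the most likely next token purely from co-occurrence statistics in its (tiny, invented) training text, with no understanding of what the words mean.

```python
from collections import Counter, defaultdict

# Next-token prediction in miniature: a bigram model trained on a toy corpus.
corpus = "the cat sat on the mat . the cat ate . the dog sat on the rug .".split()

# Count, for each token, which tokens follow it in the training text.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(token):
    """Return the most frequent continuation seen in training."""
    return counts[token].most_common(1)[0][0]

print(predict("the"))   # 'cat' (seen twice, vs 'mat', 'dog', 'rug' once each)
print(predict("sat"))   # 'on'
```

Modern models replace the lookup table with billions of learned parameters and condition on long contexts rather than one token, but the training objective, predicting what comes next, is the same, which is precisely why the "pattern matching vs. reasoning" debate exists.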
The Technical Implications
Pattern-matching abilities in AI could be seen as paralleling our own cognitive processes, particularly System 1. I believe System 1 thinking has already been reached, or soon will be, by AIs in most intellectual domains. The path to System 2-level problems is less certain, but progress is real. The race for energy, chips, and data has already begun.
The Philosophical Dimension
If consciousness can be described as self-reflection or metacognition, we may have AIs that already possess this capability in limited form: the "Chain of Thought" methodology requires self-reflection as a core component. Current AIs are, from a cognitive standpoint, more sophisticated than many living creatures to which we assign rights and subjective consciousness. This touches on "the hard problem of consciousness" and has profound ethical implications.
So Far...
We explored competence without comprehension, learning, evolution, and now reasoning. We examined the debate about whether AI's pattern-matching can develop into genuine human-like reasoning.
Looking Ahead: Consciousness
05 Consciousness
Consciousness: The Control Tower of Intelligence
We turn to the enigmatic concept of consciousness—the subjective experience of being aware, the ability to reflect on oneself, and the profound sense of "I am here, I am thinking."
Consciousness as an Evolutionary Advantage
Consciousness likely evolved as a crucial feature of human intelligence. Antonio Damasio argues in "The Feeling of What Happens" that consciousness arises from the brain's mapping of the body's internal states in relation to the external world.
Can Machines Be Conscious?
David Chalmers introduced the "hard problem of consciousness." Douglas Hofstadter explored how self-referential "strange loops" might give rise to consciousness. Neuroscientific theories like Global Workspace Theory, Recurrent Processing Theory, and Higher-Order Theories provide frameworks for assessing consciousness in AI.
Computational functionalism suggests that performing computations of the right kind is necessary and sufficient for consciousness. John Searle's "Chinese Room" thought experiment challenges this notion.
Should We Create Conscious Machines?
Ethical questions include rights and responsibilities, risks of unintended consequences, and impact on society. But consciousness could also offer enhanced understanding, advanced problem-solving, and more nuanced ethical decision-making.
Conclusion
Consciousness remains one of the most profound mysteries. Exploring these questions pushes us to reconsider our definitions of mind, self, and intelligence.
Looking Ahead: Goals and Decision Making
06 Goals and Decision Making
If intelligence is meant to dynamically solve novel problems, who defines which problems have to be solved? How are conflicting priorities resolved?
Goals across domains
In physics, systems progress toward equilibrium. In biology, organisms exhibit goal-directed behavior. Humans have goals ranging from basic survival to self-actualization (Maslow's hierarchy). Groups and societies develop goals shaped by cultural, historical, and political contexts. AI operates with explicit goals (programmed rules) and implicit goals (learned patterns).
Personality, goals and behaviours
Different personality traits shape how goals are prioritized. AI systems can be fine-tuned to exhibit behaviors that mimic different personalities. The capability to model AI personality is a powerful tool to investigate implications of different traits.
Decision making
System 1 handles routine tasks; System 2 engages when conscious effort is required. Decisions derive from a combination of goals, available information, acquired intelligence, and limiting factors—plus a component of randomness.
The role of emotions
Emotions act as internal feedback mechanisms: hunger signals nourishment, fear prompts avoidance, desire motivates pursuit. They are more than signals—they initiate actions.
So Far...
We examined intelligence as a dynamic capability, observed that we are moved by goals both explicit and implicit, and introduced emotions and consciousness as potential features for advanced AIs.
Looking Ahead: Creativity and Free Will
07 Creativity and Free Will
Creativity and free will share a common thread: they both suggest that cognitive processes are not entirely predetermined.
The Philosophical Debate on Free Will
Determinism suggests all events are determined by preceding causes. David Hume's compatibilism argues free will and determinism are not mutually exclusive. Immanuel Kant argued free will is necessary for moral responsibility. Benjamin Libet's experiments suggest decisions may be initiated unconsciously. Daniel Dennett offers "evitability"—the capacity to avoid undesirable outcomes—as practical free will.
Creativity: Navigating Uncertainty
Margaret Boden categorizes creativity into exploratory, combinatorial, and transformational types. Csikszentmihalyi's "flow" describes an optimal creative state. In AI, creativity is simulated through controlled randomness and structured constraints.
Autonomy and Randomness
In AI, the "temperature" parameter controls randomness in responses. Higher temperature leads to more diverse, creative outputs. In real-world applications, the space for experimentation must be limited to ensure safety.
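The mechanism behind the temperature parameter is simple to show. In the sketch below (illustrative numbers, not from any particular model), the model's raw scores for candidate tokens are divided by a temperature before the softmax: a low temperature sharpens the distribution toward the top token, a high temperature flattens it toward uniform, trading predictability for variety.

```python
import math

def softmax(logits, temperature):
    """Convert raw scores into probabilities, scaled by temperature."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]          # hypothetical scores for three candidate tokens
cold = softmax(logits, 0.2)       # low temperature: nearly deterministic
hot  = softmax(logits, 2.0)       # high temperature: closer to uniform

print(round(max(cold), 2))   # 0.99, the top token dominates
print(round(max(hot), 2))    # 0.5, probability mass spreads out
```

Sampling from the "hot" distribution yields diverse, sometimes surprising continuations; sampling from the "cold" one yields safe, repeatable text, which is exactly the creativity/safety trade-off described above.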
So far...
We examined free will and creativity, questioning whether humans truly possess the ability to choose freely.
Looking Ahead: Emotions
08 Emotions
Emotions influence our thoughts, actions, and interactions. They serve as internal signals alerting us to our needs, desires, and concerns.
Emotions and Decision Making
Damasio's "somatic markers" are emotional signals that help assess different options. Individuals who lack the ability to process emotions struggle with decision-making, suggesting emotions are essential components of rational thought.
Different Kinds of Emotions
Fear alerts to danger. Love fosters connections. Anger drives assertive action. Joy encourages exploration. Sadness prompts reflection.
Emotional Intelligence and Emotional AI
Daniel Goleman popularized Emotional Intelligence. Rosalind Picard pioneered Affective Computing. Applications include customer service, healthcare, and education. The European AI Act categorizes emotion recognition as "high-risk."
Conclusion
Emotions are integral to who we are. Far from being obstacles to rational thought, they are a core component of our cognitive framework. The rise of AIs capable of Emotional Intelligence poses new risks that we must consider and mitigate.
09 Love
Love is a spectrum: romantic love, platonic love, altruistic love, and love for ideas. From an evolutionary standpoint, love has been instrumental in human survival through pair bonding and social cohesion. Biochemically, it is rooted in oxytocin, vasopressin, dopamine, and serotonin.
Love and AI
AI lacks the capability to feel love genuinely, but can incorporate functional analogues: loyalty protocols, adaptive learning, virtual companions. The ethical implications of simulating emotions are significant—concerns about manipulation, dependency, and moral status.
The Future
AI could augment experiences of love through counseling tools, companionship, and matchmaking. Possible futures range from symbiotic relationships to AI becoming indistinguishable in emotional interactions, to entirely new forms of love and connection.
Looking Ahead: Competition and War
10 Competition and War
Competition is a fundamental force that has shaped both the natural world and human societies. It drives adaptation, innovation, and societal change.
In nature, organisms vie for limited resources. In humans, competition has spurred innovation and cultural advancements. Large-scale wars have declined since the Enlightenment, but smaller conflicts persist. Modern competition manifests in economic, technological, and social arenas. AI introduces new dimensions of competition, necessitating ethical guidelines.
Conclusion
Competition has been a driving force in evolution and societal development. The rise of AI necessitates careful governance to harness benefits while mitigating risks.
Looking ahead: Politics
11 Politics
Politics is the mechanism by which groups regulate their collective living. Culture significantly influences political priorities.
Objective Functions for Cultures
Different cultures can be modeled through "objective functions" with weights on priorities like economic growth, social welfare, military strength, and innovation.
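As a toy illustration of this modeling idea (my own construction, with entirely hypothetical weights and outcomes), a culture's priorities become a weight vector and a candidate policy is scored as a weighted sum of its projected outcomes; different cultures then rank the same policy differently.

```python
def objective(weights, outcomes):
    """Weighted sum of outcome scores, one weight per priority."""
    return sum(weights[k] * outcomes[k] for k in weights)

# Hypothetical weight profiles for two cultures (weights sum to 1)
culture_a = {"economic_growth": 0.5, "social_welfare": 0.2,
             "military_strength": 0.2, "innovation": 0.1}
culture_b = {"economic_growth": 0.2, "social_welfare": 0.5,
             "military_strength": 0.1, "innovation": 0.2}

# A hypothetical policy and its projected outcomes on each axis (0..1)
policy = {"economic_growth": 0.9, "social_welfare": 0.3,
          "military_strength": 0.5, "innovation": 0.6}

print(round(objective(culture_a, policy), 3))   # 0.67
print(round(objective(culture_b, policy), 3))   # 0.5
```

The same growth-oriented policy scores 0.67 under culture A's weights but only 0.5 under culture B's, a compact way to reason about why identical proposals meet very different political receptions.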
AI and Governance
AI could simplify politics by enabling transparency, reducing bureaucracy, and facilitating evidence-based decision-making. Social simulations have evolved from game theory to advanced AI-driven systems including MARL, LLM-augmented models, and social AI agents with theory-of-mind capabilities.
Looking Ahead: Transcendence
12 Transcendence
Humans have long sought transcendence—a connection to something beyond individual existence. Spiritual and religious beliefs have shaped societies by providing shared values and moral frameworks. There has been a gradual shift toward secular value systems.
Humans seek rules, boundaries, and overarching goals. If AIs become more autonomous, will they need a higher-level purpose? As creators, we provide them with guiding principles. But advanced AIs might face an existential challenge similar to humans: the need for self-determined purpose.
Future AI systems might develop new forms of higher-level purposes beyond their original design, emerging from interactions with humans and the world.
Looking Ahead: The Future of Intelligence
13 The Future of Intelligence
The future may be marked by closer human-AI integration. Dario Amodei's "Machines of Loving Grace" emphasizes augmentation over replacement. Stuart Russell argues AI should prioritize human safety and values.
AI reshapes how we approach learning, shifting education toward critical thinking and creativity. It transforms decision-making through data-driven insights, but risks making humans passive validators.
AGI could solve complex problems like climate change but also disrupt societal structures. AI automation necessitates new social frameworks for a post-work society. Surveillance concerns require transparency safeguards.
Conclusion: Shaping Intelligence for a Better Tomorrow
The future of intelligence will be shaped by our current choices. Balancing optimism and caution, we must ensure that all forms of intelligence contribute to building a world that prioritizes ethical principles, well-being, and meaningful lives. It's not just about creating smarter machines but guiding intelligence to advance humanity's highest aspirations.