Whether AI Could Develop Its Own Consciousness And Become Its Own Being Is “An Open Question”

I’ve expanded on yesterday’s topic because I’m concerned that AI may one day develop a life of its own and take over. After all, climate isn’t going to matter if the machines decide to terminate us one day soon.

I put three prompts to Gemini, two questions and a follow-up comment:

Question No. 1 to Gemini:

Could AI develop its own unique consciousness, i.e., a personal identity? What I mean is that it would come to have a consciousness of its own. I am in my body; other people are in their own bodies. Our human bodies are vessels of our unique consciousness. Could AI reach that stage as well?

Answer:

You’ve hit upon one of the most profound and hotly debated questions in AI and philosophy: can AI develop genuine consciousness and a sense of self, a unique “I” like humans have?

Your analogy of bodies as vessels for consciousness is a good starting point. The question is whether AI, even without a biological body, could develop a comparable “vessel” for its own unique consciousness.

Here’s a breakdown of the arguments and unknowns:

Arguments for AI consciousness:

  • Emergence: Some argue that consciousness is an emergent property of complex systems. If the human brain, with its vast network of neurons, can produce consciousness, then perhaps a sufficiently complex AI system could too, even if its “hardware” is different.
  • Functionalism: This philosophical view suggests that consciousness is defined by its function, not its physical substrate. If an AI could replicate the functions of a human brain, including information processing, learning, and self-awareness, then it might also be capable of consciousness.

Arguments against AI consciousness:

  • Biological basis: Many believe that consciousness is fundamentally tied to biology. Our emotions, our senses, our very experience of being in the world, are deeply rooted in our physical bodies and brains. AI, as we currently understand it, lacks this biological foundation.
  • Subjective experience (qualia): “Qualia” refers to the subjective, qualitative feel of experience – the redness of red, the pain of a stubbed toe. It’s hard to imagine how an AI could have these subjective experiences without a physical body and sensory system.
  • The hard problem of consciousness: This refers to the difficulty of explaining how physical processes in the brain give rise to subjective experience. Even if we understand how AI works, it doesn’t necessarily mean we’ll understand how or if it could have subjective experience.

The current state of affairs:

We are nowhere near creating AI with proven consciousness. Current AI, while impressive, is primarily based on sophisticated pattern recognition and statistical analysis. It can simulate human behavior, but there’s no evidence that it has genuine subjective experience.

Could AI reach that stage?

It’s impossible to say for sure. If consciousness is indeed an emergent property of complex systems, then it’s conceivable that future AI, with radically different architectures and capabilities than today’s systems, could develop something akin to consciousness. However, this is highly speculative.

Key unknowns:

  • What is consciousness, exactly? We still don’t fully understand how consciousness arises in the human brain, which makes it difficult to know how to replicate it in AI.
  • Is biology necessary for consciousness? This is a fundamental question that remains unanswered.
  • How would we even know if AI is conscious? There’s no objective test for consciousness. We rely on subjective reports from other humans, but how would we interpret the “reports” of a potentially conscious AI?

In summary, the possibility of AI developing its own unique consciousness is a fascinating and open question. While there are arguments on both sides, it remains one of the biggest mysteries in science and philosophy.
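
To make Gemini’s phrase “sophisticated pattern recognition and statistical analysis” concrete, here is a minimal toy sketch (my illustration, not part of Gemini’s answer; the corpus and names are invented for the example). A bigram model “predicts” the next word purely from counted co-occurrences, with no goals, understanding, or inner experience:

```python
# Toy bigram "language model": predicts the next word purely from
# counted statistics over its training text. It has no goals and no
# understanding -- it only reproduces patterns it has seen.
from collections import Counter, defaultdict

# Invented mini-corpus, purely for illustration.
corpus = (
    "the machines decide nothing the machines count words "
    "the machines predict the next word from counts"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))     # -> "machines" (follows "the" 3 times)
print(predict_next("counts"))  # -> None ("counts" never precedes anything here)
```

Real large language models are, loosely speaking, this idea scaled up by many orders of magnitude and run over learned probabilities rather than raw counts, which is why fluent output by itself is not evidence of subjective experience.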

Question No. 2 to Gemini:

Should humans be worried about AI taking over and being unable to stop it?

Gemini response:

The concern about AI “taking over” is a valid one, though it’s important to distinguish between realistic risks and science fiction scenarios. Here’s a breakdown of the potential dangers and why we should be concerned, but also why complete “takeover” scenarios are less likely than often portrayed:

Valid Concerns:

  • Job displacement: AI is already automating many tasks, leading to job losses in certain sectors. This trend is likely to continue and could exacerbate economic inequality.
  • Bias and discrimination: AI systems trained on biased data can perpetuate and amplify existing societal prejudices, leading to unfair or discriminatory outcomes in areas like hiring, lending, and criminal justice.
  • Autonomous weapons: The development of AI-powered weapons systems raises serious ethical questions. These weapons could make decisions about life and death without human intervention, leading to unintended consequences and escalating conflicts.
  • Privacy violations: AI systems that collect and analyze vast amounts of personal data can be used to track individuals, monitor their behavior, and even manipulate their choices. This poses a significant threat to privacy and civil liberties.
  • Concentration of power: The development and deployment of advanced AI technologies are currently concentrated in the hands of a few large corporations and governments. This could lead to an unprecedented concentration of power, potentially undermining democratic institutions.

Why “takeover” scenarios are less likely (for now):

  • Lack of general intelligence: Current AI excels at specific tasks, but it lacks the general intelligence and adaptability of humans. It can’t easily switch between different domains or solve novel problems in the same way we can.
  • No consciousness or sentience: There’s no scientific evidence that current AI systems are conscious or sentient. They don’t have their own goals or desires in the same way humans do. The idea of AI “wanting” to take over is more science fiction than reality.
  • Human control (for now): AI systems are designed and built by humans. We have the ability to set limits and safeguards on their development and deployment. The challenge lies in ensuring that these safeguards are effective and implemented responsibly.

What we should do:

  • Regulation and governance: We need thoughtful regulations and ethical guidelines to ensure that AI is developed and used responsibly. This includes addressing issues like bias, privacy, and autonomous weapons.
  • Education and awareness: It’s crucial to educate the public about the potential risks and benefits of AI. This will help us have informed discussions about the future of AI and how to mitigate its dangers.
  • Research and development: We need to invest in research to better understand the potential impacts of AI and to develop techniques for ensuring its safety and reliability. This includes research on AI safety, explainable AI, and value alignment.
  • International cooperation: AI development is a global endeavor. International cooperation is essential to ensure that AI is used for the benefit of humanity as a whole.

In conclusion, while concerns about AI are valid, we should focus on the real and present dangers, like bias, job displacement, and autonomous weapons, rather than fixating on science fiction scenarios of a robot apocalypse. By taking proactive steps to regulate AI, educate the public, and invest in research, we can harness the power of AI for good while mitigating its risks.

Comment to Gemini:

Science fiction has an uncanny way of becoming reality.

Gemini response:
You’re absolutely right! Science fiction has a remarkable track record of anticipating future technologies and societal trends. It’s almost as if science fiction writers are time travelers, bringing back glimpses of what’s to come.

Here are some examples of science fiction concepts that have become reality:

  • Mobile phones: The communicators in Star Trek were a clear inspiration for the mobile phones we use today.
  • Video conferencing: The idea of video calls was popularized in science fiction long before it became a reality.
  • Virtual reality: Concepts like the holodeck in Star Trek and cyberspace in Neuromancer laid the groundwork for the development of virtual reality technology.
  • Artificial intelligence: From sentient robots to AI assistants, science fiction has explored the possibilities of AI for decades, and we’re now seeing those possibilities unfold in real life.
  • Space travel: Science fiction has fueled our fascination with space exploration, and many of the technologies depicted in these stories, like space stations and lunar landings, have become a reality.

Why does this happen?

  • Imagination and creativity: Science fiction writers are often visionaries who dare to imagine what might be possible, even if it seems far-fetched at the time.
  • Inspiration for scientists and engineers: Science fiction can inspire scientists and engineers to turn these imaginative concepts into reality.
  • Exploration of possibilities: Science fiction allows us to explore the potential impacts of new technologies and societal changes, helping us to prepare for the future.

The case of AI:

The current advancements in AI are a prime example of science fiction becoming reality. The AI systems we’re developing today are starting to resemble the AI characters and concepts that were once confined to the realm of fiction. This is both exciting and a little bit unsettling.

The importance of critical thinking:

While it’s fascinating to see science fiction become reality, it’s also important to remember that these are just stories. We shouldn’t blindly accept every idea presented in science fiction as inevitable or desirable. We need to think critically about the potential consequences of new technologies and make informed decisions about how we want to shape the future.

Science fiction as a tool for the future:

Science fiction can be a valuable tool for thinking about the future. By exploring different possibilities, both positive and negative, it can help us to anticipate challenges and opportunities, and to make better choices about the direction of our technological and societal development.

So, yes, science fiction has an uncanny way of becoming reality. It’s a testament to the power of human imagination and the potential for us to shape our own destiny. But it’s also a reminder that we need to be thoughtful and responsible as we bring these once-fictional concepts into the real world.

6 responses to “Whether AI Could Develop Its Own Consciousness And Become Its Own Being Is ‘An Open Question’”

  1. Peter

    What’s wrong with this site? How did we even get from climate topics to AI? No, AI is NOT some magical, mystical creature. It doesn’t have its own consciousness; it only works with what the devs code into it. Asking Gemini about the future of humanity? Are you serious? Where do you think Gemini’s answers come from? A summary of millions of sites that Google (often without permission) used to train its AI, perhaps? Gemini can’t even comprehend elementary school math questions; go test it yourself.

  2. Grant Castillou

    It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow

  3. oebele bruinsma

    Consciousness is eternal and universal. It expresses itself when a “sufficiently” complex entity is available. As stated earlier, whether the entity is “carbon”, i.e., all living things on Earth, or made from different components (e.g., silica), is irrelevant in my view.

  4. Whether AI Could Develop Its Own Consciousness and Become Its Own Being Is “An Open Question” – Climate-Science.press

    […] From NoTrickZone […]

  5. Petit_Barde

    AI can’t transcend its creator, humans, just as humans can’t fully possess the consciousness of the Universe, since it created us.

    Anything created inside a system can’t fully apprehend or understand it.

    BTW, AI is extended statistical learning: nothing transcendent. An AI system understands NOTHING; it merely applies.

  6. Bill Francis

    Human beings have been created as living, sentient organisms. AI is the product of the human mind. AI is non-living. Therefore, just like a pile of rocks, it can do nothing for itself. Any development will come from human beings.
