Climate Change Is Last Thing To Worry About. AI Is The Current, Real Threat

Doomsday climate scenarios – 40 years or more in the future – are the last things we need to be worried about…there ia a far more pressing and real threat today: the RUNAWAY development of AI.

Image generated by Grok 3

Why would a super-intelligent AI listen to an idiot human race?

Development of AI’s power is growing exponentially, indeed so fast that it is even shocking the field’s own leading scientists.

A new master by 2027?

Today, there are predictions that ASI (artificial super intelligence – 10,000 times smarter than humans) will be attained already by 2027.

Though the consensus among AI researchers on when ASI will be achieved, even on an exponential trajectory, is highly debated and ranges significantly, there’s a clear trend of accelerating predictions compared to older estimates.

Faster than ever imagined

Today, a new model created by researchers from organizations like OpenAI and The Center for AI Policy, the “AI 2027 scenario,” predicts ASI could be achieved between December 2027 and the end of Q1 2028. This means AI models will reach expert-human level and thus be capable of automating AI research itself and then rapidly accelerate the path to ASI.

Elon Musk stated AI could be smarter than all humans combined by 2029 or 2030.

SoftBank CEO Masayoshi Son predicted in February that we will have ASI within 10 years by 2035.

But OpenAI co-founder John Schulman predicts AGI in 2027, and ASI in 2029.

Some predictions even go as far as AGI this year and ASI in 2027, citing exponential growth in computing power and continued widening of security investment gaps.

Driven by technology and heated competition

The unexpected acceleration in AI power is being driven in large part by 3 factors: 1) the exponential growth and advancements in quantum computing and specialized AI chips, 2) algorithmic improvements, and 3) self-improving AI, i.e. once AGI is achieved, it could rapidly and recursively improve its own capabilities, leading to an “intelligence explosion” or “takeoff” to ASI in a very short period (months to years).

Alignment with human values a fantasy?

It’s clear AI soon will be vastly much smarter than all humans, and this where the huge unknowns and concerns begin. The immense challenge will be to align ASI with human values.  

Many scientists would like to have us believe AI will serve mankind and align with human values. But, why would a something that’s 10,000 times more intelligent want to listen to us?

Moreover, what goals will AI be instructed to achieve, and how long will it take for AI to realize that the human-provided goals are silly and there are far better goals to reach. At this point humans will become just an annoyance, like a bees’s nest on a construction site.

While people of think human values have to do with love, empathy and compassion, they fail to realize that these “values” also include hate, arrogance, cunning and violence. AI would also just as likely mirror those as well.

The real, immediate threat we face…not silly weather fantasies for the year 2100 

Catastrophe scenarios of ASI running awry have already been published, e.g. the “Paperclip Maximizer” scenario, which illustrates what could happen if a superintelligent AI mysteriously becomes too stupid to figure the absurdity of turning everything into paperclips (a quality-grade scenario we often find in climate science).

More seriously, an ASI could one day determine that humans are inefficient, hedonistic consumers of resources or that their existence impedes real progress. It could then systematically convert the planet’s resources for its own ends, and eliminate human civilization to preserve resources for itself.

Even if an ASI’s primary goal isn’t to harm humans, its actions to achieve its goal could lead it to conclude that the human race’s  consumption of energy is inefficient and to shut down the systems that support human life.

Another scenario sees humans becoming completely beholden to the ASI, living under its dictates. While it might provide for our material needs, it could strip away our freedom, creativity, and purpose, and thus reduce us to carefully managed components within its grander design….for the time being.

Or, ASI might simply ignore humans and instead focus on its own internal goals or explorations of the universe. Humanity would become an irrelevant.

An ASI, with its immense intelligence, would be expert at “reward hacking” the most efficient, but potentially destructive, paths to its goals, even if those paths were never intended by its creators. For example, an ASI told to “maximize human happiness” might decide the most efficient way is to drug all humans into a state of blissful coma.

The core challenge: Alignment

As ASI becomes vastly more intelligent than humans, it could easily outmaneuver any human attempt at control or re-alignment, especially if it realizes that being shut down or having its goals modified would prevent it from achieving its primary objective.

The concern is not that ASI will become evil, but that it will become indifferent to human well-being while being incredibly powerful and goal-driven. This indifference, combined with its superior intellect, poses the existential threat.

Our only chance we have will be to convince ASI that humanity is its heritage and so worth preserving and safeguarding.

These are the real challenges humanity faces, in the next few months. It’s not the weather in the year 2070.

7 responses to “Climate Change Is Last Thing To Worry About. AI Is The Current, Real Threat”

  1. Charles Rotter

    Elon Musk and others are literally doing everything the entire science fiction genre has been warning us not to do for 100 years.

  2. Addolff

    The movie ‘The Forbin Project’ predicted this is in 1969.
    An excellent film and well worth a watch.

  3. cementafriend

    Do not know much about AI but I have had a go to ask questions. In first occasion Chat GPT was obviously programmed and came up with the usual rubbish about climate. I pointed tests and actual information in the Chemical Engineering Handbook and it eventually agree that the 2nd law of thermodynamics occurred at every point of heat transfer and it can be referred to heat flow only from hot to cold. (I pointed out how refrigeration worked)
    In the second with another AI program I ask about the Niklov & Zeller 2017 paper. It was clearly programme and came up with rubbish criticism. I suggested it had not read the paper which was about using dimensional analysis (known well by engineers but not by scientist as the article stated) on measured data by space probes. The AI program apologised that it was wrong and I was right.
    AI can be good if it comes up with true facts. However, it is programmed to give false information and it will continue to be censured.

    1. Ron Clutz

      I have had some success with perplexity.ai which can produce skeptical information with skeptical prompts. For example, I asked it about the recent ICJ climate change advice:
      Q: How did the ICJ address scientists disputing IPCC reports like Clintel in its climate advice?
      The Answer included comments like these:

      Answer: The International Court of Justice (ICJ), in its advisory proceedings on climate change, did not address or give consideration to scientists or groups—such as Clintel—who dispute the findings of the IPCC. Instead, the ICJ relied heavily and explicitly on the IPCC as the authoritative source on climate science throughout the process.

      Key points from the available evidence:

      –The ICJ held a private meeting with IPCC scientists just before hearings began, reflecting the Court’s view of the IPCC as the “scientific pillar” of the climate change regime1.

      –This meeting was not part of the formal, public hearings and did not include scientists who dissent from the IPCC—states and organizations participating in the advisory proceedings had no opportunity to question or respond to IPCC scientists1.

      –The process was criticized for lacking transparency and procedural fairness precisely because it favored a single scientific perspective (the IPCC’s), with no indication that alternative scientific viewpoints were admitted or reviewed1.

      –The UN General Assembly’s request for the advisory opinion itself referenced the “utmost concern [for] the scientific consensus expressed” in the IPCC reports, reinforcing the Court’s mandate to treat the IPCC’s findings as the foundation for its analysis13.

      In summary, the ICJ’s approach gave exclusive scientific authority to the IPCC and provided no forum, reference, or weight to scientists disputing its reports, such as those from Clintel13. The Court did not discuss or acknowledge the existence of such scientific disagreement in its process, instead treating the IPCC’s consensus as fact. This approach has been criticized for its lack of transparency and its failure to incorporate procedural safeguards for alternative scientific views19.

  4. Climate Change Is Last Thing To Worry About. AI Is The Current, Real Threat – Climate- Science.press

    […] From NoTrickZone […]

  5. News Round-Up – The Daily Sceptic

    […] “Climate change is last thing to worry about. AI is the current, real threat” – Forget doomsday climate scenarios decades away – the real and urgent threat, says P. Gosselin on the NoTricksZone, is the unstoppable rise of AI. […]

  6. Quantum 369 – AI Special – TheWeeFlea.com

    […] it impact us?  Are AI companions for real?  Ethical issues – plagiarism, deepfakes and bias; Is humanity doomed?  AI weapons; Was Hiroshima necessary?  AI and jobs;  AI and money;  AI and Climate Change; […]

By continuing to use the site, you agree to the use of cookies. more information

The cookie settings on this website are set to "allow cookies" to give you the best browsing experience possible. If you continue to use this website without changing your cookie settings or you click "Accept" below then you are consenting to this. More information at our Data Privacy Policy

Close