
Concerns Growing over AI Threats to Humanity and Earth

Call them alarmists, soothsayers, or simply analysts anxious to show the darker side of the tech revolution — either way, the voices critical of the impact of Artificial Intelligence (AI) on human society and its future are growing by the day.

In May of last year, the BBC drew attention to a statement on the website of the Center for AI Safety that has been signed by dozens of prominent figures in AI, science, academia, political leadership, and media. The statement reads:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

As the BBC reported, the signatories of the statement are concerned that AI could lead to the extinction of humanity in one or more disaster scenarios, including but not limited to: weaponized use of AI, the enfeeblement of humanity to the point where its very existence depends on AI, and the destabilization of human societies by AI-generated misinformation.

The story also cited an open letter signed by Tesla and Twitter/X boss Elon Musk that urges a halt to the development of the next generation of AI technology. The key fear here is the development of "non-human brains" that could eventually outsmart and entirely replace humans.

The BBC noted that Prof Yann LeCun, who works at Meta, and others have called such fears surrounding AI development overblown.

The Guardian reported in a recent story (January 10) that the World Economic Forum (WEF) sees AI-driven misinformation and disinformation as a threat to the global economy through its influence on key looming elections in America, the European Union, and India – a position that itself sounds politically self-interested.

The fears of AI’s potentially devastating effects also extend to the environment and biosphere. Joshua Murdock wrote in the St. Louis Post-Dispatch (January 10) that concerns have been raised in Montana that AI-generated photos of wildlife could subvert wildlife management and conservation. Along the same line of concern, Indian journalist Sanjeev Kumar opined in Russia Today (January 9) that the specialized hardware needed for today’s AI technology has a massive carbon footprint that is likely to affect all “vulnerable countries.” Last year, Earth.org called AI’s environmental impact “staggering.”

The entertainment industry is also at risk of a takeover by AI, as Neil Fox expressed in an interview with GBN. Fox said that the thousands of songs produced daily by AI bots are outcompeting songs by real human artists.


In May 2023, a story in The Guardian also sounded the alarm over the health risks millions could face from AI. The story cited a BMJ Global Health article authored by a group of health professionals from different countries. Their analysis concluded:

The risks associated with medicine and healthcare “include the potential for AI errors to cause patient harm, issues with data privacy and security and the use of AI in ways that will worsen social and health inequalities.”

And for political philosopher Michael Sandel, the societal shift toward AI raises ethical concerns. As cited in The Harvard Gazette, Sandel identifies three main areas of ethical concern for society: privacy and surveillance, bias and discrimination, and the role of human judgment.
