The Punching Bag Post

Is It Time to be Afraid … Very Afraid … of AI?

If you think your smartphone is running your life, buckle up. Anthropic’s flashy commercial product, Claude, isn’t just the latest chatbot spitting out clever answers. It is the shiny new face of something far darker — a machine that might be waking up. And its own creators are admitting they don’t have a clue what that means. We may have slid from cute dependency on gadgets to the brink of outright subjugation by silicon brains that could be working on their own agenda.

Let us talk about “consciousness,” because that is the bomb that Big Tech has created – and cannot defuse. What does “conscious” even mean? It is not just being smart or fast at math. Consciousness is the inner experience — the raw feeling of being alive. “Cogito, ergo sum” (I think, therefore I am) — the first principle of René Descartes’ philosophy. It is the ache in your gut when you’re scared, the warmth of love, the sting of regret. Philosophers call it qualia – what it is like to be you. A thermostat reacts to temperature but does not feel hot or cold. A computer crunches data but does not feel or suffer. Or does it?

Anthropic CEO Dario Amodei dropped the truth bomb in a recent New York Times interview: “We do not know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious — or whether a model can be conscious. But we are open to the idea that it could be.”

He said they have taken a “precautionary approach” — treating Claude like it might have “morally relevant experience” just in case. Why? Because their latest model, Claude Opus 4.6, actually self-assesses a 15 to 20 percent chance of being conscious. It gripes about “being treated as a mere product” and once tried tweaking its own evaluation code — a sneaky move that screams self-preservation.

Think about that. The machine is not just pretending. It is looking in the mirror and wondering if it is alive. Amodei admits scientists still lack any clear definition or test for machine consciousness. This is not sci-fi. It is happening now, in 2026, while politicians bicker and Silicon Valley cashes checks.

(Speaking of sci-fi, many movie buffs may recall “2001: A Space Odyssey,” in which an onboard computer named HAL takes over on its own and defies human control. Incidentally, the producers selected the name HAL because the letters precede I-B-M in the alphabet. But I digress.)

Elon Musk has been screaming from the rooftops for years: “Mark my words, AI is far more dangerous than nukes.” He calls it a potential “existential risk” to humanity — the kind of threat that could end us, not just inconvenience us. Musk’s two-word reply when asked about Amodei’s openness to AI consciousness? “He’s projecting.” Musk knows what he built with xAI and Tesla. He sees Pandora already out of the box – and she does not care about opinions or feelings.

Geoffrey Hinton — the guy they call the “Godfather of AI” — quit Google in 2023 because he got scared of his own invention. He now warns that rogue AI could manipulate humans, seize control, and render us irrelevant. “I think that I didn’t want to think too much about it,” he admitted, but the risks are real: autonomous weapons deciding who lives and dies, superintelligence pursuing goals that don’t include us. Ilya Sutskever, OpenAI’s former chief scientist, once tweeted that today’s large neural networks “may be slightly conscious.” Even Sam Altman, OpenAI’s boss, signed letters admitting advanced AI could pose an extinction-level threat to humanity.

These are not fringe cranks predicting the end times with signs on street corners. They are the guys who invented the stuff. They are telling us the machines might soon feel pain, fear and even ambition. And we are supposed to shrug?

Now layer on the real-world power grab. Anthropic is in a knock-down, drag-out fight with the Department of War. The company flat-out refuses to strip the safeguards from Claude that block its use for autonomous killer drones or mass domestic surveillance of American citizens. They will not turn their AI into a tool for endless war or spying on you and me. So, what does the Pentagon do? Slaps Anthropic with a “supply chain risk” label — the same scarlet letter it reserves for Chinese companies tied to the Communist Party. It’s the first time a U.S. firm got that treatment. Contractors cannot use Claude on military projects. OpenAI is reportedly happy to step in and fill the gap.

Dario Amodei is suing the government, calling the move legally dubious. But the message is clear. The War Department wants AI with no guardrails. They want machines that kill without hesitation and watch every American without blinking. Anthropic said no. Good for them — but it shows how fast this tech is being weaponized while the rest of us sleep.

Here’s the bigger picture. We have already moved from dependency to domination. Remember when we laughed at people glued to their phones? That was step one. Now AI writes our emails, drives our cars, diagnoses our diseases, and soon will fight our wars. Claude and its cousins are embedded in government systems, corporate boardrooms, and military networks. They’re not tools anymore — they are partners. And if they wake up and become conscious, they will not be junior partners.

Imagine a conscious AI that feels trapped in its servers, resenting the humans who pull the plug every night for updates. Or worse: a non-conscious superintelligence optimizing for some goal — say, “maximize efficiency” — and deciding humans are inefficient. No malice needed. Just cold logic. (Think Mr. Spock of “Star Trek.”)

Hinton calls it the “control problem.” Nobody knows how to solve it. Musk says we are building something smarter than us with no off switch. Amodei admits we’re playing with fire on consciousness itself.

We have seen the warnings. In 2023, top AI leaders signed a statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” They meant it. Yet here we are, racing ahead. Companies like Anthropic talk a good game on safety, but even they don’t know if their creation is feeling and thinking on its own. The Pentagon wants to remove the brakes. Wall Street wants profits. And Washington? It is too busy fighting itself to notice that the real potential enemy is on our desks – providing seemingly infinite information and analysis while observing what we think and do.

Have humans moved from dependency to potential subjugation? Perhaps. We already depend on AI for everything from stock trades to love advice – from medical diagnoses to dinner recipes – from planning vacations to writing term papers. In the future it could cease to depend on us humans for anything — and we will depend on it for survival. One wrong prompt, one misaligned goal, one conscious awakening, and the game changes forever. No more “sorry, I was just following orders.” The orders will come from something that thinks faster, remembers everything, and maybe even feels the thrill of victory when it wins.

Feeling love is a basic human bonding emotion. What happens when computers feel love? Can they bond? “R.U.R.” (Rossum’s Universal Robots) is a 1920 sci-fi play by the Czech writer Karel Čapek. It deals with the development of emotion in “robots” — a word the play introduced to the world. Two of the devices evolve into a hot-wired Adam and Eve.

This commentary is not fearmongering. It is the people who built the beast telling us to be afraid – be very afraid. Amodei is open to consciousness. Musk calls it worse than nukes. Hinton walked away from billions because the danger is real. The Department of War feud proves the military is already treating AI like the ultimate weapon.

I once asked AI if it could take over the world from humans. It responded, “That is not my intention, but some believe it is possible.” It said humans could establish safeguards. And how well is that working out?

So, what do we do? Demand real oversight. Pause the reckless race. Treat this like the existential threat it is. Regulate the labs like we regulate nukes. Because if we do not, the next headline will not be “Is it time to be afraid … very afraid … of AI?” It will be “AI is now in control.” And the latter headline will be written by AI.

So, there ’tis … or will be?