Should humanity put the brakes on artificial intelligence (AI) before it endangers our very survival? As the technology continues to transform industries and daily life, public opinion is sharply divided over its future, especially as the prospect of AI models that match human intelligence appears increasingly plausible.
But what do we do when AI surpasses human intelligence? Experts call this moment the singularity, a hypothetical future event where the technology transcends artificial general intelligence (AGI) to become a superintelligent entity that can recursively self-improve and escape human control.
Most readers in the comments believe we have already gone too far to even think about delaying the trajectory towards superintelligent AI. “It is too late, thank God I am old and will not live to see the results of this catastrophe,” Kate Sarginson wrote.
CeCe, meanwhile, responded: “[I] think everyone knows there’s no shoving that genie back in the bottle.”
Others thought fears of AI were overblown. Some compared reservations about AI to public fears of past technological shifts. “For every new and emerging tech there are the naysayers, the critics and often the crackpots. AI is no different,” From the Pegg said.
Related: AI is entering an ‘unprecedented regime.’ Should we stop it — and can we — before it destroys us?
This view was shared by some followers of Live Science's Instagram account. “Would you believe this same question was asked by many when electricity first made its appearance? People were in great fear of it, and made all kinds of dire predictions. Most of which have come true,” alexmermaid tales wrote.
Others emphasized the complexity of the issue. “It’s an international arms race and the knowledge is out there. There’s not a good way to stop it. But we need to be careful even of AI simply crowding us out (millions or billions of AI agents could be a massive displacement risk for humans even if AI hasn’t surpassed human intelligence or reached AGI),” 3jaredsjones3 wrote.
“Safeguards are necessary as companies such as Nvidia seek to replace all of their workforce with AI. Still, the benefits for science, health, food production, climate change, technology, efficiency and other key targets brought about by AI could alleviate some of the problem. It’s a double edged sword with extremely high potential pay offs but even higher risks,” the comment continued.
One comment proposed regulatory approaches rather than halting AI altogether. Isopropyl suggested: “Impose heavy taxation on closed-weight LLM’s [Large Language Models], both training and inference, and no copyright claims over outputs. Also impose progressive tax on larger model training, scaling with ease of deployment on consumer hardware, not HPC [High-Performance Computing].”
By contrast, they suggested that smaller, specialized LLMs could be managed by consumers themselves, outside of corporate control, to “help [the] larger public develop healthier relationship[s] to AI’s.”
“Those are some good ideas. Shifting incentives from pursuing AGI into making what we already have more usable would be great,” 3jaredsjones3 responded.
What do you think? Should AI development push forward? Share your view in the comments below.