Indian scientist Shekhar Mande warns of AI's dangers – including viral outbreaks, nuclear war and HUMAN EXTINCTION
By kevinhughes // 2023-08-23
A scientist has warned that getting too comfortable with artificial intelligence (AI) could pose a danger to humanity.

Indian scientist Shekhar Mande issued the warning during a lecture, saying humanity should be prepared for AI to take over and trigger viral outbreaks, nuclear war and even human extinction. According to Mande – the former director general of India's Council of Scientific and Industrial Research – AI will be the principal cause of human extinction, with nuclear war and viral outbreaks following. His elucidation of these three threats invited reflection on the fine balance between progress, security and the preservation of humanity.

The Indian scientist is not the first to raise concerns about AI. While humans have made progress in science and technology by creating computers that think like people, troubling questions are emerging as well. (Related: AI likely to WIPE OUT humanity, Oxford and Google researchers warn.)

Yuval Noah Harari, a close adviser to Klaus Schwab of the globalist World Economic Forum, has argued that this pivot toward AI is not in the best interest of humanity. He stated that AI is going to perform the hard task of controlling the slave class and making it obsolete. Harari's argument centers on the ruling class employing this technology against the slave class: once a critical mass of that population completely realizes its situation, the machines will do the tough job for the sociopaths at the top.

Top U.S. official recognizes the risks of AI

Meanwhile, a top American cybersecurity official earlier warned that humanity could be at risk of an "extinction event" if tech companies fail to self-regulate and work with the government to rein in the power of AI. The warning came from Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA) under the U.S. Department of Homeland Security (DHS).

Easterly's remarks followed the release of a May 2023 statement from hundreds of tech leaders and public figures who compared the existential threat of AI to a pandemic or nuclear war. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," said the one-sentence statement issued by the San Francisco-based nonprofit Center for AI Safety (CAIS).

More than 300 individuals affixed their signatures to the statement, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis. Other public figures outside the tech industry also signed, including neuroscience author Sam Harris and musician Grimes.

In response to questions about the CAIS statement, Easterly asked the signatories to self-regulate and work with the government. "I would ask these 350 people and the makers of AI – while we're trying to put a regulatory framework in place – think about self-regulation, think about what you can do to slow this down, so we don't cause an extinction event for humanity," Easterly said. "If you actually think that these capabilities can lead to [the] extinction of humanity, well, let's come together and do something about it."

For his part, Altman told senators during a hearing that he backs government regulation as a means of preventing the harmful effects of AI. Such regulatory steps include the adoption of licenses or safety requirements for the operation of AI models. "If this technology goes wrong, it can go quite wrong," he said.
"We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models."

Follow FutureTech.news for more news about AI.

Watch Yuval Noah Harari explain how AI can destroy humanity below.
 
This video is from the Thrivetime Show channel on Brighteon.com.

More related stories:

AI takeover is INEVITABLE: Experts warn artificial intelligence will become powerful enough to control human minds, behaviors.

EXTREME SCENARIOS: Artificial intelligence could revolutionize tech sector forever – or wipe out the human race.

Researchers: AI decisions could cause "nuclear-level" CATASTROPHE.

Big Tech, globalist elites join forces in secret meeting to talk about artificial intelligence.

Elon Musk announces creation of new AI company after spending YEARS criticizing rapid AI development.

Sources include:

SHTFPlan.com

ABCNews.go.com

Brighteon.com