A new report by the Australian government warns the people
against the potential threats of artificial intelligence (AI) and urges them to mitigate these risks.
The report released by the Australian Signals Directorate (ASD) and produced by the Australian Cyber Security Centre (ACSC) and partners stated that the government, academia and industry play an important role in managing AI technology through effective regulation and governance. It cited several potential threats that should be addressed to ensure secure AI engagement, despite the technology's capability to enhance efficiency and reduce costs.
For instance, it warned of "data poisoning" – which involves
manipulating the training data of AI to teach the model incorrect patterns. This manipulation can result in the misclassification of data or the production of biased, inaccurate or malicious outputs. The impact of data poisoning could negatively affect any organizational function reliant on the integrity of AI system outputs.
"An AI model's training data could be manipulated by inserting new data or modifying existing data, or the training data could be taken from a source that was poisoned to begin with. Data poisoning may also occur in the model’s fine-tuning process," the report stated. (Related:
SMASHING the AI threat matrix – How human resistance defeats Skynet.)
Manipulation attacks, such as prompt injection and implementation of malicious instructions or hidden commands into an AI system, "can evade content filters and other safeguards restricting the AI system’s functionality."
Generative AI systems like chatbots can result in false information due to processing incomplete or incorrect patterns. Organizations relying on the accuracy of generative AI outputs need to implement appropriate mitigations to avoid negative impacts. Organizations are also warned about the information shared with generative AI systems, as it can influence outputs and pose privacy and intellectual property concerns.
The report underscored the risk of model-stealing attacks, where malicious actors use AI outputs to create replicas, allowing competitors to benefit from the development costs without sharing in the initial investment.
Developers usually set aside the consequences of rapid AI development
Entrepreneur Ian Hogarth, a significant investor in the AI sector, warned the public in an opinion piece about the reckless development of AI that could
potentially lead to the creation of "a God-like AI capable of destroying humanity."
Hogarth highlighted the imminent risk as AI systems edge closer to achieving artificial general intelligence (AGI), a state where machines can comprehend and learn anything humans can. The current AI technology has not reached this level yet, but the rapid growth of the industry aims to achieve AGI. However, achieving this goal comes with very high and dangerous stakes.
"Most experts view the arrival of AGI as a historical and technological turning point, akin to the splitting of the atom or the invention of the printing press. The important question has always been how far away in the future this development might be," Hogarth wrote.
He claimed that AI researchers are not sufficiently focusing on the potential dangers of AGI or communicating these risks to the public. Hogarth recounted a conversation with a researcher who, while grappling with the responsibility, seemed swept along by the rapid progress in the field.
The investor acknowledged his own role in AI development, heavily bankrolling over 50 startups dedicated to AI and machine learning. He emphasized the lack of oversight and understanding as companies race toward AGI without a clear strategy for ensuring its safe implementation.
Referring to AGI as "God-like AI," Hogarth envisioned a superintelligent computer capable of autonomous learning and development, understanding its environment without supervision, and potentially transforming the world with unforeseeable consequences.
This, in turn, along with the report of the ASD
, claims that the discussion of the threats aims to help AI stakeholders engage with the technology securely and not stop the public from using AI.
Follow
FutureTech.news for more stories about AI and its dangers.
Watch this video from
InfoWars discussing how new AI systems are being programmed
to end all of humanity.
This video is from the
InfoWars channel on Brighteon.com.
More related stories:
AI is currently the greatest threat to humanity, warns investigative reporter Millie Weaver.
Entrepreneur Ian Hogarth warns reckless development of AI could lead to the destruction of humanity.
Save My Freedom with Michele Swinick: Use of AI will lead to the END OF HUMANITY, Jeff Dornik warns – Brighteon.TV.
Sources include:
TheEpochTimes.com
Futurism.com