Australia is taking steps to establish an expert advisory group to evaluate and develop options for mandatory guardrails on AI research and development. Minister for Industry and Science Ed Husic announced that the government is considering mandatory guardrails for AI development and deployment in high-risk settings, whether through amendments to existing laws or the creation of new AI-specific legislation.
Australia is also working with industry to develop a voluntary AI Safety Standard, along with options for voluntary labeling and watermarking of AI-generated materials to improve transparency. The proposed mandatory guardrails would include testing requirements to ensure product safety both before and after release. The federal government is also keen to strengthen accountability, potentially through training for developers and deployers of AI systems, certification schemes, and clearly defined responsibilities for organizations that develop and deploy AI systems.
Australia’s interim response to the consultation paper “Safe and Responsible AI in Australia” goes beyond voluntary restraints on AI development, recognizing the risks of bias, errors, and limited transparency. In addition, the Australian government announced an investment of $101.2 million to support businesses in adopting quantum and AI technologies in their operations.
Several jurisdictions, including the EU, the UK, the US, and China, are working on regulations to manage AI technology. Australia is keen to harmonize its AI regulations with those of other countries, recognizing the importance of international governance frameworks for AI-enabled systems supplied on a global scale.
As AI adoption gathers pace, countries are moving quickly to put regulations in place to better manage and govern the technology’s impact. AI is known to enhance productivity and has the potential to transform a wide range of industries.
2024-01-18 11:41:02
Article from www.computerworld.com