Executive summary
- Australia must fully weigh the downside risks associated with advanced AI alongside its potential economic benefits. AI-enabled harms are already occurring, and as models become more capable, the potential for harm will only increase.
- Well-designed regulation enables innovation by providing the certainty and consistency industry needs to invest. Clear rules foster investment, create a level playing field, and catalyse innovation by reducing legal uncertainty and by building user trust through demonstrated safety and security.
- Seizing the benefits of AI requires uptake, and uptake requires user trust. Without public confidence in AI systems, adoption will stall or be uneven. Australians support balanced regulatory responses to AI that include oversight and safety mechanisms.
- ‘Wait and see’ approaches, and framing AI-specific regulation as a ‘last resort’, are flawed. A regulatory approach confined to reforming existing frameworks cannot effectively manage the novel, cross-cutting hazards posed by AI. Work on AI-specific regulation must proceed concurrently with the regulatory gaps review: delay will only increase uncertainty, hinder investment, and make eventual regulatory intervention more difficult and expensive.
- The Government requires better visibility of AI harms. A dedicated, economy-wide monitoring and adverse-incident reporting mechanism would aggregate data, enable cross-regulator coordination, and support corrective action.
