How Australian Councils Are Using AI (and Where It's Going Wrong)


When you think about AI adoption in Australia, local councils probably aren’t the first organisations that come to mind. But they should be. Councils are some of the most interesting AI testing grounds in the country — partly because they deal with such a wide range of problems, and partly because they’re spending public money, which means the stakes for getting it wrong are higher.

Over the past 18 months, councils from the City of Sydney to regional shires in Queensland have been experimenting with AI across dozens of use cases. Here’s what’s working, what isn’t, and what ratepayers should be paying attention to.

What’s working: the operational stuff

The AI applications that work best in local government are the unsexy, operational ones. No surprise there.

Pothole and infrastructure detection. Several councils, including the City of Melbourne, are using camera-equipped vehicles and AI image recognition to identify road damage, cracked footpaths, and damaged street furniture. Instead of relying on resident complaints (which skew towards wealthier, more vocal areas), councils get a systematic survey of infrastructure condition across the entire municipality.

It’s not perfect — the systems still flag false positives, and a human needs to review the results — but it’s faster and more comprehensive than manual inspections.
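To make that review loop concrete, here’s a minimal sketch of how detections might be triaged between automatic logging and human inspection. The labels and thresholds are invented for illustration; they stand in for whatever a vendor’s vision system actually returns per frame.

```python
from dataclasses import dataclass

# Illustrative only: labels and thresholds are invented, standing in
# for whatever a vendor's vision system actually returns per frame.
@dataclass
class Detection:
    frame_id: str
    label: str          # e.g. "pothole", "cracked_footpath"
    confidence: float   # model score between 0.0 and 1.0
    gps: tuple          # (latitude, longitude)

AUTO_LOG = 0.90   # confident enough to log a defect automatically
REVIEW = 0.50     # uncertain: queue for a human inspector

def triage(detections):
    """Split model output into auto-logged defects, a human review
    queue, and discarded low-confidence noise."""
    auto_logged, review_queue = [], []
    for d in detections:
        if d.confidence >= AUTO_LOG:
            auto_logged.append(d)
        elif d.confidence >= REVIEW:
            review_queue.append(d)  # a person confirms or rejects these
    return auto_logged, review_queue
```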

Waste management optimisation. AI-powered route optimisation for garbage trucks is saving fuel and time. Smart bins that report fill levels mean trucks don’t have to visit every bin on every route. Some councils have reported 15-20% reductions in collection costs.
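As a toy illustration of the bin-level idea: skip anything under a fill threshold, then order the remaining stops with a greedy nearest-neighbour pass. Real deployments use proper vehicle-routing solvers; the 60% threshold and the coordinates here are made up.

```python
import math

FILL_THRESHOLD = 0.60  # invented cut-off: bins below this are skipped

def plan_route(depot, bins):
    """bins: list of ((x, y), fill_level). Returns a visit order using
    a greedy nearest-neighbour pass (a stand-in for a real VRP solver)."""
    to_visit = [loc for loc, fill in bins if fill >= FILL_THRESHOLD]
    route, current = [], depot
    while to_visit:
        nxt = min(to_visit, key=lambda loc: math.hypot(
            loc[0] - current[0], loc[1] - current[1]))
        route.append(nxt)
        to_visit.remove(nxt)
        current = nxt
    return route

# Three bins, one too empty to bother with:
print(plan_route((0, 0), [((1, 1), 0.8), ((5, 2), 0.3), ((2, 0), 0.9)]))
# -> [(1, 1), (2, 0)]
```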

Customer service triage. Councils handle thousands of inquiries — from barking dog complaints to planning questions to rate payment issues. AI-powered chatbots and email classification tools are routing these to the right team faster. Brisbane City Council’s online services portal uses automated triage, and response times have improved measurably.
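Under the hood, triage of this kind can be as simple as scoring an inquiry against each team’s vocabulary. This keyword sketch is a crude stand-in for a trained classifier, and the team names and keyword lists are invented:

```python
# Crude keyword router standing in for a trained text classifier.
# Team names and keyword lists are invented for illustration.
ROUTES = {
    "animal_management": ["barking", "dog", "cat", "stray"],
    "planning":          ["development", "setback", "zoning", "overlay"],
    "rates":             ["rate", "payment", "invoice", "refund"],
}

def route_inquiry(text, default="customer_service"):
    """Send the inquiry to the team with the most keyword hits,
    falling back to a general queue when nothing matches."""
    t = text.lower()
    scores = {team: sum(kw in t for kw in kws) for team, kws in ROUTES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default

print(route_inquiry("My neighbour's dog won't stop barking at night"))
# -> animal_management
```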

What’s working (with reservations): planning and approvals

This is where it gets interesting. Several councils are experimenting with AI to assist in development application (DA) assessments. The idea is that AI can check an application against planning rules — setbacks, height limits, heritage overlays — and flag potential issues before a human planner reviews it.

In theory, this speeds up the process. In practice, results are mixed. Planning rules are complex, often ambiguous, and full of exceptions that depend on context. AI can handle the straightforward checks well. It struggles with the judgment calls.
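The straightforward checks are essentially deterministic rules, which is the part software handles well. Here’s a sketch of what that layer might look like, with invented rule values (real planning schemes are far more conditional than this):

```python
from dataclasses import dataclass

@dataclass
class Application:
    height_m: float
    front_setback_m: float
    heritage_overlay: bool

# Invented rule values for illustration; real schemes vary by zone
# and are riddled with exceptions that need a planner's judgment.
def check_application(app):
    """Return a list of issues for a human planner to review.
    An empty list means the mechanical checks passed, not approval."""
    issues = []
    if app.height_m > 9.0:
        issues.append(f"Height {app.height_m} m exceeds the 9.0 m limit")
    if app.front_setback_m < 6.0:
        issues.append(f"Setback {app.front_setback_m} m under the 6.0 m minimum")
    if app.heritage_overlay:
        issues.append("Heritage overlay applies: refer to a heritage advisor")
    return issues

print(check_application(Application(10.5, 4.0, True)))
```

The judgment calls, like whether a design respects neighbourhood character, don’t reduce to rules like these, which is exactly why the planner stays in the loop.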

The NSW Department of Planning has been cautiously positive about AI-assisted assessments, but has emphasised that AI should support planners, not replace them. That’s the right approach. A DA that affects someone’s property rights shouldn’t be decided by an algorithm.

Where it’s going wrong: predictive policing and compliance

Some councils have experimented with predictive analytics for compliance — using data to predict where illegal dumping, unapproved building work, or parking violations are most likely to occur and directing enforcement resources accordingly.

The problem is the same one that plagues predictive policing everywhere: the predictions are only as good as the historical data, and historical data reflects historical enforcement patterns. If you’ve always patrolled certain areas more heavily, you’ll have more data about violations in those areas, and the AI will tell you to keep patrolling there. It’s a feedback loop that reinforces existing biases.
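That loop is easy to demonstrate. In this toy simulation, two areas have identical true violation rates, but one starts with more patrol hours, and because next round’s patrols are allocated by last round’s detections, the imbalance never corrects itself:

```python
import random

random.seed(1)
TRUE_RATE = 0.1               # both areas offend at the same rate
patrols = {"A": 80, "B": 20}  # historical imbalance, 100 hours total

for _ in range(10):
    # Detections scale with patrol hours, not with underlying violations.
    detections = {area: sum(random.random() < TRUE_RATE for _ in range(hours))
                  for area, hours in patrols.items()}
    total = sum(detections.values()) or 1
    # Next round's hours follow last round's detections: the feedback loop.
    patrols = {area: round(100 * d / total) for area, d in detections.items()}

print(patrols)  # area A keeps the lion's share despite identical true rates
```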

The Australian Human Rights Commission has raised concerns about algorithmic bias in government decision-making, and councils need to take these concerns seriously. Using AI to direct enforcement without understanding its biases risks targeting disadvantaged communities disproportionately.

The transparency problem

Here’s what concerns us most: many councils adopting AI tools can’t clearly explain how those tools work. They’ve purchased a vendor’s product, it makes recommendations, and they follow those recommendations without fully understanding the methodology.

This is a governance failure. When a council uses AI to influence decisions about planning approvals, enforcement priorities, or resource allocation, ratepayers have a right to know how those decisions are being made. “The algorithm recommended it” isn’t good enough.

Some councils are getting this right. The City of Sydney, for instance, has published information about its approach to AI governance. But many smaller councils are adopting AI tools without any governance framework at all.

The procurement challenge

Council procurement processes weren’t designed for AI. They’re built for buying trucks, hiring contractors, and purchasing software licences — well-defined products with clear specifications.

AI is different. It’s often experimental. Outcomes are uncertain. The product might need significant customisation. And evaluating whether an AI tool actually works requires technical expertise that many council IT departments don’t have.

This creates a power imbalance. Vendors who understand AI are selling to buyers who don’t. The result is councils paying for tools that don’t deliver, or adopting technology without understanding its limitations.

AI consultants in Melbourne and other capitals are increasingly being brought in to help councils evaluate AI proposals independently — not selling a product, but providing the technical assessment capability that councils lack in-house.

What councils should be doing

Based on what we’ve seen, here are the principles that separate the councils doing AI well from the ones stumbling:

Start with a problem, not a technology. The councils getting value from AI started by identifying a specific operational problem — slow DA processing, inefficient waste collection, poor road maintenance — and then explored whether AI could help. The ones struggling started with “we should do something with AI.”

Keep humans in the loop. Every council AI implementation that works has meaningful human oversight. Every one that’s caused problems has automated decisions without adequate review.

Be transparent. Publish your AI governance framework. Explain to residents what AI tools you’re using and how they influence decisions. This builds trust and accountability.

Build internal capability. Don’t rely entirely on vendors. At least one person in your organisation should understand how the AI tools work well enough to challenge the vendor’s claims and identify potential problems.

Measure outcomes. Did the AI tool actually improve the thing it was supposed to improve? This seems obvious, but many councils adopt AI tools without establishing clear baseline metrics to measure against.

The bigger picture

Local councils are a microcosm of AI adoption across Australian organisations. The same lessons apply: start with clear problems, keep humans involved, be honest about limitations, and measure whether the technology is actually helping.

The councils getting this right are building a template that other public and private sector organisations can learn from. The ones getting it wrong are learning expensive lessons that taxpayers are footing the bill for.

As residents and ratepayers, it’s worth asking your local council: what AI are you using, and how do you know it’s working? If they can’t answer clearly, that’s a conversation worth having.