Data Availability and Transparency Act Implementation: 2026 Status
The Data Availability and Transparency Act 2022 established the framework for sharing Australian Government data with accredited users for three permitted purposes: delivery of government services, informing government policy and programs, and research and development. Three years into operational implementation, it has produced a body of practical experience that’s now worth taking stock of.
For agency staff working through data sharing decisions, and for the research and policy users on the receiving side, the patterns of what’s working and what isn’t have become reasonably clear.
The core machinery is functional
The accreditation framework operated by the National Data Commissioner has settled into something workable. The approval processes for data service providers and accredited users have a reasonably predictable shape, the documentation requirements are consistent, and the timelines, while not fast, are no longer unpredictable.
The volume of data sharing arrangements operating under the DATA Scheme has grown steadily. The published register of authorisations gives a useful picture of the application areas - public health research, social policy evaluation, productivity research, environmental monitoring, and a slowly expanding range of cross-portfolio analytical projects.
The sharing arrangements that have worked best tend to share certain features. The use case is well-defined and time-bounded. The data flows are documented in advance. The accredited user has a track record of operating within similar regulatory frameworks. The data custodian agency has a senior staffer with the authority to make decisions and the time to engage with the process.
The friction points are where you’d expect them
The persistent friction in the system clusters around a few areas. The threshold judgment of whether a particular sharing request meets the public interest test continues to absorb significant agency time and produces inconsistent outcomes: two custodians considering broadly similar requests can reach different conclusions, and the appeals or escalation pathways for resolving those disagreements remain underdeveloped.
The privacy impact assessment workload for novel sharing arrangements is heavy. The PIA requirements are appropriate - we’re talking about sensitive data flowing to new users for new purposes - but the practical effect is that any non-routine sharing arrangement involves months of work before data starts moving. For research projects with funding cycles that don’t align well with this timeline, the friction has been significant.
The data preparation costs sit predominantly with the custodian agencies. Several agencies have absorbed substantial workload increases without commensurate resource increases, which has affected their willingness to engage with new requests. The “user pays” framing in the legislation has not translated cleanly into operational cost recovery in many cases.
Inter-agency sharing has been the surprise
A genuinely positive development that the original legislative intent didn’t fully anticipate has been the use of the DATA Scheme to facilitate inter-agency analytical work. Several major cross-portfolio analytical projects have used the scheme as the legal foundation for sharing administrative data between Commonwealth agencies for policy analysis purposes.
The productivity-related work that the Productivity Commission and other analytical bodies have been doing on linked administrative datasets has benefited substantially from the framework. The scheme’s structure has provided a workable basis for doing this work without falling back on the older, more cumbersome inter-departmental agreement model.
State-Commonwealth sharing, where the state agency or analytical body is acting in an accredited user capacity, has been more variable. The legal complexities of state-level governance interacting with the Commonwealth scheme have produced some workable arrangements and some that haven’t progressed past the initial discussions.
The accredited user landscape
The accredited user community has grown to include the major university research entities, several state government analytical bodies, the principal Commonwealth analytical agencies, and a smaller number of independent research organisations. The scheme has not been used extensively by private sector users, partly by design and partly because the accreditation requirements set a high bar that’s harder to meet for entities without established research governance infrastructure.
This is a deliberate feature of the scheme, but it does mean that some categories of useful analytical work - particularly market research and applied economic analysis that could inform policy without being conducted by traditional research entities - aren’t well-served by the current arrangements. Whether this matters depends on where you sit on the question of who the legitimate users of government data should be.
The technical infrastructure has lagged
A practical issue that gets less coverage than the legal framework is the technical infrastructure for actually moving data securely between custodians and accredited users. The legal framework has matured faster than the operational data delivery technology in many cases.
Several sharing arrangements still rely on essentially manual data transfer processes - extracts produced on a schedule, transferred via secure file transfer, and loaded into the user’s environment. The friction this introduces matters less for one-off sharing arrangements and more for ongoing analytical work that needs reasonably current data.
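To make the pattern concrete, the sketch below shows the shape of that kind of manual pipeline in Python: a scheduled extract written to disk, then pushed over SFTP to the user’s landing zone. Every hostname, path, and account name in it is a hypothetical placeholder, not a detail of any actual arrangement.

```python
# Minimal sketch of the manual extract-and-transfer pattern described above:
# a scheduled extract is written to local disk, then pushed to the accredited
# user's landing zone over SFTP. All hostnames, paths, and key locations are
# hypothetical placeholders.
import csv
import datetime

import paramiko  # pip install paramiko

EXTRACT_PATH = "/data/extracts/monthly_extract.csv"  # hypothetical custodian-side path
REMOTE_LANDING = "/inbound/monthly_extract.csv"      # hypothetical user-side landing zone


def produce_extract(rows):
    """Write the scheduled extract to disk (stand-in for the real extract query)."""
    with open(EXTRACT_PATH, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["record_id", "value", "extract_date"])
        today = datetime.date.today().isoformat()
        for record_id, value in rows:
            writer.writerow([record_id, value, today])


def transfer_extract():
    """Push the extract file to the accredited user's environment over SFTP."""
    client = paramiko.SSHClient()
    client.load_system_host_keys()  # accept only hosts already known to the system
    client.connect(
        "sftp.example.gov.au",                     # hypothetical endpoint
        username="custodian_svc",                  # hypothetical service account
        key_filename="/etc/keys/custodian_ed25519",
    )
    try:
        sftp = client.open_sftp()
        sftp.put(EXTRACT_PATH, REMOTE_LANDING)
        sftp.close()
    finally:
        client.close()


if __name__ == "__main__":
    produce_extract([("r-001", 42), ("r-002", 17)])
    transfer_extract()
```

The fragility is the point: each step is a separately configured artefact maintained by hand, which is exactly why this pattern scales poorly for ongoing analytical work that needs reasonably current data.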
The implementation of the Five Safes framework (safe people, safe projects, safe settings, safe data, safe outputs) also varies across secure analytical environments. Some hosting environments are mature and well-trusted; others are still being built. Users working across multiple data sources have had to learn the nuances of multiple different secure environments, which is operationally expensive.
The investment case for shared technical infrastructure - secure analytical environments that can serve multiple custodians and multiple users with consistent governance - is strong, but the cross-agency funding model for this kind of infrastructure has been hard to assemble. Several agencies have built their own environments because waiting for shared infrastructure was not feasible.
AI and the data sharing question
A 2026-specific topic the framework is starting to grapple with is what happens when accredited users want to feed shared data into AI training or fine-tuning workflows. The original legislative drafting predates the current AI context, and the secondary use considerations for data feeding into model weights are not clearly addressed.
Several recent sharing requests have surfaced this question, and the National Data Commissioner’s office has been working through guidance. The likely direction is a tightening of the secondary use restrictions for AI training applications, with explicit consideration required as part of the sharing approval process.
For agencies working on AI applications using shared data, the practical effect is that the use case needs to be clearly articulated and the model retention and deletion arrangements need to be specifically agreed. Some agencies have been engaging external AI specialists working in the public sector AI governance space to help structure these arrangements properly.
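As an illustration of what “specifically agreed” might look like in practice, the sketch below records the AI-related terms of a hypothetical arrangement as structured data that can sit alongside the project record. The field names are invented for this example and are not drawn from the Act or from any Commissioner guidance.

```python
# Hypothetical sketch: capturing the agreed AI-use terms of a sharing
# arrangement as structured data, so the use case, retention horizon, and
# deletion obligations are explicit and auditable. Field names are invented
# for illustration.
import datetime
from dataclasses import dataclass, field


@dataclass
class AIUseTerms:
    project_id: str
    use_case: str                         # the articulated purpose for the model work
    training_permitted: bool              # may shared data enter model weights at all?
    model_retention_until: datetime.date  # agreed retention horizon for trained weights
    deletion_obligations: list[str] = field(default_factory=list)


terms = AIUseTerms(
    project_id="proj-2026-014",
    use_case="Fine-tune a demand-forecasting model on linked service data",
    training_permitted=True,
    model_retention_until=datetime.date(2028, 6, 30),
    deletion_obligations=[
        "Delete fine-tuned weights at the retention date",
        "Certify deletion to the custodian within 30 days",
    ],
)
```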
Where the next reform conversation should focus
A few areas look like the priorities for the next round of attention. The custodian agency resourcing model needs realignment: the current arrangement, in which custodians absorb most of the cost while users capture most of the benefit, is not stable. Standardised technical infrastructure for data sharing and secure analysis would reduce the per-transaction cost of getting work done. And the threshold judgment processes around the public interest test would benefit from more shared guidance and more transparent appeals mechanisms.
The DATA Scheme has been a measured success against modest expectations. Pushing it to the next level requires investment in the operational machinery rather than further legislative refinement.