Flat White

A five-point plan for Aussie AI

When a tech giant is thinking harder about public policy than the government, something has gone wrong

12 April 2026

12:31 PM

Last week, OpenAI released a white paper titled Industrial Policy for the Intelligence Age.

It’s unusually candid about the risks of artificial intelligence: wealth concentration, labour disruption, and the inadequacy of existing policy frameworks.

Cynics might call it regulatory frontrunning. Either way, it is more serious than anything the Albanese government has produced to date.

All Canberra has done is release ‘expectations’ for data centre developers. These focus on the physical infrastructure, not what runs on it: regulating the scaffolding of the AI economy while ignoring the intelligence layer that creates the value.

It is a familiar mistake.

Australia has spent decades exporting raw resources while others captured the upside. We risk doing the same with artificial intelligence, hosting the infrastructure while exporting the gains.

The disruption AI inflicts on Australian workers already dwarfs anything data centre construction delivers them. Entry-level roles are the most exposed, stripping away the first rung of the career ladder.

This is the social contract question at the heart of artificial intelligence: if work disappears, what replaces it?

The government has no expectations for the developers of AI models, or for the businesses using them to siphon creators’ content and cut jobs.

In the first three months of this year Australian tech companies eliminated 4,450 roles, more than five times the total for all of 2025.

The government has no plan for any of this, so here’s one for free:

1. AI tariffs

As labour income falls and unemployment rises, so does the tax base that funds health, pensions, and services. A tax base built around labour income doesn’t work in an economy that is rapidly replacing it.

Tariffs on AI-driven profits, capital gains, and automated labour are the mechanism. Call them tariffs or taxes, the point is the same: if AI erodes the tax base, it must help replace it.


Returns should flow to citizens via a sovereign wealth-type fund, seeded by AI companies and invested in the broader AI economy, so that every Australian holds a stake in the growth they are helping to finance, not just those who can already afford to.

2. Protect workers and share the dividend

Shareholders and workers alike deserve to know when AI is reshaping a workforce. That means consultation before job conditions change, honest assessments before redundancies, and no hiding behind AI-washing.

AI should not become a scapegoat for poor management decisions and pandemic-era over-hiring.

Efficiency gains should translate into shorter working weeks or higher benefits, not just higher margins.

The OpenAI paper proposes piloting a four-day work week at no loss of pay, converting productivity gains into permanent time back or banked leave.

Australia’s enterprise bargaining system is the obvious vehicle.

Benefits should also travel with workers, not employers. Portable entitlements (superannuation already shows the way) mean that healthcare, training, and leave don’t evaporate when a job does.

Companies should disclose the workforce impact of AI adoption. If that gives unions something to actually do for once, so much the better.

3. Universal access

AI can lower the barriers to starting a business if paired with financing and support. That means more small businesses created, more entrepreneurs, more startups.

Communities, schools, and small businesses need access to the tools driving productivity. The market will get there eventually. The question is whether Australia can afford to wait.

4. Pay creators and create a premium for human-generated content

Australian content is being used to train AI systems without compensation.

The News Media Bargaining Code already established that platforms must pay for value created by others. A licensing framework for AI training data is the same principle applied consistently.

If data is the input to intelligence, Australians should have a claim on how it is used.

A certification scheme for human-generated work could command an artisan premium as AI slopification homogenises everything. Think Australian Made, by actual Australian humans.

5. Safety and accountability

The risks are not hypothetical. Systems are already being described as capable of severe disruption if misused.

Whether overstated or not, the trajectory is clear.

Robo-debt showed what happens when automated systems are deployed without proper oversight.

The NDIS shows what happens when a well-intentioned program is built without the controls to prevent waste and rorting at scale.

AI governance needs to learn from both.

The OpenAI paper calls for statutory oversight, incident reporting, and strict guardrails on government use.

Support for displaced workers should scale automatically when unemployment indicators spike, not after a taskforce reports back in eighteen months.

Government procurement should enforce these standards. Not a taskforce, not a working group, not a consultation process. A regulator, like the ACCC but with actual teeth.

Every major technological revolution only delivered shared prosperity because governments stepped in where markets failed. AI is failing in exactly the ways you’d expect: monopoly, externalities, and gains that compound at the top.

Australia dug the iron ore and watched others build the cars. We can’t make the same mistake twice: this time the resource is intelligence, and the window to act is closing.
