The emergence of agentic artificial intelligence (AI) poses a direct challenge to one of the foundational concepts of law: legal agency.
Agentic AI systems are capable of autonomously initiating actions, making decisions, and interacting with third parties. Unlike traditional software tools, agentic AI systems can act with minimal human oversight and may appear, to external parties, to stand in the shoes of their deployers.
Australian law, which has long relied on technology-neutral legal doctrines, must now confront whether and how those doctrines apply when the apparent decision maker, an agentic AI, is neither human nor a legal person (such as a corporation).
In simple terms, an AI agent is software that can do more than respond to a single instruction; it can execute actions to achieve goals with a degree of independence. For example, an AI agent can:

- plan and initiate actions without a fresh human instruction for each step;
- make decisions, such as choosing between options or approving a request; and
- interact with third parties, for example by sending communications or completing transactions.
To understand how agentic AI challenges existing doctrine, it is first necessary to outline the basic principles of agency under Australian common law, particularly as they relate to contract formation.
At common law, an agency relationship arises where one party, the agent, is authorised to act on behalf of another, the principal, so as to affect the principal’s legal relations with third parties. Agency arises from substance and conduct, not formal appointment.
The defining feature of agency is authority. Where authority exists, the law treats the agent's acts as those of the principal. Australian law recognises several forms of authority through which an agent may bind a principal, including actual authority (whether express or implied) and ostensible or apparent authority.
These doctrines promote commercial certainty by allocating the risk of miscommunication to the principal.
Where an agent acts within authority, a contract formed by the agent binds the principal. The test focuses on objective intention. Internal limits on authority, such as policies or instructions, are irrelevant unless communicated to third parties.
This principle has particular significance where authority is exercised through automated or AI‑driven systems.
Agentic AI is not legally challenging simply because it is technically sophisticated. It is challenging because it blurs boundaries that the law has traditionally relied upon. In particular, the concept of agency at common law is well understood, but it is predicated on the assumption that the agent is a legal person who executes actions on behalf of a principal within a scope of authority.
Agentic AI challenges those assumptions, including:
- No legal personality: AI agents can act autonomously, but they have no legal personality. They cannot owe duties, enter into contracts, or be sued in their own name.
- Apparent authority: Third parties interacting with an AI agent may reasonably assume it is a person and is authorised to act. The law of agency has long recognised apparent authority, but it developed in a world where the agent was a human being. AI agents make authority harder to see, verify, or challenge.
- Scale and speed: An AI agent can make thousands of decisions in seconds, so a mistaken or poor decision can cause harm at a scale far beyond a human agent. This amplifies legal risk under contract, tort (such as negligence), consumer protection law, and data privacy law.
- Non-determinism: Many agentic systems do not behave the same way twice, even when given similar inputs. This complicates questions of foreseeability, reasonableness, and control, all of which are central to Australian liability regimes (such as the duty of care in negligence).
How the law will resolve these issues remains to be seen, as case law is limited. However, the real-life examples set out below illustrate some of them.
Although the term "rogue AI" is often sensationalised, there are already concrete examples of AI-driven systems acting in ways that caused legal consequences.
In 2024, an Air Canada customer relied on information about bereavement fares provided by an AI chatbot hosted on the airline's website. The information was incorrect, and Air Canada attempted to deny responsibility by arguing that the chatbot was merely a tool. A Canadian tribunal (the British Columbia Civil Resolution Tribunal) rejected this argument and held the airline responsible for the chatbot's representations.
There have been multiple instances globally where algorithmic or AI-driven trading systems executed unintended trades, causing substantial financial losses within minutes. Although not all were fully agentic by modern standards, newer systems increasingly incorporate autonomous decision making.
Australia’s Robodebt scheme was not agentic AI, but it illustrates the risks of automated decision systems operating at scale. Robodebt was an Australian Government debt recovery scheme that used automated income averaging to issue incorrect debt notices to social security recipients, later exposed by a Royal Commission as a serious failure of legality, governance, and accountability in automated decision making.
There is no dedicated legislation regulating the adoption, use and distribution of AI in Australia. Instead, as of December 2025, the Australian Government has indicated it will rely on existing legislation and regulators to identify and manage harms associated with the adoption of AI, such as privacy, consumer protection, and anti-discrimination laws.
Some of the key liabilities to consider with agentic AI include:

- contractual liability, where an agent forms or purports to form binding agreements;
- consumer protection law, where an agent makes incorrect or misleading representations;
- negligence, where an agent's decisions cause foreseeable harm; and
- privacy contraventions, where an agent mishandles personal information.
Next steps
Agentic AI does not require Australian law to be reinvented, but it does require organisations to put in place coordinated legal, technical and governance measures.
Organisations considering, or already using, agentic AI should take the following steps.
- Define authority: Identify what decisions AI agents are permitted to make, what actions they can initiate, and where human approval is required. Authority should be defined technically, contractually, and operationally, not assumed.
- Manage appearances: Consider how AI agents present themselves to customers, counterparties, and the public. Be transparent so people know when they are dealing with an AI tool and when they are dealing with a person.
- Stress-test existing obligations: Ask how existing legal obligations would apply if an AI agent made a mistake at scale. This includes consumer misrepresentations, privacy contraventions, negligent decisions, or unauthorised contractual liabilities.
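The first step above, defining an AI agent's authority technically rather than assuming it, can be illustrated with a simple policy gate: every action the agent proposes is checked against an explicit, auditable allowlist before it executes, with high-exposure actions escalated to a human. This is a minimal sketch; the action names, monetary thresholds, and `check_authority` function are hypothetical illustrations, not any real framework's API.

```python
from dataclasses import dataclass

ALLOWED = "allowed"
APPROVAL_REQUIRED = "requires_human_approval"
DENIED = "denied"

@dataclass
class ProposedAction:
    kind: str          # hypothetical action type, e.g. "send_quote"
    value_aud: float   # monetary exposure of the proposed action

# Explicit scope of authority for the agent. Anything not listed here
# is outside the agent's authority and is refused outright.
POLICY = {
    "send_quote":    {"max_value_aud": 10_000},
    "sign_contract": {"max_value_aud": 0},  # never autonomous
}

def check_authority(action: ProposedAction) -> str:
    """Gate a proposed action against the defined scope of authority."""
    rule = POLICY.get(action.kind)
    if rule is None:
        return DENIED  # action type not within defined authority
    if action.value_aud > rule["max_value_aud"]:
        return APPROVAL_REQUIRED  # escalate to a human approver
    return ALLOWED
```

The point of the sketch is that authority limits exist as enforced code paths (and an auditable record), not merely as internal instructions to the model, mirroring the legal point that internal, uncommunicated limits on authority may not protect the principal against third parties.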
This article was written by Partner Hayden Delaney, a specialist technology, privacy, and intellectual property lawyer at Thomson Geer.
For further assistance on understanding the implications of AI on your company, contact our Technology team.