Technology and Telecommunications

Agency reimagined: Liability in an era of agentic AI

March 27, 2026

The emergence of agentic artificial intelligence (AI) poses a direct challenge to one of the foundational concepts of law: legal agency.

Agentic AI systems are capable of autonomously initiating actions, making decisions, and interacting with third parties. Unlike traditional software tools, agentic AI systems can act with minimal human oversight and may appear, to external parties, to stand in the shoes of their deployers.

Australian law, which has long relied on technology-neutral legal doctrines, must now confront whether and how those doctrines apply when the apparent decision-maker, an agentic AI, is neither a human being nor a legal person (such as a corporation).

What is an AI agent? A plain English explanation

In simple terms, an AI agent is software that can do more than respond to a single instruction; it can execute actions to achieve goals with a degree of independence. For example (a simplified sketch follows this list), an AI agent can:

  • receive a goal, such as “book travel within budget” or “optimise customer refunds”;
  • decide for itself which steps to take to achieve that goal;
  • interact with other systems, websites, or tools; or
  • adjust its behaviour based on feedback or changing conditions.
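
To make this concrete, the short Python sketch below illustrates the goal-driven loop described above: the software receives a goal and a constraint, explores options itself, and commits to an outcome without step-by-step human instruction. It is a deliberately simplified illustration, not a real agent framework; the names, data, and booking logic are all hypothetical.

```python
"""Illustrative sketch only: a toy 'agent' given the goal
'book travel within budget'. Real agentic systems use language
models and external tool calls; this toy captures only the
control flow -- goal in, autonomous choice out."""

from dataclasses import dataclass


@dataclass
class Flight:
    route: str
    price: float


# Stands in for an external system the agent can query, such as a
# booking website or tool integration (hypothetical data).
CATALOGUE = [
    Flight("BNE-SYD", 420.0),
    Flight("BNE-SYD", 310.0),
    Flight("BNE-SYD", 289.0),
]


def book_travel(budget: float) -> Flight | None:
    """Pick the cheapest flight within budget.

    The agent, not a human, decides which options to consider and
    whether to commit -- the autonomy that raises the agency
    questions discussed in this article.
    """
    best = None
    for option in CATALOGUE:  # the agent explores the options itself
        if option.price <= budget and (best is None or option.price < best.price):
            best = option  # adjusts its choice as it finds cheaper options
    return best  # None means no option within authority: escalate to a human


if __name__ == "__main__":
    print(book_travel(budget=350.0))  # Flight(route='BNE-SYD', price=289.0)
```
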
Common law agency in Australia

To understand how agentic AI challenges existing doctrine, it is first necessary to outline the basic principles of agency under Australian common law, particularly as they relate to contract formation.

  • The concept of agency

At common law, an agency relationship arises where one party, the agent, is authorised to act on behalf of another, the principal, so as to affect the principal’s legal relations with third parties. Agency arises from substance and conduct, not formal appointment.

The defining feature of agency is authority. Where authority exists, the law treats the agent’s acts as those of the principal. Australian law recognises several forms of authority through which an agent may bind a principal.

These doctrines promote commercial certainty by allocating the risk of miscommunication to the principal.

  • Contract formation by agents

Where an agent acts within authority, a contract formed by the agent binds the principal. The test focuses on objective intention. Internal limits on authority, such as policies or instructions, are irrelevant unless communicated to third parties.

This principle has particular significance where authority is exercised through automated or AI‑driven systems.

The rise of the machines: Agentic AI clashes with common law agency

Agentic AI is not legally challenging simply because it is technically sophisticated. It is challenging because it blurs boundaries that the law has traditionally relied upon. In particular, the concept of agency at common law is well understood, but it is predicated on the assumption that the agent is a legal person who executes actions on behalf of a principal within a defined scope of authority.

Agentic AI challenges those assumptions in several ways:

  • Autonomy without legal personality

AI agents can act autonomously, but they have no legal personality. They cannot owe duties, enter into contracts, or be sued in their own name.

  • Apparent authority without a human face

Third parties interacting with an AI agent may reasonably assume that it is a person, or that it is authorised to act. The law of agency has long recognised apparent authority, but it developed in a world where the agent was a human being. AI agents make authority harder to see, verify, or challenge.

  • Scale, speed, and replication

An AI agent can make thousands of decisions in seconds. A mistaken or poor decision can cause harm at a scale far beyond a human agent. This amplifies legal risk under contract, tort (such as negligence), consumer protection laws, and data privacy law.

  • Non-deterministic behaviour

Many agentic systems do not behave the same way twice, even when given similar inputs. This complicates questions of foreseeability, reasonableness, and control, all of which are central to Australian liability regimes (such as duty of care in a negligence context).
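
As a minimal sketch of why this occurs: agents driven by language models typically sample each step from a probability distribution, so identical inputs can yield different outputs on different runs. The action space and probabilities below are hypothetical, chosen only to illustrate the point.

```python
"""Illustrative only: a toy stand-in for a sampling-based agent.
The same customer query can produce different 'decisions' on
different runs, which complicates foreseeability and control."""

import random

# Hypothetical action space and model-assigned likelihoods.
ACTIONS = ["offer_refund", "offer_voucher", "escalate_to_human"]
WEIGHTS = [0.5, 0.3, 0.2]


def decide(query: str) -> str:
    # In a real system the query would condition the model's
    # distribution; this toy ignores it and simply samples.
    return random.choices(ACTIONS, weights=WEIGHTS, k=1)[0]


if __name__ == "__main__":
    # Five runs of the same query may give five different answers.
    print([decide("refund for cancelled flight") for _ in range(5)])
```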

How the law will deal with these issues remains to be seen, as case law is limited. We have, however, set out below some real-life examples that canvass these questions.

When AI agents go rogue: Real-world examples and legal implications

Although the term “rogue AI” is often sensationalised, there are already concrete examples of AI-driven systems acting in ways that caused legal consequences.

  • Air Canada chatbot case

In 2024, an Air Canada customer relied on incorrect information about bereavement fares provided by an AI chatbot on the airline’s website. Air Canada attempted to deny responsibility by arguing that the chatbot was merely a tool. A Canadian tribunal rejected this argument and held the airline responsible for the chatbot’s representations.

  • Automated trading and financial systems

There have been multiple instances globally where algorithmic or AI-driven trading systems executed unintended trades, causing substantial financial losses within minutes. Although not all of these systems were fully agentic by modern standards, newer systems increasingly incorporate autonomous decision making.

  • Robodebt: A cautionary precursor?

Australia’s Robodebt scheme was not agentic AI, but it illustrates the risks of automated decision systems operating at scale. Robodebt was an Australian Government debt recovery scheme that used automated income averaging to issue incorrect debt notices to social security recipients. A Royal Commission later exposed the scheme as a serious failure of legality, governance, and accountability in automated decision making.

Liability and governance considerations

There is no dedicated legislation regulating the adoption, use, and distribution of AI in Australia. Instead, as of December 2025, the Australian Government has indicated it will rely on existing legislation and regulators, such as privacy, consumer protection, and anti-discrimination laws, to identify and manage harms associated with the adoption of AI.

Some of the key liabilities to consider in agentic AI include:

  • Unintended contractual liability: One of the most immediate risks is that an organisation may be bound by contracts entered into by an AI agent that exceed internal expectations or instructions.
  • Consumer law liability: Agentic AI systems that interact directly with customers can create risks under the Australian Consumer Law. Misleading or deceptive conduct can occur regardless of intent. For example, consider an AI agent providing a consumer with incorrect information about pricing, eligibility, refunds, or product features (see the Air Canada example above).
  • Privacy laws: Agentic AI systems often access, use, and potentially disclose, personal information. This creates risks under the Privacy Act 1988 (Cth) and the Australian Privacy Principles. Because agentic systems can initiate actions without human review, there is an increased risk of unauthorised secondary use, excessive data collection, or disclosure beyond the original purpose.
  • Data loss: Because of the data access granted to them, agentic AI systems create risks of unintended data exfiltration, loss, or corruption. There have been known examples of AI agents deleting data or code bases, and of chatbots being manipulated by threat actors into handing over large amounts of data.
  • Negligence: Organisations may face liability for harm caused by an AI agent where they owe a duty of care, the harm was reasonably foreseeable, and it could have been mitigated through better system design, testing, supervision, or safeguards.
  • Directors' duties: From a governance perspective, agentic AI increases directors’ exposure under duties of care and diligence. Australian regulators have consistently emphasised that reliance on technology does not dilute accountability. Directors are expected to understand, at a high level, how agentic systems operate and what risks they introduce.

Next steps

Agentic AI does not require Australian law to be reinvented, but it does require organisations to put in place legal, technical, and governance measures.

Organisations considering, or already using, agentic AI should take the following steps.

  1. Map authority explicitly

Identify what decisions AI agents are permitted to make, what actions they can initiate, and where human approval is required. Authority should be defined technically, contractually, and operationally, not assumed.
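
As a minimal sketch of what “defining authority technically” might look like, the example below encodes an agent’s permitted actions and escalation thresholds as an explicit, machine-checkable policy, rather than leaving authority to be assumed. The action names, structure, and thresholds are hypothetical.

```python
"""Illustrative sketch only: an explicit authority map checked
before an AI agent acts, with a human-in-the-loop escalation path."""

from dataclasses import dataclass


@dataclass
class ProposedAction:
    kind: str        # e.g. "issue_refund", "sign_contract"
    value_aud: float


# Which actions the agent may take alone, and the monetary ceiling
# above which a human must approve (hypothetical values).
AUTHORITY = {
    "issue_refund": {"allowed": True, "human_approval_above_aud": 500.0},
    "sign_contract": {"allowed": False, "human_approval_above_aud": 0.0},
}


def authorise(action: ProposedAction) -> str:
    """Return 'proceed', 'escalate' (human approval), or 'deny'."""
    policy = AUTHORITY.get(action.kind)
    if policy is None or not policy["allowed"]:
        return "deny"  # outside the agent's mapped authority
    if action.value_aud > policy["human_approval_above_aud"]:
        return "escalate"  # human-in-the-loop checkpoint
    return "proceed"


if __name__ == "__main__":
    print(authorise(ProposedAction("issue_refund", 120.0)))   # proceed
    print(authorise(ProposedAction("issue_refund", 900.0)))   # escalate
    print(authorise(ProposedAction("sign_contract", 10.0)))   # deny
```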

  2. Review outward representations

Consider how AI agents present themselves to customers, counterparties, and the public. Be transparent so people know when they are dealing with an AI tool and when they are dealing with a person.

  3. Stress test legal risk scenarios

Ask how existing legal obligations would apply if an AI agent made a mistake at scale. This includes consumer misrepresentations, privacy contraventions, negligent decisions, and unauthorised contractual liabilities.

This article was written by Partner Hayden Delaney, a specialist technology, privacy, and intellectual property lawyer at Thomson Geer.

For further assistance on understanding the implications of AI on your company, contact our Technology team.
