AI in Procurement: The Legal Minefield Boards Cannot Ignore
- Dragan Gasic
The race to automate procurement and contract negotiation has turned from a jog to a sprint. From automated tender assessments to algorithmically generated contract terms, AI now sits inside the commercial engine room of many organisations. It promises speed, uniformity, and efficiency. But behind the glow of productivity lies a set of legal risks that many boards are not adequately confronting.
Confronting those risks is not optional. Directors cannot delegate accountability to the machine. As AI embeds itself deeper into high-value procurement processes, decisions once made by humans - judgment calls, risk balancing acts - are now being made by code. But legal liability remains firmly human.
You Can’t Outsource Legal Responsibility to a Machine
Let’s dispel the fantasy: automation doesn’t dilute accountability. It concentrates it. When AI shortlists vendors, evaluates compliance, or recommends contract awards, those outcomes are attributable to the organisation and ultimately, to its directors. Faulty data? Embedded bias? Skewed outputs? These are not IT issues. They are governance failures.
Regulators are watching. Boards are expected to understand how material decisions are made, especially those with contractual, financial, or stakeholder implications. The Corporations Act doesn’t stop applying just because the decision came from a model instead of a manager.
Misleading Conduct Has No Exemption for Algorithms
AI doesn’t intend to mislead, but that’s irrelevant. If an AI-generated response overstates capabilities, omits material caveats, or creates a misleading impression during procurement, it can give rise to a breach of the Australian Consumer Law. Intention is not required. Impact is everything.
Liability for misleading or deceptive conduct arises from the effect on the recipient. Directors must ensure that AI-generated communications - contract clauses, pricing structures, or automated supplier responses - are legally reviewed and validated before being relied upon. A system that auto-generates a mistake doesn’t absolve the company of liability. It magnifies the risk.
Standard Form Contracts: One-Sided Means Exposed
AI can rapidly produce standard form contracts. But if it reproduces outdated templates riddled with unfair terms - terms that significantly favour one party, restrict remedies, or impose one-sided indemnities - the company may find itself exposed under the expanded unfair contract terms regime.
The risk isn’t hypothetical. Clauses that once went unnoticed may now attract substantial penalties. Boards must ensure legal teams not only review the enforceability of AI-generated contracts but also audit the AI’s training data and configuration. What is it replicating? From where? With what legal oversight?
Confidential Data Is Not a Free Resource
AI systems often run on historical procurement data - commercially sensitive material including pricing models and negotiation history. If contracts with vendors don’t lock down data use, ownership, and post-engagement restrictions, that information may be retained, reused, or exposed, especially in cloud-based systems where models are trained across multiple clients.
Boards must insist on clear contractual protections: no training on client data, robust information security terms, and enforceable confidentiality clauses. Don’t assume vendors have this covered. Assume they don’t.
Discrimination by Proxy: Ethical AI Is a Governance Issue
AI systems used in procurement can inadvertently entrench bias, excluding certain suppliers by geography, scale, ownership, or history. For organisations with ESG, diversity, or public procurement obligations, that’s a reputational and legal landmine.
Boards must audit AI outcomes. What is the system optimising for? Are fairness metrics in place? Are anomalies being escalated and investigated? Ethical oversight of AI is not performative. It is essential risk management. Failure to act invites not just public scrutiny but legal exposure under anti-discrimination and social procurement laws.
AI Governance: A Boardroom Responsibility
The use of AI in procurement is not a back-office innovation. It is a governance issue. Directors must demand clear answers:
· Where is AI being deployed?
· What decisions is it influencing?
· What controls are in place?
· How are errors detected and corrected?
Policies must define the scope of AI use, the review mechanisms, and escalation procedures. Risk reporting should include AI-specific metrics. And directors must not accept technical jargon as a substitute for assurance.
As regulators sharpen their focus and litigation follows automation errors, AI governance will become a central plank of corporate accountability. Directors must lead - not follow - the thinking here.
ABOUT THE AUTHOR
Dragan Gasic is a Special Counsel at BlackBay Lawyers with 25 years of experience in the legal field, including time as a barrister. He specialises in complex commercial disputes, including shareholder and partnership disputes, director duties, and personal and corporate insolvency. His broad expertise covers a range of sectors, such as joint ventures, franchising, commercial leases, commissions of inquiry, corporate crime, employment law, defamation, copyright, and mortgage enforcement.