AI-Driven Transactions and the Indian Law of Contract and Agency

Artificial intelligence (“AI”) has become a routine feature of digital life, particularly among urban users with internet access and basic technological familiarity, where it is used for functions ranging from content generation and automation to the management of smart devices. In the financial technology sector, AI is increasingly deployed to enable automation, predictive analysis, and personalised user experiences at scale, producing products designed to minimise friction and reduce human intervention in routine transactions. One significant development in this trajectory is the emergence of “agentic commerce”, in which AI-powered digital agents act on behalf of users to initiate, authorise, and complete commercial or financial transactions. Unlike conventional AI tools that are limited to providing recommendations or reminders, agentic systems are designed to execute transactions autonomously, such as placing orders, making payments, or subscribing to services, based on learned user preferences and behavioural patterns and without contemporaneous human involvement. The introduction of products such as “Mastercard Agent Pay”[1] and “Visa Intelligent Commerce”[2] by major payment networks indicates that the delegation of transactional decision-making to AI agents is becoming a commercial reality rather than a theoretical possibility. This shift raises foundational questions about how such transactions are to be understood within existing legal frameworks.

Contract Formation under the Indian Contract Act, 1872

Against this technological backdrop, the first legal inquiry is whether transactions executed through autonomous AI systems satisfy the requirements of a valid contract under Indian law. A valid contract requires an offer and acceptance, lawful consideration, a lawful object, and the free consent of parties competent to contract, as set out in Sections 2 and 10 of the Indian Contract Act, 1872 (“ICA”).[3]

Competency to contract is governed by Sections 11 and 12 of the ICA, which require that a contracting party must have attained the age of majority, be of sound mind, and not be disqualified from contracting by law. These provisions are intended to ensure that contractual obligations arise only where a legally capable human party has assented to the transaction.

In the context of agentic commerce, however, transactions may be initiated and completed by AI systems without any real-time human action. This raises questions as to whether the statutory requirements of consent and competency are satisfied in their traditional sense, particularly where the human user neither directly initiates nor contemporaneously approves the transaction.

Intermediation and the Relevance of Agency Law

Where transactions are carried out through an intermediary, Indian law requires an examination of whether such acts can be attributed to a principal through the law of agency. Agency in India is governed by Sections 182 to 238 of the ICA. Section 182 defines an “agent” as a person employed to do any act for another or to represent another in dealings with third persons.

Section 184 provides that, as between the principal and third persons, any person may become an agent, although a person who has not attained majority or is not of sound mind cannot be responsible to the principal. While this provision addresses the agent’s responsibility towards the principal, it does not, by itself, determine the validity of acts performed by the agent vis-à-vis third parties. Accordingly, the absence of legal personality or contractual capacity in an AI system does not automatically render every transaction executed through it void. Instead, the legal inquiry turns on whether the AI system acted within the authority conferred by the human principal under the general principles of agency.

Authority, Attribution, and Enforceability

Having established the relevance of agency law, the critical issue becomes the scope and limits of authority exercised by such agents. Indian contract law recognises that an agent’s authority may be express or implied, and extends to all acts that are necessary or incidental to the purpose for which such authority is conferred, as provided under Sections 186 and 188 of the ICA.

In the context of agentic commerce, issues of authority arise where AI systems execute transactions autonomously on the basis of learned preferences and behavioural patterns rather than express, transaction-specific instructions. Where such autonomy operates without clearly articulated transactional boundaries or real-time user confirmation, questions may arise as to whether the transaction falls within the scope of authority that can be attributed to the human principal under general principles of agency law.

Section 237 of the ICA further provides that where a principal, by words or conduct, induces a third person to believe that an agent has authority, the principal is bound by the acts of the agent even if the agent has acted without actual authority. This provision assumes significance in scenarios where merchants or service providers rely on AI-initiated transactions on the basis of the user’s prior conduct or system configuration, rather than on contemporaneous confirmation by the user.

Consensus ad Idem and Automated Transactions

Questions of authority are closely linked to the doctrine of consent, which lies at the heart of contract formation. Indian contract law is premised on the doctrine of consensus ad idem, requiring that parties agree upon the same thing in the same sense.

In agentic commerce, transactions may be executed without any active or contemporaneous input from the human principal, raising concerns that such a meeting of minds may not have occurred in the traditional sense. However, Indian law does not require that consent be expressed contemporaneously with performance in every case.

Consent may be inferred from prior conduct or standing instructions, or supplied through subsequent ratification under Sections 196 to 200 of the ICA. The legal difficulty therefore lies not in the complete absence of consent, but in evidencing whether the AI-initiated act can legitimately be traced back to the intention of the human principal, particularly where the transaction is disputed after execution.

Errors, Fraud, and Allocation of Liability

Even where issues of consent and authority can be addressed, practical complications arise when AI-driven transactions produce errors or unintended outcomes, whether because of software bugs, algorithmic limitations, or exposure to misleading advertisements, fake orders, or fraudulent schemes.

In such situations, determining liability becomes complex, particularly where the consumer had no active role in initiating or approving the transaction. Traditional doctrines such as caveat emptor may be ill-suited to fully address such scenarios, while attributing liability to developers or service providers raises questions of fault, foreseeability, and control over autonomous systems.

At present, Indian contract law does not provide explicit statutory rules governing liability for autonomous AI-initiated transactions, necessitating reliance on general principles of agency, negligence, and contractual risk allocation.

Electronic Contracts and Statutory Recognition

These issues must also be considered against the statutory recognition accorded to electronic contracts under Indian law. Section 10A of the Information Technology Act, 2000 recognises that contracts formed through electronic means shall not be deemed unenforceable solely on that ground.[4] While this provision validates electronic contracts, it does not specifically address contracts concluded through autonomous AI systems.

Accordingly, although agentic commerce transactions may satisfy the formal validity requirements applicable to electronic contracts, substantive questions relating to authority, consent, and attribution remain unresolved.[5]

Risk-Mitigation and Responsible Design

In the absence of a specific statutory framework governing agentic commerce, it becomes important for innovators and service providers to incorporate safeguards that mitigate legal uncertainty.

Such safeguards may include real-time notifications or confirmation mechanisms, the maintenance of comprehensive audit trails, post-transaction grace periods allowing reversal of unintended transactions, and explicit restrictions on categories of transactions that AI agents are not permitted to undertake.
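By way of illustration only, the following is a minimal TypeScript sketch of how such safeguards might be encoded as a pre-execution policy check on an AI purchasing agent. All names, structures, and thresholds here (AgentMandate, checkTransaction, and so on) are hypothetical assumptions for exposition; they do not reflect any payment network's actual API or any statutorily prescribed design.

```typescript
// Hypothetical sketch: a pre-execution policy gate for an AI purchasing agent.
// All names and thresholds are illustrative assumptions, not any vendor's API.

interface AgentMandate {
  maxAmountPerTransaction: number; // express spending limit conferred by the user
  blockedCategories: Set<string>;  // categories the agent may never transact in
  confirmationThreshold: number;   // above this amount, contemporaneous user approval is required
  reversalWindowHours: number;     // post-transaction grace period for user-initiated reversal
}

interface ProposedTransaction {
  merchant: string;
  category: string;
  amount: number;
  timestamp: Date;
}

type Decision = "execute" | "require_user_confirmation" | "reject";

// Append-only audit trail supporting later attribution and dispute resolution.
const auditTrail: { tx: ProposedTransaction; decision: Decision }[] = [];

function checkTransaction(mandate: AgentMandate, tx: ProposedTransaction): Decision {
  let decision: Decision;
  if (mandate.blockedCategories.has(tx.category)) {
    decision = "reject";                    // outside the authority conferred
  } else if (tx.amount > mandate.maxAmountPerTransaction) {
    decision = "reject";                    // exceeds the express spending limit
  } else if (tx.amount > mandate.confirmationThreshold) {
    decision = "require_user_confirmation"; // real-time confirmation safeguard
  } else {
    decision = "execute";                   // within pre-authorised boundaries
  }
  auditTrail.push({ tx, decision });        // record every decision, executed or not
  return decision;
}

// Example: a routine purchase proceeds; a larger one triggers confirmation.
const mandate: AgentMandate = {
  maxAmountPerTransaction: 10000,
  blockedCategories: new Set(["securities", "gambling"]),
  confirmationThreshold: 2000,
  reversalWindowHours: 24,
};

console.log(checkTransaction(mandate, {
  merchant: "LocalGrocer", category: "groceries", amount: 850, timestamp: new Date(),
})); // "execute"

console.log(checkTransaction(mandate, {
  merchant: "MegaElectro", category: "electronics", amount: 5400, timestamp: new Date(),
})); // "require_user_confirmation"
```

The design intuition is that the mandate object functions as a machine-readable record of the express authority conferred by the principal, while the audit trail preserves the evidence needed to trace a disputed transaction back to (or outside) that authority.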

While these measures do not resolve underlying doctrinal issues, they may reduce disputes and enhance user confidence in AI-driven transactional systems.

Conclusion

Agentic commerce represents a significant evolution in digital transactions and challenges traditional notions of consent, agency, and liability under Indian contract law. Although existing statutory principles under the Indian Contract Act, 1872 and the Information Technology Act, 2000 provide a foundational framework, they were not designed with autonomous AI systems in mind.

As AI-driven transactions become more prevalent, careful contractual structuring and responsible system design will remain essential until greater legislative or judicial clarity emerges.

[1] Mastercard, “Mastercard unveils Agent Pay, pioneering agentic payments technology to power commerce in the age of AI”, press release (April 29, 2025), available at https://www.mastercard.com/us/en/news-and-trends/press/2025/april/mastercard-unveils-agent-pay-pioneering-agentic-payments-technology-to-power-commerce-in-the-age-of-ai.html.

[2] Visa, “Intelligent Commerce”, corporate website, available at https://corporate.visa.com/en/products/intelligent-commerce.html.

[3] Indian Contract Act, 1872, ss. 2, 10.

[4] Information Technology Act, 2000, s. 10A.

[5] Id.