Target says customers are liable for purchases by its AI assistant

April 8, 2026

Target has updated its terms to state that customers are responsible for purchases made by its upcoming AI shopping assistant, even if the system makes mistakes. Under the policy, any order placed by the Gemini-powered assistant will be treated as authorised by the user.

The change was introduced on March 22 as part of the retailer’s rollout of an “agentic” shopping tool designed to recommend products and complete purchases. The system is built to reduce browsing and automate checkout, but Target acknowledges it may not always act as intended.

In its terms and conditions, the company states that it does not guarantee the AI agent “will act exactly as you intend in all circumstances.” Customers are expected to review all activity carried out by the assistant, including completed transactions.

That places the burden of oversight on users, even as the tool is designed to act independently. If the assistant orders the wrong items or misinterprets a request, the transaction still stands. A Target spokesperson said purchases made through the system will remain eligible for returns or exchanges under standard policy.

Target is not alone in pushing AI deeper into shopping workflows. Amazon and Walmart have launched similar assistants, Rufus and Sparky, that help users search, compare and buy products. Walmart likewise warns that its system can make errors, produce omissions or misunderstand inputs, reinforcing the need for users to verify purchases.

At the same time, retailers are expanding how AI is used beyond recommendations. Walmart has secured patents for systems that can automatically adjust prices based on demand forecasts and consumer behaviour, pointing to a broader shift toward AI-driven commerce operations.

The rollout of these tools introduces a new layer of responsibility in digital shopping. As AI agents move from suggesting products to completing transactions, companies are defining those actions as user-authorised by default, even when errors occur.


Jim Love

Jim is an author and podcast host with over 40 years in technology.
