The latest episode of Reality Checks with Brian Reitz features Michael Nevski, Director of Global Insights at VISA, covering the adoption of AI shopping agents, guardrails, data sharing, and transaction disputes.
“Consumers are craving for efficiencies, and I think agentic AI will provide those efficiencies and support for consumers’ needs and desires,” said Nevski.
To uncover the reality behind AI shopping agent adoption, Michael Nevski and the YouGov team collaborated on a five-question survey of 1,000 Americans.
Gen Z & Millennials likely to lead AI agent shopping adoption
Despite the potential for shopping agents to secure better prices, most US consumers remain hesitant about outsourcing their purchases to AI, with 58% saying they are either not very likely or not at all likely to do so.
On the flip side, 42% of Americans said they would be very or somewhat likely to let AI make purchases on their behalf. And given how rapidly consumers have adopted generative AI, attitudes towards AI shopping agents could shift quickly over the next several years.
Diving into demographic splits, the data reveals a significant generational divide in attitudes towards AI-powered shopping assistants.
While 48% of Gen Z and Millennials are either very or somewhat likely to let an AI make purchases on their behalf, only 33% of Baby Boomers share that sentiment.
“Knowing that Gen Zers are digital natives and younger Millennials as well, they’re very prone to adopt a new technology for efficiencies and personalization,” said Nevski.
Men and women are equally likely to say they are “very likely” (12%) to use such services, though women are slightly more hesitant overall: 38% are “not at all likely” to use AI shopping assistants, compared to 32% of men.
Majorities of likely AI agent users would use safeguards
YouGov’s survey with VISA’s Michael Nevski reveals a strong preference for human oversight of AI-managed finances: 67% of likely AI agent adopters say they would use a safeguard that allows human review of transactions above a certain threshold.
Men who expressed a likelihood to adopt agentic AI show a higher inclination towards safeguards across most categories, most notably real-time spending notifications (65% vs. 51% of female adopters).
The desire for a "financial panic button" safeguard to halt AI activity is relatively consistent among likely AI agent adopters regardless of gender (63% for men, 59% for women), while mood-based spending limits show similar consistency but lower overall uptake.
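For readers curious how these safeguards might fit together in practice, here is a minimal, purely illustrative sketch in Python. None of these class or method names come from Visa, YouGov, or any real agent product; the thresholds and behaviors are assumptions drawn only from the survey's descriptions of human review above a spending limit, real-time notifications, and a "financial panic button."

```python
# Hypothetical illustration of the surveyed safeguards -- not a real
# Visa or AI-agent API. All names and defaults are invented.
from dataclasses import dataclass, field

@dataclass
class AgentGuardrails:
    review_threshold: float = 100.0        # human review above this amount
    halted: bool = False                   # "financial panic button" state
    notifications: list = field(default_factory=list)

    def panic(self) -> None:
        """Financial panic button: immediately halt all agent activity."""
        self.halted = True

    def check_purchase(self, amount: float) -> str:
        if self.halted:
            return "blocked"
        # Real-time spending notification for every attempted purchase.
        self.notifications.append(f"Agent attempted ${amount:.2f} purchase")
        if amount > self.review_threshold:
            return "needs_human_review"    # escalate to the human owner
        return "approved"

guard = AgentGuardrails(review_threshold=25.0)
print(guard.check_purchase(10.0))   # approved: under the threshold
print(guard.check_purchase(80.0))   # needs_human_review: over the threshold
guard.panic()
print(guard.check_purchase(5.0))    # blocked: panic button halts everything
```

The design mirrors the survey's framing: the agent never spends freely above a user-set limit, and the panic button overrides everything else.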
Most AI agent adopters would not authorize spend over $100
81% of AI agent adopters would not be comfortable with the technology spending more than $100 without authorization, and 48% would not allow spending over $25.
“That means people still need to develop trust in the technology,” said Nevski. “We are still dealing with innovators and early adopters.”
While expected adoption rates and desired safeguards are relatively consistent between men and women, the survey uncovers a significant gender divide in spending allowances among likely AI agent adopters.
While men and women show similar comfort levels for mid-range spending ($25-100), female adopters are nearly twice as likely to prefer a sub-$25 limit compared to men (26% vs 14%).
Conversely, male adopters show a much higher comfort with larger AI-driven purchases, with 25% willing to allow transactions over $100, compared to just 12% of women.
Men much more likely to share location data with AI agents
Most Americans who say they are likely to utilize AI for shopping are willing to share data with AI agents. In fact, just 16% said they would not share any data, while 21% said they would share only transaction data.
Meanwhile, half of potential adopters (50%) are open to sharing past purchases and spending patterns to improve an AI agent's ability to support their shopping and financial decision-making.
Location data sharing shows a notable gender disparity, with 45% of male adopters willing to share compared to 28% of surveyed female adopters.
Who’s responsible for an AI agent’s purchase mistake?
Finally, the survey data reveals a complex landscape of responsibility in AI-driven purchases, with no clear consensus on who should bear primary accountability when an agent makes an unwanted purchase.
Among Americans who are likely to adopt AI agents, the company offering the AI service emerges as the top choice for primary responsibility, with 33% of likely agentic AI users ranking it first, and 63% ranking it first or second.
The AI developer/company ranks third for primary responsibility among likely agentic AI users (22%), though when considering first or second most responsible for fixing an unwanted AI agent’s shopping purchase, the AI developer/company jumps to 48%.
Users themselves are the second most likely to be held primarily responsible (28%), suggesting a recognition of personal accountability in AI interactions. That view is divisive, however: 36% said users should be the least responsible for an AI agent's incorrect purchase.
Payment networks and regulatory bodies are least likely to be considered primarily responsible, indicating that consumers view them as secondary players in AI-related transactions.