The Algorithm at the Checkout

As artificial intelligence quietly takes over the checkout aisle — and the fraud desk — the payments industry is being remade for better and for worse.

In a Brooklyn apartment last week, a customer asked her chatbot to reorder her favorite lipstick from Glossier. The bot did not open a browser. It did not ask for a card number. It did not even pause to log in. It simply confirmed the order, charged a tokenized credential it had been issued months earlier, and moved on. From request to receipt: eleven seconds.

A year ago, that exchange would have been a curiosity. Today, it is becoming infrastructure.

An invisible checkout, built in twelve months

The world’s two largest card networks — along with a growing list of technology firms — have spent the last year racing to build the plumbing for a checkout no human touches. In April 2025, Mastercard introduced Agent Pay, a framework of so-called “Agentic Tokens” — credentials issued not to people but to software agents, with programmable spend limits, allowlists of merchants, and category restrictions baked into the credential itself. A few weeks later, Visa unveiled Visa Intelligent Commerce, with similar tokenized credentials and integrations with Anthropic, OpenAI, and Microsoft. By October, Visa and more than ten partners had published the Trusted Agent Protocol, an open framework designed, in the company’s words, to enable safe agent-driven checkout by helping merchants tell legitimate AI shoppers from malicious bots.
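The constraint model described above — spend limits, merchant allowlists, and category restrictions carried inside the credential itself — can be sketched in a few lines. This is a toy illustration of the idea, not Mastercard's or Visa's actual token format; every field and name here is invented.

```python
from dataclasses import dataclass, field

@dataclass
class AgentToken:
    """Hypothetical scoped credential issued to a software agent."""
    token_id: str
    spend_limit_cents: int                      # hard cap on total spend
    merchant_allowlist: set = field(default_factory=set)
    category_allowlist: set = field(default_factory=set)
    spent_cents: int = 0

    def authorize(self, merchant: str, category: str, amount_cents: int) -> bool:
        """Approve a charge only if it satisfies every constraint baked into the token."""
        if merchant not in self.merchant_allowlist:
            return False
        if category not in self.category_allowlist:
            return False
        if self.spent_cents + amount_cents > self.spend_limit_cents:
            return False
        self.spent_cents += amount_cents
        return True

token = AgentToken("tok_demo", 10_000, {"glossier.com"}, {"cosmetics"})
print(token.authorize("glossier.com", "cosmetics", 2_400))  # True
print(token.authorize("example.com", "electronics", 500))   # False: merchant not allowlisted
```

The point of putting the rules in the credential, rather than in the agent, is that a compromised or misbehaving agent still cannot spend outside the box its issuer drew.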

The most consequential move came from a younger player. In late 2025, Stripe and OpenAI jointly released the Agentic Commerce Protocol — an open standard that allows AI agents to access a merchant’s catalog, pricing, and checkout through standardized APIs. Any merchant already on Stripe can switch on agentic payments, the company said, in as little as one line of code. By April 2026, the protocol had cycled through several major revisions and gained support from Microsoft Copilot Checkout. Etsy sellers were the first wave; more than a million Shopify storefronts — Glossier, SKIMS, Spanx, Vuori — followed.
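The shape of the exchange — an agent browsing a standardized catalog, opening a checkout session, and completing it with a credential — can be shown with a toy in-memory merchant. The call names and payloads below are invented for illustration; the real Agentic Commerce Protocol defines its own API schemas.

```python
# Illustrative shape of an agent-to-merchant checkout exchange.
# All endpoint names and payloads here are hypothetical.

def agent_checkout(merchant_api, query, credential):
    """An agent finds an item in a standardized catalog and checks out."""
    items = merchant_api["catalog"](query)                  # 1. browse the catalog
    item = items[0]                                         # 2. pick a match
    session = merchant_api["create_session"](item["sku"])   # 3. open a checkout session
    return merchant_api["complete"](session, credential)    # 4. pay with the issued token

# Toy merchant implementing the three calls in memory.
CATALOG = [{"sku": "lip-01", "name": "lipstick", "price_cents": 2_400}]
merchant = {
    "catalog": lambda q: [i for i in CATALOG if q in i["name"]],
    "create_session": lambda sku: {"sku": sku, "state": "open"},
    "complete": lambda s, cred: {**s, "state": "paid", "paid_with": cred},
}

receipt = agent_checkout(merchant, "lipstick", "tok_agent_123")
print(receipt["state"])  # paid
```

Because the three calls are standardized, the same agent logic works against any merchant that implements them — which is what lets a platform claim that switching on agentic payments takes almost no integration work.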

Visa now predicts millions of consumers will complete purchases through AI agents during the 2026 holiday season. Asia-Pacific and European pilots are scheduled for the second half of the year, with Latin America and the Caribbean joining shortly after. The company’s December note to merchants, on the prospect of mainstream AI checkout, was not hedged. The arrival, it said, was imminent.

* * *

The same engine, weaponized

The technology rewriting the way money moves is also rewriting the way it disappears. According to industry data published this spring, AI-enabled fraud surged roughly 1,210 percent between January and December 2025, compared with a 195 percent rise for fraud overall. Deepfakes — synthetic audio and video generated by readily available models — now account for an estimated eleven percent of global fraudulent activity, up from a rounding error two years ago. Deloitte’s Center for Financial Services projects that generative-AI-enabled fraud losses in the United States alone could reach forty billion dollars by 2027.

The most familiar face of this wave is the so-called authorized push payment scam, in which a victim is tricked into sending money to a fraudulent account. In the United Kingdom, where reporting is more granular, losses from these scams climbed twelve percent year-over-year to £257.5 million in the first half of 2025, with investment-related cons jumping fifty-five percent. Industry researchers expect the global total across six major markets — the United States, the United Kingdom, India, Brazil, Australia, and Saudi Arabia — to approach seven billion dollars by year’s end. In an aggressive scenario, where defenses fail to keep pace, U.S. losses alone could reach $18.2 billion by 2028.

What is new is not the scam itself but the production line behind it. Criminal networks are using generative models to launch thousands of synthetic attacks simultaneously, fabricating personas complete with voices, video, and supporting documentation. Sixty-one percent of financial leaders surveyed this winter named synthetic identity fraud their top concern.

“In 2026, fraud is scaling programmatically. The fakes are getting eerily convincing.”   — Sumsub, 2026 Fraud Trends report

Then there are the deepfakes themselves. In one case detailed by Mastercard last quarter, a corporate finance team in Asia was duped into wiring funds after joining a video meeting with what appeared to be the firm’s chief financial officer and several colleagues. None of them were real. Average per-incident losses from such schemes have climbed to roughly $450,000, according to industry surveys, and Experian’s 2026 fraud forecast singled out deepfake job candidates and “polymorphic agentic agents” — adaptive AI programs that change their behavior to evade controls — as among the year’s top three threats.

* * *

An arms race on both sides

The good news for consumers, banks, and merchants is that the same models making the threat possible are also being conscripted into the defense. The bad news is that nobody is sure yet who is winning.

JPMorgan Chase, which now runs more than five hundred AI applications in production, has built real-time fraud detection systems that have cut anti-money-laundering false positives by ninety-five percent and prevented an estimated $1.5 billion in losses, according to figures the bank released in March. Last fall, JPMorgan Payments launched the Account Confidence Score, a machine-learning tool that estimates fraud risk before a payment is initiated, giving treasurers and corporate clients a chance to halt suspicious transfers in flight.
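The idea of scoring a payee before money moves can be illustrated with a toy logistic scorer. The features and weights below are entirely invented — JPMorgan has not disclosed how the Account Confidence Score works — but the sketch shows the general pattern: combine signals about the destination account into a single confidence number a treasurer can act on before release.

```python
import math

# Toy pre-payment risk scorer. Features and weights are invented
# for illustration; they are not JPMorgan's actual model.

def confidence_score(features):
    """Return a 0-1 score; higher means more confidence the payee is legitimate."""
    weights = {
        "account_age_days": 0.004,        # older destination accounts score higher
        "prior_payments_to_payee": 0.6,   # an existing relationship helps
        "name_mismatch": -2.5,            # payee name doesn't match the account
        "new_payee": -1.0,                # first-ever payment to this account
    }
    z = sum(weights[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))         # squash to a probability-like score

risky = confidence_score({"account_age_days": 3, "prior_payments_to_payee": 0,
                          "name_mismatch": 1, "new_payee": 1})
trusted = confidence_score({"account_age_days": 900, "prior_payments_to_payee": 5,
                            "name_mismatch": 0, "new_payee": 0})
print(risky < 0.5 < trusted)  # True
```

In production such a score would come from a model trained on historical fraud outcomes, but the operational use is the same: a low score pauses the transfer while it is still in flight.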

Mastercard, which began deploying generative AI to its decisioning network in 2024, says its Decision Intelligence Pro model has improved fraud detection rates by an average of twenty percent — and as much as three hundred percent for certain use cases — while cutting false declines by more than eighty-five percent. Industry research released this year found that forty-two percent of card issuers and twenty-six percent of acquirers have each saved more than five million dollars in averted fraud over the last two years thanks to AI-powered tools. Ninety percent of payment leaders surveyed said they expected losses to climb sharply if they did not lean harder into the technology.

Stripe has gone further still. In May 2025 it unveiled what it calls a Payments Foundation Model — a transformer-based system trained on tens of billions of transactions, the first of its kind in the industry. The model uses self-supervised learning to recognize patterns in payment behavior the way large language models recognize patterns in text. The company says one large enterprise saw a sixty-four percent overnight jump in detection of so-called card-testing fraud — a small-purchase reconnaissance technique used to validate stolen cards — when the new model went live. Allowing the model to decide which checkouts deserve a step-up authentication challenge has cut overall fraud on Stripe Checkout by thirty-two percent without harming conversion, the company says.
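Card-testing has a distinctive signature — one source probing many different cards with tiny charges — and even a crude heuristic makes the pattern visible. The thresholds and feature choices below are illustrative only; they bear no relation to Stripe's foundation model, which learns such patterns from transaction data rather than hand-written rules.

```python
from collections import defaultdict

# Crude heuristic flag for card-testing: one source attempting tiny
# "probe" charges across many distinct cards. Thresholds are invented.

def flag_card_testing(attempts, max_cards=5, max_amount_cents=200):
    """attempts: list of (source_ip, card_fingerprint, amount_cents) tuples."""
    cards_by_source = defaultdict(set)
    for ip, card, amount in attempts:
        if amount <= max_amount_cents:           # only count small probe charges
            cards_by_source[ip].add(card)
    return {ip for ip, cards in cards_by_source.items() if len(cards) > max_cards}

attempts = [("10.0.0.9", f"card_{i}", 100) for i in range(8)]  # 8 cards, $1 each
attempts += [("192.0.2.1", "card_a", 4_500)]                   # one ordinary purchase
print(flag_card_testing(attempts))  # {'10.0.0.9'}
```

The gap between this rule and a learned model is exactly the article's point: a transformer trained on billions of transactions can catch the attacker who rotates IPs, varies amounts, and paces the probes to slip under any fixed threshold.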

* * *

Algorithm versus algorithm

The most remarkable shift in 2026 is that the cat-and-mouse game has effectively become AI-versus-AI. One in six attempted sign-ups across AI services running on Stripe is now believed to come from a bad actor, according to data the company published this spring; free-trial abuse has more than doubled in six months.


The targets, too, are mutating. Where yesterday’s fraudster wanted a card number, today’s wants a token, an inference credit, or an AI-issued credential to be misused at scale. Stripe has begun warning customers about a new species of attack in which fraudsters generate millions of synthetic identities solely to drain sign-up promotions and run up free-trial bills. Whether that counts as fraud, theft, or a baroque form of arbitrage is, for now, a question the law has not caught up with.

* * *

What it means for the rest of us

For everyday consumers, the practical experience of paying for things in 2026 has begun to feel less like a transaction and more like a conversation. Type a request into ChatGPT or Microsoft Copilot, confirm the price, and the goods are on their way. The card networks have devoted considerable effort to making sure the underlying tokens are scoped narrowly enough that an agent cannot, for instance, decide on its own to renew a magazine subscription or upgrade a hotel room.

For merchants, the calculus is more complicated. The agentic shift puts a layer of software — Anthropic’s Claude, OpenAI’s ChatGPT, Microsoft Copilot, or any of a dozen smaller agents — between the brand and the buyer. Whoever controls the agent controls the recommendation. That is a familiar problem from the early days of search and the long middle age of social commerce, but it is being recast for an environment in which the customer’s first impression of a product is no longer a Google result but an AI-generated answer. The Trusted Agent Protocol and the Agentic Commerce Protocol are, among other things, attempts to make sure merchants can identify which agent is knocking at their checkout door, and on whose behalf.
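One simple way a merchant could tell a registered agent from an anonymous bot is a signed request verified against a key issued at registration. This sketch uses a shared-secret HMAC purely to show the idea; the Trusted Agent Protocol specifies its own, cryptographically richer mechanism, and every name below is hypothetical.

```python
import hashlib
import hmac

# Hypothetical agent registry: keys issued when an agent operator enrolls.
REGISTERED_AGENTS = {"agent_copilot": b"secret-key-issued-at-registration"}

def sign_request(agent_id, body, key):
    """Agent side: sign the request body with the key issued at registration."""
    return hmac.new(key, f"{agent_id}:{body}".encode(), hashlib.sha256).hexdigest()

def verify_agent(agent_id, body, signature):
    """Merchant side: accept the checkout only if the signature checks out."""
    key = REGISTERED_AGENTS.get(agent_id)
    if key is None:
        return False  # unknown agent: treat as an untrusted bot
    expected = sign_request(agent_id, body, key)
    return hmac.compare_digest(expected, signature)

sig = sign_request("agent_copilot", "checkout:lip-01",
                   REGISTERED_AGENTS["agent_copilot"])
print(verify_agent("agent_copilot", "checkout:lip-01", sig))   # True
print(verify_agent("agent_unknown", "checkout:lip-01", sig))   # False
```

Tying the signature to the request body also stops a replayed or tampered checkout: change the item or the amount and the signature no longer verifies.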

For banks and processors, the message of the past year is that scale, data, and speed are about to matter more than ever. The same generative models that allow a fraudster in São Paulo to spin up a thousand convincing identities also allow a fraud team in Manhattan to score a thousand transactions a second. Whoever has more transaction history to learn from, more compute to throw at the problem, and tighter feedback loops with their merchants will have an edge that is, by definition, compounding.

* * *

Whose algorithm do you trust?

The phrase often used in industry conferences is “the invisible checkout” — the idea that the act of paying is dissolving into the act of asking. There is something genuinely useful in that. There is also something genuinely uneasy about a future in which a model issues credentials to another model, and money moves between them without a human ever pressing a button.

The next eighteen months will tell us whether the safeguards being engineered into Agentic Tokens, Account Confidence Scores, and payments-trained foundation models can keep ahead of an adversary that, by every available indicator, is growing faster than the defenses. The cashier is now an algorithm. So, increasingly, is the thief. And the security guard. The question for the rest of us is whose algorithm we trust.

References & Further Reading

  1. Visa, “Visa and Partners Complete Secure AI Transactions, Setting the Stage for Mainstream Adoption in 2026.” https://usa.visa.com/about-visa/newsroom/press-releases.releaseId.21961.html
  2. Mastercard, “Mastercard Unveils Agent Pay, Pioneering Agentic Payments Technology to Power Commerce in the Age of AI.” https://www.mastercard.com/us/en/news-and-trends/press/2025/april/mastercard-unveils-agent-pay-pioneering-agentic-payments-technology-to-power-commerce-in-the-age-of-ai.html
  3. OpenAI, “Buy It in ChatGPT: Instant Checkout and the Agentic Commerce Protocol.” https://openai.com/index/buy-it-in-chatgpt/
  4. Stripe, “Stripe Powers Instant Checkout in ChatGPT and Releases the Agentic Commerce Protocol Co-developed with OpenAI.” https://stripe.com/newsroom/news/stripe-openai-instant-checkout
  5. Stripe, “Developing an Open Standard for Agentic Commerce.” https://stripe.com/blog/developing-an-open-standard-for-agentic-commerce
  6. Digital Commerce 360, “Visa Signals AI Checkout Could Soon Go Mainstream.” https://www.digitalcommerce360.com/2025/12/29/visa-signals-ai-checkout-could-soon-go-mainstream/
  7. TechCrunch, “Stripe Unveils AI Foundation Model for Payments, Reveals Deeper Partnership with Nvidia.” https://techcrunch.com/2025/05/07/stripe-unveils-ai-foundation-model-for-payments-reveals-deeper-partnership-with-nvidia/
  8. Mastercard, “AI Is Helping Banks Save Millions by Transforming Payment Fraud Prevention.” https://www.mastercard.com/global/en/news-and-trends/Insights/2026/ai-is-helping-banks-save-millions-by-transforming-payment-fraud-prevention.html
  9. CNBC, “Mastercard Jumps into Generative AI Race with Model It Says Can Boost Fraud Detection by up to 300%.” https://www.cnbc.com/2024/02/01/mastercard-launches-gpt-like-ai-model-to-help-banks-detect-fraud.html
  10. J.P. Morgan, “AI Boosting Payments Efficiency and Cutting Fraud.” https://www.jpmorgan.com/insights/payments/security-trust/ai-payments-efficiency-fraud-reduction
  11. J.P. Morgan, “Fraud Frontlines: Ensuring Payments Are Safe and Secure.” https://www.jpmorgan.com/payments/newsroom/fraud-frontlines-pay-it-forward
  12. Deloitte, “Generative AI Is Expected to Magnify the Risk of Deepfakes and Other Fraud in Banking.” https://www.deloitte.com/us/en/insights/industry/financial-services/deepfake-banking-fraud-risk-on-the-rise.html
  13. Sumsub, “Fraud Trends 2026: AI Scams, Deepfakes, and Emerging Threats.” https://sumsub.com/blog/fraud-trends/
  14. Experian, “Experian’s New Fraud Forecast Warns Agentic AI, Deepfake Job Candidates, and Cyber Break-ins Are Top Threats for 2026.” https://www.experianplc.com/newsroom/press-releases/2026/experian-s-new-fraud-forecast-warns-agentic-ai–deepfake-job-can
  15. PYMNTS, “Authorized Push Payment Fraud Losses in UK Rise 12%.” https://www.pymnts.com/fraud-prevention/2025/authorized-push-payment-fraud-losses-in-uk-rise-12
  16. Payments Dive, “Visa, Mastercard Race to Agentic AI Commerce.” https://www.paymentsdive.com/news/visa-mastercard-race-agentic-ai-commerce-payments/750428/

© 2026 Payrillium. Reporting compiled from public statements, press releases, and industry analysis.
