On the afternoon of March 18, 2026, we gave a live demo to Lightning Labs — the team that built the Lightning Network — along with several VCs and key players in the Lightning ecosystem. We were showing them that L402-powered agent commerce works in production: an AI agent autonomously buying a physical product from our store, paying with Lightning, and completing the purchase without any human intervention.
Two hours before the demo, we ran the full flow end to end. It worked perfectly.
During the live demo, it failed. The agent paid 50,270 sats (~$30) for a snapback hat and never received it. The money was gone, the claim never came back, and the budget safety system locked out any retry. In front of the exact audience we needed to impress.
This is what happened, why it happened, and what we shipped to fix it.
The Setup
Our L402 store sells physical merchandise — hats, t-shirts — paid for entirely with Lightning micropayments through the L402 protocol. The purchase flow has three steps:
- Checkout: `POST /api/store/checkout` with a product ID. The server creates an order and returns HTTP 402 with a Lightning invoice and a macaroon.
- Pay: The client pays the Lightning invoice and receives a preimage (cryptographic proof of payment).
- Claim: `POST /api/store/claim` with an `Authorization: L402 <macaroon>:<preimage>` header. The server verifies the preimage matches the payment hash, confirms the order, and triggers fulfillment.
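The verification in the claim step reduces to a single hash check: a Lightning preimage proves payment exactly when its SHA-256 digest equals the payment hash embedded in the invoice. A minimal sketch (the function name is ours, not the store's actual code):

```python
import hashlib

def verify_preimage(preimage_hex: str, payment_hash_hex: str) -> bool:
    # A preimage is valid proof of payment iff sha256(preimage)
    # equals the payment hash committed to in the Lightning invoice.
    digest = hashlib.sha256(bytes.fromhex(preimage_hex)).hexdigest()
    return digest == payment_hash_hex
```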
This is a split-flow transaction: checkout and claim happen at different URLs. That distinction turned out to matter enormously.
The Standard L402 Flow
The L402 protocol, as originally conceived, is elegant in its simplicity:
```
Client requests resource  -->  Server returns 402 + invoice + macaroon
Client pays invoice       -->  Client gets preimage
Client retries SAME URL   -->  Server verifies, grants access
```
Request, pay, retry. One URL throughout. This works beautifully for API access — you hit an endpoint, get a 402, pay, hit the same endpoint again with your token, and you are in. The protocol was designed for this pattern.
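In code, the client side of this loop is just two header-handling halves: parse the 402 challenge, then retry with the credential. A sketch, assuming the common `macaroon="…", invoice="…"` challenge format (field order and exact field names can vary by server):

```python
import re

def parse_l402_challenge(www_authenticate: str) -> dict:
    # Pull key="value" pairs out of a challenge header such as:
    #   L402 macaroon="AGIAJE...", invoice="lnbc502700n1..."
    return dict(re.findall(r'(\w+)="([^"]*)"', www_authenticate))

def build_l402_authorization(macaroon: str, preimage: str) -> str:
    # The credential the client sends when it retries the same URL.
    return f"L402 {macaroon}:{preimage}"
```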
E-commerce does not follow this pattern.
What Went Wrong
Our MCP server provides a tool called access_l402_resource that automates the full L402 cycle. It requests a URL, detects a 402 response, pays the invoice, and retries the same URL with the L402 credential. For standard API access, this works exactly as intended.
For our store, it did this:
1. Called `POST /api/store/checkout` — got 402 with an invoice for 50,270 sats.
2. Paid the invoice — success, received the preimage.
3. Retried `POST /api/store/checkout` with `Authorization: L402 <macaroon>:<preimage>`.
4. The store returned… another 402.
Step 3 is where everything broke. The checkout endpoint always returns 402 — that is its job. It creates a new order and issues a new invoice. It does not check for L402 credentials in the Authorization header because it was never designed to. The claim endpoint handles that.
The MCP tool saw the second 402 and concluded the payment had failed. It returned a generic error message to the AI agent. The valid L402 token — the macaroon and preimage that proved 50,270 sats had been paid — was silently discarded.
Bug 1 (MCP tool): access_l402_resource discarded the valid L402 token when the retry returned an error. The payment succeeded, but the token was thrown away. There was no way to recover it.
Bug 2 (Store): The checkout endpoint did not check for L402 credentials on incoming requests. Even if the MCP tool had retried correctly, the store would have ignored the token and issued a new order.
Two bugs. One protocol assumption. Fifty thousand sats gone.
Why It Worked Two Hours Earlier
AI agents are non-deterministic. Given the same high-level instruction ("buy this hat from the store"), an agent may choose different tool sequences each time.
During the pre-demo test run, the agent happened to take a multi-step approach: it called checkout manually, then used pay_invoice separately, then called /api/store/claim with the token. Three distinct tool calls, the correct endpoint for each step. It worked.
During the live demo, the agent chose access_l402_resource — the all-in-one tool designed to handle the full L402 cycle automatically. It was the reasonable choice. It was also the one that hit both bugs.
Same agent. Same store. Same product. Different tool selection. Different outcome.
This is a class of problem unique to agentic systems. When a human developer writes an integration, they pick one code path and test it. When an AI agent uses your tools, it may take any valid path through them. Every path must work.
Then the Budget System Did Its Job
After the failed purchase, the agent tried to retry. The MCP server's budget safety system — which enforces spending limits to prevent runaway AI payments — had already recorded the 50,270 sats as spent. The budget was exhausted. No more payments allowed.
This is the correct behavior. The budget system exists precisely to prevent an AI from endlessly retrying failed payments and draining a wallet. But in this moment, it meant we could not recover during the demo.
Budget safety and payment recovery are in tension. The system correctly prevented further spending. But it also prevented the one retry that would have worked if the bugs had been fixed.
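The tension is visible even in a minimal version of the guard (class and method names here are illustrative, not the MCP server's actual API): spend is recorded at payment time, so a payment whose fulfillment later fails still consumes budget.

```python
class SpendBudget:
    # Minimal sketch of a per-session spending cap.
    def __init__(self, limit_sats: int):
        self.limit_sats = limit_sats
        self.spent_sats = 0

    def try_spend(self, amount_sats: int) -> bool:
        # Spend is recorded when the payment is sent, not when
        # fulfillment succeeds -- so a failed claim still burns budget.
        if self.spent_sats + amount_sats > self.limit_sats:
            return False
        self.spent_sats += amount_sats
        return True
```

With, say, a 60,000-sat limit, the 50,270-sat payment goes through once and the retry is refused, even though the first purchase never completed.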
The Fix
We shipped two fixes the same day.
MCP Fix (v1.11.5): Preserve Tokens on Failure
The core problem was that L402FetchResult.Failed() threw away everything when the server retry returned an error. The payment had succeeded — the preimage existed, the sats were spent — but the error path discarded the evidence.
The fix: when a payment succeeds but the subsequent retry fails, the Failed result now preserves the L402 token and the amount paid. The tool response includes the valid macaroon:preimage credential with a note explaining that the payment succeeded but the server did not accept it on retry, and that the token can be used with the correct endpoint.
This means the AI agent gets the token back. Even if access_l402_resource cannot complete the full cycle — because the server uses a split flow, or the retry endpoint is different from the payment endpoint — the agent has the credential and can use it with the right URL.
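In sketch form (field and function names are ours, not the exact v1.11.5 internals), the failure path now carries the credential instead of dropping it:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class L402FetchResult:
    success: bool
    error: Optional[str] = None
    token: Optional[str] = None           # macaroon:preimage, kept on failure
    amount_paid_sats: Optional[int] = None

def failed_after_payment(reason: str, macaroon: str, preimage: str,
                         amount_sats: int) -> L402FetchResult:
    # Payment succeeded but the retry was rejected: preserve the proof
    # of payment so the agent can present it to the correct endpoint.
    return L402FetchResult(
        success=False,
        error=f"{reason} Payment of {amount_sats} sats succeeded; "
              "token preserved for manual claim.",
        token=f"{macaroon}:{preimage}",
        amount_paid_sats=amount_sats,
    )
```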
Store Fix: Handle L402 Credentials on Checkout
The checkout endpoint now checks for an Authorization: L402 header before creating a new order. When it finds a valid L402 credential, it delegates to the claim logic instead of issuing a new invoice.
This makes the store compatible with standard L402 clients that retry the same URL after payment — which is exactly what the protocol specifies.
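A minimal sketch of the new dispatch (handler and helper names are illustrative, and real invoice creation would go through the Lightning node):

```python
def new_invoice(product_id: str) -> str:
    # Stand-in for real invoice creation via the Lightning node.
    return f"lnbc-invoice-for-{product_id}"

def handle_checkout(headers: dict, product_id: str):
    # If the request already carries an L402 credential, this is a
    # standard client retrying a paid order: delegate to claim logic
    # instead of minting a new order and invoice.
    auth = headers.get("Authorization", "")
    if auth.startswith("L402 ") and ":" in auth:
        macaroon, _, preimage = auth[len("L402 "):].partition(":")
        return ("claim", macaroon, preimage)
    return ("402", new_invoice(product_id))
```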
All Paths Now Work
With both fixes in place, every route to purchase succeeds:
| Path | How It Works |
|---|---|
| Multi-step (checkout, pay, claim separately) | Worked before. Still works. |
| Single tool + store fix | access_l402_resource retries checkout. Checkout detects L402 token, delegates to claim. Purchase completes. |
| Single tool + MCP fix | Even if the store does not handle the retry, the token is preserved in the response. The agent can call /claim manually. |
| Both fixes | Belt and suspenders. Retry works server-side, and the token is preserved client-side regardless. |
What This Means for L402
This incident exposed a real gap in the L402 protocol as it moves beyond API access into general commerce.
The original L402 specification assumes a single-URL interaction. Client requests a resource, gets a 402, pays, retries the same resource. This is a clean abstraction for APIs, where "the resource" and "the thing you are paying for" are the same URL.
E-commerce breaks this assumption. The checkout URL and the fulfillment URL are different. A subscription signup endpoint and an account activation endpoint are different. Any workflow with distinct "initiate" and "complete" steps has this split.
Three principles emerged from this incident that we think apply to anyone building with L402:
1. If your service has a split flow, make your initiation endpoint handle L402 credentials on retry. The protocol says clients will retry the same URL. If your checkout endpoint always returns 402 regardless of credentials, standard L402 clients will fail. Either detect the credential and delegate to fulfillment, or redirect to the correct endpoint.
2. L402 clients must preserve tokens when payment succeeds but the server rejects the retry. A paid token is valuable. Discarding it on a non-2xx retry response is data loss. The client may not know why the server rejected the retry — wrong endpoint, server error, rate limit — but the token is still valid proof of payment.
3. AI agents are non-deterministic, so every tool path must work. If your system offers multiple ways to accomplish a payment (a single all-in-one tool and individual step-by-step tools), the agent may choose any of them on any given run. You cannot rely on the agent taking the "right" path. All paths must lead to success.
The Uncomfortable Truth About Live Demos
We could have avoided this by mocking the payment, running a scripted demo, or using a test environment. We chose not to. The whole point was to show Lightning Labs that this works in production, with real sats, with a real AI agent making real decisions.
That is also why it failed in front of them instead of failing quietly in a test suite.
The 50,270 sats were recovered manually after the demo (the payment was valid, the order existed, we just had to process the claim by hand). More importantly, the failure led to fixes that make L402 more robust for every split-flow use case — not just our store, but any service where checkout and fulfillment are separate operations.
Good infrastructure is boring. It works, it does not surprise you, it does not require attention. Our infrastructure surprised us. So we fixed it, wrote the tests, shipped the patches, and wrote this post — because the L402 ecosystem benefits more from an honest post-mortem than from a polished demo recap.
The MCP fix shipped in v1.11.5. The store fix is live at store.lightningenable.com. Both are in production.