CryptoFlex// chris johnson
§ 01 / The Blog · Security Engineering

The Vercel Breach and an Hour With Claude Code: AI for Defenders

A fourth-party supply-chain breach prompted Vercel to flag nine of my production credentials for rotation out of an abundance of caution. Twenty minutes after reading the disclosure I was rotating keys, and sixty minutes later I had a full audit and a hardened account. Here is how Claude Code turned a day of incident response into an hour, and why the chain that got here should change how you pick vendors.

Chris Johnson · 20 min read

I host cryptoflexllc.com and a game dev project on Vercel. On April 19, 2026, I read Juliet.sh's writeup of the Vercel breach over morning coffee. A precautionary notification from Vercel followed, and when I opened my dashboard the console had flagged nine environment variables with a "Need To Rotate" badge on each of them. Sixty minutes later, they were all rotated, the platform was hardened, and I had as much forensic confidence as a customer-side audit can give you that nothing had been used against me.

This is the story of how Claude Code made that possible, and why the breach chain that got here should worry anyone running production on a vendor platform.

Series Context

This is the fourth post in the Security Engineering series. The first three covered auditing my own code, picking a free WAF, and a 4-agent pentesting sprint. Those were proactive security posts. This one is reactive: something went wrong upstream, and I had to respond.

The Hook Metric

Nine credentials flagged. One hour to rotate and harden. Zero anomalies in the deployment, event, and token history I could pull.

That is the short version. The longer version starts several degrees of separation away from my laptop, and ends with a plan file I want everyone reading this to be able to mirror.

What a Fourth-Party Breach Actually Means

Before we get to Claude Code, I need to explain the supply chain. Not because the breach is novel (it is not), but because the language around these events is terrible and business readers keep getting confused by it.

What is a fourth-party breach?

You are first party. Your customers and end users are second party. Your direct vendors with access to your systems or data (Vercel, in my case) are third party. Their vendors (Context.ai, Vercel's vendor) are fourth party. Each layer down is one more organization that can be compromised without you ever knowing they were in the chain.

The attacker who potentially reached my Vercel env vars never touched my laptop. They traversed a chain of trust relationships to get from a compromised personal device to Vercel's internal tooling. Here is the chain as best as it has been reconstructed from public reporting:

  1. A Context.ai employee's personal device picked up Lumma Stealer, a commodity infostealer, allegedly via a disguised Roblox auto-farm script. This is the kind of malware that sells on Telegram for $250 a month. No nation-state, no zero-day, just a family member clicking the wrong link.
  2. The infostealer harvested the employee's saved credentials and browser session cookies, including their Context.ai Google Workspace login.
  3. From that foothold, the attacker reached a Vercel employee's Workspace. The bridge was an OAuth grant, not a cookie replay: the Vercel employee had signed up for Context.ai's consumer AI suite using their enterprise Google account and granted "Allow All" scope. That OAuth token gave Context.ai (and whoever compromised Context.ai) broad read access into the Vercel employee's Workspace.
  4. With that access, an attacker enumerated non-Sensitive-flagged environment variable values across a subset of customer projects between March 2026 and April 19, 2026.

How I learned about it was the quieter part of the story. Vercel reached out to me directly, out of an abundance of caution, and the console flagged nine of my environment variables for rotation. Whether any of my values were actually read is not confirmed. Vercel's own guidance said impacted customers would be contacted directly, and when I built the impact assessment on April 19 the evidence suggested I was likely not in the confirmed-impacted subset. I rotated anyway.

A five-party attack chain: a commodity infostealer on a personal device cascades through two SaaS OAuth trust relationships and lands in customer production environment variables.

That is what supply chain exposure looks like in 2026. The attacker never touched my Mac. They were five trust-hops away, and nine of my production keys sat in the potentially-readable pool until I rotated them.

Slide: Five Degrees of Separation, The Attack Chain. A four-stage chain showing Context.ai Employee Personal Device (Lumma Stealer infection via disguised Roblox script, $250/mo commodity malware) to Context.ai Google Workspace (credentials and session harvested from the infected host) to Vercel Employee Workspace (Vercel employee had signed up for Context.ai consumer AI suite using their enterprise Google account at Allow All scope; that OAuth grant was abused) to Customer Environment Variables (attacker potentially enumerated non-Sensitive-flagged env var values via internal tooling). Status note: the attacker never touched customer local environments; any production keys without the Sensitive flag were potentially readable through fourth-party supply-chain exposure, and actual per-customer access is not confirmed.

The Sensitive flag was the only architectural mitigation in play

Vercel offers a "Sensitive" flag on every environment variable. When it is on, the value is encrypted at rest in a form the internal tooling path cannot decrypt, and is not enumerable through the channel the attacker reached. When it is off, the value is stored in a form internal tooling can read. None of my nine flagged variables had the Sensitive flag set. The flag is enforced architecturally, through encryption at rest, not by policy or access control alone. It is the only thing that separates "potential exposure" from "confirmed safe" in cases like this, and I did not know that until the disclosure came out.
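Re-flagging after the fact is scriptable with the Vercel CLI. Here is a dry-run sketch, under the assumption that you have rotated replacement values staged in `NEW_<NAME>` variables; it prints the `env rm` / `env add --sensitive` commands for review rather than executing them, and the three variable names passed at the bottom are just examples.

```shell
# Print, rather than execute, the re-add-as-Sensitive commands for review.
# Assumes the Vercel CLI; NEW_<NAME> placeholders hold the rotated values.
plan_sensitive_migration() {
  for name in "$@"; do
    echo "vercel env rm $name production --yes"
    echo "printf '%s' \"\$NEW_$name\" | vercel env add $name production --sensitive"
  done
}

plan_sensitive_migration VERCEL_API_TOKEN DATABASE_URL GITHUB_TOKEN
```

Pipe the output to a file, read it, then run it. The review step matters: an `env rm` against the wrong environment is itself an outage.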

The Screenshot I Did Not Want To See

Here is what greeted me when I opened my Vercel Environment Variables page after reading the Juliet.sh writeup and receiving the precautionary notice.

Vercel environment variables page showing nine secrets flagged as Need To Rotate: VERCEL_API_TOKEN, DATABASE_URL, GITHUB_TOKEN, ANTHROPIC_API_KEY, RESEND_API_KEY, GMAIL_APP_PASSWORD, and three HMAC secrets

Nine of my thirteen environment variables, each flagged by Vercel as "Need To Rotate". The other four were not flagged because they are public identifiers (team ID, GA ID, project ID) or an email address. Vercel's own UI was telling me exactly what to do. The question was how fast I could do it, and how confident I could be after the fact that the flagged values had not been used against me.

Scoping the Blast Radius

Before touching anything, I opened Claude Code with a single prompt: "Here is the breach disclosure. Read my cryptoflexllc repo and my Vercel account. Tell me what I have exposed and what I need to do."

What happened next is the first place the AI force multiplier showed up. Claude Code did this in parallel, in a single turn, without me micromanaging the order:

  • Grepped src/ for every process.env reference, building a complete list of environment variables the code actually uses.
  • Cross-referenced that list against vercel env ls output to find variables that were provisioned but no longer referenced (there were none).
  • For each credential, opened the relevant file and inferred the minimum scope it needs. For GITHUB_TOKEN, it looked at src/lib/github-api.ts and determined the only operation is file CRUD on a specific repo: "Contents: Read and write" on a fine-grained PAT, not the classic repo scope.
  • Checked ~/.claude/secrets/secrets.env for any shadow copies of the exposed keys. Result: none. That file held UniFi, Obsidian, Exa, Firecrawl, and Ollama keys, all unrelated.
  • Grepped .env* files across the cryptoflexllc project directory. None on disk.
  • Ran git log -p --all against every branch for any pattern matching the exposed variables. Zero matches. The repo has always been clean.

The local exposure audit is the step most people skip

The "rotate everything Vercel flagged" response is obvious. The less obvious part is whether any of those credentials have shadow copies somewhere else. An old backup, a .env.local you forgot about, a personal config repo, a teammate's laptop. If a stolen key is in both places and you only rotate one, the attacker still has the working copy. Claude Code grepping my entire config directory for the exact variable names took about three seconds. Doing it by hand and being sure I got all of them would have taken me the better part of an hour and I still would not have trusted the result.
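The grep pass is easy to reproduce by hand. A minimal sketch, with my directory path and variable names baked in as examples (swap in your own):

```shell
# Sweep a directory tree for any reference to the flagged variable names.
# Run it against every place a shadow copy could hide: config dirs, backups,
# old project checkouts.
audit_shadow_copies() {
  local dir="$1"; shift
  local hits=0
  for name in "$@"; do
    if grep -rIl --exclude-dir=.git -e "$name" "$dir" >/dev/null 2>&1; then
      echo "HIT: $name referenced under $dir"
      hits=$((hits + 1))
    fi
  done
  echo "total hits: $hits"
}

audit_shadow_copies "$HOME/.claude" \
  VERCEL_API_TOKEN DATABASE_URL GITHUB_TOKEN ANTHROPIC_API_KEY \
  RESEND_API_KEY GMAIL_APP_PASSWORD ANALYTICS_SECRET SUBSCRIBER_SECRET CRON_SECRET
```

Run the same name list against `git log -p --all` output as well; history is the other place stale copies survive a working-tree cleanup.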

Within ten minutes I had a structured scope document. Nine credentials to rotate, local exposure confirmed zero, one coordination concern flagged: the ANTHROPIC_API_KEY might also be in use by my Home Network Mission Control Dashboard project, and if so I would need to update that project's .env after rotating on Vercel.

The Plan File Pattern

Before executing anything, Claude Code wrote a plan file at docs/plans/2026-04-20-cryptoflexllc-credential-rotation-plan.md. This is a pattern I use for every non-trivial operation now, and it is the single thing I would recommend any reader adopt.

The file is in a private config repo, so I cannot link to it, but the structure is trivial to mirror. Here is the skeleton:

```markdown
# <project> Credential Rotation Plan

Trigger: <link to disclosure>
Date opened: <date>
Owner: <name>

## 1. Scope and Compromise Assessment
- Table of credentials with issuer and blast radius
- What is NOT at risk (public identifiers, etc.)
- Local exposure audit results

## 2. Prerequisite: CLI auth or access

## 3. Rotation Checklist
- One section per credential
- Each section has the exact commands, in order
- Revoke old at issuer, create new, update platform, verify

## 4. Platform Hardening (non-rotation)
- 2FA, deployment protection, OAuth audits

## 5. Post-rotation Monitoring (7-day window)
- Table of where to watch for auth failures on the old credential

## 6. Completion criteria
- Checkbox list: all done when every box is ticked
```

Why a plan file and not just a chat transcript?

The plan file gives me three things the chat cannot. First, checkboxes survive context compactions. If my Claude Code session runs out of context mid-rotation, I can start fresh and the new session reads the plan and picks up exactly where the old one left off. Second, it is reviewable. I can ask a peer to read it before I execute anything destructive. Third, it becomes the forensic record. Six months from now, if a regulator or auditor asks what I did, I have the plan and the commits.

The AI-Assisted Response Loop

This is the part you actually care about: what does "AI for defenders" look like in practice? Not marketing. Not demos. The actual loop, as it ran.

The AI-assisted incident response loop. Five stages, parallel execution where possible, and a verification gate after every mutation. The user stays in the driver's seat for anything that touches an external UI.

Five stages, and the one that does the heavy lifting is stage three: parallel execution. Traditional incident response is serial. Read the docs for the first service, rotate the key, verify, move on to the next. Claude Code does not do that. It fires off the read calls in parallel, waits for all of them, then coordinates the write calls with appropriate dependencies (e.g., revoke at issuer before update on Vercel).
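The shape of that stage translates directly to shell: background the reads, barrier on `wait`, and only then mutate. A toy sketch, where the `echo` commands stand in for the real API reads:

```shell
# Stage three in miniature: independent reads run concurrently, and nothing
# mutates until the barrier confirms every read has landed.
read_into() {
  sh -c "$1" > "$2"   # $1 = read command, $2 = output file (both stand-ins)
}

read_into "echo deployments" deploys.txt &
read_into "echo events" events.txt &
wait   # barrier: all reads complete before the first write/rotation step runs

cat deploys.txt events.txt
```

Claude Code does this natively with parallel tool calls; the sketch just shows the dependency structure you want, reads fanned out and writes serialized behind them.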

Here is one practical example. The rotation plan called for me to review the last ~60 days of Vercel activity to make sure nothing unusual had happened during the breach window. I did not want to click through the activity log by hand. I gave Claude Code a freshly rotated VERCEL_API_TOKEN (so I was reading Vercel state with a clean key) and asked it to pull the forensic evidence.

In a single turn, in parallel:

```bash
# All four of these ran in parallel as one Claude Code message.
# SINCE_MS is an epoch-ms anchor covering the breach window (2026-03-01T00:00:00Z).
SINCE_MS=1772323200000

curl -sS -H "Authorization: Bearer $NEW_VERCEL_API_TOKEN" \
  "https://api.vercel.com/v6/deployments?teamId=$TEAM_ID&limit=100"

curl -sS -H "Authorization: Bearer $NEW_VERCEL_API_TOKEN" \
  "https://api.vercel.com/v3/events?teamId=$TEAM_ID&limit=12&since=$SINCE_MS"

curl -sS -H "Authorization: Bearer $NEW_VERCEL_API_TOKEN" \
  "https://api.vercel.com/v5/user/tokens"

curl -sS -H "Authorization: Bearer $NEW_VERCEL_API_TOKEN" \
  "https://api.vercel.com/v1/teams/$TEAM_ID/members"
```

100 deployments, 12 team events, my own user's token creation history, and the team membership list. All pulled, parsed, and summarized in under a minute. The review result: every deployment author was me, every event timestamp matched a commit I had made, and there was exactly one "Chrome on Windows" session active during the incident window. That was me too: my work machine is Windows, my personal machine is the Mac I am writing this post on. Dual-machine workflow.

What this audit can and cannot prove

The four calls above cover what a customer has access to: recent deployments, the team event log, my own tokens, and team membership. They are strong evidence that nothing unusual happened on my account. They are not evidence that nothing was read. The internal-tooling path the attacker reportedly used sits on the Vercel side of the line, and env var reads from that path are not in the customer-visible audit log. That gap is exactly what makes the Sensitive flag load-bearing.

Programmatic verification beats UI inspection

I have reviewed activity logs manually dozens of times. It is tedious, I get impatient, and I miss things. Having Claude Code pull the raw API responses, diff them against expected patterns (my own commits, my own IPs, my own user agents), and surface only the anomalies means the review is both faster and more thorough than I would do by hand. This is the defender multiplier in a nutshell: not that the AI does anything I could not do, but that it removes every "wait, what is the filter syntax for this dashboard again" moment and keeps the whole checklist alive in working memory across a long session.
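You can reproduce the "surface only the anomalies" step without a dashboard. A sketch, assuming the `/v6/deployments` response was saved to `deployments.json`, and with `cjohnson` as a placeholder for your own Vercel username; the `creator.username` field is my reading of the response shape, so check it against your own output before trusting the result:

```shell
# Flag any deployment whose creator is not the expected account.
check_authors() {
  local file="$1" expected="$2"
  grep -o '"username":"[^"]*"' "$file" \
    | cut -d'"' -f4 \
    | sort -u \
    | while read -r author; do
        [ "$author" = "$expected" ] || echo "ANOMALY: deployment by $author"
      done
}

if [ -f deployments.json ]; then
  check_authors deployments.json "cjohnson"   # placeholder username
fi
```

Silence is the pass condition. The same grep-extract-compare pattern works for the events and token endpoints; only the field names change.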

Infographic: The 60-Minute Remediation, AI-Assisted Recovery from the Vercel Breach. Four panels. Anatomy of a 4th-Party Breach shows the five-stage chain from Lumma Stealer infection to potential Vercel customer environment variable exposure. AI vs Manual Remediation compares roughly one hour with Claude Code against an estimated six to eight hours manual. Rotation Ladder and Forensic Proof shows the priority order VERCEL_API_TOKEN then Database URLs then HMAC Secrets, with programmatic verification against 100 deployments and 12 team events yielding zero anomalies in the reviewed surface. Hardening and Lessons Learned lists the Sensitive flag enablement, 2FA mandate, Deployment Protection at Standard, and the 7-day post-rotation monitoring window.

Rotation Order by Blast Radius

Nine credentials. They are not equal. Rotating them in the wrong order can let an attacker with a stale but still-working key interfere with the rotation of the others. The rule: rotate the credential with the largest downstream blast radius first.

Nine credentials sorted by blast radius. Top of the ladder is the credential that would let an attacker rotate every other credential, so it goes first. The HMAC secrets sit last because they only sign locally.

The top of the ladder is VERCEL_API_TOKEN. If an attacker still has it while I am rotating the others, they can modify the env vars I just updated. So it goes first. The HMAC secrets are at the bottom because they only sign cookies locally; they cannot be used to rotate anything else.

This ordering fell out of a two-minute conversation with Claude Code: "What order should I rotate these in, and why?" The answer was correct, and the explanation was good enough that I could sanity-check it and push back on one item I disagreed with.

I bumped DATABASE_URL to position 2 instead of 3 because Neon downtime during a password reset is the most user-disruptive step in the chain, and I did not want it sitting in the back half of the rotation window. A pure blast-radius ordering would put GITHUB_TOKEN at position 2, because repo write access can re-inject secrets into future builds. I accepted the downtime-weighted reorder on the theory that I could revoke GITHUB_TOKEN at GitHub instantly, so its effective rotation was fast even at position 3.

Slide: The Rotation Ladder Matrix. A seven-row table pairing each credential with its blast radius risk and execution order. VERCEL_API_TOKEN is flagged CRITICAL because it can modify other variables if compromised during rotation, execution order 1. DATABASE_URL is HIGH due to Neon password reset dictating highest downtime disruption, order 2. GITHUB_TOKEN is HIGH for source code read/write access, order 3. ANTHROPIC_API_KEY is MEDIUM for AI service billing exposure, order 4. RESEND_API_KEY is MEDIUM for outbound email spoofing, order 5. GMAIL_APP_PASSWORD is MEDIUM for direct inbox access, order 6. The three HMAC secrets (Analytics, Subscriber, Cron) are LOW because they are local cookie signing only and scriptable, orders 7 through 9. Footer: keys are not equal. Rotating lower-tier secrets first allows an attacker to use a still-active high-tier key to intercept the rotation process.

The HMAC Secrets Trick

Six of the rotations were UI-gated. Open the issuer's dashboard, click buttons, copy a new value. But three of them were pure signing keys generated by me, for me, used by my own code. Claude Code generated a fresh 256-bit value for each and re-added them to Vercel with the Sensitive flag on:

```bash
for name in ANALYTICS_SECRET SUBSCRIBER_SECRET CRON_SECRET; do
  value=$(openssl rand -hex 32)
  vercel env rm "$name" production --yes
  printf "%s" "$value" | vercel env add "$name" production --sensitive
done
```

Three fully scriptable rotations in under thirty seconds, each with the Sensitive flag turned on this time.
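If you want a paranoia check on the generated values themselves, it is three lines: `openssl rand -hex 32` yields 32 random bytes encoded as 64 lowercase hex characters, and the check just confirms that shape.

```shell
# Sanity-check one generated secret: 32 random bytes, hex-encoded = 64 chars.
secret=$(openssl rand -hex 32)
[ "${#secret}" -eq 64 ] && echo "length ok"
case "$secret" in
  *[!0-9a-f]*) echo "unexpected characters" ;;
  *)           echo "hex ok" ;;
esac
```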

Side effects of HMAC rotation

Rotating ANALYTICS_SECRET logged me out of my own analytics dashboard (expected, re-auth with the new secret and continue). Rotating SUBSCRIBER_SECRET invalidated every outstanding unsubscribe link in newsletters I had previously sent. That is a user-facing impact. I accepted it because subscribers can always re-request a link, and leaving forgeable tokens alive after a breach is worse than asking a handful of people to click again. CRON_SECRET had no visible impact: Vercel re-signs cron invocations automatically after the next deploy.

The Platform Hardening Pass

Rotating secrets closes the door on the stolen values. It does not close the door on the attacker coming back through another path. So after the nine rotations, I spent another fifteen minutes on platform hardening:

  1. Enabled 2FA on my Vercel account. If you are still running with password-plus-GitHub-SSO in 2026, stop that today.
  2. Verified Deployment Protection was set to Standard. It was; no change needed.
  3. Audited Deployment Protection bypass tokens. Zero existed. Good.
  4. Revoked the Context.ai OAuth grant in my Google Workspace, which was the actual root-cause trust relationship that started this whole cascade upstream.
  5. Audited every other third-party OAuth grant in my Workspace. Revoked a handful of apps I had signed into once years ago and forgotten about.
  6. Pushed my Workspace third-party app policy from default-allow to "only trusted apps" (explicit allowlist). This means any future OAuth grant has to be approved by me before it takes effect.

Every state change was verified programmatically. After enabling 2FA, Claude Code hit the Vercel API to confirm my account reported MFA enabled. After rotating DATABASE_URL, it ran psql "<old-connection-string>" and confirmed the old password is no longer valid. Password rotation alone does not rule out a rogue role the attacker could have created with valid creds, so Claude Code also listed Neon roles and confirmed none existed beyond the ones I had provisioned. Trust, but verify. Better: do not trust the UI, verify the API.
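That verification habit generalizes into a helper worth keeping around: run any probe command with the old credential and succeed only if the probe is rejected. The `psql` and `curl` usages in the comments mirror the checks described above; the connection strings and token names are placeholders.

```shell
# Succeed only if the probe command fails, i.e. the old credential is dead.
assert_revoked() {
  local label="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "ALERT: $label still authenticates" >&2
    return 1
  fi
  echo "OK: $label rejected"
}

# Example probes against my stack (all values are placeholders):
# assert_revoked "old DATABASE_URL" psql "$OLD_DATABASE_URL" -c 'SELECT 1'
# assert_revoked "old Vercel token" \
#   curl -fsS -H "Authorization: Bearer $OLD_VERCEL_API_TOKEN" https://api.vercel.com/v2/user
```

One line per rotated credential at the end of the session gives you a machine-checked "every stolen value is now useless" record for the plan file.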

Actionable Caution: What You Should Do Today

If you host anything on Vercel, whether or not you received a direct notification from them, read this section and act on it. I have tried to make it scannable enough to execute on a phone.

Slide: Actionable Hardening Architecture. Two columns. Vercel Pass covers rotating all unflagged environment variables immediately, enforcing the Sensitive flag on for all current and future variables, requiring 2FA globally across the account, and upgrading Deployment Protection to Standard while deleting unused bypass tokens. Workspace Pass covers auditing third-party OAuth grants under Security then API Controls, revoking unrecognized applications such as Context.ai, and restricting Workspace third-party app policy from default-allow to Explicit Allowlist Mode. Footer: monitor upstream services (Neon, Anthropic, Resend, GitHub) for 7 days post-rotation to confirm zero anomalous auth attempts.

The 2026-04 hardening checklist for every Vercel customer

  1. Open your Vercel Environment Variables page right now. Any unflagged variable is breach-scope rotation-eligible until proven otherwise. Rotate first, ask questions later.
  2. Flag every future env var as Sensitive. The flag is architecturally enforced through encryption-at-rest that the internal tooling path cannot decrypt.
  3. Enable 2FA. If you are still using just password + GitHub SSO in 2026, stop that today.
  4. Set Deployment Protection to at least Standard.
  5. Delete all Deployment Protection bypass tokens you are not actively using. While you are there, audit your Deploy Hooks, Log Drains, and Marketplace integrations. Anything you did not create yesterday is suspect.
  6. Audit your Vercel team membership, pending invites, and installed integrations. Revoke anything you do not recognize.
  7. If you use Google Workspace, audit every third-party OAuth grant at admin.google.com (Security to API controls to Manage Third-Party App Access). Revoke anything you do not recognize.
  8. Switch your Workspace third-party app policy from default-allow to "only trusted apps" (explicit allowlist mode). This is a one-time setting with large payoff.
  9. Lock your default branch. On GitHub: Settings to Branches, require a PR review, require signed commits, disallow force-push. A stolen GITHUB_TOKEN with contents-write scope cannot re-inject secrets into a protected branch.
  10. Scope every PAT and API token to least privilege with an expiry. Vercel tokens should be scoped per integration with a 90-day max. GitHub: use fine-grained PATs tied to one repo and one permission set. Never reuse one full-account PAT across services.
  11. Turn on GitHub push protection and secret scanning at the org or repo level. If a future rotation leaves a secret in a commit, GitHub blocks the push and alerts you. Free on public repos.
  12. Monitor upstream services for 7 days post-rotation. If anyone attempts to auth with your revoked credentials, the stolen value was real. Services to watch for my stack: Neon, Anthropic, Resend, GitHub, Gmail, Vercel.

The 7-day monitoring window is not optional

Rotation kills the stolen credential's usefulness going forward. It does not tell you whether the attacker already used it. The only way to know is to watch each issuer's auth log for attempts to use the now-invalid value. Any attempt means a human (or script) has the stolen value and is trying it. Zero attempts across 7 days is the closest you will get to "confirmed not-used". I set calendar reminders for day 1, day 3, and day 7 after rotation, and I still have two more to run as of this writing.

The Hour in Numbers

The actual numbers from my session:

  • 9 environment variables rotated, each verified as Sensitive afterward.
  • 100 deployments pulled and reviewed via /v6/deployments.
  • 12 team events pulled via /v3/events and read for anomalies.
  • 1 old Vercel API token revoked (the one that could have been used against me if it had leaked).
  • 6 Google Workspace third-party OAuth apps revoked in the hardening pass.
  • 0 anomalies found in the 100 deployments or 12 events I could pull. Env var read events on the internal tooling path are not in the customer-visible audit log, so this is a clean customer-side audit, not a breach-negative result.
  • 0 shadow copies of the nine credentials found in local config.
  • ~60 minutes wall time from "reading the breach disclosure" to "hardened and documented".
  • ~6 to 8 hours, estimated, if I had done the same work without Claude Code.

The delta is not magical. Every step I just described I could have done manually. None of them are individually complex. But the aggregate work would have taken me a full day under normal circumstances, and longer under the emotional load of "you might have been breached, hurry up". With Claude Code orchestrating the session, the checklist stayed alive in working memory even while I was context-switching into external dashboards to click buttons, and every verification step happened as soon as the preceding action completed.

What "AI For Defenders" Actually Looks Like

I have been writing this blog about AI agents since February of this year. I have written about agents that write code, agents that audit code, and agents that run security sprints. This post is different. This was not a planned exercise. This was me, rattled, with a production system I care about, responding to someone else's breach disclosure.

And here is the honest version of what helped:

  1. Parallel tool calls. Not sequential. Not "do one thing at a time". Claude Code fires off the grep, the API pull, and the config read in the same turn and tells me when all three finish. That is the force multiplier.
  2. The plan file. Not the chat. A durable checklist that survives context compaction and can be reviewed by a peer.
  3. Programmatic verification. Every UI-gated action was followed by an API-level confirmation. The UI lies. The API does not.
  4. Persistent working memory across a long session. I never had to stop and ask "which ones did I rotate already". Claude Code kept the board.

This is not an autonomous agent neutralizing threats in real time. That is marketing. What it is, is a patient co-pilot that turns a day of recovery work into an hour and makes sure you do not miss a step while you are still rattled from reading the breach disclosure.

What Is Next

I am going to keep monitoring the upstream service logs across the 7-day window and beyond: Neon, Anthropic, Resend, GitHub, Gmail, and Vercel. If anything tries the revoked credentials, or if Vercel's post-mortem surfaces new detail worth writing about, I will come back with a follow-up. If nothing shows up, the silence is itself the outcome, and I will let the monitoring just be monitoring.

In the meantime, the plan file I described above is the single most reusable artifact from this incident. It lives in a private config repo, so I cannot link it, but the structure is in the section above and takes about fifteen minutes to adapt to any stack. Write it once. Use it every time something upstream goes wrong. You will thank yourself.

Your Move If You Run On Vercel

You do not need to wait for a direct notification. Right now, in the next fifteen minutes:

  1. Open the Environment Variables page for each of your projects on vercel.com, rotate anything without the Sensitive flag, and re-add each value with --sensitive so the internal tooling path cannot read it again.
  2. Turn on 2FA and set Deployment Protection to Standard or better.
  3. If you use Google Workspace, revoke any third-party app grant you do not actively use, especially any consumer product you signed up for with an enterprise account.
  4. Protect your default branch on GitHub and scope every PAT to one repo and one permission set, with an expiry.
  5. Open a plan file for your next rotation before you need one. Ten minutes today, one hour saved the next time something upstream goes wrong.

If that list looks long, good. That is what supply chain defense looks like in 2026. It is not glamorous, it is not autonomous, and it is mostly checkbox work. Which is exactly the kind of work a patient AI co-pilot is built to keep alive in your working memory while you are still rattled from reading the disclosure.

Stay safe out there. And flag your env vars Sensitive.
