SEO for Developers Who'd Rather Write Code Than Meta Tags
I built a website. It looked great. It loaded fast. It had blog posts, a portfolio, proper dark theme, the whole nine yards.
Nobody could find it.
Not because the content was bad. Not because the design was off. Because I'd done what most developers do: I built a technically sound website and completely forgot to tell Google it existed. The site was basically a beautifully decorated house on an unmapped road with no street signs.
This post covers every SEO optimization I implemented on CryptoFlex LLC, the why behind each technique, the exact code, and how it all fits together. If you're running a Next.js site and wondering why Google hasn't noticed you yet, this is your playbook.
The Audit: What Was Missing#
Before diving into code, I did an inventory of what the site had versus what Google actually needs. The results were... humbling.
The inventory was almost entirely gaps. The site had exactly one thing going for it: a <title> tag. Everything else (sitemaps, structured data, social cards, canonical URLs) was missing entirely. Google was treating my carefully crafted blog posts the same way it treats a random HTML file uploaded to a shared hosting account in 2004.
Time to fix that.
The SEO Stack#
SEO isn't one thing. It's layers, and each layer feeds a different part of how Google (and social platforms) discover, parse, and rank your content.
Let's walk through each layer, bottom to top.
Layer 1: Technical SEO Foundation#
This is the infrastructure layer, the files that tell search engines how to crawl your site before they look at any content.
robots.txt: The Bouncer at the Door#
robots.txt is the first file Googlebot checks when it visits your domain. It's a set of rules that say "crawl this, skip that." Without it, Google crawls everything, including your API routes, analytics dashboard, and any other page you'd rather keep out of search results.
```ts
// src/app/robots.ts
import type { MetadataRoute } from "next";

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      {
        userAgent: "*",
        allow: "/",
        disallow: ["/analytics", "/analytics/", "/api/"],
      },
    ],
    sitemap: "https://cryptoflexllc.com/sitemap.xml",
  };
}
```
Why This Matters
Google allocates a crawl budget to each site: roughly, how many pages it will visit per crawl session. If Googlebot wastes time on /api/analytics/track or /analytics/login, that's crawl budget not spent on your actual blog posts. This robots.txt focuses Google's attention on the content that should rank.
In the Next.js App Router, a robots.ts file in the app/ directory automatically generates /robots.txt at build time. No extra configuration needed; the MetadataRoute.Robots type just keeps the return shape honest.
The sitemap field at the bottom is crucial: it tells every crawler where to find your sitemap without them having to guess.
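For sanity-checking, the generated file should look roughly like this (a sketch of what Next.js emits for the route above, not a verbatim capture):

```text
User-Agent: *
Allow: /
Disallow: /analytics
Disallow: /analytics/
Disallow: /api/

Sitemap: https://cryptoflexllc.com/sitemap.xml
```

If you see this at /robots.txt in a local dev server, the route is wired up correctly.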
sitemap.xml: The Page Directory#
If robots.txt is the bouncer, sitemap.xml is the floor plan. It lists every page on your site, how important each one is, and when it was last updated.
```ts
// src/app/sitemap.ts
import type { MetadataRoute } from "next";
import { getAllPosts } from "@/lib/blog";

const BASE_URL = "https://cryptoflexllc.com";

export default function sitemap(): MetadataRoute.Sitemap {
  const posts = getAllPosts();

  const blogEntries: MetadataRoute.Sitemap = posts.map((post) => ({
    url: `${BASE_URL}/blog/${post.slug}`,
    lastModified: new Date(post.date),
    changeFrequency: "monthly",
    priority: 0.7,
  }));

  const staticPages: MetadataRoute.Sitemap = [
    {
      url: BASE_URL,
      lastModified: new Date(),
      changeFrequency: "weekly",
      priority: 1.0,
    },
    {
      url: `${BASE_URL}/blog`,
      lastModified: new Date(),
      changeFrequency: "daily",
      priority: 0.9,
    },
    // ... about, services, portfolio, contact
  ];

  return [...staticPages, ...blogEntries];
}
```
Dynamic Is Better Than Static
This sitemap is generated at build time by calling getAllPosts(), which reads every .mdx file from the content directory. Every time you publish a new blog post and redeploy, the sitemap automatically includes it. No manual XML editing. No forgetting to add a page.
The priority values hint at what to crawl most aggressively (Google has said it treats priority as a hint at best, but it costs nothing to be explicit, and other crawlers still read it):

- 1.0: Homepage (crawl this first)
- 0.9: Blog listing (changes frequently, new posts appear here)
- 0.8: About/Services (important but stable)
- 0.7: Individual blog posts and portfolio
- 0.5: Contact (rarely changes)
Canonical URLs: One True URL Per Page#
Here's a problem you might not realize you have: Google sees https://cryptoflexllc.com/blog/post and https://www.cryptoflexllc.com/blog/post as two different pages with the same content. Same with query-string variations like ?tag=seo. This is called duplicate content, and it dilutes your ranking across all the duplicates.
The fix is a <link rel="canonical"> tag on every page that says "this is the one authoritative URL for this content."
```ts
// In each page's metadata export
alternates: {
  canonical: `${BASE_URL}/blog/${slug}`,
},
```
Next.js renders this as:
```html
<link rel="canonical" href="https://cryptoflexllc.com/blog/seo-for-nextjs-developers" />
```
Now Google consolidates all ranking signals onto a single URL. No dilution.
metadataBase: The Unsung Hero#
This single line in layout.tsx affects every URL in your metadata:
```ts
metadataBase: new URL("https://cryptoflexllc.com"),
```
Without metadataBase, your OpenGraph images resolve to /CFLogo.png instead of https://cryptoflexllc.com/CFLogo.png. Social platforms can't follow relative paths, so they need full URLs. This one setting ensures every image, canonical link, and RSS reference resolves correctly.
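The resolution itself is plain WHATWG URL semantics, which you can sanity-check in isolation (a sketch; the metadataBase property name is Next.js's, but the joining behavior is just the standard URL constructor):

```typescript
// How a relative metadata path resolves against metadataBase.
// Next.js uses standard URL resolution under the hood.
const metadataBase = new URL("https://cryptoflexllc.com");

const ogImage = new URL("/CFLogo.png", metadataBase).toString();
// → "https://cryptoflexllc.com/CFLogo.png"

const canonical = new URL("/blog", metadataBase).toString();
// → "https://cryptoflexllc.com/blog"
```

Any relative url you write in openGraph, twitter, or alternates gets joined against the base exactly like this.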
Layer 2: Structured Data (JSON-LD)#
This is where SEO goes from "Google can find your pages" to "Google understands your pages." JSON-LD (JavaScript Object Notation for Linked Data) is a way to embed machine-readable schema.org vocabulary into your HTML.
When Googlebot crawls a page with JSON-LD, it doesn't just see text. It sees structured objects it can reason about. An Article schema tells Google "this page is a blog post by Chris Johnson, published on February 12, 2026, about SEO." That context enables rich search results: author names, publish dates, breadcrumb trails, and more.
WebSite + Person Schemas: Global Identity#
These go in the root layout and appear on every page:
```tsx
// src/components/json-ld.tsx
export function WebsiteJsonLd({ url, name, description }: WebsiteJsonLdProps) {
  const data = {
    "@context": "https://schema.org",
    "@type": "WebSite",
    name,
    url,
    description,
    publisher: {
      "@type": "Organization",
      name: "CryptoFlex LLC",
      url,
      logo: {
        "@type": "ImageObject",
        url: `${url}/CFLogo.png`,
      },
    },
    potentialAction: {
      "@type": "SearchAction",
      target: {
        "@type": "EntryPoint",
        urlTemplate: `${url}/blog?q={search_term_string}`,
      },
      "query-input": "required name=search_term_string",
    },
  };

  return (
    <script
      type="application/ld+json"
      dangerouslySetInnerHTML={{ __html: JSON.stringify(data) }}
    />
  );
}
```
The SearchAction is interesting. It tells Google that your site has an internal search feature. If Google decides your site is authoritative enough, it'll show a sitelinks search box directly in search results, letting users search your content without even visiting your homepage first.
The PersonJsonLd component establishes author identity:
```tsx
export function PersonJsonLd({ name, url, jobTitle, description }: PersonJsonLdProps) {
  const data = {
    "@context": "https://schema.org",
    "@type": "Person",
    name,
    url: `${url}/about`,
    jobTitle,
    description,
    worksFor: {
      "@type": "Organization",
      name: "CryptoFlex LLC",
      url,
    },
    sameAs: [
      "https://github.com/chris2ao",
      "https://www.linkedin.com/in/chris-johnson-secops/",
    ],
  };
  // ...
}
```
E-E-A-T and Why Person Matters
Google's E-E-A-T framework (Experience, Expertise, Authoritativeness, Trustworthiness) directly impacts ranking. The Person schema with sameAs links to GitHub and LinkedIn tells Google "this author is a real person with a verifiable professional identity." For technical blog content, this is a significant ranking signal.
Both components are injected in the root layout's <body>:
```tsx
// src/app/layout.tsx
<body>
  <WebsiteJsonLd
    url={BASE_URL}
    name="CryptoFlex LLC"
    description="Personal tech blog and portfolio..."
  />
  <PersonJsonLd
    name="Chris Johnson"
    url={BASE_URL}
    jobTitle="Cybersecurity Professional"
    description="Veteran turned cybersecurity professional..."
  />
  {/* rest of app */}
</body>
```
Article Schema: Per-Post Rich Snippets#
Every blog post gets its own Article schema with headline, author, publisher, keywords, and publish date:
```tsx
export function ArticleJsonLd({
  title, description, url, datePublished, author, tags,
}: ArticleJsonLdProps) {
  const data = {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: title,
    description,
    url,
    datePublished,
    dateModified: datePublished,
    author: {
      "@type": "Person",
      name: author,
      url: "https://cryptoflexllc.com/about",
    },
    publisher: {
      "@type": "Organization",
      name: "CryptoFlex LLC",
      url: "https://cryptoflexllc.com",
      logo: {
        "@type": "ImageObject",
        url: "https://cryptoflexllc.com/CFLogo.png",
      },
    },
    mainEntityOfPage: { "@type": "WebPage", "@id": url },
    keywords: tags.join(", "),
    image: "https://cryptoflexllc.com/CFLogo.png",
  };
  // ...
}
```
This is what enables Google to show your search result like:
```text
Building This Site with Claude Code
Chris Johnson · Feb 7, 2026 · CryptoFlex LLC
A step-by-step guide to vibe coding a production website...
```
Instead of just:
```text
cryptoflexllc.com/blog/building-with-claude-code
No description available.
```
The difference in click-through rate is massive.
BreadcrumbList: Navigation Context#
Breadcrumbs tell Google where a page sits in your site hierarchy:
```tsx
<BreadcrumbJsonLd
  items={[
    { name: "Home", url: BASE_URL },
    { name: "Blog", url: `${BASE_URL}/blog` },
    { name: post.title, url: postUrl },
  ]}
/>
```
Google renders these as clickable trails in search results:
cryptoflexllc.com > Blog > Building This Site with Claude Code
This takes up more visual space in search results (good for you) and gives users navigational context before they click (good for them).
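The component itself isn't shown above, but the object it serializes is small. Here's a sketch of a helper that builds the schema.org BreadcrumbList payload (my reconstruction of the shape, not necessarily the exact component in the repo):

```typescript
type Crumb = { name: string; url: string };

// Builds a schema.org BreadcrumbList object; positions are 1-based,
// ordered from the site root down to the current page.
function buildBreadcrumbList(items: Crumb[]) {
  return {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    itemListElement: items.map((item, index) => ({
      "@type": "ListItem",
      position: index + 1,
      name: item.name,
      item: item.url,
    })),
  };
}

const data = buildBreadcrumbList([
  { name: "Home", url: "https://cryptoflexllc.com" },
  { name: "Blog", url: "https://cryptoflexllc.com/blog" },
]);
```

The component would then JSON.stringify this object into a `<script type="application/ld+json">` tag, exactly like the WebSite and Person schemas earlier.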
Layer 3: Social & Preview Optimization#
SEO isn't just about Google. Every time someone shares a link on LinkedIn, Twitter, Slack, or Discord, those platforms read your page's metadata to generate a preview card. No metadata = blank card = nobody clicks.
OpenGraph: The Social Media Standard#
OpenGraph is a protocol created by Facebook that's now used by virtually every social platform. It controls what appears when someone shares your URL.
```ts
// src/app/layout.tsx - global defaults
openGraph: {
  title: "CryptoFlex LLC | Chris Johnson",
  description: "Personal tech blog and portfolio...",
  url: BASE_URL,
  siteName: "CryptoFlex LLC",
  locale: "en_US",
  type: "website",
  images: [
    {
      url: "/CFLogo.png",
      width: 512,
      height: 512,
      alt: "CryptoFlex LLC Logo",
    },
  ],
},
```
Each blog post overrides with article-specific data:
```ts
// src/app/blog/[slug]/page.tsx
openGraph: {
  title: post.title,
  description: post.description,
  url: postUrl,
  type: "article",
  publishedTime: post.date,
  modifiedTime: post.date,
  authors: post.author ? [post.author] : undefined,
  tags: post.tags,
  images: [{ url: "/CFLogo.png", width: 512, height: 512, alt: post.title }],
},
```
Images Must Be Absolute URLs
Social platforms fetch images from their own servers and can't resolve relative paths. The metadataBase setting in layout.tsx handles this automatically, but if you ever hardcode an image path, make sure it's a full https:// URL.
Twitter Cards#
Twitter (X) uses its own metadata format alongside OpenGraph:
```ts
twitter: {
  card: "summary",
  title: "CryptoFlex LLC | Chris Johnson",
  description: "Cybersecurity professional writing about AI-assisted development...",
  images: ["/CFLogo.png"],
},
```
The card: "summary" type shows a small square image with title and description. For blog posts with hero images, you could use "summary_large_image" to get a wider preview, but since we're using a logo rather than post-specific images, summary keeps things clean.
googleBot Directives: Controlling the Preview#
These directives tell Google exactly how much of your content to show in search result previews:
```ts
robots: {
  index: true,
  follow: true,
  googleBot: {
    index: true,
    follow: true,
    "max-video-preview": -1,
    "max-image-preview": "large",
    "max-snippet": -1,
  },
},
```
| Directive | Value | Effect |
|---|---|---|
| `max-image-preview` | `"large"` | Google can show large image thumbnails in results |
| `max-snippet` | `-1` | No limit on text snippet length |
| `max-video-preview` | `-1` | No limit on video preview (future-proofing) |
Setting these to their maximum values means Google shows the richest possible preview of your content. More visual space in search results = higher click-through rates.
Layer 4: Distribution & Discovery#
RSS Feed: Your Syndication Channel#
RSS (Really Simple Syndication) is a decades-old technology that's still relevant for SEO. Feed readers, aggregator sites, and some search engines use RSS to discover new content automatically.
```ts
// src/app/feed.xml/route.ts
import { getAllPosts } from "@/lib/blog";

const BASE_URL = "https://cryptoflexllc.com";

function escapeXml(str: string): string {
  return str
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

export function GET() {
  const posts = getAllPosts();

  const items = posts
    .map(
      (post) => `
    <item>
      <title>${escapeXml(post.title)}</title>
      <link>${BASE_URL}/blog/${post.slug}</link>
      <guid isPermaLink="true">${BASE_URL}/blog/${post.slug}</guid>
      <description>${escapeXml(post.description)}</description>
      <pubDate>${new Date(post.date).toUTCString()}</pubDate>
      <author>Chris.Johnson@cryptoflexllc.com (${escapeXml(post.author)})</author>
      ${post.tags.map((tag) => `<category>${escapeXml(tag)}</category>`).join("\n      ")}
    </item>`
    )
    .join("");

  const feed = `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>CryptoFlex LLC Blog</title>
    <link>${BASE_URL}/blog</link>
    <description>Tech articles about cybersecurity, AI-assisted development...</description>
    <language>en-us</language>
    <lastBuildDate>${new Date().toUTCString()}</lastBuildDate>
    <atom:link href="${BASE_URL}/feed.xml" rel="self" type="application/rss+xml" />
    <image>
      <url>${BASE_URL}/CFLogo.png</url>
      <title>CryptoFlex LLC Blog</title>
      <link>${BASE_URL}/blog</link>
    </image>
    ${items}
  </channel>
</rss>`;

  return new Response(feed, {
    headers: {
      "Content-Type": "application/rss+xml; charset=utf-8",
      "Cache-Control": "s-maxage=3600, stale-while-revalidate",
    },
  });
}
```
The feed is auto-discoverable because we declare it in the layout metadata:
```ts
alternates: {
  canonical: BASE_URL,
  types: {
    "application/rss+xml": [
      { url: "/feed.xml", title: "CryptoFlex LLC Blog RSS Feed" },
    ],
  },
},
```
This renders as <link rel="alternate" type="application/rss+xml" href="/feed.xml"> in the HTML head. RSS readers and aggregators auto-discover this link when scanning your site.
The SEO Backlink Angle
When aggregator sites syndicate your RSS content, they create backlinks, links from external sites pointing to yours. Backlinks are one of the strongest Google ranking signals. Every RSS subscriber is a potential source of organic backlinks.
PWA Manifest: The Finishing Touch#
The web app manifest isn't strictly SEO, but it contributes to the overall signal quality Google looks for:
```ts
// src/app/manifest.ts
import type { MetadataRoute } from "next";

export default function manifest(): MetadataRoute.Manifest {
  return {
    name: "CryptoFlex LLC | Chris Johnson",
    short_name: "CryptoFlex",
    description: "Personal tech blog and portfolio...",
    start_url: "/",
    display: "standalone",
    background_color: "#0f0f12",
    theme_color: "#0f0f12",
    icons: [
      {
        src: "/CFLogo.png",
        sizes: "512x512",
        type: "image/png",
        purpose: "any",
      },
    ],
  };
}
```
This enables the "Add to Home Screen" prompt on mobile browsers and provides Google with additional structured metadata about your site.
Tying It All Together: The Metadata API#
Here's where Next.js really shines. All of the above (OpenGraph, Twitter Cards, canonicals, robots directives, keywords) is configured through a single TypeScript Metadata object that Next.js renders into the correct HTML tags at build time.
The root layout defines global defaults. Individual pages override specific fields. Next.js deep-merges them automatically, so you never write raw <meta> tags.
Here's the complete metadata configuration from the root layout:
```ts
// src/app/layout.tsx
export const metadata: Metadata = {
  metadataBase: new URL(BASE_URL),
  title: {
    default: "CryptoFlex LLC | Chris Johnson",
    template: "%s | CryptoFlex LLC",
  },
  description: "Personal tech blog and portfolio...",
  keywords: [
    "cybersecurity", "Claude Code", "AI development", "Next.js",
    "web development", "security consulting", "Chris Johnson",
    "CryptoFlex", "vibe coding", "tech blog",
  ],
  authors: [{ name: "Chris Johnson", url: `${BASE_URL}/about` }],
  creator: "Chris Johnson",
  publisher: "CryptoFlex LLC",
  openGraph: { /* ... */ },
  twitter: { /* ... */ },
  alternates: { /* canonical + RSS */ },
  robots: { /* index, follow, googleBot directives */ },
};
```
The title.template is particularly elegant: setting it to "%s | CryptoFlex LLC" means any child page that exports title: "About" automatically renders as <title>About | CryptoFlex LLC</title>. Consistent branding across every search result.
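The merge behavior can be approximated in a few lines (this is my sketch of what Next.js does, not its actual implementation): a child page's title slots into %s, and the default is used when no child title exists.

```typescript
// Approximation of Next.js title.template resolution (sketch only).
function resolveTitle(
  template: string,
  defaultTitle: string,
  pageTitle?: string
): string {
  return pageTitle ? template.replace("%s", pageTitle) : defaultTitle;
}

const title = resolveTitle(
  "%s | CryptoFlex LLC",
  "CryptoFlex LLC | Chris Johnson",
  "About"
);
// → "About | CryptoFlex LLC"
```

The root layout itself uses title.default, since templates only apply to child segments.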
The Gotcha: Vercel Serverless + File System#
Here's a war story. Everything looked perfect locally. The sitemap generated beautifully. But when I submitted it to Google Search Console, the status said "Couldn't fetch."
The root cause? The sitemap calls getAllPosts(), which uses fs.readdirSync() to read blog content from the src/content/blog/ directory. Locally, those files exist. On Vercel's serverless runtime? They might not be included in the deployment bundle.
Vercel Serverless File System
Vercel's serverless functions only include files that Next.js detects as dependencies through import analysis. Since getAllPosts() reads files dynamically via fs (not import), Next.js doesn't know those .mdx files need to be bundled.
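For context, the dynamic read looks roughly like this (a simplified sketch of getAllPosts, not the exact code in the repo): the fs.readdirSync call is invisible to static import analysis, so the .mdx files never make it into the trace.

```typescript
import fs from "node:fs";
import path from "node:path";

// Simplified sketch of the dynamic read that import analysis can't see.
// Because the .mdx files are reached via fs at runtime, not via import,
// Next.js won't bundle them without outputFileTracingIncludes.
function getPostSlugs(
  dir: string = path.join(process.cwd(), "src/content/blog")
): string[] {
  // The directory may be absent from the serverless bundle entirely.
  if (!fs.existsSync(dir)) return [];
  return fs
    .readdirSync(dir)
    .filter((file) => file.endsWith(".mdx"))
    .map((file) => file.replace(/\.mdx$/, ""));
}
```

Locally the directory always exists, which is exactly why the bug only surfaces after deployment.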
The fix is one line in next.config.ts:
```ts
// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  outputFileTracingIncludes: {
    "/*": ["src/content/**/*"],
  },
};

export default nextConfig;
```
outputFileTracingIncludes explicitly tells Next.js: "When building serverless functions, include all files matching src/content/**/* in the bundle." Problem solved. Google can fetch the sitemap.
Check This First
If your sitemap.xml, feed.xml, or any route that reads from the filesystem works locally but fails on Vercel, outputFileTracingIncludes is almost certainly the fix. This catches most "works on my machine" serverless deployment issues.
Per-Page SEO Enhancement#
Beyond the global configuration, every page gets its own targeted metadata. Here's how the blog listing page looks:
```ts
// src/app/blog/page.tsx
export const metadata: Metadata = {
  title: "Blog",
  description: "Articles about Claude Code, Next.js, cybersecurity...",
  alternates: {
    canonical: `${BASE_URL}/blog`,
  },
  openGraph: {
    title: "Blog | CryptoFlex LLC",
    description: "Technical articles...",
    url: `${BASE_URL}/blog`,
    type: "website",
  },
};
```
And the About page uses type: "profile" for the OpenGraph type:
```ts
openGraph: {
  title: "About | CryptoFlex LLC",
  description: "Chris Johnson, veteran, cybersecurity professional...",
  url: `${BASE_URL}/about`,
  type: "profile",
},
```
Each page type gets semantically appropriate metadata. Blog posts are article, the About page is profile, everything else is website. Google uses these type hints to understand the nature of each page.
Validation Checklist#
After implementing all of this, here's how to verify it's working:
| Endpoint | What to Check |
|---|---|
| `/robots.txt` | Returns `Allow: /` with disallow rules and sitemap reference |
| `/sitemap.xml` | Valid XML listing all pages and blog posts with dates |
| `/feed.xml` | Valid RSS 2.0 with Atom self-link and all posts |
| `/manifest.webmanifest` | JSON with app name, icons, and theme colors |
For structured data, use Google's Rich Results Test:
- Test your homepage, which should detect WebSite and Person schemas
- Test a blog post, which should detect Article and BreadcrumbList schemas
For social previews:
- OpenGraph Debugger: Test how your site appears on LinkedIn/Facebook
- Share a link in a draft tweet and verify the Twitter Card renders correctly
For the sitemap specifically:
- Go to Google Search Console
- Verify your domain ownership
- Navigate to Sitemaps in the left sidebar
- Submit sitemap.xml
- Wait for status to show "Success"
Don't Skip Search Console
Google Search Console is free and gives you invaluable data: which search queries lead to your site, which pages are indexed, crawl errors, Core Web Vitals scores, and mobile usability issues. It's the single most important SEO tool you can set up.
The Results#
With all four layers in place, Google now sees a completely different site:
- 16 pages submitted via dynamic sitemap (and growing with every new post)
- 4 JSON-LD schemas providing structured data for rich results
- Unique metadata on every page with targeted keywords
- Social preview cards that look professional on every platform
- An RSS feed enabling content syndication and potential backlinks
- Canonical URLs preventing duplicate content dilution
- googleBot directives maximizing search result preview quality
The best part? All of this is zero-maintenance. New blog posts automatically appear in the sitemap, get Article schemas, generate RSS entries, and inherit all the metadata configuration. The SEO infrastructure scales with the content.
Final Thoughts#
SEO doesn't have to be mysterious. At its core, you're answering three questions for Google:
- What pages exist? → robots.txt + sitemap.xml
- What is each page about? → JSON-LD + metadata + keywords
- How should I show it? → OpenGraph + Twitter Cards + googleBot directives
If you're building with Next.js, the Metadata API makes this almost pleasant. You write TypeScript objects, and Next.js handles the messy HTML output. Add a few JSON-LD components, set metadataBase, and you've covered 90% of what matters for search visibility.
The remaining 10% is content quality and backlinks, and no amount of meta tags can fake those. But at least now Google knows your site exists, understands what it's about, and can show it to people who are looking for exactly what you've written.
That unmapped road? It's got street signs now.
Written by Chris Johnson and edited by Claude Code (Opus 4.6). All the SEO implementation code shown in this post is in the website source at github.com/chris2ao/cryptoflexllc. This post is part of a series about AI-assisted development. Previous: Evaluating Free WAFs So You Don't Have To: Cloudflare vs Vercel. Next: Building a Blog Newsletter from Scratch.