I learned this the hard way: SEO is not a content problem first. It is a rendering, routing, and deployment problem.
The first version of InfiniteGrammar.de was designed in Loveable. The rest of the development and repair work was done in Claude Code. That combination was productive for shipping quickly. It also created a trap I did not recognise early enough.
The app started as a React 18 + Vite + React Router SPA on Netlify. That is a reasonable shape for an interactive product. It is a weak default shape for a site that needs Google to index dozens of content pages.
The real problem was architectural
The site had many pages that were meant to rank: grammar overview pages, CEFR-level pages, grammar-section pages, exam-related pages. But the initial frontend stack behaved like this:
- React Router resolved routes client-side,
- Vite built a static SPA shell,
- Netlify served `index.html` and let the browser render the page,
- content lived in the application layer rather than in pre-rendered HTML.
For a crawler, that meant weak or inconsistent signals unless additional SEO infrastructure was added deliberately. The main lesson is simple:
If a product depends on organic search, a plain React SPA is not SEO-ready by default.
That does not mean it cannot rank. It means SEO has to be treated as part of the architecture, not as a late polish pass.
The mistakes that mattered most
1. Treating meta tags as the main SEO task
The first instinct was to improve titles, descriptions, Open Graph tags, and route metadata. That helped presentation. It did not solve indexation. The real issue was that the crawler still had to deal with a client-rendered application shell.
A page can have a polished title and still be a weak SEO page.
2. Patching SEO outside the rendering model
I tried approaches that modified or injected metadata after build instead of making prerendered pages the actual output. That created duplication: one source of truth in React, another in scripts, sometimes a third in sitemap or redirect logic. Once several systems define the same URLs and metadata, drift becomes likely. That drift caused real problems.
3. Changing URLs too often
One of the most expensive mistakes was changing URL structures repeatedly while the site was already being crawled. Every URL migration required redirects. Repeated migrations created chains. Chains wasted crawl budget, added latency, and weakened the consistency of the signals being sent.
For SEO, URL stability is not an implementation detail. It is a product decision.
4. Underestimating trailing-slash consistency
This was one of the most damaging issues and also one of the easiest to miss. In a Netlify setup that serves prerendered directory-based routes, `/page/` and `/page` are not equivalent. One may return 200, the other may return 301.
If the sitemap, canonical tags, internal links, and prerender script disagree about slash format, the site starts generating unnecessary redirects on its own canonical pages. The rule I would apply now is absolute:
Pick one URL format and make every system obey it.
That includes React links, React Router navigation, sitemap generation, canonical tags, prerender page lists, and Netlify redirect rules.
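One way to make "every system obeys it" concrete is to define the policy in exactly one function and import it everywhere a URL is emitted. A minimal sketch, assuming a hypothetical `normalizeUrl` helper (the names and the always-trailing-slash policy are illustrative, not the site's actual code):

```typescript
// Hypothetical helper: the single place where the trailing-slash
// policy lives. React links, sitemap generation, canonical tags,
// prerender lists, and redirect rules all call this instead of
// formatting paths themselves.
function normalizeUrl(path: string): string {
  // Policy chosen for this sketch: routes always end in a slash,
  // file-like paths (e.g. /sitemap.xml) never do.
  if (/\.[a-z0-9]+$/i.test(path)) return path; // looks like a file
  return path.endsWith("/") ? path : path + "/";
}

function canonicalFor(path: string, origin = "https://infinitegrammar.de"): string {
  // Canonical tags are derived from the same normalizer, so they
  // cannot disagree with the rest of the system.
  return origin + normalizeUrl(path);
}
```

The point is not this particular policy; slash or no slash both work. The point is that the decision is encoded once, so it cannot drift between systems.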
5. Letting the sitemap drift away from the build
A sitemap is only useful if it reflects what the build actually produces. In my case, the sitemap, the prerender script, and the route definitions were not always generated from the same source of truth. That meant some URLs were valid in one place and missing or redirected in another.
A sitemap should not be maintained as a separate editorial object. It should be derived from the same route inventory that powers prerendering and canonical generation.
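Deriving the sitemap from the route inventory can be sketched like this (routes and field names are illustrative, not the actual codebase):

```typescript
// Sketch: one route inventory drives both the sitemap and the
// prerender target list, so the two cannot drift apart.
interface Route {
  path: string;       // canonical path, trailing slash included
  indexable: boolean; // false => never in sitemap, never prerendered
}

const routeInventory: Route[] = [
  { path: "/", indexable: true },
  { path: "/grammar/", indexable: true },
  { path: "/grammar/a1/", indexable: true },
  { path: "/profile/", indexable: false }, // private route
];

const ORIGIN = "https://infinitegrammar.de";

function buildSitemap(routes: Route[]): string {
  const urls = routes
    .filter((r) => r.indexable)
    .map((r) => `  <url><loc>${ORIGIN}${r.path}</loc></url>`)
    .join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>\n<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${urls}\n</urlset>`;
}

function prerenderTargets(routes: Route[]): string[] {
  // Same filter, same inventory: a URL in the sitemap is by
  // construction also a prerender target.
  return routes.filter((r) => r.indexable).map((r) => r.path);
}
```

Both outputs are pure functions of the same list, which is the property that was missing when the sitemap was maintained as a separate editorial object.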
6. Treating prerendering as an add-on
The turning point was moving to Puppeteer-based prerendering and treating it as part of the build itself. That changed the site from one SPA shell plus client rendering into a set of route-specific HTML documents that React could hydrate after load.
That is a much cleaner SEO model for a content-heavy site on React + Vite + Netlify. It is still more fragile than using a framework with built-in SSR or SSG. But it is workable if the pipeline is disciplined.
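The core of directory-based prerendering is the mapping from routes to output files, which is also why `/page/` and `/page` behave differently on Netlify: only one of them corresponds to a real file. A sketch of that mapping (the browser automation itself is omitted):

```typescript
// Sketch: each indexable route gets its own index.html inside dist/,
// so Netlify can answer the canonical URL with real HTML (200)
// instead of the SPA shell. The prerender step renders each route
// with Puppeteer and writes the result to this path.
function outputPathFor(route: string, distDir = "dist"): string {
  if (route === "/") return `${distDir}/index.html`;
  const trimmed = route.replace(/^\/|\/$/g, ""); // "/grammar/a1/" -> "grammar/a1"
  return `${distDir}/${trimmed}/index.html`;
}
```

Because `dist/grammar/a1/index.html` exists on disk, `/grammar/a1/` is served directly; the slashless form only exists as a redirect, which is exactly the trailing-slash trap described above.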
What the stack taught me
Loveable made it easy to get a usable frontend quickly. Claude Code made it easy to iterate quickly. But speed of UI iteration can hide SEO debt if the underlying rendering model is wrong for search-driven growth.
The repair cost was real because the key question had not been asked early enough:
Does this stack produce stable, crawlable HTML for every page I want indexed?
That question should have been answered before scaling content.
What I would do differently now
Path 1: use a framework with built-in SSR or SSG
For content-heavy products, this is the cleaner answer. Use Next.js, Astro, Remix, or another framework where route-level HTML generation is part of the normal architecture. That removes a large amount of custom SEO plumbing.
Path 2: stay on React + Vite, but define the SEO system on day one
If the stack remains React 18 + TypeScript + Vite + React Router + Netlify + react-helmet-async + Puppeteer, then the following should exist before launch:
- a route inventory as a single source of truth,
- build-time prerendering for every indexable route,
- per-route canonical tags,
- a sitemap generated from the same route inventory,
- structured data for key content types,
- robots rules for public vs private areas,
- one enforced trailing-slash policy,
- and redirect tests as part of deployment validation.
That would have prevented most of the struggle.
SEO guidelines I would now give Claude Code for this stack
Architecture rules
- Treat SEO as a rendering problem first, not as a metadata problem.
- Assume a plain React SPA is not sufficient for indexable content pages.
- For every public route that should rank, produce prerendered HTML at build time.
- Keep private routes out of the crawl surface: `/auth`, `/profile`, `/admin`, `/statistics`.
- Pick one canonical URL format and enforce it everywhere.
Route and URL rules
- Use one route inventory as the single source of truth for sitemap generation, prerender targets, canonical generation, and internal linking.
- Do not change URL structures unless absolutely necessary.
- If URLs must change, use one-hop `301` redirects only.
- Do not allow internal links to point at redirecting URLs.
- Treat trailing-slash consistency as mandatory, not cosmetic.
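The one-hop rule is mechanically checkable before deploy. A minimal sketch, assuming the redirect rules have already been parsed into a map from old path to target (the parsing of Netlify's `_redirects` file is not shown):

```typescript
// Sketch: flag redirect chains in a redirect map before deploy.
// Keys are old paths, values are their 301 targets. A chain exists
// whenever a target is itself a key in the map.
function findChains(redirects: Record<string, string>): string[][] {
  const chains: string[][] = [];
  for (const [from, to] of Object.entries(redirects)) {
    if (to in redirects) {
      // `from` redirects to a URL that redirects again: two hops.
      chains.push([from, to, redirects[to]]);
    }
  }
  return chains; // non-empty => flatten the chain, redeploy
}
```

The fix for a detected chain is always the same: point the oldest URL directly at the final destination so every redirect is a single hop.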
Metadata rules
- Use react-helmet-async for per-route titles, descriptions, and canonical tags.
- Add structured data where appropriate: `WebSite`, `BreadcrumbList`, `FAQPage`, `Article`, `LearningResource`.
- Keep canonical URLs aligned with the actual deployed route format.
- Make sure social tags and search tags are consistent, but do not mistake social metadata for SEO completeness.
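For structured data, generating the JSON-LD from route data rather than hand-writing it keeps it consistent with the URLs. A sketch for `BreadcrumbList` (the crumb data and helper name are illustrative; the object would be rendered via react-helmet-async as a `<script type="application/ld+json">` tag):

```typescript
// Sketch: build BreadcrumbList JSON-LD from a route's ancestry.
interface Crumb {
  name: string;
  path: string; // canonical path, trailing slash included
}

function breadcrumbJsonLd(crumbs: Crumb[], origin = "https://infinitegrammar.de") {
  return {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    itemListElement: crumbs.map((c, i) => ({
      "@type": "ListItem",
      position: i + 1, // positions are 1-based per schema.org
      name: c.name,
      item: origin + c.path, // same origin + path as the canonical tag
    })),
  };
}
```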
Build and deployment rules
- Make Puppeteer prerendering part of the build, not a side script.
- Fail the build if an indexable route is missing prerendered output.
- Fail the build if sitemap URLs do not match the prerender route list.
- Fail the build if canonical URLs and route inventory disagree.
- Add automated checks for `200` on canonical pages and for accidental `301` or `404` responses.
- Keep Netlify redirects minimal and explicit.
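The "fail the build" rules above reduce to a cross-check between artifacts the build already has. A minimal sketch, with illustrative inputs (a real pipeline would parse `sitemap.xml` and list the prerendered files under `dist/`):

```typescript
// Sketch: build-time gate that fails when the sitemap and the
// prerender output disagree, or when a sitemap URL violates the
// trailing-slash policy.
function crossCheck(sitemapUrls: string[], prerenderedRoutes: string[]): string[] {
  const errors: string[] = [];
  const prerendered = new Set(prerenderedRoutes);
  for (const url of sitemapUrls) {
    const path = new URL(url).pathname;
    if (!prerendered.has(path)) {
      errors.push(`sitemap URL has no prerendered output: ${url}`);
    }
    if (!path.endsWith("/") && !/\.[a-z0-9]+$/i.test(path)) {
      errors.push(`sitemap URL violates trailing-slash policy: ${url}`);
    }
  }
  return errors; // non-empty => the build script calls process.exit(1)
}
```

Run as the last build step, this turns silent drift into a red CI run, which is the cheapest place to catch it.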
Crawl-surface rules
- Maintain a strict `robots.txt` for public vs private routes.
- Do not let the sitemap be blocked by headers or robots directives.
- Make sure every public content page is reachable through internal links, not only via the sitemap.
- Treat crawlability, canonicals, redirects, sitemap output, and prerendering as one system.
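Treating these as one system also means `robots.txt` should come from the same source as everything else. A sketch, with an illustrative private-route list mirroring the rules above:

```typescript
// Sketch: derive robots.txt from the same route data that drives
// prerendering, instead of maintaining it by hand.
const privateRoutes = ["/auth/", "/profile/", "/admin/", "/statistics/"];

function buildRobotsTxt(origin = "https://infinitegrammar.de"): string {
  const lines = ["User-agent: *"];
  for (const route of privateRoutes) {
    lines.push(`Disallow: ${route}`);
  }
  lines.push("Allow: /");
  // Advertise the sitemap from the same origin the canonicals use.
  lines.push(`Sitemap: ${origin}/sitemap.xml`);
  return lines.join("\n");
}
```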
Hosting platform rules
- Disable Netlify post-processing features (Pretty URLs, asset optimization, snippet injection) when managing meta tags with React Helmet. These features can inject duplicate `og:` tags derived from `index.html` that appear before Helmet's tags and override them for social crawlers.
- After every deploy, `curl` a sample of live URLs and diff the responses against local `dist/` files. If they differ, the hosting platform is modifying the HTML.
- Test social sharing with LinkedIn Post Inspector and Facebook Sharing Debugger after every deploy that touches meta tags or hosting configuration. Browser previews are not sufficient; social crawlers use first-match semantics for `og:` tags.
- Treat the hosting platform's configuration as part of the SEO system. Netlify settings, Cloudflare rules, Vercel headers: any layer between build output and the crawler can break what the build got right.
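The duplicate-`og:`-tag check can be automated on the fetched HTML. A sketch (fetching the live page and the local `dist/` file is left out; the helper names are illustrative):

```typescript
// Sketch: list og: meta tags in document order. Social crawlers use
// the first match per property, so any property appearing twice means
// something injected a duplicate ahead of Helmet's tag.
function ogProperties(html: string): string[] {
  const re = /<meta[^>]*property="(og:[^"]+)"[^>]*>/g;
  const props: string[] = [];
  let m: RegExpExecArray | null;
  while ((m = re.exec(html)) !== null) {
    props.push(m[1]);
  }
  return props;
}

function duplicatedOgProps(html: string): string[] {
  const seen = new Set<string>();
  const dupes = new Set<string>();
  for (const p of ogProperties(html)) {
    if (seen.has(p)) dupes.add(p);
    seen.add(p);
  }
  return [...dupes]; // non-empty => the platform is injecting tags
}
```

Run against every deployed URL in the sample, this catches the first-match override problem without waiting for a broken share preview to surface it.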
The frontend was easy to ship. The search surface was not. For content-heavy products, that distinction matters very early.
