Official statement
Google tolerates generated content for location pages as long as each page contains enough unique and locally relevant information. However, beware: simply swapping city names isn’t enough. Pages that are too similar risk being deindexed for duplication. The goal is not just to avoid penalties, but to show that there is a real difference between Manchester and Birmingham, not just a search-and-replace job.
What you need to understand
Why is Google specifically interested in location pages?
Location pages have been one of SEO's goldmines for years. Every multi-site or multi-region company faces the same challenge: how do you create unique content for 50, 100, or even 500 locations when the service offering stays the same?
Historically, the temptation was strong to duplicate a template with three variables: city name, postal code, maybe a local photo. Google has always said that this doesn't cut it. But the nuance Martin Splitt introduces here is interesting: the problem is not that the content is automatically generated, it's that it lacks real differentiation.
This statement implicitly acknowledges that Google understands the operational constraint. We’re not going to ask a writer to create 200 unique texts for 200 stores that all sell the same thing. But there is a red line: structural similarity must not lead to informational similarity.
What does Google consider to be “unique enough”?
Splitt uses a deliberately vague formula: “enough relevant facts and different information”. No percentage, no metric. It’s frustrating but consistent with Google’s usual stance: no numeric threshold, just a principle.
Specifically, this means replacing “Our services in Paris” with “Our services in Lyon” does not constitute different information. On the other hand, mentioning local hours, regional promotions, geolocated customer reviews, access specifics (parking, transport), local regulatory particularities — all of this matters.
The underlying message: if the page does not provide any additional value to a visitor from Lyon compared to a visitor from Paris, then it probably does not deserve to be indexed separately. Google wants pages that meet local intent, not just pages targeting a local keyword.
What are the concrete risks if the pages are too similar?
Splitt is clear: Google may treat the pages as duplicates and deindex them. This isn't a manual spam penalty; it's algorithmic handling in which near-identical pages are grouped together and only one canonical is kept.
In practice, this can mean that out of 50 location pages, Google indexes only 5 or 6 and ignores the rest. Or worse, indexing fluctuates from one update to the next, creating chronic instability: some pages appear in positions 3-4, then disappear completely after the next crawl.
The real danger is the lost SEO investment. Creating pages, acquiring local backlinks, optimizing tags… all of this becomes useless if Google decides that 45 of your 50 pages are clones. Deindexing is not always permanent, but it is unpredictable.
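As a rough self-check before Google makes that call, you can estimate how similar your location pages are to one another. A minimal sketch, assuming each page's main text has already been exported to a plain-text file; the 5-word shingle size and the 0.8 alert threshold are arbitrary illustration values, not thresholds Google has ever communicated.

```python
from itertools import combinations
from pathlib import Path

def shingles(text: str, size: int = 5) -> set:
    """Split a page's text into overlapping word sequences ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + size]) for i in range(len(words) - size + 1)}

def jaccard(a: set, b: set) -> float:
    """Share of shingles two pages have in common (0 = distinct, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# One plain-text file per location page, e.g. exported from your CMS (hypothetical folder name).
pages = {p.name: shingles(p.read_text(encoding="utf-8"))
         for p in Path("location_pages").glob("*.txt")}

for (name_a, sh_a), (name_b, sh_b) in combinations(pages.items(), 2):
    score = jaccard(sh_a, sh_b)
    if score > 0.8:  # arbitrary alert threshold, not a Google number
        print(f"{name_a} and {name_b} share {score:.0%} of their shingles")
```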
- Generated content is not prohibited: Google does not penalize automation per se, but the informational poverty that often results from it.
- Textual similarity is a signal of duplication: simply having a few different words is not enough; blocks of structurally distinct information are required.
- Deindexing can be partial and fluctuating: Google does not necessarily penalize the entire site, but may ignore the majority of location pages.
- Local intent must be satisfied differently: each page must answer a specific question that only that location can resolve.
- No communicated quantitative threshold: Google will never say “you need 30% unique content”; it evaluates overall relevance.
SEO expert opinion
Is this statement consistent with field observations?
Yes and no. Yes, because we do observe that Google better tolerates location pages that include local structured data, reviews, specific hours, and geolocated images. These pages perform better in local SERPs and remain indexed more stably.
No, because the line between “unique enough” and “too similar” remains a black box. I’ve seen sites with 80% identical content remain indexed because they had strong domain authority and good internal linking. I’ve seen the opposite: pages with 40% differentiation get deindexed because the site lacked trust signals. [To be verified]: does Google apply the same tolerance threshold to an established site as to a new domain? Nothing officially proves that.
What is certain is that Google does not look only at the text. Behavioral signals (click-through rate, time spent, bounce rate), the quality of local schema.org markup, NAP consistency (Name Address Phone), local backlinks… all of this plays a role. So average content well integrated into a local ecosystem can perform better than technically more unique content but isolated.
What are the blind spots of this official recommendation?
The first blind spot: what constitutes a “relevant fact”? Google does not define anything. Does a list of 10 nearby points of interest suffice? Is original editorial content required? Does aggregating public data (local weather, events, demographic statistics) count as “unique”?
The second blind spot: the scale question. Splitt talks about “50 states,” but what about a site with 5000 location pages? At what point does Google become suspicious? No answer. And that changes everything: a plumbing site with 200 French cities does not have the same issue as a directory with 50,000 listings.
The third blind spot: the role of external signals. If a location page receives local backlinks, citations in directories, shares in neighborhood Facebook groups… does Google automatically consider it “useful” even if the content is generic? Probably yes, but Splitt does not mention it.
In what cases does this rule not really apply?
There are sectors where partial duplication is almost inevitable and yet accepted: real estate, classifieds sites, service aggregators. These sites have thousands of pages with a rigid template, but Google indexes them because they meet a specific informational demand (a specific property, a specific job offer).
The fundamental difference: these pages have a unique factual anchor (an exact address, a price, availability) that the user actively seeks. Location pages for “generic service” do not have this anchor. A plumber in Toulouse technically offers the same thing as a plumber in Bordeaux. That’s why the pressure for differentiation is higher.
Another exception: sites with overwhelming domain authority. Yelp, Yellow Pages, Booking can afford nearly identical pages because their trust flow and user signals are massive. Google knows these pages serve a real need, even if the content is light. For an average site, this tolerance does not exist.
Practical impact and recommendations
What should be done concretely to secure location pages?
First step: identify the content blocks that can be differentiated without excessive editorial effort. Think in structured blocks: specific hours, the local team (with photos and short bios), geolocated customer testimonials, a local FAQ ("What is the response time in Lyon?"), promotions currently running in that area.
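One way to make that inventory concrete is to model each location as a record whose local fields must be filled before the page goes live. A minimal sketch with hypothetical field names; adapt it to whatever your CMS actually stores.

```python
from dataclasses import dataclass, field, fields

@dataclass
class LocationPage:
    city: str
    opening_hours: str = ""
    team_bios: list[str] = field(default_factory=list)           # short local bios with photos
    local_testimonials: list[str] = field(default_factory=list)
    local_faq: list[str] = field(default_factory=list)           # e.g. "What is the response time in Lyon?"
    current_promotions: list[str] = field(default_factory=list)

    def missing_blocks(self) -> list[str]:
        """Local blocks still empty, i.e. where the page is not yet differentiated."""
        return [f.name for f in fields(self) if f.name != "city" and not getattr(self, f.name)]

lyon = LocationPage(city="Lyon", opening_hours="Mon-Fri 8:00-18:00")
print(lyon.missing_blocks())  # ['team_bios', 'local_testimonials', 'local_faq', 'current_promotions']
```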
Second step: integrate relevant dynamic content. Local weather if it aligns with your sector, nearby events, local news related to your activity. Use public APIs or local RSS feeds. The goal is not to stuff artificially but to create an authentic local context.
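As an illustration, here is how such a block could be assembled from a public events feed. The endpoint and JSON fields below are hypothetical placeholders; swap in whichever local API or RSS feed you actually use.

```python
import requests

def local_events_block(city: str, limit: int = 3) -> str:
    """Fetch upcoming events for a city and render them as a short HTML block."""
    # Hypothetical endpoint and response shape, used only as an example.
    resp = requests.get("https://api.example.com/events",
                        params={"city": city, "limit": limit}, timeout=5)
    resp.raise_for_status()
    events = resp.json()  # assumed shape: [{"name": ..., "date": ...}, ...]
    items = "".join(f"<li>{e['date']} · {e['name']}</li>" for e in events)
    return f"<section><h2>Upcoming events in {city}</h2><ul>{items}</ul></section>"

print(local_events_block("Lyon"))
```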
Third step: add schema.org LocalBusiness markup with every relevant field completed. Google reads this structured data, and it reinforces each page's legitimacy as a distinct local entity. Add GPS coordinates, service areas, spoken languages, accepted payment methods… anything that can vary locally.
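A minimal sketch of what that markup can look like when generated per location from your own data; the values are placeholders, and the field list can be extended (areaServed, paymentAccepted, knowsLanguage…).

```python
import json

def local_business_jsonld(loc: dict) -> str:
    """Build the JSON-LD <script> tag for one location page."""
    data = {
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": loc["name"],
        "address": {
            "@type": "PostalAddress",
            "streetAddress": loc["street"],
            "addressLocality": loc["city"],
            "postalCode": loc["postal_code"],
            "addressCountry": "FR",
        },
        "telephone": loc["phone"],
        "geo": {"@type": "GeoCoordinates", "latitude": loc["lat"], "longitude": loc["lng"]},
        "openingHours": loc["hours"],
    }
    return f'<script type="application/ld+json">{json.dumps(data, ensure_ascii=False)}</script>'

print(local_business_jsonld({
    "name": "Example Plumbing Lyon", "street": "12 rue de la République", "city": "Lyon",
    "postal_code": "69002", "phone": "+33 4 00 00 00 00",
    "lat": 45.764, "lng": 4.8357, "hours": "Mo-Fr 08:00-18:00",
}))
```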
What errors should absolutely be avoided?
Classic error: generating pages for areas where you have no physical or operational presence. Google cross-references signals: if your NAP is inconsistent anywhere, if no one is searching for your business in that location, if no local backlinks exist, the page will be seen as geographic spam.
Another trap: using the same editorial text block with just the city name as a variable. Typical example: “Our company offers plumbing services in [CITY]. We are proud to serve [CITY] for 20 years.” Google detects these substitution patterns. If you must generate text, at least vary the sentence structure, the order of arguments, the concrete examples.
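A hedged sketch of that idea: draw each paragraph from several hand-written variants and fill them with genuinely local facts rather than just the city name. This does not replace real local information; it only avoids the most obvious substitution fingerprint. All the facts below are invented examples.

```python
import random

INTROS = [
    "Our plumbers have been covering {city} and its suburbs since {year}.",
    "Since {year}, our {city} team has handled emergency call-outs within {delay}.",
    "{city} residents have relied on our workshop on {street} since {year}.",
]

def intro_paragraph(city_facts: dict) -> str:
    """Pick a variant and fill it with location-specific facts, not just the city name."""
    template = random.choice(INTROS)
    return template.format(**city_facts)  # unused keys are simply ignored

print(intro_paragraph({"city": "Toulouse", "year": 2004, "delay": "two hours", "street": "rue Bayard"}))
```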
Third error: neglecting internal linking. If your 50 location pages are orphaned or accessible only via a dropdown menu, Google may not even crawl them. Create a regional hub page, cross-links between nearby cities, a “neighboring cities” logic at the bottom of the page.
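For the "neighboring cities" block, the nearest locations can be computed once from their coordinates rather than maintained by hand. A minimal sketch assuming you have latitude/longitude for each location; the haversine formula gives the distance in kilometres, and the coordinates below are illustrative.

```python
from math import radians, sin, cos, asin, sqrt

LOCATIONS = {  # hypothetical coordinates for your location pages
    "Lyon": (45.764, 4.8357),
    "Villeurbanne": (45.7719, 4.8902),
    "Saint-Étienne": (45.4397, 4.3872),
    "Grenoble": (45.1885, 5.7245),
}

def haversine_km(a, b) -> float:
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def neighboring_cities(city: str, count: int = 3) -> list[str]:
    """Closest other locations, to be linked in the 'nearby cities' footer block."""
    origin = LOCATIONS[city]
    others = [c for c in LOCATIONS if c != city]
    return sorted(others, key=lambda c: haversine_km(origin, LOCATIONS[c]))[:count]

print(neighboring_cities("Lyon"))
```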
How to check if my pages are perceived as distinct by Google?
First reflex: use Search Console and check the Coverage report. How many of your location pages are indexed? If you've created 100 and only 15 are indexed, that's a clear signal of perceived duplication. Also look at the "Excluded" pages with a status such as "Duplicate, submitted URL not selected as canonical."
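To put a number on it, you can compare the location URLs declared in your sitemap with the list of indexed pages exported from that report. A minimal sketch assuming a standard XML sitemap and a one-column CSV export; the file names are placeholders.

```python
import csv
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(path: str) -> set[str]:
    """All <loc> entries of a standard XML sitemap."""
    tree = ET.parse(path)
    return {loc.text.strip() for loc in tree.findall(".//sm:loc", NS)}

def exported_indexed_urls(path: str) -> set[str]:
    """URLs from a Search Console export (first column of the CSV)."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[0].strip() for row in csv.reader(f) if row and row[0].startswith("http")}

declared = sitemap_urls("sitemap-locations.xml")            # hypothetical file names
indexed = exported_indexed_urls("search_console_pages.csv")
missing = sorted(declared - indexed)
print(f"{len(missing)} of {len(declared)} location pages are not indexed")
for url in missing[:10]:
    print(" -", url)
```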
Second test: manual search with precise local queries. Type “[your service] + [city]” for each important location. Does your dedicated page show up? Or is your homepage or another generic page appearing instead? If Google does not display the right page, it doesn’t consider it sufficiently relevant or unique.
Third indicator: analyze click-through rates and impressions per page in Search Console. A location page that generates zero impressions on local queries while being indexed indicates it is not viewed as relevant by Google. Compare the performance of your different pages: if some perform well and others stagnate at zero, investigate the difference in content.
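If you prefer to pull those figures programmatically, the Search Console Search Analytics API can return impressions and clicks per page. A hedged sketch using the google-api-python-client library, assuming authorized credentials are already available (credential setup omitted) and that your location pages live under a "/locations/" path; check the current API documentation before relying on field names.

```python
from googleapiclient.discovery import build

# 'creds' is assumed to be an authorized Credentials object (service account or OAuth)
# with access to the Search Console property below; setup omitted for brevity.
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",   # your verified property
    body={
        "startDate": "2024-01-01",
        "endDate": "2024-03-31",
        "dimensions": ["page"],
        "dimensionFilterGroups": [{
            "filters": [{"dimension": "page", "operator": "contains", "expression": "/locations/"}]
        }],
        "rowLimit": 500,
    },
).execute()

for row in response.get("rows", []):
    page, impressions, clicks = row["keys"][0], row["impressions"], row["clicks"]
    print(f"{page}: {impressions} impressions, {clicks} clicks")
```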
- Indexing audit: check how many pages are actually indexed vs created
- Content differentiation: minimum 3-4 blocks of unique local information per page (hours, team, reviews, FAQ, events)
- Complete and consistent schema.org LocalBusiness on each page
- NAP (Name, Address, Phone) consistent with local citations and Google Business Profile
- Structured internal linking: regional hub + links between nearby cities
- Monitoring local performance per query in Search Console
❓ Frequently Asked Questions
How many different words are needed between two location pages for them to be considered unique?
Does using a shared template automatically lead to a penalty?
If my location pages are deindexed, is it permanent?
Is LocalBusiness structured data enough to differentiate the pages?
Can you create location pages for cities where you have no physical office?