
SEO in Kelowna: How to Get Your Business Found on Google
March 16, 2026
What robots.txt does, why it matters for SEO, and how to set it up correctly for your small business website. No technical jargon required.
Loic Bachellerie

Most small business owners have never looked at their robots.txt file. Some do not know it exists. A few have one that is actively hurting their search rankings without them realizing it.
This guide covers everything you need to know about robots.txt optimization — what the file does, why it matters for your visibility on Google, what common mistakes look like, and how to get yours set up correctly. No prior technical knowledge required.
A robots.txt file is a plain text file that lives at the root of your website. You can find it at yourdomain.com/robots.txt. Its job is to send instructions to search engine crawlers — programs like Googlebot that visit your website to read and index your pages.
Think of it as a note you leave on the front door. It tells crawlers which parts of your site they are welcome to explore and which parts they should skip entirely.
Here is what a simple robots.txt file looks like:
User-agent: *
Disallow: /admin/
Sitemap: https://www.yoursite.ca/sitemap.xml
That is it. Three lines. The first line says "these rules apply to all crawlers." The second says "do not crawl anything inside the /admin/ folder." The third tells crawlers where to find the sitemap.
Robots.txt is not a security mechanism. It does not prevent people from visiting those pages directly, and a determined crawler can ignore it. But every major search engine — Google, Bing, DuckDuckGo — respects the standard. For the purposes of SEO, it works reliably.
You might assume robots.txt is a concern for large websites with thousands of pages. It is not. Even a five-page brochure site can run into real problems if this file is misconfigured.
Google allocates a limited amount of crawl activity to each website, often called its crawl budget. For a large e-commerce site with 50,000 pages, this is a significant constraint. For most small business sites, it is less critical — but it still matters.
If your robots.txt is not directing crawlers efficiently, Googlebot may spend time on pages that offer no SEO value: thank-you pages, internal search results, login pages, duplicate content generated by URL parameters. That is time not spent on your services pages, your location pages, and your blog posts — the pages that actually drive organic traffic.
Proper robots.txt optimization ensures crawlers focus on the content that matters.
Most websites have areas that should never appear in search results. Admin dashboards, staging environments, internal tools, user account pages, checkout flows. If these pages get indexed, you end up with unwanted results in Google — and potentially expose information that was not meant to be public.
A correctly configured robots.txt keeps these areas out of the search index before the problem starts.
Some platforms generate multiple URLs for the same content. A product page might be accessible at /products/widget/, /products/widget/?color=blue, and /products/widget/?ref=email. Without guidance from robots.txt (or canonical tags), Google may index all three versions and treat them as duplicate content — which dilutes your ranking signals.
Blocking parameter-based URLs in robots.txt is one way to manage this. It is not always the right approach — canonical tags are often better — but robots.txt is a tool worth having in your kit.
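For reference, a canonical tag is a single line in the page's head that names the preferred version of a URL. The domain and path here are placeholders:

```html
<!-- Tells search engines which version of the page is the "real" one -->
<link rel="canonical" href="https://www.yoursite.ca/products/widget/">
```

Unlike a robots.txt block, the canonical tag lets Google crawl every variant and consolidate their ranking signals onto one URL.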
Open a browser and go to your domain followed by /robots.txt. For example: https://www.yoursite.ca/robots.txt.
If the file exists, you will see plain text in your browser. If you see a 404 error, your site does not have one — and you should create one.
Reading the file is straightforward once you understand the four core directives.
The User-agent line identifies which crawler the following rules apply to. A wildcard (*) means the rules apply to all crawlers. You can also target specific ones:
User-agent: *
User-agent: Googlebot
Most small business sites only need the wildcard. If you want to give Google different instructions than other crawlers, you would write separate blocks — one for Googlebot and one for *.
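As a sketch, separate blocks might look like this (the paths are placeholders):

```text
# Rules for Google only
User-agent: Googlebot
Disallow: /drafts/

# Rules for every other crawler
User-agent: *
Disallow: /drafts/
Disallow: /experiments/
```

Note that a crawler follows only the most specific block that matches it, so Googlebot would ignore the wildcard block entirely. That is why the /drafts/ rule is repeated in both.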
The Disallow directive tells a crawler not to visit a specific path. The value is relative to your domain root:
Disallow: /admin/
Disallow: /wp-login.php
Disallow: /cart/
A Disallow: / with no other rules blocks the entire site — one of the most damaging mistakes a website can make. More on that below.
An empty Disallow: value means nothing is disallowed:
Disallow:
The Allow directive explicitly permits access to a path, even within a disallowed section. It is useful when you want to block a folder but allow a specific file inside it:
Disallow: /private/
Allow: /private/press-kit.pdf
Google supports Allow. Not all crawlers do, but for SEO purposes, the ones that matter all respect it.
The Sitemap line points crawlers to your XML sitemap. Include the full URL:
Sitemap: https://www.yoursite.ca/sitemap.xml
You can list multiple sitemaps if needed. This is optional but strongly recommended — it ensures Google always knows where your sitemap lives, regardless of whether you have submitted it through Google Search Console.
If you run WordPress with the Yoast SEO or Rank Math plugin, a robots.txt file is generated for you automatically. The default output is generally good, and you can edit it directly in the plugin settings without touching the actual file.
A typical WordPress robots.txt looks like this:
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Sitemap: https://www.yoursite.ca/sitemap_index.xml
The Allow: /wp-admin/admin-ajax.php line is intentional — this endpoint is used by many front-end WordPress features and needs to be crawlable.
Shopify generates a robots.txt file automatically and does not let you edit the entire file directly. By default, it blocks admin areas, checkout pages, and internal URLs. It also includes your sitemap automatically.
As of recent Shopify versions, you can customize the file via a robots.txt.liquid template in your theme. Unless you have a specific need, the default Shopify configuration is reasonable for most merchants.
For modern JavaScript frameworks, robots.txt is typically a static file placed in the public/ directory. It gets served as-is when a crawler requests /robots.txt.
In Nuxt 3, you can place your robots.txt in /public/robots.txt. If you are using the nuxt-simple-robots module, you can configure it in nuxt.config.ts and let the module generate the file for you.
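If you go the module route, the configuration lives in nuxt.config.ts. The shape below is only illustrative — the option names are assumptions, so check the module's documentation for the current API:

```ts
// nuxt.config.ts — sketch only; option names may differ by module version
export default defineNuxtConfig({
  modules: ['nuxt-simple-robots'],
  robots: {
    // paths to emit as Disallow rules (illustrative)
    disallow: ['/api/'],
  },
})
```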
A basic setup for a Nuxt or Next.js site:
User-agent: *
Disallow: /api/
Disallow: /_nuxt/
Sitemap: https://www.yoursite.ca/sitemap.xml
The /_nuxt/ path contains JavaScript build assets. There is no value in having these appear in search results — but be careful: if your site depends on client-side rendering, Googlebot needs those scripts to render your pages, so test the effect before blocking them.
If your site is a static HTML site with no CMS, create a plain text file named robots.txt and upload it to your root directory. The format is identical to the examples above.
Blocking your entire site is the most catastrophic robots.txt error, and it happens more often than you would expect — usually when a developer sets up a staging environment and forgets to change the settings before launch.
User-agent: *
Disallow: /
This single instruction tells every crawler to stay out of your entire website. Google will stop indexing your pages. Your rankings will disappear. If you have been wondering why your site is not showing up in search results, check this first.
To allow full crawling, either remove the Disallow line entirely or use an empty value:
User-agent: *
Disallow:
Google's crawler renders your pages the same way a browser does. It needs access to your CSS and JavaScript files to understand what your pages look like and how they are structured.
If your robots.txt blocks these file types, Googlebot sees a broken version of your site. This can negatively affect your Core Web Vitals assessments and how your pages are ranked.
Never block:
Disallow: *.css
Disallow: *.js
A broad Disallow rule can catch pages you did not intend to block. For example:
Disallow: /services-old/
If your current services pages happen to live at /services/ and your old ones at /services-old/, this is fine. But if someone accidentally writes:
Disallow: /services
Without the trailing slash, this blocks /services, /services/, /services-old/, /services-pricing/, and anything else starting with that string. Always use trailing slashes when you mean to block a folder.
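You can verify this prefix-matching behaviour yourself with Python's built-in robots.txt parser (the rules and URLs are just the examples from above):

```python
from urllib.robotparser import RobotFileParser

# A rule with no trailing slash
rp = RobotFileParser()
rp.parse("User-agent: *\nDisallow: /services".splitlines())

# Prefix matching: everything starting with /services is blocked
print(rp.can_fetch("*", "https://example.com/services/"))          # False (blocked)
print(rp.can_fetch("*", "https://example.com/services-pricing/"))  # False (blocked too)

# With a trailing slash, only the folder itself is blocked
rp2 = RobotFileParser()
rp2.parse("User-agent: *\nDisallow: /services/".splitlines())
print(rp2.can_fetch("*", "https://example.com/services-pricing/"))  # True (allowed)
```

The same parser can be pointed at a live file with set_url() and read() if you want to test your own site.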
Robots.txt is a public file. Anyone can read it by typing your domain followed by /robots.txt. If you list a path like /internal-financial-reports/ in a Disallow directive, you have just told the world that folder exists.
For genuinely sensitive pages, use proper authentication. Do not rely on robots.txt to keep them hidden.
These three tools all deal with controlling search engine behavior, but they work at different levels and serve different purposes.
Robots.txt operates at the crawl level. It tells search engines whether to visit a URL at all. If you block a page in robots.txt, Googlebot will not crawl it — which also means it cannot read any other instructions on that page, including meta tags. A page blocked in robots.txt can still appear in search results if other sites link to it; Google will just show it without a description.
Meta robots tags are HTML tags placed in the <head> of a specific page. They tell search engines whether to index that page and whether to follow its links:
<meta name="robots" content="noindex, nofollow">
Use meta robots when you want Google to visit the page (so it can read the tag) but not include it in search results. This is the right tool for thin content pages, thank-you pages, and duplicate content you cannot remove.
X-Robots-Tag is an HTTP response header that does the same job as the meta robots tag, but works at the server level. It is useful for non-HTML files like PDFs that cannot contain HTML tags.
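As an illustration, here is how you might send that header for PDFs from an Nginx server. This is a sketch; adapt it to your own server configuration:

```nginx
# Ask search engines not to index any PDF the server returns
location ~* \.pdf$ {
    add_header X-Robots-Tag "noindex, nofollow";
}
```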
The practical rule: use robots.txt to stop crawlers from wasting time on sections of your site. Use meta robots tags to control indexing on individual pages. Use X-Robots-Tag for files.
Google Search Console includes a robots.txt report. Navigate to your property, open Settings, and look for the robots.txt report — it shows which versions of your file Google has fetched and flags parsing problems. (The older standalone robots.txt Tester tool has been retired.)
You can also use the URL Inspection tool to check any individual page on your site. If a page is blocked by robots.txt, the inspection result will flag it clearly.
For a quick check without tools, visit your robots.txt file in a browser and read through it carefully. Look for a Disallow: / rule that could block your entire site.
Screaming Frog, Ahrefs, and Semrush all have robots.txt analyzers that can validate your file and flag potential issues. If you are doing a full technical SEO audit, these tools give you a more thorough picture than manual review alone.
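If you want to automate that quick check, a few lines of Python (a rough sketch, not a full parser) can flag a site-wide block:

```python
def has_sitewide_block(robots_txt: str) -> bool:
    """Return True if any rule blocks the whole site ('Disallow: /')."""
    for line in robots_txt.splitlines():
        # Strip comments and whitespace before comparing
        rule = line.split("#", 1)[0].strip()
        if rule.lower().replace(" ", "") == "disallow:/":
            return True
    return False

print(has_sitewide_block("User-agent: *\nDisallow: /"))        # True
print(has_sitewide_block("User-agent: *\nDisallow: /admin/"))  # False
```

Fetch your live file (for example with urllib.request) and run it through this check as part of a pre-launch checklist.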
Here is a starting point that works for most small business websites. Adjust the disallowed paths to match your platform and site structure:
User-agent: *
Disallow: /admin/
Disallow: /wp-admin/
Disallow: /wp-login.php
Disallow: /cart/
Disallow: /checkout/
Disallow: /account/
Disallow: /thank-you/
Disallow: /search/
Disallow: /?s=
Allow: /wp-admin/admin-ajax.php
Sitemap: https://www.yoursite.ca/sitemap.xml
Remove the WordPress-specific lines if you are not on WordPress. Add any other paths specific to your platform that should stay out of Google's index. If your site is straightforward with no private sections, even a minimal file like this is sufficient:
User-agent: *
Disallow:
Sitemap: https://www.yoursite.ca/sitemap.xml
The explicit Disallow: with no value tells crawlers that nothing is off-limits, which is clearer than having no file at all.
A robots.txt file is a plain text file at yourdomain.com/robots.txt that tells search engine crawlers what to visit and what to skip. The single most important rule: never ship Disallow: / — it blocks your entire site from Google.
If you are based in the Okanagan and want someone to review your site's technical SEO setup — including robots.txt, sitemap configuration, crawl health, and page speed — our web development team works with small businesses across Vernon, Kelowna, and the broader BC interior. A technical audit takes less time than you might expect, and the issues it surfaces are often the ones holding back an otherwise solid site.
Let's discuss how we can help you achieve your goals online.