SEO 2 March 2026 Updated: 2 March 2026

Technical SEO: The Complete Guide for 2026

59% of websites have at least one critical technical SEO issue. Technical SEO is the invisible foundation that determines whether your content can rank at all — no matter how well it is written.

Magnus Bo Nielsen · 14 min read

You have written excellent content. Your copy is well-researched, well-written and precisely targeted to the keywords your customers are using. Yet the page does not rank. Google cannot find it. Or it ranks, but more slowly than your competitors. Or it appears, but with a poor click-through rate because no rich results are triggered.

The reason is almost always the same: the technical foundation is not in order. Technical search engine optimisation is everything that happens under the bonnet — what your visitors never see, but what Google uses to assess whether your page deserves to rank at the top. Content is the furniture. Technical SEO is the foundation and structure of the building. Nobody wants to live in a house with a rotten foundation, no matter how beautiful the furniture is.

According to an analysis by Semrush, 59% of all analysed websites have at least one critical technical SEO issue. For many businesses, technical SEO is the lowest-hanging fruit — an investment that removes barriers actively preventing Google from giving you the visibility you deserve.

In this guide we cover all the most important aspects of technical SEO: what it is, why it matters, and — most crucially — what you can actually do about it. We are an SEO agency based in Aarhus that works with technical SEO every day, so the recommendations below are based on practical experience with the market.

59%: websites with at least one critical technical SEO issue
2.5s: target for LCP (Largest Contentful Paint)
200ms: target for INP (Interaction to Next Paint)

1. What is technical SEO?

Technical SEO is the part of search engine optimisation that covers everything apart from content and links. It encompasses the technical characteristics of your website that determine whether search engines can find, understand, index and rank your pages correctly.

To understand technical SEO it helps to know Google's three-step process: crawling, indexing and ranking. Googlebot crawlers visit your website and follow links from page to page (crawling). The analysed pages are stored in Google's vast database (indexing). When someone searches, the algorithm sorts the indexed pages by relevance and quality (ranking). Technical SEO is primarily about removing barriers in the first two steps — and about optimising the signals that influence the third.

Three domains in SEO: On-page SEO = your content and your keywords. Off-page SEO = backlinks and external authority. Technical SEO = the foundation that makes both of the others possible. You can rank with a weak technical foundation, but you will never reach your full potential.

Technical SEO covers areas including: page speed and Core Web Vitals, crawl management via robots.txt and sitemaps, canonical tags and duplicate content handling, structured data and Schema.org markup, HTTPS and security headers, URL structure and internal link topology, and mobile-first optimisation. We cover all of these areas in detail in the sections below.

Good starting points are our guides to on-page SEO, which technical SEO complements, and to search engine optimisation as a whole.

2. Core Web Vitals: Google's page experience signals

Core Web Vitals are Google's official measurements of the user experience on your page. They are not merely advisory — they are direct ranking factors. In March 2024 Google replaced FID (First Input Delay) with INP (Interaction to Next Paint), and these are the three current metrics that count:

Metric | What it measures | Good | Needs improvement | Poor
LCP (Largest Contentful Paint) | Time until the largest visible element loads | Under 2.5s | 2.5s – 4.0s | Over 4.0s
INP (Interaction to Next Paint) | Response time on user interaction | Under 200ms | 200ms – 500ms | Over 500ms
CLS (Cumulative Layout Shift) | Visual stability — does the content jump around? | Under 0.1 | 0.1 – 0.25 | Over 0.25

How to test your Core Web Vitals

The easiest place to start is Google PageSpeed Insights (pagespeed.web.dev) — free, requires no setup and gives you an overall score plus specific recommendations. Enter your URL and focus particularly on "Field data" (real users' data from Chrome) rather than "Lab data" (simulated tests). Field data is what Google uses for ranking.

In Google Search Console you will find the Core Web Vitals report under "Experience" — it shows you which specific URL groups have issues, and whether they are on mobile or desktop. This is where you get a full picture of site-wide health.

Practical tips for improving your scores

LCP fix 01
Optimise hero images

The largest element is typically a hero image. Use WebP format, correct sizing (no larger than displayed), and add loading="eager" and fetchpriority="high" attributes specifically to the hero image.
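As a sketch, a hero image marked up along those lines could look like this (file name, alt text and dimensions are placeholders):

```html
<!-- Hero image: modern WebP format, explicit dimensions, fetched with high priority -->
<img src="/images/hero.webp"
     alt="Description of the hero image"
     width="1200" height="600"
     loading="eager"
     fetchpriority="high">
```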

LCP fix 02
Preload and font optimisation

Use <link rel="preload"> for critical resources and font-display: swap for web fonts. Add <link rel="preconnect"> to external font servers such as Google Fonts to reduce connection time.
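A sketch of those tags in the page's <head> (the font path is a placeholder; font-display: swap itself goes inside the @font-face rule in your CSS):

```html
<!-- Establish connections to the font servers early -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<!-- Preload a critical self-hosted web font -->
<link rel="preload" href="/fonts/body-font.woff2" as="font" type="font/woff2" crossorigin>
```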

CLS fix 01
Specify image dimensions

The most common cause of high CLS is images without explicit width and height attributes. The browser does not reserve space for the image and shifts content when it loads. Always add both attributes.
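A minimal example — with width and height set, the browser can reserve the correct space before the file arrives (file name and values are illustrative):

```html
<!-- Explicit dimensions prevent layout shift when the image loads -->
<img src="/images/product.jpg" alt="Product photo" width="800" height="533">
```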

INP fix 01
Reduce JavaScript load

Heavy JavaScript that blocks the main thread is the primary cause of poor INP. Use defer and async attributes on non-critical scripts. Remove unused third-party scripts (chat widgets, tracking pixels that are not being used).
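A sketch of the two attributes in use (script URLs are placeholders): defer downloads in parallel but executes in order after parsing; async executes as soon as the file arrives, which suits independent third-party scripts.

```html
<!-- Own application code: executes after HTML parsing, in document order -->
<script src="/js/app.js" defer></script>
<!-- Independent third-party script: executes whenever it arrives -->
<script src="https://example.com/widget.js" async></script>
```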

3. Crawling and indexing

Before Google can rank your page, it needs to find it and store it. That sounds straightforward, but this is where many businesses run into problems — often without realising it.

robots.txt: What it is and the common mistakes

The robots.txt file lives at the root of your domain (e.g. gezar.dk/robots.txt) and tells search engine crawlers which parts of your site they may and may not visit. It is not a security feature — it is an agreement that can be ignored by malicious crawlers — but all reputable search engines respect it.

The classic mistake is blocking too much. We have seen sites that accidentally blocked the entire site with Disallow: / — and a website that cannot be crawled cannot rank. Check your robots.txt file regularly and verify that you are not accidentally blocking important pages or directories. Google Search Console will warn you if your robots.txt blocks indexed pages.

Typical robots.txt for a standard website:
User-agent: *
Disallow: /admin/
Disallow: /checkout/
Sitemap: https://yourdomain.com/sitemap.xml

Only block pages that should not be indexed — admin panel, checkout, thank-you pages and search result pages with parameters.

XML Sitemap: Your guide to Google

An XML sitemap is a list of all the pages on your site that you want indexed. It tells Google precisely what exists; note that Google largely ignores the optional priority and change-frequency hints, but does use an accurate lastmod date. A sitemap does not guarantee indexing — Google can still choose to ignore pages it considers low quality — but it ensures Google at least knows about all your pages.
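A minimal sitemap, as a sketch (the domain, paths and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://yourdomain.com/</loc>
    <lastmod>2026-03-02</lastmod>
  </url>
  <url>
    <loc>https://yourdomain.com/blog/technical-seo/</loc>
    <lastmod>2026-03-02</lastmod>
  </url>
</urlset>
```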

Canonical tags: Avoid duplicate content issues

Duplicate content arises when the same or very similar content is accessible at multiple URLs. This can happen for many reasons: www vs. non-www, HTTP vs. HTTPS, URL parameters from tracking or filtering, pagination, or deliberately duplicated pages. Canonical tags (<link rel="canonical" href="...">) tell Google which version of a page is the "original" and should receive all link juice and ranking credit.

Canonical tags are not only relevant for large sites with many pages. Even a small website should have a canonical tag on every page — it is a straightforward and effective safeguard against accidental duplicate content issues.
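In practice this is a single tag in the page's <head> — here a self-referencing canonical pointing at the preferred URL (the URL is a placeholder):

```html
<link rel="canonical" href="https://yourdomain.com/blog/technical-seo/">
```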

Crawl budget: Important for large sites

For most small and medium-sized websites, crawl budget is not an issue — Google simply crawls all pages. But for large e-commerce stores with thousands of product pages, filter parameters and variants, crawl budget is a real concern. Google only allocates a certain amount of resources to crawling your site. If it wastes them on pages without value — filtered search results, parameter URLs, empty category pages — it spends fewer resources on your important pages.

4. Site structure and URL architecture

Good URL architecture helps both users and search engines understand your site. It comes down to two things: individual URLs that are logical and readable, and an overall hierarchical structure that is consistent and easy to navigate.

URL best practices

Flat hierarchy: Three clicks from the homepage

A rule of thumb that holds in practice: all important pages should be accessible in at most three clicks from the homepage. The deeper a page is buried in the site structure, the weaker a signal you send to Google about its importance. This is partly because crawlers use link depth as a proxy for priority, and partly because internal link juice is diluted with each level.

The practical implication is that you should link to important pages from your navigation, your homepage and your most visited pages — not just via a single path through the site structure.

Breadcrumbs with BreadcrumbList schema

Breadcrumbs are navigation links at the top of subpages showing the hierarchical path to the current page — e.g. "Home › Blog › Technical SEO". They benefit usability and give Google context about the site structure. Combined with BreadcrumbList JSON-LD schema they can trigger breadcrumb display directly in search results instead of the normal URL, which increases visibility and CTR.

5. Structured data (Schema.org)

Structured data is a standardised way of telling search engines exactly what your content is — not what it looks like, but what it actually is. You implement it as a JSON-LD script in your HTML, and it gives Google the opportunity to show your content as "rich results" in search: star ratings, prices, FAQ accordions, recipe cards and much more.

The most important Schema.org types for businesses

Schema 01
LocalBusiness

Essential for local businesses. Specifies name, address, phone, opening hours, priceRange and serviceArea. Use a specific subtype such as MarketingAgency, Restaurant or Plumber rather than the generic LocalBusiness.
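A sketch of such a JSON-LD block, using the MarketingAgency subtype — all values are placeholders to be replaced with your own business details:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "MarketingAgency",
  "name": "Your Business Name",
  "url": "https://yourdomain.com/",
  "telephone": "+45 00 00 00 00",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "Example Street 1",
    "addressLocality": "Aarhus",
    "postalCode": "8000",
    "addressCountry": "DK"
  },
  "openingHours": "Mo-Fr 09:00-16:00",
  "priceRange": "$$"
}
</script>
```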

Schema 02
FAQPage

Implement on pages with question-and-answer content. Can trigger an FAQ accordion directly in search results and significantly increase the space your listing occupies. Particularly effective for service pages and blog articles with FAQ sections.
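A minimal FAQPage sketch with one question-and-answer pair; the mainEntity array can hold as many Question items as the page visibly answers:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is technical SEO?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Technical SEO covers the technical characteristics of a website that determine whether search engines can crawl, index and rank its pages."
    }
  }]
}
</script>
```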

Schema 03
BlogPosting

For all blog articles. Specifies headline, author, datePublished, dateModified and description. Helps Google understand and validate article content, and increases the likelihood of inclusion in AI Overviews and other rich features.
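As an illustration, a BlogPosting block for this very article might look like this (the description text is a placeholder):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Technical SEO: The Complete Guide for 2026",
  "author": { "@type": "Person", "name": "Magnus Bo Nielsen" },
  "datePublished": "2026-03-02",
  "dateModified": "2026-03-02",
  "description": "A guide to the technical foundation that determines whether your content can rank."
}
</script>
```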

Schema 04
BreadcrumbList

Combine with visible breadcrumb links on the page. Shows the breadcrumb path instead of the URL in search results, giving clearer context and often a higher CTR than a long URL string.
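A sketch for the "Home › Blog › Technical SEO" path mentioned earlier (URLs are placeholders; the final item may omit the URL since it is the current page):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://yourdomain.com/" },
    { "@type": "ListItem", "position": 2, "name": "Blog", "item": "https://yourdomain.com/blog/" },
    { "@type": "ListItem", "position": 3, "name": "Technical SEO" }
  ]
}
</script>
```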

How to test structured data

Google offers the free Rich Results Test (search.google.com/test/rich-results) — enter a URL to see whether your structured data is correctly implemented and which rich results the page is eligible for. You can also check implementation in Google Search Console under "Rich results" in the left menu. Remember: correct implementation is necessary but not sufficient — Google decides whether to show rich results based on context and the search query.

JSON-LD over Microdata: Google recommends JSON-LD as the preferred implementation method for structured data. It is easier to maintain because it is separated from the HTML content, and it can be placed in <head> or <body> without any difference in effect.

AI Overviews and structured data

With Google's AI Overviews — the AI-generated summary at the top of search results — structured data has become even more important. Pages with correctly implemented structured data, particularly FAQPage and BlogPosting, appear to have a statistically higher probability of being cited. This has not yet been scientifically documented, but practical experience points clearly in that direction.

6. HTTPS, security and HTTP headers

HTTPS is an absolute minimum in 2026. Google has used HTTPS as a ranking factor since 2014, and since 2018 Chrome has marked all HTTP pages as "Not secure" with a clear icon in the address bar. If your site is still running on HTTP, this is an issue that needs to be resolved immediately — both for SEO and for user trust.

Security headers: Low-hanging fruit

Beyond HTTPS, you can improve your site's security profile with HTTP response headers. Most web servers and hosting platforms make it straightforward to add them — in Caddy it is just a few lines in the configuration file.
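A minimal sketch of such a Caddyfile (Caddy v2 syntax; the domain and header values are illustrative and should be adapted to your site):

```
yourdomain.com {
    header {
        # Force HTTPS for a year, including subdomains
        Strict-Transport-Security "max-age=31536000; includeSubDomains"
        # Prevent MIME-type sniffing
        X-Content-Type-Options "nosniff"
        # Disallow other sites from framing your pages
        X-Frame-Options "SAMEORIGIN"
        # Send only the origin on cross-origin requests
        Referrer-Policy "strict-origin-when-cross-origin"
    }
}
```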

Redirect chains: Minimise them

Redirects are necessary — when you change a URL, move content or consolidate pages. But redirect chains, where one redirect points to another redirect which points to yet another, are problematic. They waste crawl budget, introduce latency and dilute link juice. Keep your redirects direct (301-redirect from old URL to final URL) and clean up existing chains regularly.
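Sticking with the Caddy example above, a direct 301 is a single line (paths are placeholders; the permanent keyword makes it a 301 rather than a temporary 302):

```
redir /old-page /new-page permanent
```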

7. Mobile-first indexing

Since 2019 Google has primarily used the mobile version of your page to index and rank it. This means that if your mobile page has different or incomplete content compared to your desktop page, you rank on the basis of the mobile version — which may be inferior. Mobile-first indexing is not an option that can be switched on or off. It is simply how Google works.

What to check

Remember Googlebot-Smartphone: Pages that are blocked by robots.txt for Google's mobile crawler but accessible to the desktop crawler are a serious problem. Check specifically in your robots.txt that you are not accidentally blocking mobile crawlers from parts of your site.

Core Web Vitals on mobile are what matter most

The Google Core Web Vitals data used for ranking is primarily based on mobile data from Chrome users. It is not uncommon to see pages scoring 90+ on desktop PageSpeed Insights but only 40–60 on mobile. Always check mobile scores specifically, and prioritise mobile optimisation over desktop optimisation if you need to choose.

8. Technical SEO checklist

Here is a practical checklist for a thorough technical SEO review. Work through the points one at a time — most can be verified with the free tools mentioned in the next section.

Crawling
Crawling and access

robots.txt allows crawling of important pages. XML sitemap created and submitted to GSC. No important pages accidentally blocked with noindex. No broken internal links (404 errors). HTTPS on all pages with a valid certificate.

Content
Duplicate content

Canonical tags on all pages. www vs. non-www redirects to one version. HTTP redirects to HTTPS. URL parameters handled correctly. No identical pages at different URLs without a canonical.

Performance
Core Web Vitals

LCP under 2.5 seconds. INP under 200ms. CLS under 0.1. Images have explicit dimensions. Hero images are WebP format with preload and fetchpriority. font-display: swap implemented.

Mobile
Mobile-first

Responsive design that works on all screen sizes. Touch targets minimum 44px. No horizontal overflow. Same content on mobile and desktop. Text readable without zooming.

Schema
Structured data

JSON-LD implemented for relevant schema types. LocalBusiness on homepage and service pages. BreadcrumbList on subpages. FAQPage on pages with Q&A content. Validated with Rich Results Test.

Security
HTTPS and headers

Valid SSL certificate. HSTS header enabled. X-Content-Type-Options header. X-Frame-Options header. No mixed content (HTTP resources on HTTPS page). Direct redirects without chains.

Architecture
URL structure

Short, descriptive URLs. Hyphens instead of underscores. No special characters in slugs. Flat hierarchy (max 3–4 levels). Breadcrumbs on all subpages. Internal links to important pages from homepage and navigation.

Multilingual
hreflang (if relevant)

hreflang tags on all pages with language versions. Bidirectional: the Danish page points to English, the English page points to Danish. x-default points to primary version. hreflang in sitemap.xml. Canonical and hreflang are consistent.
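The checklist's bidirectional requirement can be sketched as a set of link tags placed in the <head> of every language version — each page lists all versions including itself (URLs are placeholders):

```html
<!-- Identical block on both the Danish and the English page -->
<link rel="alternate" hreflang="da" href="https://yourdomain.com/da/side/">
<link rel="alternate" hreflang="en" href="https://yourdomain.com/en/page/">
<!-- x-default points to the primary version for all other languages -->
<link rel="alternate" hreflang="x-default" href="https://yourdomain.com/en/page/">
```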

9. Tools for technical SEO

You do not need expensive enterprise solutions to run a solid technical SEO audit. Free tools cover most needs: Google Search Console, PageSpeed Insights, the Rich Results Test and Screaming Frog (free for up to 500 URLs).

See our full overview of recommended marketing tools for further recommendations within SEO, analytics and optimisation.

Always start with Google Search Console. It is free, uses real data from Google, and is the direct communication channel between Google and you as a website owner. If you are not set up yet, that is the first thing you should do today — verification takes only five minutes.

Frequently asked questions about technical SEO

What is the difference between on-page SEO and technical SEO?
On-page SEO is about the content on your pages — keywords, headings, copy, internal links and meta tags. Technical SEO is about everything that happens under the bonnet: how search engines find and index your site, your page speed, structured data, HTTPS, URL structure and mobile-friendliness. Both are necessary — great on-page SEO cannot reach its full potential if the technical foundation is shaky.

How often should you audit your technical SEO?
For most websites, one thorough technical SEO audit per year is sufficient, combined with ongoing monitoring through Google Search Console. Large e-commerce sites with frequent changes or many pages should audit quarterly. Always use Google Search Console actively — it will automatically alert you to many technical issues such as crawl errors, indexing problems and Core Web Vitals drops.

Can you do technical SEO yourself?
Many technical SEO tasks can be handled yourself using free tools: Google Search Console, PageSpeed Insights and Screaming Frog (free up to 500 URLs). It requires some technical understanding, but is entirely possible. For more complex work such as server-side technical changes, advanced structured data or large-scale crawl optimisation, it is typically more efficient to work with an agency with experience in the area.

What does technical SEO cost?
A one-off technical SEO audit typically costs DKK 5,000–15,000 depending on the size and complexity of the site. Ongoing technical SEO as part of an SEO package at Gezar starts from DKK 4,500/month (setup DKK 5,999). This includes monthly monitoring, technical fixes, Core Web Vitals optimisation and ongoing reporting in Google Search Console.

What are the most critical technical SEO errors?
The most critical technical SEO errors are: pages accidentally blocked by robots.txt or noindex tags, missing HTTPS, very low Core Web Vitals scores (especially LCP above 4 seconds), duplicate content without canonical tags, and no XML sitemap submitted to Google Search Console. In the worst case, these errors can prevent your pages from being indexed at all — all your good content becomes invisible to Google.

Want a technical SEO health check?

We run a thorough technical SEO audit of your website — Core Web Vitals, crawling, structured data, URL architecture and much more. You get a prioritised list of concrete improvements with no obligation.

See our SEO service

Read also

Local SEO: Dominate Google in Aarhus (complete guide to local visibility)
SEO vs. Google Ads: Which Should You Choose? (pros and cons of both channels)
What is search engine optimisation? (understand the fundamentals of SEO)
Linkbuilding: The Complete Guide (10 strategies for building authority with backlinks)
Conversion Rate Optimisation (CRO): from technical foundation to more conversions