
11 Technical SEO Problems & How To Fix Them

These are some of the most common technical SEO problems I've seen over the years, all of which present obstacles to the visibility of content in search results. They crop up all too often, but most are quite easy to fix.

1) Robots.txt Disallow

Robots.txt is a technical standard that tells search engines which content on a site they should not crawl. The file usually lives at yourdomain.com/robots.txt, but similar instructions can also be placed in the head of individual web pages (as a robots meta tag) or configured globally within a content management system.

I see this more often than you'd imagine. Typically it is the result of a developer trying to keep the site's contents out of search results while the site is under development and then forgetting to remove the instructions after the site has gone live.

This can be solved either by having a technical person change the robots.txt file to allow crawling, or by looking in your content management system's settings for an option that hides the site from search engines and turning it off.
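
If you want to verify this yourself, a small script can fetch your robots.txt file and report whether crawlers are allowed to reach the pages you care about. Below is a minimal sketch using Python's standard library; the domain and paths are placeholders for your own.

```python
# Minimal robots.txt check using only the Python standard library.
# SITE and PATHS are placeholders; swap in your own domain and pages.
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"        # placeholder: your domain
PATHS = ["/", "/blog/", "/products/"]   # placeholder: pages you want indexed

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetches and parses the live robots.txt file

for path in PATHS:
    allowed = parser.can_fetch("*", f"{SITE}{path}")
    print(f"{path}: {'allowed' if allowed else 'BLOCKED by robots.txt'}")
```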

2) No Sitemap

There are two varieties of sitemaps: 1) HTML sitemaps and 2) XML sitemaps.

An HTML sitemap is a page that links to all the major sections of a given site. HTML sitemaps were intended to give users easy access to all the sections of a site from one page, as well as to give search engines a starting point for indexing all the content of a site. This practice has largely been abandoned in favor of XML sitemaps.

XML sitemaps are dynamically generated files that list all the pages on a site and are meant specifically for search engines to read. They are written in a markup language called eXtensible Markup Language (XML). Whenever new content is published, the XML sitemap is automatically updated to reflect the new page. Search engines can then check that file (usually located at yourdomain.com/sitemap.xml) to discover the newly published content.
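
As a rough illustration of what happens behind the scenes, the sketch below builds a tiny sitemap.xml with Python's standard library. The URLs and dates are placeholders; in practice your content management system or a plugin generates this file for you automatically.

```python
# Build a minimal XML sitemap with the standard library.
# The URLs and dates are placeholders; a CMS or plugin normally generates this file.
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
pages = [
    ("https://www.example.com/", "2024-01-15"),
    ("https://www.example.com/about/", "2024-01-10"),
    ("https://www.example.com/blog/first-post/", "2024-01-12"),
]

urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
for loc, lastmod in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc          # the page's address
    ET.SubElement(url, "lastmod").text = lastmod  # when it last changed

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
print(open("sitemap.xml", encoding="utf-8").read())
```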

Sites that do not use XML sitemaps are forgoing an industry standard that helps search engines find new content.

3) Unresponsive Server Status

The server your website is hosted on is designed to return technical responses that tell a browser the status of the web document it is requesting. A 200 response is what browsers usually get; it is server talk for "everything is just fine. No issues."

When a server is having problems delivering a document, it will respond with an error message. The most common error is a 404, or "file not found," error. That simply means the document requested by the browser could not be found.

The 500 error is not as common. This is an "internal server error" and is often the result of broken code on the site, a broken connection to a database, or a misconfigured server.

Search engines, like browsers, can read these error messages, and when they encounter them they will obviously not list content they are being told is not accessible.

If you think your server is having some issues, use the Header Check Tool to see what type of response it is sending.
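
If you would rather check from your own machine, a HEAD request returns the same status code such a tool would show. Here is a minimal sketch using Python's standard library; the URLs are placeholders for pages on your own site.

```python
# Send a HEAD request to each URL and print the HTTP status code it returns.
# The URLs are placeholders; swap in pages from your own site.
import urllib.error
import urllib.request

URLS = [
    "https://www.example.com/",
    "https://www.example.com/some-old-page/",
]

for url in URLS:
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            print(url, "->", response.status)    # 200 means everything is fine
    except urllib.error.HTTPError as error:
        print(url, "->", error.code)             # e.g. 404 or 500
    except urllib.error.URLError as error:
        print(url, "-> could not connect:", error.reason)
```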

4) Canonical Domains & URLs

Canonical is a technical way of saying "preferred" or, sometimes, "authoritative." The use of it as a technical standard likely has some roots in Catholicism, where the "canon" refers to the rites or "standards" of the Roman Church or the official or "authoritative" sacred books that are accepted as genuine by the church.

The configuration of a website domain can become a problem for search visibility when either the www.yourdomain.com or the non-www version of your domain does not work. The proper configuration is for both versions to return the home page of your site and for all versions to resolve to one preferred domain. (You can tell Google directly which version of your domain you prefer via the Google Search Console.)

So, for example, both the non-www and the www version of your domain work, but when someone uses the www version, they seamlessly end up at the preferred, non-www version. If both versions return your website but one does not redirect to the other, search engines will view the domains as separate websites with identical content. And search engines do not like duplicate content (more on that below).
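
One quick way to confirm your domains are set up this way is to request each version and see where you end up. Below is a rough sketch using Python's standard library; example.com is a placeholder for your own domain.

```python
# Request both versions of a domain and report where each one finally resolves.
# example.com is a placeholder; substitute your own domain.
import urllib.request

VERSIONS = ["https://example.com/", "https://www.example.com/"]

for start in VERSIONS:
    # urlopen follows redirects, so geturl() reports the address we finally land on.
    with urllib.request.urlopen(start, timeout=10) as response:
        final = response.geturl()
    if final == start:
        print(f"{start} is served directly (no redirect)")
    else:
        print(f"{start} redirects to {final}")

print("If both versions are served directly, search engines may treat them as duplicates.")
```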

People will also run into problems when they syndicate their content to other sites. When content is republished elsewhere, that, too, obviously creates duplicate content, which, again, search engines don't like.

If you do syndicate your content to other sites, you should ask the sites that republish it to implement a rel=canonical link tag pointing back to the original post. This tells search engines that your content is the original, authoritative post, which they will then honor for indexing purposes.
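
If you want to verify that a republished copy actually carries that tag, you can fetch the page and look for it. Here is a minimal sketch using Python's standard library; the URL is a hypothetical stand-in for the syndicated copy of your article.

```python
# Fetch a page and report any rel="canonical" link it declares in its HTML.
# The URL is a hypothetical stand-in for a syndicated copy of your article.
import urllib.request
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "link" and (attributes.get("rel") or "").lower() == "canonical":
            self.canonical = attributes.get("href")

url = "https://syndication-partner.example.com/your-republished-post/"
with urllib.request.urlopen(url, timeout=10) as response:
    html = response.read().decode("utf-8", errors="replace")

finder = CanonicalFinder()
finder.feed(html)
print("Canonical URL:", finder.canonical or "none declared")
```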

5) Duplicate Content

Search engines originally began to penalize websites with duplicate content because people started creating sites with tons of pages that had nearly identical content but were optimized to attract traffic for a specific search query.

This was often done by eCommerce sites, which would host pages with the exact same copy except for the name of the product. For example, every page would be identical except that the title would be:

  • Blue Polo Shirt
  • Red Polo Shirt
  • Green Polo Shirt
  • White Polo Shirt
  • ad infinitum

Another common tactic was used by franchises, where the copy on each page would be identical except for the geographic location of an individual store.

Search engines want to provide the most valuable results for any query. The aforementioned pages were most likely not satisfying searchers' needs. Nor would a page of search results that all linked to the same content.

As a result, these days search engines pick the best content and ignore any duplicate content.
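
If you suspect two of your own pages are close to identical, a rough similarity score can confirm it. The sketch below compares two blocks of placeholder copy with Python's standard library; real search engines use far more sophisticated comparisons, so treat this only as a quick sanity check.

```python
# Rough similarity check between two blocks of page copy.
# Search engines use far more sophisticated methods; this is only a quick sanity check.
from difflib import SequenceMatcher

page_a = "Our blue polo shirt is made from 100% cotton and ships free on orders over $50."
page_b = "Our red polo shirt is made from 100% cotton and ships free on orders over $50."

ratio = SequenceMatcher(None, page_a, page_b).ratio()  # 1.0 means identical
print(f"Similarity: {ratio:.0%}")  # near-identical copy scores very high
```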

6) Failure To Use 301 Redirects

Content can go stale or become irrelevant to an organization, and when that happens, people will often simply delete it from their site and think no more about it. But that content may still be attracting search traffic, or it may be linked to from other sites, which helps with search visibility in and of itself. When those pages are deleted, so are those sources of website traffic.

Abandoned pages are also unintentionally created during website redesigns and when moving from one content management system to another.

The solution is to use 301 redirects, which seamlessly send visitors from one page to another. A 301 also tells search engines that a page has been permanently moved to another location, and where to find it.
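
To make the mechanics concrete, here is a toy example of how a 301 response is issued, written with Python's standard library. In practice you would configure redirects in your web server or content management system rather than run a script like this; the paths are placeholders.

```python
# Toy server that answers requests for an old path with a 301 pointing to its new home.
# In practice redirects are configured in your web server or CMS; the paths are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

REDIRECTS = {"/old-page/": "/new-page/"}  # old path -> new permanent location

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in REDIRECTS:
            self.send_response(301)                             # "moved permanently"
            self.send_header("Location", REDIRECTS[self.path])  # where to find it now
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Hello from the new page.")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), RedirectHandler).serve_forever()
```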

7) Scripting Code Clogging Content

Another common issue that throws up obstacles for search engines reading your content is the inclusion of a bunch of scripting code at the top of a web page before the actual content is rendered. That code can include JavaScript for interactive elements on the page, tracking code for web analytics services, or code for a variety of other purposes.

Search engines have to wade through all that code before they get to the stuff they really want: the copy. And it is unnecessary to put them through all that labor. The solution is to move your scripts into a single external file, which you then link to from the head of the web page.
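
A quick way to see how much of a page is script rather than copy is to compare the two. The sketch below uses Python's standard library to total up inline script characters versus visible text; the sample HTML is a stand-in for the source of a real page.

```python
# Compare how much of a page's HTML is inline script versus readable copy.
# The sample HTML is a stand-in; feed it the source of a real page instead.
from html.parser import HTMLParser

class ScriptWeigher(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.script_chars = 0
        self.text_chars = 0

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if self.in_script:
            self.script_chars += len(data)
        else:
            self.text_chars += len(data.strip())

sample_html = """<html><head>
<script>/* imagine hundreds of lines of analytics and widget code here */</script>
</head><body><p>The actual copy search engines care about.</p></body></html>"""

weigher = ScriptWeigher()
weigher.feed(sample_html)
print("Inline script characters:", weigher.script_chars)
print("Visible text characters: ", weigher.text_chars)
```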

8) Dynamic Non-Descriptive URLs

You will often see a URL for an individual web page that looks like one of these:

  • example.com/?p=123
  • example.com/?cat=347&prod=752&gen=2

While search engines will index these URLs, they don't provide much information as to the nature of the content that resides there.

The first example is the default setting for sites using the WordPress content management system, which has more than a 60% market share. The P in the URL stands for Post or Page, and the number following it corresponds to that individual post or page's identification number in the WordPress database.

This problem is all too common but easily remedied by changing the Permalinks setting in your WordPress installation. Doing so allows you to set keywords in your URL structure, giving search engines additional information as to the nature of your content.

The second example is typical of an eCommerce store. In this URL structure, CAT does not refer to felines; it refers to Product Category, PROD refers to Product Type and GEN refers to GENDER. Again, not a lot of information for search engines to understand your content. Reconfiguring the URL settings might create a more search engine-friendly URL like this: example.com/apparel/shirts/men

This provides a logical structure to the site that actually makes it more user-friendly, by giving visitors information about where they currently are within the site as well as providing keywords the search engines can use.
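
To illustrate the idea, the sketch below turns raw catalog values into a descriptive path like the one above. It is a simplified, hypothetical stand-in for what a content management system or store platform does when you enable friendly URLs.

```python
# Build a descriptive, keyword-bearing path from raw catalog values.
# A simplified stand-in for what a CMS or store platform does with friendly URLs enabled.
import re

def slugify(value: str) -> str:
    """Lowercase the value and keep only letters, numbers, and hyphens."""
    value = value.strip().lower()
    value = re.sub(r"[^a-z0-9]+", "-", value)
    return value.strip("-")

def product_url(domain: str, category: str, product: str, gender: str) -> str:
    return f"https://{domain}/{slugify(category)}/{slugify(product)}/{slugify(gender)}"

# Instead of example.com/?cat=347&prod=752&gen=2 ...
print(product_url("example.com", "Apparel", "Shirts", "Men"))
# -> https://example.com/apparel/shirts/men
```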

9) Excessive External Linking

Excessive linking became an issue back in the day (circa 1999) when people began creating web sites with the express purpose of linking to one another in order to take advantage of the link popularity algorithm of the Inktomi search engine, which powered other popular engines of the time, the most prominent among them Yahoo!

The idea was to create sites that you want to rank highly in search results, and link between them to create the appearance of link popularity. Another similar practice was the selling of links for the sole purpose of search ranking.

Now, pages with a lot of external links (50-100) are likely to be flagged as link farms and penalized in search rankings.
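
If you are not sure how many external links a page carries, a quick count will tell you. Below is a rough sketch using Python's standard library; the URL and domain are placeholders for one of your own pages.

```python
# Count how many links on a page point to other domains.
# The URL and domain are placeholders; point them at one of your own pages.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCounter(HTMLParser):
    def __init__(self, own_domain):
        super().__init__()
        self.own_domain = own_domain
        self.external = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href") or ""
        domain = urlparse(href).netloc
        if domain and self.own_domain not in domain:
            self.external += 1

page = "https://www.example.com/some-page/"
with urllib.request.urlopen(page, timeout=10) as response:
    html = response.read().decode("utf-8", errors="replace")

counter = LinkCounter(own_domain="example.com")
counter.feed(html)
print("External links found:", counter.external)
```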

10) Slow Page Loading

Pages that take a long time to load are going to have a more difficult time ranking in search results compared to those that load quickly. The reason search engines want web pages that load fast is simply a usability issue: it is a poor search experience if the page you click on in search results takes forever to load.

Common issues that cause a page to load slowly are:

  • Overloaded or misconfigured server
  • Excessively large images on the page
  • Flash technology
  • Excessive scripts and bloated code

Google offers a Page Speed tool you can use to analyze your site's load time. You should also pay attention to your website analytics' Page Speed or Site Speed reports to identify underperforming pages.
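
Alongside those reports, a rough first check is simply timing how long your server takes to deliver a page. The sketch below, using Python's standard library, measures download time and size only, not full rendering in a browser; the URL is a placeholder.

```python
# Time how long a page takes to download and how large its HTML is.
# This measures server response and transfer only, not full rendering in a browser.
import time
import urllib.request

url = "https://www.example.com/"  # placeholder: one of your own pages

start = time.perf_counter()
with urllib.request.urlopen(url, timeout=30) as response:
    body = response.read()
elapsed = time.perf_counter() - start

print(f"Downloaded {len(body) / 1024:.1f} KB in {elapsed:.2f} seconds")
```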

Another primary reason slow-loading pages are such a problem is the proliferation of mobile devices. Internet access is often not as fast on a mobile device, and people have far less patience when waiting for content to load on their phone or tablet. Which brings us to…

11) Not Mobile Friendly

Again, mobile devices with broadband Internet access are becoming ubiquitous. As a result, mobile search behavior is becoming much more prevalent, whether those searches are conducted using the keyboard or as voice queries.

Sites that are not easy to use on mobile devices or display poorly on phones or tablets will often have high mobile bounce rates because people will find them from a search, click on the link, have a poor user experience, and return immediately to the search results.

Search engines like Google understand that behavior: they can count the number of seconds that elapse between the time someone clicks on a link and the time they return to the search results, and factor that behavior into their ranking algorithms. (Google does offer a tool to analyze the degree to which your site is mobile-friendly.)

The current industry standard is to use responsive design, which is a web design practice that allows a site to reconfigure itself to accommodate screen size.

Most of these issues are fairly easy to fix. Doing so will go a long way toward boosting your site's visibility in search engines.