Crawlers, spiders, and robots
The query interface and search results pages truly are the only parts of a search engine that the user ever sees. Every other part of the search engine is behind the scenes, out of view of the people who use it every day. That doesn't mean those parts aren't important, however. In fact, what's in the back end is the most important part of the search engine, and it's what determines how you show up in the front end.
If you’ve spent any time on the Internet, you may have heard a little about spiders, crawlers, and robots. These little creatures are programs that crawl around the Web, cataloging data so that it can be searched. In the most basic sense, all three programs (crawlers, spiders, and robots) are essentially the same. They all collect information about every web URL they can reach.
This information is then cataloged according to the URL at which it was found and stored in a database. Then, when a user uses a search engine to locate something on the Web, the references in the database are searched and the search results are returned.
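To make that crawl-catalog-search cycle concrete, here is a minimal sketch in Python that uses only the standard library. It is purely illustrative: real crawlers respect robots.txt, throttle their requests, de-duplicate content, and store their catalogs in distributed databases rather than an in-memory dictionary, and all of the function and variable names below are my own, not those of any real search engine.

# A minimal sketch of the crawl-catalog-search cycle described above.
from collections import defaultdict, deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkAndTextParser(HTMLParser):
    """Collects outgoing links and visible text from one page."""

    def __init__(self):
        super().__init__()
        self.links, self.words = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_data(self, data):
        self.words.extend(data.lower().split())


def crawl(seed_url, max_pages=10):
    """Visit pages breadth-first and catalog their words by URL."""
    index = defaultdict(set)                  # word -> set of URLs (the "database")
    queue, seen, visited = deque([seed_url]), {seed_url}, 0
    while queue and visited < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except OSError:
            continue                          # unreachable page: skip it
        visited += 1
        parser = LinkAndTextParser()
        parser.feed(html)
        for word in parser.words:
            index[word].add(url)              # catalog the data under this URL
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return index


def search(index, term):
    """Answer a query by looking up references in the catalog."""
    return sorted(index.get(term.lower(), []))

Calling crawl() with a seed URL and then search(index, "keyword") mirrors, in miniature, the flow described above: gather the data, catalog it by URL, then look up references at query time.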
Databases
Every search engine contains or is connected to a system of databases where data about each URL on the Web (collected by crawlers, spiders, or robots) is stored. These databases are massive storage areas that contain multiple data points about each URL.
The data might be arranged in any number of different ways, and it is ranked and retrieved according to methods that are usually proprietary to the company that owns the search engine.
You’ve probably heard of the method of ranking called PageRank (for Google) or even the more generic term quality scoring. This ranking or scoring determination is one of the most complex and secretive parts of SEO. How those scores are derived, exactly, is a closely guarded secret, in part because search engine companies change the weight of the elements used to arrive at the score according to usage patterns on the Web.
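Although the production details are secret, the core idea behind the original published PageRank paper is public: a page inherits importance from the pages that link to it. The toy Python sketch below (the names and the three-page example are mine, not Google's) shows only that published core.

# A toy version of the published PageRank idea: a page's score is fed by the
# scores of the pages linking to it. The real ranking uses many more signals.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to.
    Every page that appears as a link target must also appear as a key."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # A page with no outgoing links spreads its rank evenly.
                for other in pages:
                    new_rank[other] += damping * rank[page] / n
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank


# Example: three pages where A and B both link to C, and C links back to A.
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))

In this example, page C, which receives two inbound links, ends up with the highest score. That is the whole intuition, even though the real algorithm weighs many more factors.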
The idea is to score pages based on the quality that site visitors derive from the page, not on how well web site designers can manipulate the elements that make up the quality score. For example, there was a time when the keywords used to rank a page were one of the most important factors in obtaining a high quality score.
That’s no longer the case. Don’t get me wrong. Keywords are still vitally important in web page ranking. However, they’re just one of dozens of elements that are taken into consideration, which is why a large portion of Part II of this book is dedicated to using keywords to your advantage. They do have value; more important, keywords can cause damage if not used properly. But we’ll get to that.
Quality considerations
When you’re considering the importance of databases, and by extension page quality measurements, in the mix of SEO, it might be helpful to equate it to something more familiar: customer service. What comprises good customer service is not any one thing. It’s a conglomeration of different factors, such as greetings, attitude, helpfulness, and knowledge, that come together to create a pleasant experience. A web page quality score is the same.
The difference with a quality score is that you’re measuring elements of design, rather than actions of an individual. For example, some of the elements that are known to be weighted to develop a quality score are as follows:
■ Domain names and URLs
■ Page content
■ Link structure
■ Usability and accessibility
■ Meta tags
■ Page structure
It’s a melding of these and other factors, sometimes very carefully balanced factors, that creates the quality score. Exactly how much weight is given to each factor is known only to the mathematicians who create the algorithms that generate the quality score, but one thing is certain: The better the quality score your site generates, the better your search engine results will be, and the more traffic you will have coming from search engines.
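Nobody outside those teams knows the real weights, but purely as an illustration of the "carefully balanced factors" idea, here is a hypothetical blend in Python. Every factor name and weight below is invented for the example; this is a sketch of the concept, not anyone's actual formula.

# Purely illustrative: the real weights are secret and change over time.
# Each factor is scored 0.0-1.0 and blended with made-up weights to show
# how several balanced elements can roll up into a single quality score.
HYPOTHETICAL_WEIGHTS = {
    "domain_and_url": 0.10,
    "page_content": 0.35,
    "link_structure": 0.25,
    "usability_accessibility": 0.10,
    "meta_tags": 0.05,
    "page_structure": 0.15,
}


def quality_score(factor_scores):
    """Blend per-factor scores (0.0-1.0) into one overall score."""
    return sum(HYPOTHETICAL_WEIGHTS[name] * factor_scores.get(name, 0.0)
               for name in HYPOTHETICAL_WEIGHTS)


# A page strong on content and links but weak on meta tags still scores well.
print(quality_score({"page_content": 0.9, "link_structure": 0.8,
                     "domain_and_url": 0.6, "usability_accessibility": 0.7,
                     "meta_tags": 0.2, "page_structure": 0.5}))

The point of the sketch is only that no single factor decides the outcome; a page that is weak in one area can still earn a respectable overall score by being strong in the others.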