Search Engine basics
Improving positions in the natural listings depends on marketers understanding the process whereby search engines compile an index by sending out spiders or robots to crawl sites that are registered with that search engine. The technology harnessed to create the natural listings involves these main processes.
The purpose of the crawl is to identify relevant pages for indexing and assess whether they have changed. Crawling is performed by robots (bots) that are also known as spiders. These access web pages and retrieve a reference URL of the page for later analysis and indexing.
Although the terms ‘bot’ and ‘spider’ give the impression of something physical visiting a site, the bots are simply software processes running on a search engine’s servers that request pages, follow the links contained on each page and create a series of page references with associated URLs. This is a recursive process, so each link followed will find additional links that then need to be crawled.
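The recursive crawl described above can be sketched as a queue-based traversal. This is a minimal illustration only, using a hypothetical in-memory `PAGES` dictionary in place of real HTTP requests; a production crawler would also honour robots.txt, rate limits and deduplication at scale.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

# Hypothetical stand-in for the web: URL -> HTML content.
PAGES = {
    "http://example.com/": '<a href="/a">A</a> <a href="/b">B</a>',
    "http://example.com/a": '<a href="/b">B</a>',
    "http://example.com/b": '<a href="/">home</a>',
}

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed):
    """Visit each reachable page once, recording its URL for later indexing."""
    seen, frontier = set(), deque([seed])
    while frontier:
        url = frontier.popleft()
        if url in seen or url not in PAGES:
            continue
        seen.add(url)                      # page reference kept for indexing
        parser = LinkExtractor()
        parser.feed(PAGES[url])
        for href in parser.links:
            frontier.append(urljoin(url, href))  # resolve relative links
    return seen

print(sorted(crawl("http://example.com/")))
```

Each page is fetched once, its links are extracted, and any newly discovered URLs are queued, which is the recursion the text describes.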
The index is crafted to enable the search engine to rapidly find the most relevant pages containing the query typed by the searcher. Rather than searching each page for a query phrase, a search engine ‘inverts’ the index to produce a lookup table of documents containing particular words.
The index information consists of the phrases stored within a document together with other information characterizing the page, such as the document’s title and meta description. Additional attributes are also stored, such as semantic markup (<h1> and <h2> headings) and each word’s position in the document. The words contained in the anchor text of links ‘pointing’ to a page are particularly important in determining search rankings.
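The ‘inverted’ index mentioned above can be illustrated with a few lines of code. This is a toy sketch with made-up documents: the lookup table maps each word to the set of documents containing it, so the engine looks up pages by term rather than scanning every page.

```python
from collections import defaultdict

# Illustrative documents (hypothetical page IDs and copy).
docs = {
    "page1": "red shoes for running",
    "page2": "running tips and shoes",
    "page3": "red wine guide",
}

# Build the inverted index: word -> set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

print(sorted(index["shoes"]))  # documents containing "shoes"
print(sorted(index["red"]))    # documents containing "red"
```

A real index would also record positions, titles, headings and anchor text per entry, as the text notes, but the word-to-documents inversion is the core idea.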
Ranking or scoring
The indexing process has produced a lookup of all the pages that contain particular words in a query, but these are not sorted in terms of relevance. Ranking of the documents to assess the most relevant set to return in the SERPs occurs in real time for the search query entered. First, relevant documents are retrieved from a runtime version of the index at a particular data center. Then the rank in the SERPs for each document is computed by parsing many ranking factors, the main ones of which we highlight in later sections.
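The retrieve-then-rank sequence can be sketched as follows. This is a deliberately simplified assumption: scoring here is raw term frequency only, whereas real engines combine hundreds of signals. The documents and index are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical documents and a small inverted index over them.
docs = {
    "page1": "cheap running shoes cheap running",
    "page2": "running shoes review",
    "page3": "history of running",
}
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def rank(query):
    terms = query.split()
    # Step 1 (retrieval): documents containing every query term.
    candidates = set.intersection(*(index[t] for t in terms))
    # Step 2 (scoring): sum of term frequencies, highest first.
    def score(doc_id):
        counts = Counter(docs[doc_id].split())
        return sum(counts[t] for t in terms)
    return sorted(candidates, key=score, reverse=True)

print(rank("running shoes"))
```

Retrieval narrows the index to candidate pages; scoring then orders those candidates, mirroring the two-step process the text describes.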
Query request and results serving
The familiar search engine interface accepts the searcher’s query. The user’s location is assessed through their IP address and the query is then passed to a relevant data center for processing. Ranking then occurs in real time for the particular query to return a sorted list of relevant documents, which are displayed on the search results page.
Search Engine Ranking Factors
Google has stated it uses more than 200 factors or signals within its search ranking algorithms. These include positive ranking factors that help boost position and negative factors, or filters, which are used to remove search engine spam (also known as webspam) from the index where SEO companies have used unethical approaches, such as automatically creating links to mislead the Google algorithms. The importance of individual ranking factors is much disputed by SEOs, since with so many factors it is difficult to isolate their impact and prove a correlation, or more importantly a causative relationship, between a factor and ranking position.
The two most important factors for good ranking are a good match between the web page copy and the key phrases searched, and links into the page.
The main factors to optimize on are keyword density, keyword formatting, keywords in anchor text and the document meta-data including page title tags. The SEO process to improve results in this area is known as on-page optimization.
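One of the on-page factors listed above, keyword density, can be computed directly from page copy. The helper below is hypothetical and illustrative only: it reports the share of page words that match the target phrase’s terms, which is one simple way the metric is defined.

```python
import re

def keyword_density(page_text, keyphrase):
    """Fraction of words on the page that match any term in the keyphrase."""
    words = re.findall(r"[a-z0-9]+", page_text.lower())
    targets = set(keyphrase.lower().split())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in targets)
    return hits / len(words)

copy = "Running shoes for trail running. Our shoes are light."
print(round(keyword_density(copy, "running shoes"), 2))
```

In practice, over-optimizing this metric (keyword stuffing) is treated as a negative signal, so it is checked rather than maximized.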
Inbound and backlinks
Google counts each link to a page from another page or another site as a vote for this page. So pages and sites with more external links from other sites will be ranked more highly. The quality of the link is also important, so links from a site with a good reputation and a relevant context for the keyphrase are more valuable. Internal links are assessed in a similar way. The processes to improve this aspect of SEO are external link building and internal link architecture. To reduce the impact of webspam, search engines introduced no-follow tags, which mean that links added to comments on blogs and social media have limited impact, although it seems that many search spammers aren’t aware of this.
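The ‘links as votes’ idea, including the no-follow exclusion, can be sketched as a simple tally. The link records below are hypothetical, and real link evaluation also weights each vote by the linking page’s reputation and relevance rather than counting links equally.

```python
from collections import Counter

# Hypothetical link records gathered by a crawler: source, target, rel attribute.
links = [
    {"from": "siteA/post",       "to": "shop/shoes", "rel": ""},
    {"from": "siteB/review",     "to": "shop/shoes", "rel": ""},
    {"from": "blog/comment-42",  "to": "shop/shoes", "rel": "nofollow"},
    {"from": "siteA/post",       "to": "shop/hats",  "rel": ""},
]

# Count inbound links per page, skipping any marked rel="nofollow",
# so blog-comment spam links cast no vote.
votes = Counter(
    link["to"] for link in links if "nofollow" not in link["rel"]
)
print(votes["shop/shoes"], votes["shop/hats"])
```

Here the no-followed comment link is simply ignored, which is why comment spam has limited impact on rankings.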