History of SEO
Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters needed to do was submit the address of a site, or URL, to the various engines, which would send a “spider” to “crawl” that page, extract links to other pages from it, and return information found on the page to be indexed. The process involves a search engine spider downloading a page and storing it on the search engine’s own server, where a second program, known as an indexer, extracts information about the page, such as the words it contains, where they are located, and any weight given to specific words, as well as all the links the page contains. These links are then placed into a scheduler for crawling at a later date.
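The crawl-and-index cycle described above can be sketched in a few lines of Python. This is only a minimal illustration of the idea, not any engine’s actual implementation; the seed URL, helper names, and index structure are assumptions for demonstration.

```python
# Minimal sketch of the crawl-and-index cycle described above (illustrative only).
# The seed URL, index structure, and scheduler are assumptions for demonstration.
from collections import defaultdict, deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkAndTextParser(HTMLParser):
    """Collects href links and visible text from a downloaded page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.words = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.words.extend(data.lower().split())


def crawl(seed_url, max_pages=5):
    """Download pages, index their words, and schedule extracted links."""
    scheduler = deque([seed_url])   # links queued for crawling at a later date
    index = defaultdict(set)        # word -> set of pages containing it
    seen = set()

    while scheduler and len(seen) < max_pages:
        url = scheduler.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            page = urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
        except OSError:
            continue                # skip unreachable pages

        parser = LinkAndTextParser()
        parser.feed(page)
        for word in parser.words:   # the "indexer" step: record where each word occurs
            index[word].add(url)
        for link in parser.links:   # extracted links go back into the scheduler
            scheduler.append(urljoin(url, link))

    return index


if __name__ == "__main__":
    index = crawl("https://example.com")
    print(sorted(index)[:10])
```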
Site owners began to recognize the value of having their sites highly ranked and visible in search engine results, creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the term “search engine optimization” probably came into use in 1997. The first documented use of the term is credited to John Audette and his company Multimedia Marketing Group, as evidenced by a page from the MMG site from August 1997.
Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provide a guide to each page’s content. Using metadata to index pages proved less than reliable, however, because the webmaster’s choice of keywords in the meta tag could be an inaccurate representation of the site’s actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches. Web content providers also manipulated a number of attributes within the HTML source of a page in an effort to rank well in search engines.
By relying so heavily on factors such as keyword density, which were entirely within a webmaster’s control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt so that their results pages showed the most relevant search results, rather than unrelated pages stuffed with keywords by unscrupulous webmasters. Since the success and popularity of a search engine is determined by its ability to produce the most relevant results for any given search, allowing those results to be false would drive users to other search sources. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate.
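The keyword-density signal mentioned above is simple to compute, which is exactly why it was so easy to manipulate. The snippet below is a minimal sketch; the sample strings and expected outputs are invented for demonstration.

```python
# Minimal sketch of the keyword-density signal early engines relied on (illustrative only).
def keyword_density(text, keyword):
    """Fraction of words on the page that match the keyword."""
    words = text.lower().split()
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)


if __name__ == "__main__":
    honest = "guide to growing tomatoes in small gardens"
    stuffed = "tomatoes tomatoes tomatoes buy tomatoes cheap tomatoes tomatoes"
    print(keyword_density(honest, "tomatoes"))   # 1 of 7 words, about 0.14
    print(keyword_density(stuffed, "tomatoes"))  # 6 of 8 words, 0.75
```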
Graduate students at Stanford University, Larry Page and Sergey Brin, developed “Backrub,” a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links. PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the Internet and follows links from one page to another.
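The random-surfer model behind PageRank can be illustrated with a short power-iteration sketch. The toy link graph and the damping factor of 0.85 below are assumptions chosen for demonstration, not Google’s actual parameters or implementation.

```python
# Minimal power-iteration sketch of PageRank's random-surfer model (illustrative only).
# The toy link graph and damping factor (0.85) are assumptions for demonstration.

def pagerank(graph, damping=0.85, iterations=50):
    """graph maps each page to the list of pages it links to."""
    pages = list(graph)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}       # start from a uniform distribution

    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in graph.items():
            if not outlinks:                        # dangling page: spread its rank evenly
                share = damping * rank[page] / n
                for p in pages:
                    new_rank[p] += share
            else:                                   # each outlink receives an equal share
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank

    return rank


if __name__ == "__main__":
    toy_graph = {
        "a": ["b", "c"],
        "b": ["c"],
        "c": ["a"],
        "d": ["c"],
    }
    for page, score in sorted(pagerank(toy_graph).items()):
        print(page, round(score, 3))
```

Pages with many strong inbound links (here, "c") accumulate the highest scores, which is the intuition the paragraph above describes.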
Page and Brin founded Google in 1998. Google attracted a loyal following among the growing number of Internet users, who liked its simple design. Off-page factors (such as PageRank and hyperlink analysis) were considered alongside on-page factors (such as keyword frequency, meta tags, headings, links, and site structure) to enable Google to avoid the kind of manipulation seen in search engines that considered only on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link-building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, known as link farms, involved the creation of thousands of sites for the sole purpose of link spamming.
In 2007, Google announced a campaign against paid links that transfer PageRank. On June 15, 2009, Google disclosed that it had taken measures to mitigate the effects of PageRank sculpting through use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Googlebot would no longer treat nofollowed links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting. As a result of this change, the use of nofollow led to evaporation of PageRank. To avoid this, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting. Additionally, several solutions have been suggested that include the use of iframes, Flash, and JavaScript.
Google Instant, real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically, webmasters had spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.