Beginnings And Developments
Work on search engine optimization began in the mid-1990s, when the first search engines were cataloguing the sites of the early web. Initially, all a webmaster had to do was submit a page, or its URL, to a search engine. The engine would, in turn, send a spider to “crawl” the page, extract the links it contained, and return whatever information on the page was fit for indexing. The process involved the spider storing a copy of the page on the search engine’s own server, where a second program, known as an indexer, extracted information about the page, such as the words it used, their weight, and their location. The indexer also extracted the links the page contained, which were then placed in a scheduler to be crawled later.
Later, site owners realized the value of having their sites rank highly and be easily visible in search listings, which created an opportunity for both white hat and black hat SEO practitioners. In fact, according to Danny Sullivan, a well-known industry analyst, the first recorded use of the phrase “search engine optimization” appeared in a spam message posted on Usenet in mid-1997.
Initially, search engines relied on simple, webmaster-supplied information such as the keywords meta tag or index files. A meta tag gives a hint about the content of a web page. Because it depended entirely on the webmaster’s own description of the page, however, it could be, and sometimes was, an inaccurate picture of what the site actually contained. A page could therefore be listed for irrelevant searches, making results inconsistent with the user’s needs. Content providers could also manipulate attributes within a page’s HTML source to boost its ranking in searches.
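A concrete, invented example shows why trusting the keywords meta tag was fragile. The page below is about soup recipes, yet its tag advertises unrelated, high-traffic terms; an early engine that indexed the tag verbatim would list the page for those irrelevant searches.

```python
from html.parser import HTMLParser

# Hypothetical page: the body is about recipes, but the keywords
# meta tag claims unrelated terms the author hopes will draw traffic.
PAGE = """<html><head>
<meta name="keywords" content="cheap flights, free music, celebrities">
<title>Grandma's Soup Recipes</title>
</head><body>Carrot soup, onion soup, potato soup.</body></html>"""

class MetaKeywords(HTMLParser):
    """Extracts the contents of the keywords meta tag, as a naive
    1990s-style engine might have done."""
    def __init__(self):
        super().__init__()
        self.keywords = []
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "keywords":
            self.keywords = [k.strip() for k in attrs.get("content", "").split(",")]

p = MetaKeywords()
p.feed(PAGE)
# p.keywords now advertises terms that never appear in the page body.
```

Nothing here checks the tag against the visible text, which is exactly the gap later ranking algorithms closed.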
Measures To Prevent Inconsistent Tags
By the end of 1997, search engine operators could tell that webmasters were using deceptive means to increase their visibility in search results; some even stuffed their sites with useless and misleading keywords for that purpose. In response, search engines such as Infoseek implemented better algorithms to prevent such manipulation of rankings.
AIRWeb: A Step Further In That Direction
Because demand for placement in search listings is so high, differences of opinion between SEO practitioners and search engines are likely to continue. AIRWeb, the Adversarial Information Retrieval on the Web conference, was created in 2005 as an annual venue for discussing ways to prevent and minimize the malicious activities of such web content providers. Today, SEO has reached a different level and competition has grown considerably, driven by the ever-increasing number of new websites appearing each day.