DEFINITION of Search Engine

A search engine is software designed to search the Internet for websites to add to an index. Search engines use algorithms to filter and rank the indexed content and to display the results on a web page.

BREAKING DOWN Search Engine

Before the introduction of search engines, the primary way to find a list of websites related to a specific topic was to use a web directory. These were curated lists compiled by humans who scoured the Internet for useful websites and created categories and sub-categories based on the websites’ subject matter. Directories typically contained descriptions of each website’s content in addition to its web address, called the Uniform Resource Locator (URL).

Web directories are associated with a specific behavior: browsing. Someone visiting a web directory would look through the various categories until finding one of interest, and then look through the list of websites in that category until finding the desired site. This was a relatively indirect approach to finding a website, similar to walking through a library until finding an interesting book.

In order to speed up the browsing process, web directories began offering a site search function. This allowed people to type in a series of keywords, which would return any directory entries that matched.
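As a rough illustration, a directory’s keyword search might have worked something like the minimal sketch below. The entries and the any-keyword matching rule are hypothetical assumptions for the example, not a reconstruction of any particular directory’s software.

```python
# Minimal sketch of a directory keyword search. The entries and the
# any-keyword matching rule are hypothetical, for illustration only.
directory = [
    {"url": "http://example.com/astronomy", "description": "Star charts and telescope reviews"},
    {"url": "http://example.com/cooking", "description": "Recipes and kitchen techniques"},
]

def site_search(keywords):
    """Return directory entries whose description contains any keyword."""
    terms = [k.lower() for k in keywords]
    return [
        entry for entry in directory
        if any(term in entry["description"].lower() for term in terms)
    ]

print(site_search(["telescope"]))  # matches the astronomy entry
```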

While directories were able to provide a high-quality list of websites, the process of collecting and categorizing websites was manual, handled entirely by human editors. Since it was not possible for directory operators to compile information on every website available on the Internet, the lists available to browsers were inherently limited.

Web Crawlers

Computer scientists began looking for ways to search the World Wide Web for content programmatically. This was initially done through scripts that would search existing catalogs and reformat their listings, resulting in the release of W3Catalog in 1993.

The first web crawlers, also known as spiders, were introduced around the same time. Web crawlers systematically scour the Internet for new content. WebCrawler, the first search engine to index the full text of crawled pages, was launched in 1994. Soon after, search engines like AltaVista and Yahoo! were created.

Search engines work by using web-crawling bots to search the Internet for webpages. Once a webpage is found, the bot copies the page and its URL, and also follows any links found on the page. Content found via those links is copied as well, with the copied URLs and content ultimately being added to the search engine’s index. (See also: Search Engines That Compete With Google.)
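The copy-and-follow loop described above can be sketched in a few lines of code. The following is a minimal illustration using only Python’s standard library; the seed URL, page limit, and breadth-first strategy are assumptions for the example, and real crawlers add many safeguards (robots.txt handling, politeness delays, deduplication at scale).

```python
# Minimal sketch of a crawler's copy-and-follow loop, standard library only.
# The seed URL and page limit are illustrative assumptions.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect the href targets of all <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url, max_pages=10):
    """Breadth-first crawl: copy each page and its URL, then follow links."""
    index = {}                      # URL -> copied page content
    frontier = [seed_url]
    while frontier and len(index) < max_pages:
        url = frontier.pop(0)
        if url in index:
            continue                # skip pages already copied
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue                # skip unreachable or non-HTTP links
        index[url] = html           # copy the page and its URL
        parser = LinkExtractor()
        parser.feed(html)
        # Follow any links found on the page.
        frontier.extend(urljoin(url, link) for link in parser.links)
    return index

pages = crawl("https://example.com")  # hypothetical seed URL
print(len(pages), "pages copied into the index")
```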

Search engines use algorithms to organize the indexed content, with these algorithms ultimately determining what content is displayed when someone types a string of keywords into the search engine’s public-facing website.
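One common data structure behind this kind of organization is an inverted index, which maps each keyword to the set of pages containing it. The sketch below is a simplified illustration under that assumption; the sample documents are hypothetical, and real engines layer far more sophisticated processing on top.

```python
# Simplified sketch of an inverted index and a keyword query against it.
# The documents are hypothetical; real engines apply far richer ranking.
from collections import defaultdict

documents = {
    "https://example.com/a": "search engines index the web",
    "https://example.com/b": "web crawlers follow links across the web",
}

# Build the inverted index: keyword -> set of URLs containing it.
inverted_index = defaultdict(set)
for url, text in documents.items():
    for word in text.split():
        inverted_index[word].add(url)

def query(keywords):
    """Return URLs containing every keyword in the query string."""
    results = [inverted_index.get(word, set()) for word in keywords.split()]
    return set.intersection(*results) if results else set()

print(query("web search"))  # URLs containing both "web" and "search"
```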

The page of content displayed after a search is conducted is referred to as a search engine results page, or SERP. The SERP shows a list of the websites that the search engine believes are most relevant to the keywords, with these results referred to as “organic.”

The display order of websites on the SERP depends on a variety of factors, including how useful the search engine considers the content and how trustworthy it considers the website. Search engines consider their ranking algorithms the core of their business and keep them a closely guarded secret.
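Because the real algorithms are secret, any concrete example can only be a toy. The sketch below illustrates just the general idea of blending weighted signals, such as usefulness and trustworthiness, into a single ranking score; every signal, weight, and page in it is a hypothetical assumption.

```python
# Toy illustration of combining ranking signals into a score. Every
# signal, weight, and page here is a hypothetical assumption, since
# real ranking algorithms are proprietary and far more complex.
pages = [
    {"url": "https://example.com/a", "relevance": 0.9, "trust": 0.6},
    {"url": "https://example.com/b", "relevance": 0.7, "trust": 0.9},
]

def score(page, w_relevance=0.7, w_trust=0.3):
    """Weighted blend of how useful and how trustworthy a page appears."""
    return w_relevance * page["relevance"] + w_trust * page["trust"]

serp = sorted(pages, key=score, reverse=True)  # highest score ranks first
for rank, page in enumerate(serp, start=1):
    print(rank, page["url"])
```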

Alongside the organic search results shown on a SERP, search engines typically show paid advertisements (“paid search”). These resemble the organic results, but are advertisements placed by businesses that want to target people searching for particular keywords. Paid search advertisements are typically the primary revenue stream for search engines. (See also: How Google's Search Engine Makes Money.)