Let’s design a Web Crawler that will systematically browse and download the World Wide Web.
Similar Names: web spiders, robots, worms, walkers, and bots.
Difficulty Level: Hard
Step-1: What is a Web Crawler?
A web crawler is a software program which browses the World Wide Web in a methodical and automated manner.
It collects documents by recursively fetching links from a set of starting pages.
Many sites, particularly search engines, use web crawling as a means of providing up-to-date data.
Search engines download all the pages and build an index on them to perform faster searches.
Some other uses of web crawlers are:
To test web pages and links for valid syntax and structure.
To monitor sites to see when their structure or contents change.
To maintain mirror sites for popular Web sites.
To search for copyright infringements.
To build a special-purpose index, e.g., one that has some understanding of the content stored in multimedia files on the Web.
Step-2: Requirements and Goals of the System
Let’s assume we need to crawl all the web.
Scalability: Our service needs to be scalable such that it can crawl the entire Web, and can be used to fetch hundreds of millions of Web documents.
Extensibility: Our service should be designed in a modular way, with the expectation that new functionality will be added to it. There could be newer document types that need to be downloaded and processed in the future.
Step-3: Some Design Considerations
Crawling the web is a complex task, and there are many ways to go about it.
We should be asking a few questions before going any further:
Is it a crawler for HTML pages only, or should we also fetch and store other types of media, such as sound files, images, and videos?
This is important because the answer can change the design.
If we have to write a general-purpose crawler to download different media types, we might have to break down the parsing module into different sets of modules:
one for HTML
another for images
another for videos
Here each module extracts what is considered interesting for that media type.
Let’s assume for now that our crawler is going to deal with HTML only, but it should be extensible so that support for new media types can be added easily.
What protocols are we looking at? HTTP? What about FTP links? What different protocols should our crawler handle?
For simplicity, we will assume HTTP.
Again, we need to keep in mind that it shouldn’t be hard to extend the design to use FTP and other protocols later.
What is the expected number of pages we will crawl? How big will the URL database become?
Let’s assume we need to crawl one billion websites.
Since a website can contain many URLs, let’s assume our crawler will reach 15 billion different web pages.
What is ‘Robots Exclusion’ and how should we deal with it?
Courteous Web crawlers implement the Robots Exclusion Protocol, which allows webmasters to declare parts of their sites off-limits to crawlers.
The Robots Exclusion Protocol requires a Web crawler to fetch a special document called robots.txt, containing these declarations, from a Web site before downloading any real content from it.
Step-4: Capacity Estimation and Constraints
Traffic Estimates:
If we want to crawl 15 billion pages within 4 weeks, how many pages do we need to fetch per second? 15B / (4 weeks * 7 days * 86400 sec) ~= 6200 pages/sec.
Storage Estimates:
Page sizes vary a lot, but since, as mentioned above, we will be dealing with HTML text only, let’s assume an average page size of 100KB.
If we store 500 bytes of metadata with each page, the total storage we need: 15B * (100KB + 500 bytes) ~= 1.5 petabytes.
Assuming a 70% capacity model (we don’t want to go above 70% of the total capacity of our storage system), the total storage we need: 1.5 petabytes / 0.7 ~= 2.14 petabytes.
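As a sanity check, these back-of-the-envelope numbers can be reproduced with a few lines of Python; the page size, metadata size, and crawl duration are the assumptions stated above.

```python
# Rough capacity estimates for the crawl, using the assumptions above.
PAGES = 15_000_000_000           # 15 billion pages
CRAWL_SECONDS = 4 * 7 * 86400    # 4 weeks
PAGE_SIZE = 100 * 1000           # assumed average HTML page size (100 KB)
METADATA_SIZE = 500              # assumed metadata stored per page (bytes)

pages_per_sec = PAGES / CRAWL_SECONDS
raw_storage = PAGES * (PAGE_SIZE + METADATA_SIZE)
total_storage = raw_storage / 0.7          # 70% capacity model

print(f"fetch rate  : {pages_per_sec:,.0f} pages/sec")   # ~6,200 pages/sec
print(f"raw storage : {raw_storage / 1e15:.2f} PB")      # ~1.5 PB
print(f"total (70%) : {total_storage / 1e15:.2f} PB")    # ~2.15 PB
```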
Step-5: High Level Design
Basic Algorithm
The basic algorithm executed by any Web crawler is to take a list of seed URLs as its input and repeatedly execute the following steps:
Pick a URL from the unvisited URL list.
Determine the IP Address of its host-name.
Establish a connection to the host to download the corresponding document.
Parse the document contents to look for new URLs.
Add the new URLs to the list of unvisited URLs.
Process the downloaded document, e.g., store it or index its contents, etc.
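A minimal, single-machine sketch of this loop in Python, using only the standard library; the seed URL, the page limit, and the process step are placeholders rather than part of the actual design.

```python
import urllib.request
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def process(url, html):
    """Placeholder for storing / indexing the downloaded document."""
    print(url, len(html))

def crawl(seed_urls, max_pages=100):
    frontier = deque(seed_urls)                  # unvisited URL list
    seen = set(seed_urls)
    while frontier and len(seen) <= max_pages:
        url = frontier.popleft()                 # 1. pick a URL from the unvisited list
        try:
            # 2-3. resolve the host, connect, and download the document
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue                             # skip URLs we cannot fetch
        parser = LinkExtractor()
        parser.feed(html)                        # 4. parse the contents for new URLs
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute not in seen:             # 5. add new URLs to the unvisited list
                seen.add(absolute)
                frontier.append(absolute)
        process(url, html)                       # 6. store or index the document

crawl(["http://example.com/"])
```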
Crawling is usually performed as a breadth-first traversal starting from the seed URLs. However, Depth First Search (DFS) is also utilized in some situations; for example, if the crawler has already established a connection with a website, it might DFS all the URLs within that website to save some handshaking overhead.
Path-ascending crawling can help discover a lot of isolated resources or resources for which no inbound link would have been found in regular crawling of a particular Web site.
In this scheme, a crawler would ascend to every path in each URL that it intends to crawl.
For example, when given a seed URL of http://foo.com/a/b/page.html, it will attempt to crawl /a/b/, /a/, and /.
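A small helper that generates these ancestor paths for a given URL might look like the sketch below; a real crawler would also normalize and dedupe the results.

```python
from urllib.parse import urlparse, urlunparse

def ascending_paths(url):
    """Yield the parent paths of a URL, e.g. /a/b/page.html -> /a/b/, /a/, /."""
    parts = urlparse(url)
    segments = [s for s in parts.path.split("/") if s]
    # Drop the last segment (the document itself), then walk up one level at a time.
    for i in range(len(segments) - 1, -1, -1):
        path = "/" + "/".join(segments[:i]) + ("/" if i else "")
        yield urlunparse((parts.scheme, parts.netloc, path, "", "", ""))

print(list(ascending_paths("http://foo.com/a/b/page.html")))
# ['http://foo.com/a/b/', 'http://foo.com/a/', 'http://foo.com/']
```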
Difficulties in implementing efficient web crawler
There are two important characteristics of the Web that make Web crawling a very difficult task:
Large volume of Web pages: The large volume implies that the crawler can only download a fraction of the web pages at any time, so it is critical that it be intelligent enough to prioritize its downloads.
Rate of change of web pages: Web pages change very frequently; as a result, by the time the crawler is downloading the last page from a site, earlier pages may have changed, or new pages may have been added to the site.
A bare minimum crawler needs at least these components:
URL frontier: To store the list of URLs to download and also prioritize which URLs should be crawled first.
HTTP Fetcher: To retrieve a web page from the server.
Extractor: To extract links from HTML documents.
Duplicate Eliminator: To make sure the same content is not extracted twice unintentionally.
Datastore: To store retrieved pages, URLs, and other metadata.
Step-6: Detailed Component Design
Let’s assume our crawler is running on one server, and all the crawling is done by multiple working threads.
Each working thread performs all the steps needed to download and process the document in a loop.
The first step of this loop is to remove an absolute URL from the shared URL frontier for downloading.
An absolute URL begins with a scheme (e.g., “HTTP”), which identifies the network protocol that should be used to download it.
We can implement these protocols in a modular way for extensibility, so that later our crawler can support more protocols.
Based on the URL’s scheme, the worker calls the appropriate protocol module to download the document.
After downloading, the document is placed into a Document Input Stream (DIS).
Putting documents into DIS will enable other modules to re-read the document multiple times.
Once the document has been written to DIS, the worker thread invokes the dedupe test to determine whether this document (associated with a different URL) has been seen before.
If so, the document is not processed any further, and the worker thread removes the next URL from the frontier.
Next, our crawler needs to process the downloaded document.
Each document can have a different MIME type like HTML page, Image, Video, etc.
We can implement these MIME schemes in a modular way so that later our crawler can support more types.
Based on the MIME type of the document, the worker invokes the process method of each processing module associated with that MIME type.
Furthermore, our HTML processing module will extract all links from the page.
Each link is converted into an absolute URL and tested against a user-supplied URL filter to determine if it should be downloaded.
If the URL passes the filter, the worker performs the URL-seen test, which checks if the URL has been seen before, namely, if it is in the URL frontier or has already been downloaded.
If the URL is new, it is added to the frontier.
1. URL Frontier
The URL frontier is the data structure that contains all the URLs that remain to be downloaded.
We can crawl by performing a breadth-first traversal of the Web, starting from the pages in the seed set.
Such traversals are easily implemented by using a FIFO queue.
Since we’ll be having a huge list of URLs to crawl, we can distribute our URL frontier into multiple servers.
Let’s assume on each server we have multiple worker threads performing the crawling tasks.
Let’s also assume that our hash function maps each URL to a server which will be responsible for crawling it.
The following politeness requirements must be kept in mind while designing a distributed URL frontier:
Our crawler should not overload a server by downloading a lot of pages from it.
We should not have multiple machines connecting to the same web server.
To implement this politeness constraint, our crawler can have a collection of distinct FIFO sub-queues on each server.
Each worker thread will have its separate sub-queue, from which it removes URLs for crawling.
When a new URL needs to be added, the FIFO sub-queue in which it is placed will be determined by the URL’s canonical hostname.
Our hash function can map each hostname to a thread number.
Together, these two points imply that at most one worker thread will download documents from a given Web server.
Since each sub-queue is FIFO, the crawler will also not overload a Web server.
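A sketch of this hostname-to-sub-queue routing; the number of worker threads and the choice of MD5 as the hash function are assumptions.

```python
import hashlib
from collections import deque
from urllib.parse import urlparse

NUM_WORKERS = 16                                    # assumed worker threads on this server
sub_queues = [deque() for _ in range(NUM_WORKERS)]  # one FIFO sub-queue per worker thread

def enqueue(url):
    """Place a URL on the sub-queue owned by exactly one worker, based on its canonical hostname."""
    host = urlparse(url).netloc.lower()
    worker = int(hashlib.md5(host.encode()).hexdigest(), 16) % NUM_WORKERS
    sub_queues[worker].append(url)
    return worker

# All URLs of a given host land on the same worker, so only one thread ever talks to that server.
print(enqueue("http://example.com/a"), enqueue("http://example.com/b"))
```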
How big would our URL frontier be?
The size would be in the hundreds of millions of URLs. Hence, we need to store our URLs on disk.
We can implement our queues in such a way that they have separate buffers for enqueuing and dequeuing.
The enqueue buffer, once filled, will be dumped to disk.
The dequeue buffer will keep a cache of URLs that need to be visited; it can periodically be refilled from disk.
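A sketch of a disk-backed FIFO queue with separate enqueue and dequeue buffers; the buffer size and file path are assumptions, and a production frontier would use a more robust on-disk format.

```python
from collections import deque

class DiskBackedQueue:
    """FIFO queue that keeps small in-memory buffers and spills the overflow to a disk file."""
    def __init__(self, path, buffer_size=1000):
        self.path = path
        self.buffer_size = buffer_size
        self.enqueue_buf = deque()   # filled by producers, dumped to disk when full
        self.dequeue_buf = deque()   # consumed by workers, refilled from disk
        open(path, "a").close()      # make sure the backing file exists

    def put(self, url):
        self.enqueue_buf.append(url)
        if len(self.enqueue_buf) >= self.buffer_size:
            with open(self.path, "a") as f:                   # dump the full enqueue buffer
                f.writelines(u + "\n" for u in self.enqueue_buf)
            self.enqueue_buf.clear()

    def get(self):
        if not self.dequeue_buf:
            self._refill()
        return self.dequeue_buf.popleft() if self.dequeue_buf else None

    def _refill(self):
        with open(self.path) as f:
            lines = f.read().splitlines()
        if lines:
            self.dequeue_buf.extend(lines[:self.buffer_size])
            with open(self.path, "w") as f:                   # rewrite the remaining URLs
                f.writelines(u + "\n" for u in lines[self.buffer_size:])
        else:
            self.dequeue_buf.extend(self.enqueue_buf)         # fall back to the in-memory buffer
            self.enqueue_buf.clear()

q = DiskBackedQueue("/tmp/frontier.txt", buffer_size=2)
for u in ["http://a.com/", "http://b.com/", "http://c.com/"]:
    q.put(u)
print(q.get(), q.get(), q.get())
```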
2. Fetcher Module:
The fetcher module downloads the document corresponding to a given URL using the appropriate network protocol, e.g., HTTP.
As discussed above, webmasters create robots.txt to make certain parts of their websites off-limits to crawlers.
To avoid downloading this file on every request, our crawler’s HTTP protocol module can maintain a fixed-sized cache mapping host-names to their robots exclusion rules.
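A sketch of such a robots.txt cache using Python's standard urllib.robotparser; the cache size, LRU eviction policy, and user-agent string are assumptions.

```python
from collections import OrderedDict
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

class RobotsCache:
    """Caches parsed robots.txt rules per hostname so the file is not re-fetched for every URL."""
    def __init__(self, max_entries=10_000):
        self.max_entries = max_entries
        self.cache = OrderedDict()               # hostname -> RobotFileParser, in LRU order

    def allowed(self, url, user_agent="OurCrawler"):
        host = urlparse(url).netloc
        parser = self.cache.get(host)
        if parser is None:
            parser = RobotFileParser(f"http://{host}/robots.txt")
            try:
                parser.read()                    # fetch and parse the site's robots.txt once
            except OSError:
                return True                      # be permissive if robots.txt is unreachable
            if len(self.cache) >= self.max_entries:
                self.cache.popitem(last=False)   # evict the least-recently-used hostname
            self.cache[host] = parser
        self.cache.move_to_end(host)
        return parser.can_fetch(user_agent, url)

robots = RobotsCache()
print(robots.allowed("http://example.com/some/page.html"))
```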
3. Document Input Stream (DIS):
Our crawler’s design enables the same document to be processed by multiple processing modules.
To avoid downloading a document multiple times, we cache documents locally using an abstraction called a Document Input Stream (DIS).
A DIS is an input stream that caches the entire contents of the document read from the internet.
It also provides methods to re-read the document.
DIS can cache small documents (64 KB or less) entirely in memory, while larger documents can be temporarily written to a backing file.
Each worker thread has an associated DIS, which it reuses from document to document.
After extracting a URL from the frontier, the worker passes that URL to the relevant protocol module, which initializes the DIS from a network connection to contain the document’s contents.
The worker then passes the DIS to all relevant processing modules.
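Python's tempfile.SpooledTemporaryFile already has this spill-to-disk behavior, so a DIS sketch might look like the following; the 64 KB threshold comes from the description above, and the chunk size is an assumption.

```python
import tempfile
import urllib.request

class DocumentInputStream:
    """Caches a downloaded document in memory (<= 64 KB) or a backing file, and supports re-reads."""
    def __init__(self, max_in_memory=64 * 1024):
        self._buf = tempfile.SpooledTemporaryFile(max_size=max_in_memory)

    def fill(self, response):
        """Copy the document body from a network response into the cache."""
        self._buf.seek(0)
        self._buf.truncate()
        for chunk in iter(lambda: response.read(8192), b""):
            self._buf.write(chunk)

    def read(self):
        """Re-read the entire cached document; any processing module can call this repeatedly."""
        self._buf.seek(0)
        return self._buf.read()

# A worker thread reuses one DIS instance from document to document.
dis = DocumentInputStream()
with urllib.request.urlopen("http://example.com/") as resp:
    dis.fill(resp)
html_for_indexing = dis.read()     # one processing module reads the document...
html_for_links = dis.read()        # ...and the link extractor reads it again.
```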
4. Document Dedupe Test:
Many documents on the Web are available under multiple, different URLs.
There are also many cases in which documents are mirrored on various servers.
Both of these effects will cause any Web crawler to download the same document contents multiple times.
To prevent processing a document more than once, we perform a dedupe test on each document to remove duplication.
To perform this test, we can calculate a 64-bit checksum of every processed document and store it in a database.
For every new document, we can compare its checksum to all the previously calculated checksums to see if it has been seen before.
We can use MD5 or SHA to calculate these checksums.
How big would the checksum store be?
If the whole purpose of our checksum store is to perform dedupe, we just need to keep a unique set containing the checksums of all previously processed documents.
Considering 15 billion distinct web pages, we would need: 15B * 8 bytes => 120 GB to store the document checksums.
Although this can fit into a modern-day server’s memory, if we don’t have enough memory available, we can keep a smaller LRU-based cache on each server, with everything backed by persistent storage.
The dedupe test first checks if the checksum is present in the cache. If not, it checks whether the checksum resides in the backing storage. If the checksum is found, we ignore the document; otherwise, it is added to the cache and the backing storage.
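A sketch of the document dedupe test, using the first 8 bytes of an MD5 digest as the 64-bit checksum, an LRU cache in front, and an in-process set standing in for the persistent checksum store; the cache size is an assumption.

```python
import hashlib
from collections import OrderedDict

class DocumentDedupe:
    def __init__(self, cache_size=1_000_000):
        self.cache_size = cache_size
        self.cache = OrderedDict()       # LRU cache of recently seen checksums
        self.store = set()               # stand-in for the persistent checksum store

    @staticmethod
    def checksum(content: bytes) -> int:
        # 64-bit checksum: the first 8 bytes of the MD5 digest of the document body.
        return int.from_bytes(hashlib.md5(content).digest()[:8], "big")

    def seen_before(self, content: bytes) -> bool:
        c = self.checksum(content)
        if c in self.cache or c in self.store:
            return True                  # duplicate content: skip further processing
        self.store.add(c)                # new document: remember its checksum
        self.cache[c] = True
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)
        return False

dedupe = DocumentDedupe()
print(dedupe.seen_before(b"<html>hello</html>"))   # False: first time this content is seen
print(dedupe.seen_before(b"<html>hello</html>"))   # True: same content fetched from a different URL
```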
5. URL Filters:
The URL filtering mechanism provides a customizable way to control the set of URLs that are downloaded.
This is used to blacklist websites so that our crawler can ignore them.
Before adding each URL to the frontier, the worker thread consults the user-supplied URL filter.
We can define filters to restrict URLs by domain, prefix, or protocol type.
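A sketch of such a user-supplied filter with domain, prefix, and protocol rules; the particular blacklist entries are only placeholders.

```python
from urllib.parse import urlparse

class URLFilter:
    """Decides whether an extracted URL should be considered for the frontier."""
    def __init__(self, allowed_schemes=("http",), blocked_domains=(), blocked_prefixes=()):
        self.allowed_schemes = set(allowed_schemes)
        self.blocked_domains = set(blocked_domains)
        self.blocked_prefixes = tuple(blocked_prefixes)

    def passes(self, url: str) -> bool:
        parts = urlparse(url)
        if parts.scheme not in self.allowed_schemes:
            return False                                  # e.g. ftp:, mailto:
        if parts.netloc.lower() in self.blocked_domains:
            return False                                  # blacklisted website
        if self.blocked_prefixes and url.startswith(self.blocked_prefixes):
            return False                                  # blacklisted URL prefix
        return True

f = URLFilter(blocked_domains={"spam.example.com"},
              blocked_prefixes=("http://example.com/private/",))
print(f.passes("http://example.com/page.html"))         # True
print(f.passes("http://spam.example.com/index.html"))   # False
print(f.passes("ftp://example.com/file.txt"))           # False
```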
6. Domain Name Resolution (DNS):
Before contacting a Web server, a Web crawler must use DNS to map the Web server’s hostname into an IP address.
DNS name resolution will be a big bottleneck for our crawler given the number of URLs we will be working with.
To avoid repeated requests, we can cache DNS results by building our own local DNS server.
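A minimal stand-in for such a cache: memoizing hostname-to-IP lookups with a TTL. In practice we would run a real caching DNS resolver; the TTL value here is an assumption.

```python
import socket
import time

class CachedResolver:
    """Memoizes hostname -> IP lookups so repeated fetches from a host skip the DNS round trip."""
    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.cache = {}                          # hostname -> (ip, expiry timestamp)

    def resolve(self, hostname):
        entry = self.cache.get(hostname)
        if entry and entry[1] > time.time():
            return entry[0]                      # cache hit: no DNS query
        ip = socket.gethostbyname(hostname)      # cache miss: perform a real lookup
        self.cache[hostname] = (ip, time.time() + self.ttl)
        return ip

resolver = CachedResolver()
print(resolver.resolve("example.com"))
print(resolver.resolve("example.com"))           # second call is served from the cache
```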
7. URL Dedupe Test:
While extracting links, any Web crawler will encounter multiple links to the same document.
To avoid downloading and processing a document multiple times, a URL dedupe test must be performed on each extracted link before adding it to the URL frontier.
To perform the URL dedupe test, we can store all the URLs seen by our crawler in canonical form in a database.
To save space, we do not store the textual representation of each URL in the URL set, but rather a fixed-sized checksum.
To reduce the number of operations on the database, we can keep an in-memory cache of popular URLs on each host, shared by all threads.
The reason to have this cache is that links to some URLs are quite common, so caching them will lead to a high in-memory hit rate.
How much storage would we need for the URL store?
If the whole purpose of our checksum is to perform URL dedupe, we just need to keep a unique set containing the checksums of all previously seen URLs.
Considering 15 billion distinct URLs and 4 bytes per checksum, we will need: 15B * 4 bytes => 60 GB to store the URL checksums.
Can we use bloom filters for deduping ?
Bloom filters are a probabilistic data structure for set membership testing that may yield false positives.
A large bit vector represents the set.
An element is added to the set by computing ‘n’ hash functions of the element and setting the corresponding bits.
An element is deemed to be in the set if the bits at all ‘n’ of the element’s hash locations are set.
Hence, a URL may incorrectly be deemed to be in the set, but false negatives are not possible.
While performing the URL-seen test, each false positive will cause the URL not to be added to the frontier, and therefore the corresponding document will never be downloaded.
The chance of a false positive can be reduced by making the bit vector larger.
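A sketch of a Bloom-filter-based URL-seen test, deriving the 'n' hash positions from two halves of an MD5 digest (double hashing); the bit-vector size and number of hashes are assumptions and would normally be sized from the expected URL count and target false-positive rate.

```python
import hashlib

class BloomFilter:
    def __init__(self, size_bits=8 * 1024 * 1024, num_hashes=7):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)      # the large bit vector representing the set

    def _positions(self, item: str):
        digest = hashlib.md5(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:], "big")
        # Double hashing: derive n bit positions from two base hash values.
        return [(h1 + i * h2) % self.size for i in range(self.num_hashes)]

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

seen = BloomFilter()
seen.add("http://example.com/a")
print("http://example.com/a" in seen)   # True
print("http://example.com/b" in seen)   # very likely False; false positives possible, false negatives not
```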
8. Checkpointing:
A crawl of the entire Web takes weeks to complete.
To guard against failures, our crawler can write regular snapshots of its state to disk.
An interrupted or aborted crawl can easily be restarted from the latest checkpoint.
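A sketch of writing and reloading such a snapshot; what exactly goes into the snapshot (here, just the frontier and the seen-URL set) and the file path are assumptions.

```python
import os
import pickle

def write_checkpoint(path, frontier, seen_urls):
    """Persist the crawl state so an interrupted or aborted crawl can resume from it."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"frontier": list(frontier), "seen": set(seen_urls)}, f)
    os.replace(tmp, path)            # atomic rename: a crash never leaves a half-written snapshot

def load_checkpoint(path):
    with open(path, "rb") as f:
        state = pickle.load(f)
    return state["frontier"], state["seen"]

write_checkpoint("/tmp/crawl.ckpt", ["http://example.com/"], {"http://example.com/"})
frontier, seen = load_checkpoint("/tmp/crawl.ckpt")
print(frontier, seen)
```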
Step-7: Fault tolerance
We should use consistent hashing for distribution among crawling servers.
Consistent hashing will not only help in replacing a dead host but also help in distributing load among the crawling servers.
All our crawling servers will be performing regular checkpointing and storing their FIFO queues to disks.
If a server goes down, we can replace it; meanwhile, consistent hashing should shift its load to other servers.
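A sketch of a consistent-hash ring for assigning hostnames to crawling servers; when a server dies, only the hostnames it owned move to the next server on the ring. The server names and virtual-node count are placeholders.

```python
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, servers, vnodes=100):
        self.ring = []                                   # sorted list of (hash, server) points
        for server in servers:
            for i in range(vnodes):                      # virtual nodes smooth out the load
                self.ring.append((self._hash(f"{server}#{i}"), server))
        self.ring.sort()
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def server_for(self, hostname: str) -> str:
        idx = bisect.bisect(self.keys, self._hash(hostname)) % len(self.keys)
        return self.ring[idx][1]

ring = ConsistentHashRing(["crawler-1", "crawler-2", "crawler-3"])
print(ring.server_for("example.com"))
# If crawler-2 dies, rebuilding the ring without it moves only crawler-2's hostnames.
print(ConsistentHashRing(["crawler-1", "crawler-3"]).server_for("example.com"))
```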
Step-8: Data Partitioning
Our crawler will be dealing with three kinds of data:
URLs to visit
URL checksums for dedupe
Document checksums for dedupe.
Since we are distributing URLs based on hostname, we can store all of this data on the same host.
So, each host will store its set of URLs that need to be visited, checksums of all the previously visited URLs and checksums of all the downloaded documents.
Since we will be using consistent hashing, we can assume that URLs will be redistributed from overloaded hosts.
Each host will perform checkpointing periodically and dump a snapshot of all the data it is holding into a remote server.
This will ensure that if a server dies, another server can replace it by taking its data from the last snapshot.
Step-9: Crawler Traps
There are many crawler traps, spam sites, and cloaked content.
A crawler trap is a URL or set of URLs that cause a crawler to crawl indefinitely.
Some crawler traps are unintentional; for example, a symbolic link within a file system can create a cycle.
Other crawler traps are intentional; for example, people have written traps that dynamically generate an infinite Web of documents.
The motivations behind such traps vary.
Anti-spam traps are designed to catch crawlers used by spammers looking for email addresses.
Other sites use traps to catch search engine crawlers in order to boost their search ratings.
One way to deal with such traps is a credit system that starves trap pages of crawl priority:
1. Before crawling starts, allocate a fixed amount of credit, X, to each page.
2. Select a page P with the highest amount of credit (or select a random page if all pages have the same amount of credit).
3. Crawl page P (let’s say that P had 100 credits when it was crawled).
4. Extract all the links from page P (let’s say there are 10 of them).
5. Set the credits of P to 0.
6. Take a 10% “tax” and allocate it to a Lambda page.
7. Allocate an equal amount of credits to each link found on page P from P’s original credit after subtracting the tax, so: (100 (P’s credits) - 10 (10% tax)) / 10 (links) = 9 credits per link.
8. Repeat from step 2.
Since the Lambda page continuously collects the tax, eventually it will be the page with the largest amount of credit, and will be crawled.
By crawling the Lambda page, we just take its credits and distribute them equally to all the pages in our database.
Since bot-trap pages only give credits to internal links and rarely receive credit from outside, they will continually leak credits (through taxation) to the Lambda page.
The Lambda page will distribute those credits evenly to all the pages in the database, and with each cycle the bot-trap page will lose more and more credit, until it has so little credit that it almost never gets crawled again.
This will not happen with good pages because they often get credits from backlinks found on other pages.
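A small simulation of this credit/tax scheme; the link graph, tax rate, and number of rounds are illustrative assumptions, and the Lambda page is modeled as a single counter that is redistributed when it holds the most credit.

```python
def crawl_with_credits(link_graph, rounds=50, initial_credit=100.0, tax_rate=0.10):
    """Trap pages leak credit to the Lambda page via taxation; well-linked pages recover it via backlinks."""
    credits = {page: initial_credit for page in link_graph}
    lambda_credit = 0.0
    for _ in range(rounds):
        page = max(credits, key=credits.get)              # select the page with the highest credit
        credit, credits[page] = credits[page], 0.0        # crawl it and set its credit to 0
        tax = credit * tax_rate
        lambda_credit += tax                              # the Lambda page collects the tax
        links = link_graph[page]
        if links:
            share = (credit - tax) / len(links)           # split the remainder among extracted links
            for target in links:
                credits[target] += share
        if lambda_credit >= max(credits.values()):        # "crawling" the Lambda page:
            share = lambda_credit / len(credits)          # redistribute its credit evenly
            for p in credits:
                credits[p] += share
            lambda_credit = 0.0
    return credits

# Illustrative graph: the trap pages only link to each other, while good pages receive backlinks.
graph = {
    "home":   ["good-a", "good-b", "trap-1"],
    "good-a": ["home", "good-b"],
    "good-b": ["home", "good-a"],
    "trap-1": ["trap-2"],
    "trap-2": ["trap-1"],
}
for page, credit in sorted(crawl_with_credits(graph).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {credit:.1f}")
```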