How it works

The way this project works is pretty simple.

The application is built around two major pieces: the client, which anyone can download and run, and the server, which stores the results.

A client application that you run on your computer (just download and run, no installation required) will query the server to get a list of URLs to crawl. The client (let's call it the crawler) opens each of those URLs, parses the page, extracts the links and statistics, and sends the results back to the server when it's done. The data is also stored locally, so the client can be stopped at any time and work resumes whenever the application is run again.
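As an illustration, here is a minimal sketch of that crawl loop in Python. The specifics are assumptions rather than the project's actual protocol: the server address, the /work and /results endpoints, the JSON payload fields, and the local state file name are all hypothetical placeholders.

```python
# Hypothetical sketch of the crawler loop; endpoints, payload fields,
# and file names below are illustrative, not the project's real API.
import json
import os
import urllib.request
from html.parser import HTMLParser

SERVER = "http://example.com/api"   # placeholder server address
STATE = "crawler_state.json"        # local file so work survives restarts

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def fetch_work():
    """Ask the server for a batch of URLs to crawl."""
    with urllib.request.urlopen(SERVER + "/work") as resp:
        return json.loads(resp.read())["urls"]

def crawl(url):
    """Download one page and return its links and basic statistics."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = LinkExtractor()
    parser.feed(html)
    return {"url": url, "links": parser.links, "size": len(html)}

def report(results):
    """Send the crawl results back to the server."""
    data = json.dumps({"results": results}).encode("utf-8")
    req = urllib.request.Request(
        SERVER + "/results", data=data,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def save_state(results):
    with open(STATE, "w") as f:
        json.dump(results, f)

def load_state():
    if os.path.exists(STATE):
        with open(STATE) as f:
            return json.load(f)
    return []

if __name__ == "__main__":
    results = load_state()            # resume any unfinished batch
    done = {r["url"] for r in results}
    for url in fetch_work():
        if url in done:
            continue
        results.append(crawl(url))
        save_state(results)           # persist so a stop loses nothing
    report(results)
    save_state([])                    # batch reported; clear local state
```

Persisting the batch to disk after every page is what would let the crawler be interrupted and resumed without losing work.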

On the other side, the server receives all the URLs and associated statistics and aggregates everything to extract some, I hope, useful insights.
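Here is a matching sketch of the server side, again under stated assumptions: Flask as the web framework (the post does not say what the real server runs on), the same hypothetical endpoints and payload fields as above, and simple in-memory storage standing in for whatever the server actually uses.

```python
# Hypothetical sketch of the aggregation server; framework, endpoints,
# and storage are assumptions, not the project's real implementation.
from flask import Flask, jsonify, request

app = Flask(__name__)

pending = ["http://example.com/"]   # URLs waiting to be crawled (seed)
seen = set(pending)                 # URLs already handed out or queued
stats = []                          # crawl statistics, one entry per page

@app.route("/work")
def work():
    """Hand out a small batch of URLs to a crawler."""
    batch, rest = pending[:10], pending[10:]
    pending[:] = rest
    return jsonify({"urls": batch})

@app.route("/results", methods=["POST"])
def results():
    """Receive crawl results and queue newly discovered links."""
    for result in request.get_json()["results"]:
        stats.append({"url": result["url"], "size": result["size"]})
        for link in result["links"]:
            if link.startswith("http") and link not in seen:
                seen.add(link)
                pending.append(link)
    return jsonify({"queued": len(pending)})

if __name__ == "__main__":
    app.run()
```

Queuing newly discovered links back into the pending list is what would let the crawl grow outward from a small seed set, with the aggregated statistics accumulating on the server as clients report in.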

