
simgroep/concurrent-spider-bundle

Symfony bundle for running a distributed web page crawler

  • Friday, May 26, 2017
  • by Breuls
  • Repository
  • 9 Watchers
  • 8 Stars
  • 177 Installations
  • PHP
  • 0 Dependents
  • 0 Suggesters
  • 9 Forks
  • 0 Open issues
  • 11 Versions
  • 0 % Grown

The README.md

Concurrent Spider Bundle


This bundle provides a set of commands to run a distributed web page crawler. Crawled web pages are saved to Solr.

Installation

Install it with Composer:

composer require simgroep/concurrent-spider-bundle dev-master

Then add it to your AppKernel.php:

new Simgroep\ConcurrentSpiderBundle\SimgroepConcurrentSpiderBundle(),
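For context, in a typical Symfony 2.x AppKernel.php this registration goes inside registerBundles(). The sketch below is illustrative; the surrounding bundles are placeholders for whatever your kernel already registers:

// app/AppKernel.php (illustrative sketch; your kernel will already list other bundles)
public function registerBundles()
{
    $bundles = array(
        new Symfony\Bundle\FrameworkBundle\FrameworkBundle(),
        // ... your other bundles ...
        new Simgroep\ConcurrentSpiderBundle\SimgroepConcurrentSpiderBundle(),
    );

    return $bundles;
}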

You also need to install Xpdf (http://www.foolabs.com/xpdf/); only its pdftotext utility really needs to work from the command line:

/path_to_command/pdftotext pdffile.pdf

Configuration

Minimal configuration is necessary. The crawler needs to know the mapping you're using in Solr so it can save documents. The only mandatory part of the config is "mapping". Other values are optional:

simgroep_concurrent_spider:
    http_user_agent: "PHP Concurrent Spider"

    rabbitmq.host: localhost
    rabbitmq.port: 5672
    rabbitmq.user: guest
    rabbitmq.password: guest

    queue.discoveredurls_queue: discovered_urls
    queue.indexer_queue: indexer

    solr.host: localhost
    solr.port: 8080
    solr.path: /solr

    mapping:
        id: #required
        title: #required
        content: #required
        url: #required
        tstamp: ~
        date: ~
        publishedDate: ~

How does it work?

You start the crawler with:

app/console simgroep:start-crawler https://github.com

This will add one job to the queue to crawl the URL https://github.com. Then run the following process in the background to start crawling:

app/console simgroep:crawl

It's recommended to use a tool to keep the crawler process running in the background; we recommend Supervisord. You can run as many threads as you like (and as your machine can handle), but be careful not to flood the website: every thread acts as a visitor on the website you're crawling.

Architecture

This bundle uses RabbitMQ to keep track of a queue of URLs that should be indexed. It also uses Solr to save the crawled web pages.
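To illustrate the flow, the sketch below shows how a URL job could be published to the discovered_urls queue with php-amqplib, using the default connection settings from the configuration above. This is not the bundle's actual code, and the JSON payload format is an assumption; the crawler commands handle publishing and consuming internally.

<?php
// Illustrative sketch only; the bundle performs this internally.
// Assumes php-amqplib and the default RabbitMQ settings shown in the configuration.
require __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();

// Declare the queue the crawl workers consume from (durable, like a job queue).
$channel->queue_declare('discovered_urls', false, true, false, false);

// Publish one URL as a job; a simgroep:crawl worker picks it up, downloads the
// page, and pushes the extracted document to the indexer queue for Solr.
$payload = json_encode(array('url' => 'https://github.com'));
$channel->basic_publish(new AMQPMessage($payload), '', 'discovered_urls');

$channel->close();
$connection->close();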
