2017 © Pedro Peláez
 

application php-spider

A configurable and extensible PHP web spider


vdb/php-spider


  • Sunday, March 26, 2017
  • by matthijsvandenbos
  • Repository
  • 96 Watchers
  • 1015 Stars
  • 37,262 Installations
  • PHP
  • 3 Dependents
  • 0 Suggesters
  • 227 Forks
  • 7 Open issues
  • 5 Versions
  • 11% Grown

The README.md


PHP-Spider Features

  • supports two traversal algorithms: breadth-first and depth-first
  • supports crawl depth limiting, queue size limiting and max downloads limiting
  • supports adding custom URI discovery logic, based on XPath, CSS selectors, or plain old PHP
  • comes with a useful set of URI filters, such as robots.txt and domain limiting
  • supports custom URI filters, both prefetch (URI) and postfetch (Resource content)
  • supports custom request handling logic
  • supports Basic, Digest and NTLM HTTP authentication. See example.
  • comes with a useful set of persistence handlers (memory, file)
  • supports custom persistence handlers
  • collects statistics about the crawl for reporting
  • dispatches useful events, allowing developers to add even more custom behavior
  • supports a politeness policy

This spider does not support JavaScript.
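To illustrate the custom prefetch filter feature listed above, here is a self-contained, hypothetical sketch. In the real library a filter would implement the PreFetchFilterInterface and receive a DiscoveredUri object rather than a plain string (check the bundled filters for the exact signature); the convention, as far as this sketch assumes, is that returning true from match() means the URI is skipped.

```php
<?php
// Hypothetical sketch of a prefetch URI filter. A real PHP-Spider filter
// would implement the library's PreFetchFilterInterface and receive a
// DiscoveredUri object; a plain string stands in here so the sketch is
// self-contained.
class SkipPdfFilter
{
    // Returning true means "filter this URI out" (skip it).
    public function match(string $uri): bool
    {
        $path = parse_url($uri, PHP_URL_PATH) ?? '';
        return (bool) preg_match('/\.pdf$/i', $path);
    }
}

$filter = new SkipPdfFilter();
var_dump($filter->match('http://example.com/report.pdf')); // bool(true)
var_dump($filter->match('http://example.com/index.html')); // bool(false)
```

The complex example in the repository shows how filters are actually registered on the spider.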

Installation

The easiest way to install PHP-Spider is with Composer. Find it on Packagist.

$ composer require vdb/php-spider

Usage

This is a very simple example; the code can be found in example/example_simple.php. For a more complete, real-world example with logging, caching and filters, see example/example_complex.php.
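The snippets below leave out boilerplate. With a Composer install, they would typically start with the autoloader and imports along these lines (the namespaces are taken from the library's source as best I can tell; double-check them against the version you install):

```php
<?php
// Composer autoloader plus the classes used in the snippets below.
require_once __DIR__ . '/vendor/autoload.php';

use VDB\Spider\Spider;
use VDB\Spider\Discoverer\XPathExpressionDiscoverer;
use VDB\Spider\StatsHandler;
```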

Note that by default, the spider stops processing when it encounters a 4XX or 5XX error response. To set the spider up to keep processing, see the link checker example. It uses a custom request handler that configures the default Guzzle request handler not to fail on 4XX and 5XX responses.
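A rough sketch of that idea, given a $spider instance: Guzzle's http_errors request option is real, but the GuzzleRequestHandler class name and the setClient()/setRequestHandler() calls below are assumptions inferred from this README's API style, so verify them against the link checker example before relying on them.

```php
<?php
// Sketch only: assumes an existing $spider and the class/method names below.
use GuzzleHttp\Client;
use VDB\Spider\RequestHandler\GuzzleRequestHandler;

// A Guzzle client that returns 4XX/5XX responses instead of throwing,
// so the spider can keep crawling past broken links.
$client = new Client(['http_errors' => false]);

$handler = new GuzzleRequestHandler();
$handler->setClient($client);
$spider->getDownloader()->setRequestHandler($handler);
```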

First, create the spider:

$spider = new Spider('http://www.dmoz.org');

Add a URI discoverer. Without it, the spider does nothing. In this case, we want all <a> nodes from a certain <div>:

$spider->getDiscovererSet()->set(new XPathExpressionDiscoverer("//div[@id='catalogs']//a"));

Set some sane options for this example. In this case, we only get the first 10 items from the start page:

$spider->getDiscovererSet()->maxDepth = 1;
$spider->getQueueManager()->maxQueueSize = 10;

Add a listener to collect stats from the Spider and the QueueManager. Other components dispatch events that you can subscribe to as well:

$statsHandler = new StatsHandler();
$spider->getQueueManager()->getDispatcher()->addSubscriber($statsHandler);
$spider->getDispatcher()->addSubscriber($statsHandler);

Execute the crawl:

$spider->crawl();

When crawling is done, we can get some information about the crawl:

echo "\n  ENQUEUED:  " . count($statsHandler->getQueued());
echo "\n  SKIPPED:   " . count($statsHandler->getFiltered());
echo "\n  FAILED:    " . count($statsHandler->getFailed());
echo "\n  PERSISTED: " . count($statsHandler->getPersisted());

Finally, we can do some processing on the downloaded resources. In this example, we echo the title of each resource:

echo "\n\nDOWNLOADED RESOURCES: ";
foreach ($spider->getDownloader()->getPersistenceHandler() as $resource) {
    echo "\n - " . $resource->getCrawler()->filterXPath('//title')->text();
}

Contributing

Contributing to PHP-Spider is as easy as forking the repository on GitHub and submitting a Pull Request. The Symfony documentation contains an excellent guide on how to do that properly: Submitting a Patch.

There are a few requirements for a Pull Request to be accepted:

  • Follow the coding standards: PHP-Spider follows the coding standards defined in the PSR-0, PSR-1 and PSR-2 Coding Style Guides;
  • Prove that the code works with unit tests and that coverage remains 100%.

Note: An easy way to check whether your code conforms to PHP-Spider's standards is by running the script bin/static-analysis, which is part of this repo. It runs the following tools, configured for PHP-Spider: PHP CodeSniffer, PHP Mess Detector and PHP Copy/Paste Detector.

Note: To run PHPUnit with coverage, and to check that coverage is 100%, you can run bin/coverage-enforce.

Support

For things like reporting bugs and requesting features, it is best to create an issue here on GitHub. It is even better to accompany it with a Pull Request. ;-)

License

PHP-Spider is licensed under the MIT license.

The Versions

  • 26/03 2017: dev-master (9999999-dev), MIT license
  • 25/03 2017: dev-feature/extract-discovereduris, MIT license
  • 10/08 2016: v0.3 (0.3.0.0), MIT license
  • 30/12 2015: v0.2 (0.2.0.0), MIT license
  • 16/03 2013: v0.1 (0.1.0.0), MIT license, homepage: http://php-spider.org

by Matthijs van den Bos

Keywords: crawler, spider, scraper