Profiler Bundle
Bundle to profile eZ Platform installations and to set up scenarios that can be
run continuously to keep track of performance regressions in the repository and
the underlying storage engine(s).
This bundle contains two means of profiling your eZ Publish stack:
- API Profiler

  The API profiler executes tests against the Public API or directly against
  the SPI. It is capable of executing different scenarios.

- jMeter Tests

  The jMeter tests run tests against the HTTP frontend. Currently only a random
  browser is implemented. This is most useful together with some profiling done
  in the background to detect the actual bottlenecks.
API Profiler
Warning
Running the performance tests / profiling will change the contents of your
database. Use with care.
Usage
Install the bundle in an existing eZ Platform installation::

    composer.phar require ezsystems/profiler-bundle dev-master
Enable the bundle in your kernel by adding::

    new eZ\Publish\ProfilerBundle\EzPublishProfilerBundle(),
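
For reference, a minimal sketch of where this line would go in a Symfony-style
app/AppKernel.php (the file location and surrounding bundles are assumptions
about your installation, not part of the bundle's documentation)::

    // app/AppKernel.php (assumed location; adjust to your setup)
    public function registerBundles()
    {
        $bundles = array(
            // … your existing bundles …
            new eZ\Publish\ProfilerBundle\EzPublishProfilerBundle(),
        );

        return $bundles;
    }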
Then you can run the performance tests using::

    php app/console profiler:run papi vendor/ezsystems/profiler-bundle/docs/profile_example.php
The provided file specifies the performance test you want to run. The file
mentioned here is an example file provided with the bundle. You can run the
tests either against the Public API (papi) or directly against the SPI (spi).
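For example, to run the same example profile directly against the SPI, replace
papi with spi::

    php app/console profiler:run spi vendor/ezsystems/profiler-bundle/docs/profile_example.php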
Configuration
Modelling scenarios different from the one provided in the example file is a
little more complex.
Types
First we define multiple content types. The content type definitions are
simpler than in the APIs under test, but are mapped accordingly::

    $articleType = new ContentType(
        'article',
        [
            'title' => new Field\TextLine(),
            'body' => new Field\XmlText( new DataProvider\XmlText() ),
            'author' => new Field\Author( new DataProvider\User( 'editor' ) ),
            // …
        ],
        [$defaultLanguage, 'ger-DE', 'fra-FR'], // Languages of content
        8 // Average number of versions
    );
First we define the name of the type and then its fields. Each field should
have a data provider assigned, which provides random test data.
Optionally we can define multiple languages in which content will be created.
An average number of versions can also be defined to "age" the content. You can
define as many types as sensible.
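The actor example in the next section references $folderType and $commentType;
these would be defined in the same way. A minimal sketch (the field names and
data providers used here are illustrative assumptions, not taken from the
bundle's example file)::

    // Hypothetical type definitions; adjust fields and providers as needed.
    $folderType = new ContentType(
        'folder',
        [
            'name' => new Field\TextLine(),
        ]
    );

    $commentType = new ContentType(
        'comment',
        [
            'subject' => new Field\TextLine(),
            'body' => new Field\XmlText( new DataProvider\XmlText() ),
        ]
    );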
Actors
Actors actually do something with the defined types. There are currently three
different actors, but you could define more::

    $createTask = new Task(
        new Actor\Create(
            1, $folderType,
            new Actor\Create(
                12, $folderType,
                new Actor\Create(
                    50, $articleType,
                    new Actor\Create(
                        5, $commentType
                    ),
                    $articles = new Storage\LimitedRandomized()
                )
            )
        )
    );
This example will create a structure of folder types which, in the end, will
contain articles, which in turn contain comments. The specified numbers are the
average number of children that are created.
You may optionally specify an object store if you want to reference some of the
created content objects in a different actor, like the next one::

    $viewTask = new Task(
        new Actor\SubtreeView(
            $articles
        )
    );
You should provide the actor with an object store so it can pick from a number
of existing content objects which would be viewed by users of an application.
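The executor example below also references $simpleSearchTask and
$sortedSearchTask. A sketch of how such tasks could be wired up, assuming a
hypothetical search actor (the actual actor class and its constructor arguments
are not documented here; consult the bundle's example profile)::

    // Hypothetical actor class and arguments, shown for illustration only.
    $simpleSearchTask = new Task(
        new Actor\Search( $articles )
    );

    $sortedSearchTask = new Task(
        new Actor\Search( $articles, 'name' ) // hypothetical sort field
    );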
Execution
Finally we want to execute our configured scenario consisting of types and
actors. For this an executor is used::

    $executor->run(
        array(
            new Constraint\Ratio( $createTask, 1/10 ),
            new Constraint\Ratio( $viewTask, 1 ),
            new Constraint\Ratio( $simpleSearchTask, 1/3 ),
            new Constraint\Ratio( $sortedSearchTask, 1/5 ),
        ),
        new Aborter\Count(200)
    );
The executor will be provided with an array of Constraint objects, each
associated with a task. In this case Constraint\Ratio objects are used, which
will only execute a task according to the given probability.
The aborter defines when the execution will be halted. It could, for example,
check the number of created content objects or abort after a given time span.
The Count aborter simply aborts after the given number of iterations.
As done in the example file, you may define multiple executors, which will then
be executed one after the other.
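For illustration, a minimal sketch of running two configurations one after the
other. Whether you reuse a single executor instance, as assumed here, or define
several executors is up to your profile file::

    // First run: create content only (ratios and counts are illustrative).
    $executor->run(
        array(
            new Constraint\Ratio( $createTask, 1 ),
        ),
        new Aborter\Count(100)
    );

    // Second run: view-heavy workload, executed after the first run completes.
    $executor->run(
        array(
            new Constraint\Ratio( $viewTask, 1 ),
        ),
        new Aborter\Count(500)
    );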
jMeter Tests
Usage
The jMeter test can be run by simply executing ant in the root directory. On
the first run jMeter will be downloaded; subsequent runs use the already
downloaded files. Ant 1.8 is required to run the example.
The test hits the configured host and will create files providing you with
statistics about the run:
- build/result.jtl

  jMeter log file for further analysis

- build/result.csv

  Simple grouping of response times by URL
Configuration
You can configure the run by creating a file jmeter.properties.local to
overwrite the variables in the jmeter.properties file. You definitely want to
adapt the jmeter.server setting in there to point to the website you want to
put under test. All options are documented in the jmeter.properties file.
The implemented "Random Browser" only executes GET requests, following random
links starting at the configured start page. It will not log in or submit any
forms (searches).
There are two options defining the behaviour of the random surfer:
- crawler.usertype.a.breadth

  On average, how many links are clicked on the same page. Causes the user to
  click more links on the start page and on subsequent pages. (Default: 2)

- crawler.usertype.a.depth

  On average, how deep a user will click through the website. Causes the user
  to follow links deeper into the website structure. (Default: 3)
Another important setting is the jmeter.users value. It defines how many users
will access / surf the website in parallel. The default of 5 means that 5 users
will simultaneously surf the website. With the configured timings this results
in roughly 1 to 2 requests per second.
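Putting it together, a minimal jmeter.properties.local could look like this
(the host name is a placeholder; the property names and values are the ones
documented above)::

    # Host under test (placeholder; point this at your installation)
    jmeter.server=www.example.com

    # Number of parallel users
    jmeter.users=5

    # Random browser behaviour
    crawler.usertype.a.breadth=2
    crawler.usertype.a.depth=3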