Spark ✨

Toolkit to develop, test and run PHP applications.

Spark provides a turnkey Docker-based environment for development and continuous integration. It ships commands to create anonymized database exports, execute test suites, initialize a Solr index, etc. Spark simply needs to be added as your project's dependency, and after a few minimal configuration steps you're ready to go.

"Concerning toolkits", an article published by Kent C. Dodds had been a great inspiration for architecting Spark., (*3)

Roadmap

  • We're in the middle of implementing key database interactions so that Spark can be used to create GDPR-compliant, anonymized database exports. (See our board here.)
  • Once the database features are in place, we'll turn to preparing our first alpha release, which will introduce a more flexible way of defining the required services for project environments.

Getting Started — How to Sparkify your project

Check out the Drupal 8 example project.

Here are the main steps:

1. Add Spark as a dependency:

    $ composer require bluesparklabs/spark

2. Define a new script in your composer.json:

"scripts": {
  "spark": "SPARK_WORKDIR=`pwd` robo --ansi --load-from vendor/bluesparklabs/spark"
}

3. Add autoload information to the autoload field in your composer.json:

"autoload": {
    "psr-4": {
        "BluesparkLabs\\Spark\\": "./vendor/bluesparklabs/spark/src/"
    }
},
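
After changing the autoload field, you may need to regenerate Composer's autoloader (Composer also does this automatically on install and update):

    $ composer dump-autoload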

4. Create a file named .spark.yml in your project's root. This will be your project-specific configuration that Spark will use.

To learn about how to write your project-specific configuration, please refer to our .spark.example.yml file.

5. Optional: Create a file named .spark.local.yml in your project's root. This will be your environment-specific configuration that Spark will use. Do not commit this file to your repository. If you want to leverage environment-specific configuration for CI builds or in your hosting environment, the recommended way is to keep these files in your repository under environment-specific names, e.g. .spark.local.ci.yml, and ensure you have automation in place that renames the appropriate file to .spark.local.yml in that environment.
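
As an illustration, a minimal sketch of such automation, assuming the CI-specific configuration is committed as .spark.local.ci.yml and your CI tool can run shell commands before the build:

    # Hypothetical CI step: activate the CI-specific Spark configuration
    # by copying it to the file name Spark actually reads.
    $ cp .spark.local.ci.yml .spark.local.yml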

See the Drupal 8 example project's composer.json file.

1. Composer by default installs all packages under a directory called ./vendor. Use composer/installers to define installation destinations for Drupal modules, themes etc. Example configuration in composer.json:

"extra": {
    "installer-paths": {
        "web/core": ["type:drupal-core"],
        "web/libraries/{$name}": ["type:drupal-library"],
        "web/modules/contrib/{$name}": ["type:drupal-module"],
        "web/profiles/contrib/{$name}": ["type:drupal-profile"],
        "web/themes/contrib/{$name}": ["type:drupal-theme"],
        "drush/contrib/{$name}": ["type:drupal-drush"]
    }
}

2. If you're working with a Drupal site, use drupal-composer/drupal-scaffold to install and update files that live outside of the core folder and are not part of the drupal/core package. This Composer plugin takes care of these files whenever you install or update drupal/core, but to run it manually you can add a script to your composer.json:

"scripts": {
    "drupal-scaffold": "DrupalComposer\\DrupalScaffold\\Plugin::scaffold",
},
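
With that script in place, the scaffold step should also be runnable by hand:

    $ composer run drupal-scaffold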

3. Spark has a command, drupal:files, to ensure that the files folder exists with the right permissions, and that there is a settings.php file and a settings.spark.php file, which currently holds Spark's Docker-specific configuration, i.e. database connection etc. You may want to add this command to the scripts field in your composer.json, so that Composer executes it whenever packages are installed or updated:

"scripts": {
    "post-install-cmd": "composer run spark drupal:files",
    "post-update-cmd": "composer run spark drupal:files",
}

Usage

This is how you can run a Spark command:

$ composer run spark <command>

Tip: Set up spark as a command-line alias for composer run spark.
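
For example, an alias along these lines (the name spark is just a convention) can be added to your shell profile, e.g. ~/.bashrc or ~/.zshrc:

    # Let `spark <command>` run Spark through Composer.
    alias spark='composer run spark'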

To list all available commands, just issue $ composer run spark. Here is a high-level overview of what you can currently do with Spark:

| Command namespace | Description |
| --- | --- |
| drush | Execute Drush commands |
| containers | Manage a Docker-based environment |
| db | Drop or import the database, check its availability |
| drupal | (Being deprecated.) Create a backup and upload it to an Amazon S3 bucket, ensure the files directory and settings.php files, install Drupal |
| mysql | Import or export the database. (Will eventually deprecate the db command group.) |
| solr | Initialize a Solr core with configuration for Drupal, check its availability |
| test | Execute test suite |

Commands

Notes:

  • Commands will be documented here as they become ready for prime time.
  • ⚠️ When using command-line arguments, you need to include a double-dash (--) before your arguments. E.g. composer run spark mysql:dump -- --non-sanitized. (See the reason for this and the proposed solution.)

mysql:dump

Exports the database to a file. By default the file is placed in the current folder and the data is sanitized based on the sanitization rules in .spark.yml. The following command-line arguments are optional.

| Argument | Description |
| --- | --- |
| --non-sanitized | Produces a non-sanitized data export. |
| --destination | Directory where the export file will be placed. Can be an absolute or a relative path. |
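
For example, assuming the option syntax --destination=<dir> and an illustrative ./backups directory, the command could be invoked like this:

    # Sanitized export into the current directory (default behaviour)
    $ composer run spark mysql:dump

    # Non-sanitized export into a custom directory
    $ composer run spark mysql:dump -- --non-sanitized --destination=./backups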
