So you're looking to contribute to shpy, eh? Do you think you're ready to face the dangers of gumshoeing shell corporations, sleuthing through pipes, and finding justice for those processes killed in cold blood? Grab your badge, it's time for your first mission!
- Getting Started
- Project Structure
- Development
- Testing
- Submitting Changes
- Release Checklist
- Getting in Touch
## Getting Started

Before you dive in, there are a few ground rules for shpy development:

- shpy must remain POSIX compliant, with the exception of the `local` keyword
- Support for new shells can be added, but shpy can never lose support for a shell
- All changes must be covered by existing or new unit tests
- New features must have documentation
These resources are a big help in understanding what POSIX compliance entails:

- The POSIX Shell And Utilities - Provides quick links to different portions of the POSIX shell spec
- Insufficiently known POSIX shell features - Justifies the use of `local` in shell scripting, despite `local` not being POSIX-compliant
- Dash as `/bin/sh` - Details common "bashisms" in shell scripting and how to rewrite them for POSIX compliance
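As a concrete illustration of the kind of rewrite described in Dash as `/bin/sh`, here is a small sketch (variable names are arbitrary) replacing a bash-only string substitution with a POSIX-compliant pipeline:

```sh
#!/bin/sh
# Bashism: ${var//pattern/replacement} is not POSIX and fails in dash
greeting='hello world'
# bash-only: echo "${greeting//world/shpy}"

# POSIX rewrite: use a standard utility such as sed instead
posix_greeting=$(printf '%s' "$greeting" | sed 's/world/shpy/')
echo "$posix_greeting"   # prints "hello shpy"
```

Running `checkbashisms` over your changes will flag constructs like the one above before CI does.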
## Project Structure

- `test/` - Testing directory
  - `run_tests` - Script to run all tests, returns 0 on success
  - `test_*` - Individual test files, organized by functionality under test
- `examples/` - Practical examples of shpy usage
  - `*/` - An individual example project, such as `coverfetch`
    - `README.md` - Basic example overview and walkthrough of writing the tests
    - `*` - Documented shell script for the example, such as `coverfetch`
    - `test.sh` - Documented tests for the example
- `hooks/` - Hooks for the automated Docker image builds
- `.travis.yml` - Configures continuous integration with Travis CI
- `Dockerfile` - Defines the shpy image used in testing and production
- `docker-compose.yml` - Coordinates execution of multiple shpy containers for testing
- `shpy-shunit2` - The bindings and helpers for integrating shpy with the shunit2 unit testing framework
- `shpy` - The entirety of the shpy codebase
## Development

Shpy is written in POSIX-compliant shell script, with the exception of the `local` keyword. The only required development tool is Docker.

When a spy is first created, `_shpy_inited` is set in the environment and a temporary working directory is created. A `bin` directory is also created and prepended to the `PATH`.
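To see why the `bin` directory matters, here is a rough sketch (illustrative only, not shpy's actual implementation) of how prepending a directory to `PATH` lets a generated executable shadow the real command:

```sh
#!/bin/sh
# Create a temporary bin directory, as shpy does when a spy is created
tmpdir=$(mktemp -d)
mkdir "$tmpdir/bin"

# A stand-in "spy": an executable named after the command it intercepts
printf '#!/bin/sh\necho intercepted\n' > "$tmpdir/bin/git"
chmod +x "$tmpdir/bin/git"

# Prepending to PATH makes the spy win command resolution
PATH="$tmpdir/bin:$PATH"
git status   # resolves to the spy and prints "intercepted"

rm -rf "$tmpdir"
```

Because the spy is a real executable on disk, this interception works in subshells and child processes too, not just the current shell.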
Spies are implemented with the following metadata written to disk:

- `$_shpy_spies_dir/`
  - `bin/$spy_name` - Executable shell script to run the spy
  - `outputs/$spy_name/` - Contains numbered files, starting from 0, for the spy's output to stdout when called
  - `errors/$spy_name/` - Contains numbered files, starting from 0, for the spy's output to stderr when called
  - `$spy_name/` - Contains numbered directories, starting from 0, for each call to the spy
    - Each numbered directory contains numbered files, starting from 0, with each individual argument to that call
The following environment variables are also set and exported for each spy:

- `_shpy_${spy_name}_status_codes` - Space-delimited list of status codes to return for the spy, defaults to `"0"`
- `_shpy_${spy_name}_current` - Index of the current call to the spy being examined, used by `wasSpyCalledWith`
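To illustrate how a space-delimited status-code list can be consumed one code at a time, POSIX parameter expansion is enough (a sketch only; shpy's internals may differ, and the variable name is just an example):

```sh
#!/bin/sh
# Example list, as might be set by creating a spy with two return values
_shpy_mySpy_status_codes='1 0'

# Pop the first code off the list; keep the last code for reuse
next_code=${_shpy_mySpy_status_codes%% *}
rest=${_shpy_mySpy_status_codes#* }
[ "$rest" != "$_shpy_mySpy_status_codes" ] && _shpy_mySpy_status_codes=$rest

echo "$next_code"                     # prints "1"
echo "$_shpy_mySpy_status_codes"      # prints "0"
```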
## Testing

Tests are organized in the `test/` directory by function under test, with files named `test_<function_under_test>`. Each file contains one function per unit test, prefixed with `it` and followed by a description of the test. An example from `test/test_createSpy`:

```sh
itReturns1WhenCreatingSpyWithoutArgs() {
    createSpy >/dev/null
    assertEquals 1 $?
}
```
When writing tests, your assertion message should be lowercase and specify what went wrong, not what was expected. Remember that expected values come before actual values!

```sh
createSpy -o 'hello world' helloSpy
assertEquals 'unexpected spy output' 'hello world' "$(helloSpy)"
```
Tests should verify the expected stdout, expected stderr, and expected return value, even if the expected output is nothing. In some cases it may make sense to omit some of these tests, and that's perfectly ok!
If your test involves calling a spy, you should create a second test case for the same condition that runs the spy in a new shell with `runInNewShell`. This ensures shpy works for both sourced and executed scripts.
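The distinction matters because shell functions do not survive into a new process, while on-disk spies do. A minimal stand-in for the idea behind `runInNewShell` (illustrative names only, not shpy's source) might look like:

```sh
#!/bin/sh
# Run a command string in a fresh shell process, as an executed
# (rather than sourced) script would see it
runInNewShellSketch() {
    sh -c "$*"
}

# An on-disk executable, like a spy in shpy's bin/, works in both cases
tmpdir=$(mktemp -d)
printf '#!/bin/sh\necho "got $# args"\n' > "$tmpdir/mySpy"
chmod +x "$tmpdir/mySpy"
PATH="$tmpdir:$PATH"

mySpy one two                                         # same shell: got 2 args
runInNewShellSketch 'mySpy --message "hello world"'   # new shell: got 2 args

rm -rf "$tmpdir"
```

Quoting the whole command string preserves argument whitespace across the new shell boundary, which is why the real `runInNewShell` documents the same caveat.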
The `assertDies` function is provided for tests that expect `_shpy_die` to be called. It takes the command to run as a string, an optional expected death message, and an optional expected exit status:

```sh
assertDies 'createSpy -z' 'Error: Unknown option -z' 1
```
The `doOrDie` function is provided to fail a test if the given code returns a non-zero exit status. Use it in situations where code is not expected to fail:

```sh
doOrDie createSpy mySpy
```
The `runInNewShell` function is provided to run a command in a new process of the parent shell. This is useful for simulating shell scripts that are executed rather than sourced:

```sh
runInNewShell mySpy --some --args

# To preserve argument whitespace, you may need to wrap your command in quotes
runInNewShell 'mySpy --message "hello world" file1 file2'
```
Your code can be tested under multiple shells using the Docker image. To run tests with all supported shells, as well as the analysis tools and code coverage, run Docker Compose as follows:

```sh
docker-compose up --build \
    && docker-compose ps | grep -v 'Exit 0'
```
To run tests for an example, set the `CMD` environment variable as follows, where `<example>` is the name of the example (`renamer`, `coverfetch`):

```sh
CMD=/shpy/examples/<example>/test.sh docker-compose up --build \
    && docker-compose ps | grep -v 'Exit 0'
```
If any services show a non-zero exit state, you can view the output from that service with `docker-compose logs <service>` (e.g. `docker-compose logs shellcheck`).

Code coverage results will be available in the `coverage/` directory at the root of the repo. Opening `coverage/index.html` gives you a web interface to the results.
To run tests under a specific shell, or to run a specific analysis tool, use one of the following commands:

| Command | Purpose |
|---|---|
| `docker-compose run ash` | Run all tests with `sh`, which on Alpine Linux is BusyBox's `ash` |
| `docker-compose run bash` | Run all tests with `bash` |
| `docker-compose run checkbashisms` | Check all sources and tests for bash-specific functionality |
| `docker-compose run kcov` | Generate coverage reports for all sources and tests with kcov |
| `docker-compose run mksh` | Run all tests with `mksh`, the successor to pdksh |
| `docker-compose run shellcheck` | Run static analysis on all sources and tests for warnings and suggestions |
| `docker-compose run zsh` | Run all tests with `zsh` |
## Submitting Changes

Once your code is polished and tests are passing, it's time to submit a pull request! When creating your PR, it's a good idea for the description to explain what was changed and why the change was needed.

Once the CI build for your branch passes and a project owner reviews your code (which should happen within a few days), your change will be rebased into the master branch and your contribution is complete! Thanks! 💖
## Release Checklist

Shpy releases are versioned using the semver `major.minor.patch` format.

| Segment | Reason to bump |
|---|---|
| Major | Breaking changes to the API, such as renaming an existing public function |
| Minor | New functionality is added, such as improving the performance of spies or supporting the sourcing of spies |
| Patch | Bug fixes, such as fixing an issue where spies did not work with `nounset` set |

During a version bump, all segments to the right of the bumped segment are reset to 0, such as 0.0.1 to 0.1.0, 1.2.3 to 2.0.0, and so on.
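The reset rule can be sketched as a small POSIX function (illustrative only, not part of the repo):

```sh
#!/bin/sh
# bump <major|minor|patch> <x.y.z>
# Bumping a segment zeroes every segment to its right
bump() {
    IFS=. read -r major minor patch <<EOF
$2
EOF
    case $1 in
        major) echo "$((major + 1)).0.0" ;;
        minor) echo "$major.$((minor + 1)).0" ;;
        patch) echo "$major.$minor.$((patch + 1))" ;;
    esac
}

bump minor 0.0.1   # prints "0.1.0"
bump major 1.2.3   # prints "2.0.0"
```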
Before creating a new release, run through this checklist to ensure nothing is forgotten!

- Update the `SHPY_VERSION` variable in `shpy` and its test in `test/test_testEnvironment`
- Document any public-facing changes in `README.md`
- Document any architectural or internal API changes in `CONTRIBUTING.md`
- Once the PR is approved, tag the head of `master` and push:

  ```sh
  git tag -a x.y.z -m 'brief description of changes'
  git push origin x.y.z
  ```

- Create a release on GitHub with a fully fleshed description of changes
## Getting in Touch

For bugs, you can create a new issue in the tracker. Be sure to describe what you did, what you expected, and what actually happened. If there's anything you tried in response to the issue, that's good to know as well!
For questions or concerns, feel free to reach out to @codehearts on Twitter!