Welcome to pyQuil, and thanks for wanting to be a contributor! 🎉
This guide is to help walk you through how to open issues and pull requests for the pyQuil project, as well as share some general how-tos for development, testing, and maintenance.
If all you want to do is ask a question, you should do so in our Rigetti Forest Slack Workspace rather than opening an issue. Otherwise, read on to learn more!
This project and everyone participating in it are governed by pyQuil's Code of Conduct. By contributing, you are expected to uphold this code. Please report unacceptable behavior by contacting support@rigetti.com.
If you've encountered an error or unexpected behavior when using pyQuil, please file a bug report. Make sure to fill out the sections that allow us to reproduce the issue and understand the context of your development environment. We welcome the opportunity to improve pyQuil, so don't be shy if you think you've found a problem!
If you have an idea for a new addition to pyQuil, please let us know by creating a feature request. The more information you can provide, the easier it will be for the pyQuil developers to implement! A clear description of the problem being addressed, a potential solution, and any alternatives you've considered are all great things to include.
If you'd rather tackle an existing issue than open a new one, we have some issue labels that make it easy to figure out where to start. The good first issue label references issues that we think a newcomer wouldn't have too much trouble taking on. In addition, the help wanted label is for issues that the team would like to see completed, but that we don't currently have the bandwidth for.
Once you've selected an issue to tackle, forked the repository, and made your changes, the next step is to open a pull request! We've made opening one easy by providing a Pull Request Template that includes a checklist of things to complete before asking for code review. Additionally, all CI checks must pass before the PR will be merged. We look forward to reviewing your work! 🙂
You may have noticed that the examples
directory has been removed from pyQuil, and a
"launch binder" badge was added to the README. We decided to move all the example notebooks
into a separate repository, rigetti/forest-tutorials, so that they could
be run on Binder, which provides a web-based setup-free execution environment
for Jupyter notebooks. We're always looking for new tutorials to help people
learn about quantum programming, so if you'd like to contribute one, make a pull request
to that repository directly!
Before running any of the below commands, you'll need to install Poetry and run the following from the top-level directory of this repo:
```bash
poetry install
```
We use ruff to enforce lint and formatting requirements as part of CI. You can run these checks yourself locally by running `make check-style` (to check for violations of the linting rules) and `make check-format` (to see if `ruff` would reformat the code) in the top-level directory of the repository. If you aren't presented with any errors, your code satisfies all the linting and formatting requirements. If `make check-format` fails, it will present you with a diff, which you can resolve by running `make format`. The `ruff` formatter is opinionated, but saves a lot of time by removing the need for style nitpicks in PR review. The configuration for `ruff` can be found in `pyproject.toml`.
In addition to linting and formatting, we use type hints for all parameters and return values, following the PEP 484 syntax. This is enforced as part of the CI via the command `make check-types`, which uses the popular static typechecker mypy. For more information on the specific configuration of `mypy` that we use for typechecking, please refer to the `mypy.ini` file. Also, because we use the `typing` module, types (e.g. `:type:` and `:rtype:` entries) should be omitted when writing (useful) Sphinx-style docstrings for classes, methods, and functions.
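For example, a fully annotated function with a Sphinx-style docstring might look like the following sketch (the helper itself is hypothetical; the point is that types live in the signature, not in `:type:`/`:rtype:` entries):

```python
from typing import Dict

def sample_counts(theta: float, shots: int = 1000) -> Dict[str, int]:
    """Hypothetical helper illustrating our docstring conventions.

    The parameter and return types are declared in the signature (PEP 484),
    so the docstring documents meaning only.

    :param theta: Rotation angle in radians.
    :param shots: Number of times to sample.
    :return: A mapping from measured bitstrings to observed counts.
    """
    raise NotImplementedError
```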
All of these style-related tests can be performed locally with a single command, by running the following:
```bash
make check-all
```
We use `pytest` to run the pyQuil unit tests. These are run automatically on Python 3.7 and 3.8 as part of the CI pipeline, but you can run them yourself locally as well. Many of the tests depend on having running QVM and quilc servers. To start them, run each of the following in a separate terminal window:
```bash
docker run --rm -it -p 5555:5555 rigetti/quilc -R -P
docker run --rm -it -p 5000:5000 rigetti/qvm -S
```
Note: The above commands require Docker, but you can also download the QVM and quilc as part of the Forest SDK, and run them directly with `qvm -S` and `quilc -R -P`, respectively.
Once the QVM and quilc servers are running, you can run all the unit/integration tests with:
```bash
make test
```
To skip slow tests, you may run:
```bash
make test-fast
```
You can run documentation tests with:
```bash
make doctest
```
You can run end-to-end tests with:
```bash
make e2e TEST_QUANTUM_PROCESSOR=<quantum processor ID>
```
Or you may run all tests (unit/integration/e2e) with:
```bash
make test-all TEST_QUANTUM_PROCESSOR=<quantum processor ID>
```
Note: for `TEST_QUANTUM_PROCESSOR`, supply a value similar to what you would supply to `get_qc()`. End-to-end tests are most useful against a real QPU, but they can also be run against a QVM (e.g. `make e2e TEST_QUANTUM_PROCESSOR=2q-qvm`).
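In other words, the processor ID is the same string you would pass when requesting a quantum computer from Python. A minimal sketch:

```python
from pyquil import get_qc

# "2q-qvm" requests a generic 2-qubit QVM; a real QPU identifier
# works the same way.
qc = get_qc("2q-qvm")
```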
We utilize doctests to validate the examples in both our docstrings and Sphinx documentation. This ensures that they are correct and remain up to date.
When contributing, consider adding an example to illustrate to users how pyQuil should be used.
To add an example to a docstring, we use Python's doctest module.
As a quick primer, you can add a doctest to a docstring by prefixing your example code with `>>>` and following it with the expected output. For example:
```python
def hello_world():
    """Prints Hello World!

    >>> hello_world()
    Hello World!
    """
    print("Hello World!")
```
To customize how output is validated, take a look at the available option flags.
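For instance, the standard `ELLIPSIS` flag lets the expected output elide uninteresting parts with `...` (a small, self-contained illustration, not code from pyQuil):

```python
def first_twenty():
    """Returns the first twenty non-negative integers.

    >>> first_twenty()  # doctest: +ELLIPSIS
    [0, 1, 2, ..., 19]
    """
    return list(range(20))
```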
If you want to add an example to Sphinx, here's a quick guide. Your example will be split between a hidden `testsetup` block and a visible `testcode` block. Your expected output will go in a `testoutput` block. Each block shares a test name to tell Sphinx that they are related. Building off the previous hello world example:
```rst
.. testsetup:: hello_world

   # Code in the `testsetup` block doesn't appear in the documentation.
   # Put any code that your example might need, but would unnecessarily
   # clutter the documentation, here.
   from foo import hello_world

.. testcode:: hello_world

   # Code in the `testcode` block will appear in the documentation.
   # Include code needed to illustrate your example here.
   hello_world()

.. testoutput:: hello_world

   Hello World!
```
In many cases, this simple structure will suffice, but consider reading the Sphinx doctest documentation for more details on how to use it for more complex examples.
Some tests (particularly those related to operator estimation and readout symmetrization) require a nontrivial amount of computation. For this reason, they have been marked as slow and are not run unless `pytest` is given the `--runslow` option, which is defined in the `conftest.py` file.
For a full, up-to-date list of these slow tests, you may invoke (from the top-level directory):
```bash
grep -A 1 -r pytest.mark.slow test/unit/
```
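A slow test is just an ordinary test carrying the `slow` marker. A sketch (the test body here is hypothetical):

```python
import pytest

@pytest.mark.slow
def test_readout_symmetrization_converges():
    # Expensive test body goes here; it is skipped unless pytest is
    # invoked with the --runslow option defined in conftest.py.
    ...
```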
When making considerable changes to `operator_estimation.py`, we recommend that you set the `pytest` option `--use-seed` (as defined in `conftest.py`) to `False` to make sure you have not broken anything. Thus, the command is:

```bash
pytest --use-seed=False <path/to/test-file-or-dir>
```
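For context, a custom option like this is registered via pytest's standard `pytest_addoption` hook. The sketch below shows the general mechanism only; it is not pyQuil's actual `conftest.py`:

```python
# Sketch only: how an option like --use-seed can be wired up in conftest.py.
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--use-seed",
        action="store",
        default="True",
        help="Seed random number generators for deterministic test runs.",
    )

@pytest.fixture
def use_seed(request) -> bool:
    # Tests request this fixture to decide whether to seed their RNGs.
    return request.config.getoption("--use-seed") == "True"
```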
In addition to testing the source code for correctness, we use `pytest` and the `pytest-cov` plugin to calculate code coverage (via the `make test` command). The coverage report omits the autogenerated parser code, the `external` module, and all of the test code (as is specified in the `.coveragerc` configuration file).
All of the above `pytest` variations can be mixed and matched according to what you're trying to accomplish. For example, if you want to carefully test the operator estimation code, run all of the slow tests, and also calculate code coverage, you could run:

```bash
pytest --cov=pyquil --use-seed=False --runslow <path/to/test-file-or-dir>
```
We use benchmarks to ensure the performance of pyQuil is tracked over time, preventing unintended regressions. Benchmarks are written and run using pytest-benchmark. This plugin provides a fixture called `benchmark` that can be used to benchmark a Python function, as sketched below.
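Here is a minimal sketch of such a benchmark (the test name and benchmarked operation are illustrative, not taken from the actual suite):

```python
from pyquil.quil import Program

def test_bench_program_parsing(benchmark):
    # The benchmark fixture calls the target repeatedly and records timings;
    # it returns the result of a representative call.
    quil_source = "H 0\nCNOT 0 1\n" * 100
    program = benchmark(Program, quil_source)
    assert len(program.instructions) == 200
```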
For organization, all benchmarks are located in the `test/benchmarks` directory. To run the benchmarks, use the command:

```bash
pytest -v test/benchmarks  # or use the Makefile: `make bench`
```
Note that benchmark results are unique to your machine. They can't be directly compared to benchmark
results on another machine unless it's a machine with identical specifications running in a similar
environment. To track performance over time in a controlled way, we use continuous benchmarking.
When a PR is opened, CI will run the benchmarks and compare the results to the most recent results on the `master` branch. Since CI always uses the same image and workflow, the results should be
reasonably consistent. That said, the runners could share resources or do something else unexpected
that impacts the benchmarks. If you get unexpected results, you may want to re-run the benchmark
to see if the results are consistent. When opening or reviewing a PR, you should evaluate the results
and ensure there are no unexpected regressions.
Continuous benchmarking is implemented with bencher. See their documentation for more information.
The pyQuil docs build automatically as part of the CI pipeline. However, you can also build them locally to make sure that everything renders correctly. We use Sphinx to build the documentation, and then host it on Read the Docs (RTD).
Before you can build the docs locally, you must make sure to install `pandoc` via your favorite OS-level package manager (e.g. `brew`, `apt`, `yum`) in order to convert the Changelog into reStructuredText (RST). Once you have done this, run the following from the top-level directory of this repo:

```bash
make docs
```
If the build is successful, then you can navigate to the newly-created `docs/build` directory and open the `index.html` file in your browser (`open index.html` works on macOS, for example). You can then click around the docs just as if they were hosted on RTD, and verify that everything looks right!
The parser is implemented with Lark. See the parser README.
Rather than having a user go through the effort of setting up their local Forest environment (a Python virtual environment with pyQuil installed, along with quilc and qvm servers running), the Forest Docker image gives a convenient way to quickly get started with quantum programming. This is not a wholesale replacement for locally installing the Forest SDK, as Docker containers have ephemeral filesystems, and therefore are not the best solution when the data they produce need to be persisted.
The `rigetti/forest` Docker image is built and pushed to DockerHub automatically as part of the CI pipeline. Developers can also build the image locally by running `make docker` from the top-level directory. This creates an image tagged by a shortened version of the current git commit hash (run `docker images` to see all local images). To then start a container from this image, run:
```bash
docker run -it rigetti/forest:<commit hash>
```
Where `<commit hash>` is replaced by the actual git commit hash. This will drop you into an `ipython` REPL with pyQuil installed and `quilc` / `qvm` servers running in the background. Exiting the REPL (via `C-d`) will additionally shut down the Docker container and return you to the shell that ran the image. Docker images typically only have one running process, but we leverage an `entrypoint.sh` script to initialize the Forest SDK runtime when the container starts up.
The image is defined by its Dockerfile, along with a `.dockerignore` to indicate which files to omit when building the image. It is additionally important to note that this image depends on a collection of parent images, pinned to specific versions. This pinning ensures reproducibility, but requires that these versions be updated manually as necessary. The section of the Dockerfile that would need to be edited looks something like this:
```dockerfile
ARG quilc_version=1.20.0
ARG qvm_version=1.17.1
ARG python_version=3.7
```
Once a version has been changed, committed, and pushed, the CI will then use that new version in all builds going forward.
When merging PRs, we have a couple of guidelines:
- Double-check that the PR author has completed everything in the PR checklist that is applicable to the changes.
- Rename the title of the PR to use the correct Angular-style commit prefix. Also be sure to include the Conventional Commits breaking change indicators in the final commit message if necessary.
- Always use the "squash and merge" option so that every PR corresponds to one commit. This keeps the git history clean and encourages many small (quickly reviewable) PRs rather than behemoth ones with lots of commits.
- When pressing the merge button, each commit message will be turned into a bullet point below the title of the PR. Make sure to truncate the PR title to ~50 characters (unless completely impossible) so it fits on one line in the commit history, and delete any spurious bullet points that add no meaningful content. Also make sure the final commit message is formatted correctly for Conventional Commits.
The CI/CD pipelines that underpin pyQuil are critical for supporting the job of its maintainer. They validate formatting, style, correctness, and good code practice, and also build and distribute the repository via PyPI and DockerHub, all with minimal human intervention. These pipelines almost always work as expected, but every now and then something goes wrong, and it requires a deeper dive.
We use a collection of services for CI/CD: GitLab CI and GitHub Actions (GHA). The configuration for GitLab CI is contained in `.gitlab-ci.yml`, and the GHA configuration is in the `.github/workflows` directory. GHA is responsible for running checks and tests for PRs, while GitLab is responsible for additional tasks that require access to resources that are not available publicly. This includes publishing docs, publishing to PyPI, publishing Docker images, and running end-to-end tests on real QPUs.
All releases are triggered manually in GitHub Actions. There is a workflow for generating a "Prerelease" as well as one for "Release". Prereleases should only be generated if there's something specific that needs additional testing before distributing to everyone. The full release process is as lightweight as possible to encourage quicker releases of merged PRs. Don't feel the need to "wait for more content" before releasing!
The release process will automatically pick the next semantic version, generate a changelog, and publish to GitHub, PyPI, and DockerHub (with image tags `latest` and `<version>`). The docs on Read the Docs will also be updated.
We use a collection of labels to add metadata to the issues and pull requests in the pyQuil project.
| Label | Description |
|---|---|
| bug 🐛 | An issue that needs fixing. |
| devops 🚀 | An issue related to CI/CD. |
| discussion 🤔 | For design discussions. |
| documentation 📝 | An issue for improving docs. |
| enhancement ✨ | A request for a new feature. |
| good first issue 👶 | A place to get started. |
| help wanted 👋 | Looking for takers. |
| quality 🎨 | Improve code quality. |
| refactor 🔨 | Rework existing functionality. |
| work in progress 🚧 | This PR is not ready to be merged. |