
Releases: Stable-Baselines-Team/stable-baselines3-contrib

SB3-Contrib v2.4.0: New algorithm (CrossQ), Gymnasium v1.0 support

18 Nov 10:33
d5ac968

Breaking Changes:

  • Upgraded to Stable-Baselines3 >= 2.4.0

New Features:

  • Added CrossQ algorithm, from the "Batch Normalization in Deep Reinforcement Learning" paper (@danielpalen); see the usage sketch after this list
  • Added BatchRenorm PyTorch layer used in CrossQ (@danielpalen)
  • Added support for Gymnasium v1.0
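
A minimal usage sketch of CrossQ; the environment, timestep budget and save path below are placeholders, not recommended settings:

import gymnasium as gym

from sb3_contrib import CrossQ

# CrossQ is an off-policy algorithm for continuous actions,
# so a Box-action environment such as Pendulum is used here
env = gym.make("Pendulum-v1")
model = CrossQ("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)
model.save("crossq_pendulum")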

Bug Fixes:

  • Updated QR-DQN optimizer input to only include quantile_net parameters (@corentinlger)
  • Updated QR-DQN paper link in docs (@corentinlger)
  • Fixed a warning with PyTorch 2.4 when loading a RecurrentPPO model ("You are using torch.load with weights_only=False")
  • Fixed an issue where loading a QRDQN model changed target_update_interval (@jak3122)

Others:

  • Updated PyTorch version on CI to 2.3.1
  • Removed unnecessary SDE noise resampling in the PPO/TRPO update
  • Switched to uv to download packages on GitHub CI

Full Changelog: v2.3.0...v2.4.0

SB3-Contrib v2.3.0: New defaults hyperparameters for QR-DQN

31 Mar 18:41
5102922

Breaking Changes:

  • Upgraded to Stable-Baselines3 >= 2.3.0
  • The default learning_starts parameter of QRDQN has been changed to be consistent with the other off-policy algorithms
# SB3 < 2.3.0 default hyperparameters, 50_000 corresponded to Atari defaults hyperparameters
# model = QRDQN("MlpPolicy", env, learning_starts=50_000)
# SB3 >= 2.3.0:
model = QRDQN("MlpPolicy", env, learning_starts=100)

New Features:

  • Added rollout_buffer_class and rollout_buffer_kwargs arguments to MaskablePPO (see the sketch after this list)
  • Log success rate rollout/success_rate when available for on-policy algorithms
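
A hedged sketch of the new buffer arguments; the custom buffer subclass is purely illustrative, and InvalidActionEnvDiscrete is used only as a convenient action-masking test environment:

from sb3_contrib import MaskablePPO
from sb3_contrib.common.envs import InvalidActionEnvDiscrete
from sb3_contrib.common.maskable.buffers import MaskableRolloutBuffer

# Hypothetical user-defined buffer (could log extra statistics, for instance)
class CustomMaskableBuffer(MaskableRolloutBuffer):
    pass

env = InvalidActionEnvDiscrete(dim=20, n_invalid_actions=10)
model = MaskablePPO(
    "MlpPolicy",
    env,
    rollout_buffer_class=CustomMaskableBuffer,
    rollout_buffer_kwargs={},  # forwarded to the buffer constructor
)
model.learn(5_000)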

Others:

  • Fixed train_freq type annotation for tqc and qrdqn (@Armandpl)
  • Fixed sb3_contrib/common/maskable/*.py type annotations
  • Fixed sb3_contrib/ppo_mask/ppo_mask.py type annotations
  • Fixed sb3_contrib/common/vec_env/async_eval.py type annotations

Documentation:

  • Add some additional notes about MaskablePPO (evaluation and multi-process) (@icheered)

Full Changelog: v2.2.1...v2.3.0

SB3-Contrib v2.2.1

17 Nov 23:37
707cb0f

SB3 Contrib (more algorithms): https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
RL Zoo3 (training framework): https://github.com/DLR-RM/rl-baselines3-zoo
Stable-Baselines Jax (SBX): https://github.com/araffin/sbx

Breaking Changes:

  • Upgraded to Stable-Baselines3 >= 2.2.1
  • Switched to ruff for sorting imports (isort is no longer needed); black and ruff now require a minimum version
  • Dropped x is False in favor of not x, which means that callbacks that wrongly returned None (instead of a boolean) will cause the training to stop (@iwishiwasaneagle)
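
As a consequence, custom callbacks must return an explicit boolean from _on_step(); a minimal sketch (the callback name is illustrative):

from stable_baselines3.common.callbacks import BaseCallback

class MyCallback(BaseCallback):
    def _on_step(self) -> bool:
        # Returning None would now stop training; return True to continue
        return True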

New Features:

  • Added set_options for AsyncEval
  • Added rollout_buffer_class and rollout_buffer_kwargs arguments to TRPO

Others:

  • Fixed ActorCriticPolicy.extract_features() signature by adding an optional features_extractor argument
  • Update dependencies (accept newer Shimmy/Sphinx version and remove sphinx_autodoc_typehints)

SB3-Contrib v2.1.0

20 Aug 12:15
67d3eef

SB3 Contrib (more algorithms): https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
RL Zoo3 (training framework): https://github.com/DLR-RM/rl-baselines3-zoo
Stable-Baselines Jax (SBX): https://github.com/araffin/sbx

Breaking Changes:

  • Removed Python 3.7 support
  • SB3 now requires PyTorch >= 1.13
  • Upgraded to Stable-Baselines3 >= 2.1.0

New Features:

  • Added Python 3.11 support

Bug Fixes:

  • Fixed MaskablePPO ignoring stats_window_size argument

Full Changelog: v2.0.0...v2.1.0

SB3-Contrib v2.0.0: Gymnasium Support

23 Jun 13:00
de92025

Warning
Stable-Baselines3 (SB3) v2.0 will be the last version supporting Python 3.7 (end of life in June 2023).
We highly recommend upgrading to Python >= 3.8.

SB3 Contrib (more algorithms): https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
RL Zoo3 (training framework): https://github.com/DLR-RM/rl-baselines3-zoo
Stable-Baselines Jax (SBX): https://github.com/araffin/sbx

To upgrade:

pip install stable_baselines3 sb3_contrib rl_zoo3 --upgrade

or simply (rl zoo depends on SB3 and SB3 contrib):

pip install rl_zoo3 --upgrade

Breaking Changes:

  • Switched to Gymnasium as the primary backend (see the sketch after this list); Gym 0.21 and 0.26 are still supported via the shimmy package (@carlosluis, @arjun-kg, @tlpss)
  • Upgraded to Stable-Baselines3 >= 2.0.0
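
A minimal sketch of training a contrib algorithm on a Gymnasium environment; the environment id and timestep budget are arbitrary:

import gymnasium as gym

from sb3_contrib import TRPO

env = gym.make("CartPole-v1")
model = TRPO("MlpPolicy", env)
model.learn(total_timesteps=5_000)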

Bug Fixes:

  • Fixed QRDQN update interval for multi envs

Others:

  • Fixed sb3_contrib/tqc/*.py type hints
  • Fixed sb3_contrib/trpo/*.py type hints
  • Fixed sb3_contrib/common/envs/invalid_actions_env.py type hints

Full Changelog: v1.8.0...v2.0.0

SB3-Contrib v1.8.0

08 Apr 16:19
a84ad3a

Warning
Stable-Baselines3 (SB3) v1.8.0 will be the last one to use Gym as a backend.
Starting with v2.0.0, Gymnasium will be the default backend (though SB3 will have compatibility layers for Gym envs).
You can find a migration guide here.
If you want to try the SB3 v2.0 alpha version, you can take a look at PR #1327.

RL Zoo3 (training framework): https://github.com/DLR-RM/rl-baselines3-zoo

To upgrade:

pip install stable_baselines3 sb3_contrib rl_zoo3 --upgrade

or simply (rl zoo depends on SB3 and SB3 contrib):

pip install rl_zoo3 --upgrade

Breaking Changes:

  • Removed shared layers in mlp_extractor (@AlexPasqua)
  • Upgraded to Stable-Baselines3 >= 1.8.0

New Features:

  • Added stats_window_size argument to control smoothing in rollout logging (@jonasreiher)
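
A sketch of the new argument with an arbitrary window, assuming it is exposed on the contrib algorithms the same way as in SB3; it controls how many recent episodes the rollout/ep_* logs average over:

from sb3_contrib import TRPO

# Average rollout/ep_rew_mean over the last 10 episodes instead of the default 100
model = TRPO("MlpPolicy", "CartPole-v1", stats_window_size=10)
model.learn(10_000)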

Others:

  • Moved to pyproject.toml
  • Added github issue forms
  • Fixed Atari Roms download in CI
  • Fixed sb3_contrib/qrdqn/*.py type hints
  • Switched from flake8 to ruff

Documentation:

  • Added warning about potential crashes caused by check_env in the MaskablePPO docs (@AlexPasqua)

SB3-Contrib v1.7.0: Bug fixes for PPO LSTM and quality-of-life improvements

10 Jan 21:41
7bf9cf3

Warning
Shared layers in MLP policy (mlp_extractor) are now deprecated for PPO, A2C and TRPO.
This feature will be removed in SB3 v1.8.0, and net_arch=[64, 64] will then
create separate networks with the same architecture, to be consistent with the off-policy algorithms.

Note
TRPO models saved with SB3 < 1.7.0 will show a warning about
missing keys in the state dict when loaded with SB3 >= 1.7.0.
To suppress the warning, simply save the model again.
You can find more info in issue #1233
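
For example, a sketch of silencing the warning by re-saving; the path is a placeholder:

from sb3_contrib import TRPO

# Loading a model saved with SB3 < 1.7.0 triggers the missing-keys warning once
model = TRPO.load("path/to/old_trpo_model")
# Saving it again with SB3 >= 1.7.0 makes subsequent loads warning-free
model.save("path/to/old_trpo_model")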

Breaking Changes:

  • Removed deprecated create_eval_env, eval_env, eval_log_path, n_eval_episodes and eval_freq parameters;
    please use an EvalCallback instead (see the sketch after this list)
  • Removed deprecated sde_net_arch parameter
  • Upgraded to Stable-Baselines3 >= 1.7.0
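
A sketch of the EvalCallback replacement for the removed parameters; the environment, evaluation frequency and save path are placeholders:

import gym

from stable_baselines3.common.callbacks import EvalCallback

from sb3_contrib import TRPO

eval_env = gym.make("CartPole-v1")
# Evaluate every 1,000 steps on 5 episodes and keep the best model
eval_callback = EvalCallback(
    eval_env,
    eval_freq=1_000,
    n_eval_episodes=5,
    best_model_save_path="./logs/",
)
model = TRPO("MlpPolicy", "CartPole-v1")
model.learn(10_000, callback=eval_callback)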

New Features:

  • Introduced mypy type checking
  • Added support for Python 3.10
  • Added with_bias parameter to ARSPolicy
  • Added option to have non-shared features extractor between actor and critic in on-policy algorithms (@AlexPasqua)
  • Features extractors now properly support unnormalized image-like observations (3D tensor)
    when passing normalize_images=False

Bug Fixes:

  • Fixed a bug in RecurrentPPO where the LSTM states were incorrectly reshaped for n_lstm_layers > 1 (thanks @kolbytn)
  • Fixed RuntimeError: rnn: hx is not contiguous while predicting terminal values for RecurrentPPO when n_lstm_layers > 1

Deprecations:

  • You should now explicitly pass a features_extractor parameter when calling extract_features()
  • Deprecated shared layers in MlpExtractor (@AlexPasqua)

Others:

  • Fixed flake8 config
  • Fixed sb3_contrib/common/utils.py type hint
  • Fixed sb3_contrib/common/recurrent/type_aliases.py type hint
  • Fixed sb3_contrib/ars/policies.py type hint
  • Exposed modules in __init__.py with __all__ attribute (@ZikangXiong)
  • Removed ignores on Flake8 F401 (@ZikangXiong)
  • Upgraded GitHub CI/setup-python to v4 and checkout to v3
  • Set tensors construction directly on the device
  • Standardized the use of from gym import spaces

SB3-Contrib v1.6.2: Progress bar

10 Oct 16:47
52795a3

Breaking Changes:

  • Upgraded to Stable-Baselines3 >= 1.6.2

New Features:

  • Added progress_bar argument in the learn() method, displayed using TQDM and rich packages
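
For example (environment and timestep budget are arbitrary; the progress bar needs the optional tqdm and rich dependencies):

from sb3_contrib import QRDQN

model = QRDQN("MlpPolicy", "CartPole-v1")
model.learn(total_timesteps=10_000, progress_bar=True)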

Deprecations:

  • Deprecated the eval_env, eval_freq and create_eval_env parameters

Others:

  • Fixed the return type of .load() methods so that they now use TypeVar

SB3-Contrib v1.6.1: Bug fix release

29 Sep 11:10
2490468

Breaking Changes:

  • Fixed an issue where predict() did not always return the action as an np.ndarray (@qgallouedec)
  • Upgraded to Stable-Baselines3 >= 1.6.1

Bug Fixes:

  • Fixed the issue of wrongly passing policy arguments when using CnnLstmPolicy or MultiInputLstmPolicy with RecurrentPPO (@mlodel)
  • Fixed a division-by-zero error when computing FPS when a small amount of time has elapsed on operating systems with low-precision timers
  • Fixed calling child callbacks in MaskableEvalCallback (@CppMaster)
  • Fixed missing verbose parameter passing in the MaskableEvalCallback constructor (@BurakDmb)
  • Fixed an issue where the running_mean and running_var properties of batch-norm layers were not updated when updating the target network in QRDQN and TQC (@honglu2875)

Others:

  • Changed the default buffer device from "cpu" to "auto"

SB3-Contrib v1.6.0: RecurrentPPO (aka PPO LSTM) and better defaults for learning from pixels with off-policy algorithms

12 Jul 21:14
087951d

Breaking Changes:

  • Upgraded to Stable-Baselines3 >= 1.6.0
  • Changed the way policy "aliases" are handled ("MlpPolicy", "CnnPolicy", ...), removing the former
    register_policy helper and policy_base parameter in favor of policy_aliases static attributes (@Gregwar)
  • Renamed rollout/exploration rate key to rollout/exploration_rate for QRDQN (to be consistent with SB3 DQN)
  • Upgraded to Python 3.7+ syntax using pyupgrade
  • SB3 now requires PyTorch >= 1.11
  • Changed the default network architecture when using CnnPolicy or MultiInputPolicy with TQC:
    share_features_extractor is now set to False by default and net_arch=[256, 256] (instead of net_arch=[] as before)
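
To recover the previous behaviour, the old values can be passed explicitly through policy_kwargs; a sketch, using CarRacing-v0 only as a stand-in for an image-observation, continuous-action environment (it needs the Box2D extra):

import gym

from sb3_contrib import TQC

env = gym.make("CarRacing-v0")
# Restore the pre-1.6.0 defaults: shared features extractor and no extra layers
model = TQC(
    "CnnPolicy",
    env,
    policy_kwargs=dict(net_arch=[], share_features_extractor=True),
)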

New Features:

  • Added RecurrentPPO (aka PPO LSTM)
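
A minimal usage sketch; the environment and timestep budget are placeholders:

from sb3_contrib import RecurrentPPO

# "MlpLstmPolicy" adds an LSTM on top of the MLP feature extractor
model = RecurrentPPO("MlpLstmPolicy", "CartPole-v1", verbose=1)
model.learn(total_timesteps=10_000)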

Bug Fixes:

  • Fixed a bug in RecurrentPPO when calculating the masked loss functions (@rnederstigt)
  • Fixed a bug in TRPO where the KL divergence was not implemented for MultiDiscrete spaces