Release Notes¶
MPE2 v1.0.0 ¶
Released on 2026-04-03 - GitHub - PyPI
Multi Particle Environments 2 (MPE2) v1.0.0 Release Notes
Several major changes have been made to MPE2 since the original release of the multi-particle environments, warranting a major version release. These changes are documented below.
New Environments
Two new environments were added. These environments are described below, with additional documentation in their files and the MPE2 documentation website.
Collect Treasure Environment
This was inspired by the collect treasure environment from https://github.com/shariqiqbal2810/MAAC/blob/master/envs/mpe_scenarios/fullobs_collect_treasure.py.
It is a cooperative multi-agent task in which collector agents must pick up treasures and deliver them to the matching deposit agent. It consists of two types of agents: collectors and depositors. Treasures are landmarks that appear at random positions. When a collector touches a treasure (is within collision distance), the collector picks it up and the treasure disappears. The treasure then respawns at a new random location on the next step. When a collector carrying a treasure touches the matching deposit agent, the treasure is delivered, the collector's inventory is cleared, and it turns grey again.
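The pickup-and-deliver loop described above can be sketched in plain Python. This is an illustrative sketch only, not the MPE2 implementation; all names (`Collector`, `step_treasure`, `try_deliver`, `COLLISION_DIST`, `ARENA`) are assumed for the example.

```python
import random

# Illustrative sketch of the collect-treasure pickup/delivery logic.
# Names and constants are assumptions, not the actual MPE2 code.
COLLISION_DIST = 0.1  # touch distance for pickup/delivery
ARENA = 1.0           # treasures respawn uniformly in [-ARENA, ARENA]^2

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

class Collector:
    def __init__(self, pos):
        self.pos = pos
        self.holding = None  # treasure type currently carried, or None

def step_treasure(collector, treasure_pos, treasure_type, rng=random):
    """If the collector touches the treasure, pick it up and respawn it."""
    if collector.holding is None and dist(collector.pos, treasure_pos) < COLLISION_DIST:
        collector.holding = treasure_type
        # The treasure disappears and respawns at a new random location.
        treasure_pos = (rng.uniform(-ARENA, ARENA), rng.uniform(-ARENA, ARENA))
    return treasure_pos

def try_deliver(collector, deposit_pos, deposit_type):
    """Deliver if the carried treasure matches the touched deposit agent."""
    if (collector.holding == deposit_type
            and dist(collector.pos, deposit_pos) < COLLISION_DIST):
        collector.holding = None  # inventory cleared; collector turns grey
        return True
    return False
```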
Formation Control Environments
This is inspired by the formation control environments from https://github.com/sumitsk/marl_transfer/tree/master/mape/multiagent/scenarios.
Formation control is a classic MARL task, making these environments a natural addition to MPE. In these environments, the agents are required to arrange themselves in a formation.
Two environments were added: simple_line and simple_formation. In simple_formation, N agents must arrange themselves in a circle of radius 0.5 around a central landmark. In simple_line, N agents must arrange themselves in a line between two landmarks.
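The target formations above can be computed with a little geometry. This is a minimal sketch, not the MPE2 source; the function names and even spacing along the line are assumptions.

```python
import math

# Illustrative target positions for the two formation environments:
# a circle of radius 0.5 around a central landmark (simple_formation),
# and points evenly spaced between two landmarks (simple_line).
def circle_formation(center, n, radius=0.5):
    """n target points on a circle of the given radius around center."""
    return [(center[0] + radius * math.cos(2 * math.pi * i / n),
             center[1] + radius * math.sin(2 * math.pi * i / n))
            for i in range(n)]

def line_formation(a, b, n):
    """n target points evenly spaced on the segment from a to b."""
    return [(a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)
            for t in (i / (n - 1) for i in range(n))]
```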
Partial Observability
Two types of partial observability support were added to select MPE environments: observing only the N-nearest landmarks and only the N-closest agents. They were added in #29 using a helper script, allowing for easier extension to other MPE environments as needed.
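The restriction amounts to filtering the observation down to the N closest entities. A minimal sketch, with assumed names (not the actual helper script):

```python
# Illustrative N-nearest filter: the agent observes only the n entity
# positions closest to it, sorted by distance. Names are assumptions.
def n_nearest(agent_pos, positions, n):
    """Return the n positions closest to agent_pos, nearest first."""
    def sq_dist(p):
        return (p[0] - agent_pos[0]) ** 2 + (p[1] - agent_pos[1]) ** 2
    return sorted(positions, key=sq_dist)[:n]
```

The same filter works for landmarks and for other agents, which is presumably why a shared helper makes extension to other MPE environments easy.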
Curriculum Training Support
Support for curriculum training was added to select environments in #27. This is an environment-dependent feature, with different environments requiring different types of curriculum:
- simple_spread: Agents first learn to capture landmarks, then learn to avoid collisions. The number of agents and landmarks is gradually increased to solve the full task.
- simple_tag: Prey are slower at the start and their velocity increases over time; similarly, the number of agents can be increased as training progresses.
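One simple way to implement such a curriculum is a staged schedule keyed on training progress. The stage thresholds and counts below are purely illustrative, not taken from MPE2.

```python
# Hypothetical curriculum schedule for a spread-style task: start small,
# then grow the number of agents/landmarks as training progresses.
STAGES = [
    # (min_episode, num_agents, num_landmarks)
    (0,      2, 2),   # learn to capture landmarks
    (10_000, 3, 3),   # add collision-avoidance pressure
    (30_000, 5, 5),   # full task
]

def curriculum_config(episode):
    """Pick (num_agents, num_landmarks) for the current episode."""
    agents, landmarks = STAGES[0][1], STAGES[0][2]
    for min_ep, a, l in STAGES:
        if episode >= min_ep:
            agents, landmarks = a, l
    return agents, landmarks
```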
Termination Calls
With the addition of the above curriculum training, agents must have termination calls. Additionally, for environments whose goal is capturing a landmark, terminating on a successful capture aids learning.
These termination calls were added in #28.
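For capture-style environments, the termination condition reduces to a distance check against the landmarks. A minimal sketch under assumed names (not MPE2's actual API):

```python
# Illustrative per-agent termination call for landmark-capture tasks:
# the episode terminates for an agent once it is within capture
# distance of any landmark. CAPTURE_DIST is an assumption.
CAPTURE_DIST = 0.1

def terminated(agent_pos, landmark_positions):
    """True once the agent has captured (reached) any landmark."""
    return any((agent_pos[0] - lx) ** 2 + (agent_pos[1] - ly) ** 2
               < CAPTURE_DIST ** 2
               for lx, ly in landmark_positions)
```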
Benchmark Data
Benchmark data is now part of the infos. A new argument, benchmark_data=False, is available on every environment's constructor. It is off by default so that it does not consume resources during training. This was added because rewards alone for MPEs do not indicate how effective a policy is; additional information (such as the number of landmark captures, prey caught, etc.) is needed.
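A sketch of how such counters could be gated behind the flag and surfaced through infos. The counter names and structure here are assumptions for illustration; consult the environment files for the actual fields.

```python
# Hypothetical benchmark tracker: bookkeeping is skipped entirely when
# benchmark_data is False (the default), so training pays no cost.
class BenchmarkTracker:
    def __init__(self, benchmark_data=False):
        self.benchmark_data = benchmark_data
        self.captures = 0
        self.collisions = 0

    def record(self, captured, collided):
        if not self.benchmark_data:  # off by default: no bookkeeping cost
            return
        self.captures += int(captured)
        self.collisions += int(collided)

    def info(self):
        """Extra entries merged into the agent's info dict each step."""
        if not self.benchmark_data:
            return {}
        return {"captures": self.captures, "collisions": self.collisions}
```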
Other Changes
CI Changes
Multiple CI changes were made in #24, including OIDC support for publishing to PyPI.
Heuristic Testing of Environments
Additional heuristic testing of environments was conducted to verify scaling and training behavior.
Codebase Complexity
Codebase complexity was evaluated and is accounted for in future changes.
Action spaces
The original MPE environments used continuous action spaces; MPE2 added discrete action spaces. This is an older MPE2 change, not made in this major release, but accounted for in all related changes.
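Discrete control in MPE-style environments is commonly a small set of movement actions mapped onto the 2-D force the continuous physics expects. The exact encoding in MPE2 may differ; this mapping is illustrative.

```python
# Illustrative 5-way discrete action encoding mapped to a continuous
# 2-D force vector (the usual MPE-style convention): no-op, left,
# right, down, up. The exact MPE2 encoding may differ.
DISCRETE_TO_FORCE = {
    0: (0.0, 0.0),    # no-op
    1: (-1.0, 0.0),   # left
    2: (1.0, 0.0),    # right
    3: (0.0, -1.0),   # down
    4: (0.0, 1.0),    # up
}

def to_force(action):
    """Convert a discrete action index into a continuous force vector."""
    return DISCRETE_TO_FORCE[action]
```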
New Contributors
- @BolunDai0216 made their first contribution in #4
- @mgoulao made their first contribution in #10
- @David-GERARD made their first contribution in #15
- @daimm2000 made their first contribution in #20
- @RushivArora made their first contribution in #24
- @mwydmuch made their first contribution in #31
Full Changelog: https://github.com/Farama-Foundation/MPE2/commits/v1.0.0
MPE2 v1.0.0rc1¶
Released on 2026-03-29 - GitHub - PyPI
Multi Particle Environments 2 (MPE2) v1.0.0rc1 Release Notes
Same as the v1.0.0 release.
Full Changelog: https://github.com/Farama-Foundation/MPE2/commits/v1.0.0rc1