Releases: nnaisense/evotorch
0.6.1
0.6.0
This release of EvoTorch introduces new functional programming capabilities, updates to reinforcement learning components, and a new data structure, along with several improvements and bug fixes.
New Features
- Functional API for Optimization (#98 by @engintoklu):
  - Introduces an alternative functional API for EvoTorch, compatible with `torch.func.vmap`. This allows optimizing single or batched populations simultaneously.
  - Functional Algorithms: Includes functional versions of the Cross Entropy Method (CEM) and Policy Gradients with Parameter-based Exploration (PGPE). These can be used with `vmap` or by providing batched initial centers (`center_init`).
  - Functional Optimizers: Adds functional counterparts for the Adam, ClipUp, and SGD optimizers. Their interfaces are similar to those of the functional CEM and PGPE, facilitating easier switching between evolutionary and gradient-based approaches.
  - `@expects_ndim` decorator: A new decorator to declare the expected number of dimensions for each positional argument of a function. If input tensors have more dimensions than expected, the function automatically applies `vmap` to operate across the batch dimensions.
  - `@rowwise` decorator: A new decorator for functions implemented with the assumption of a vector input. If a tensor with 2 or more dimensions is received, the function automatically applies `vmap` to operate across the batch dimensions. Both decorators are illustrated in the sketch below.
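A minimal sketch of the two decorators. It assumes that they are exported from `evotorch.decorators` (as the existing decorators such as `@pass_info` are) and that `@expects_ndim` takes the expected number of dimensions per positional argument, as its description above suggests; the functions and shapes are made up for illustration:

```python
import torch
from evotorch.decorators import expects_ndim, rowwise  # assumed export location

# Written for a single solution vector; @rowwise vmaps the function
# automatically when the input carries extra batch dimensions.
@rowwise
def sphere(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x**2)

single_fitness = sphere(torch.randn(5))     # one solution -> scalar
batch_fitness = sphere(torch.randn(10, 5))  # 10 solutions -> shape (10,)

# Written for one 2-D matrix and one 1-D vector; extra leading dimensions
# on either argument trigger automatic vmap.
@expects_ndim(2, 1)
def weighted_sum(matrix: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    return (matrix * weights).sum()

batched = weighted_sum(torch.randn(10, 3, 5), torch.randn(5))  # shape (10,)
```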
- Functional Genetic Algorithm Operators (#109 by @engintoklu):
  - Provides alternative implementations of genetic algorithm (GA) operators that follow functional programming principles.
  - These operators are batchable, either by adding a leftmost dimension to the population or by using `torch.func.vmap`.
  - Users can combine these operators to implement custom GAs (see the sketch after this list).
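As a stand-alone illustration of this functional style (a plain-PyTorch sketch, not EvoTorch's own operator API), a crossover written for a single pair of parents can be batched across a whole population with `torch.func.vmap`:

```python
import torch

def one_point_cross_over(
    parent1: torch.Tensor, parent2: torch.Tensor, cut: torch.Tensor
) -> torch.Tensor:
    # The child takes genes from parent1 before the cut point
    # and from parent2 after it.
    mask = torch.arange(parent1.shape[-1]) < cut
    return torch.where(mask, parent1, parent2)

parents_a = torch.randn(10, 5)     # 10 solutions of length 5
parents_b = torch.randn(10, 5)     # their mates
cuts = torch.randint(1, 5, (10,))  # one cut point per pair

# Because the operator is a pure function of tensors, vmap batches it
# across the leftmost (population) dimension.
children = torch.func.vmap(one_point_cross_over)(parents_a, parents_b, cuts)
```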
- TensorFrame Data Structure (#120 by @engintoklu):
  - Introduces `TensorFrame`, a new tabular data structure.
  - It is inspired by `pandas.DataFrame` but is designed to work with PyTorch tensors.
  - `TensorFrame` is compatible with `torch.vmap`, enabling it to be batched and used within fitness functions (a speculative usage sketch follows this list).
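A purely illustrative sketch; both the import path and the dict-of-columns constructor are assumptions here, not confirmed API, so consult the `TensorFrame` reference documentation for the actual interface:

```python
import torch
from evotorch.tools import TensorFrame  # assumed import path

# Assumed pandas-like construction from a dict of columns; the actual
# constructor may differ (see the API reference).
table = TensorFrame(
    {
        "x": torch.tensor([1.0, 2.0, 3.0]),
        "y": torch.tensor([0.5, 1.5, 2.5]),
    }
)
```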
- Notebook Demonstrating Object Evolution (#102 by @engintoklu): Added a Jupyter notebook to illustrate how to evolve arbitrary Python objects using EvoTorch.
- Jupyter Notebook for Visualizing Brax Agents (#105 by @engintoklu): Added a notebook for visualizing agents trained with Brax.
Improvements
- Updated Vectorized Reinforcement Learning (#104 by @engintoklu): Vectorized RL functionalities are now compatible with the Gymnasium `1.0.x` API, while maintaining compatibility with Gymnasium `0.29.x`. Key updates include an EvoTorch-specific `SyncVectorEnv`, performance enhancements, and refactored Brax notebook examples.
- Updated Hyperparameters for Brax Example (#108 by @engintoklu): Hyperparameters in the Brax example were updated.
- Updated `general_usage.md` (#107 by @engintoklu): The general usage documentation was updated.
- Improved Logging Documentation (#116 by @flukeskywalker): Documentation for logging was improved.
Bug Fixes
- CMAES with Bounded Problems (#100 by @flukeskywalker): CMAES will now correctly indicate failure if the problem is bounded.
- VecGymNE with Adaptive Popsize (#106 by @engintoklu): `VecGymNE` is now compatible with adaptive population sizes.
- CMAES Center Dimensionality (#111 by @engintoklu): The "center" of CMAES is now correctly treated as 1-dimensional.
0.5.1
0.5.0
New Features
- Allow the user to reach the search algorithm's internal optimizer by @engintoklu in #89
- Make EvoTorch future-proof by @engintoklu in #77
  - Ensure compatibility with PyTorch 2.0 and Brax 0.9
  - Migrate from the old Gym interface to Gymnasium
- Inform the user when `device` is not set correctly by @engintoklu in #90
Fixes
- Fix `get_minibatch()` of `SupervisedNE` by @engintoklu in #74
- Fix division-by-zero while initializing `CMAES` by @engintoklu in #86
- Fix wrong defaults of `SteadyStateGA` by @engintoklu in #87
0.4.1
Fixes
- Fix the interface of `make_I` (#62) (@engintoklu)
- Fix `generate_batch` returning `None` (#63) (@engintoklu)
- Fix C decomposition rate calculation on CUDA devices (#64) (@NaturalGradient)
Docs
- Add contribution guidelines (#70) (@engintoklu, @Higgcz)
- Add "how to cite" into the README (#69) (@engintoklu)
- Include CMA-ES into the README (#57) (@NaturalGradient)
0.4.0
New Features
- Implementation of `WandbLogger` (#35) (@galatolofederico)
- Simplify the usage of `NeptuneLogger` (#38) (@Higgcz)
- GPU-friendly + vectorized pareto ranking (#32) (@NaturalGradient)
- User interface improvements (#34) (@engintoklu)
- Add env. variable to control verbosity of the logger (#48) (@Higgcz)
- Add torch-based `CMAES` implementation (#41) (@NaturalGradient)
- Improve `GeneticAlgorithm` and add `MAPElites` (#44) (@engintoklu, @pliskowski)
- Add noxfile to run pytest across multiple Python versions (#40) (@Higgcz)
Fixes
- Fix all the mkdocstrings warnings (#39) (@Higgcz)
- Fix infinite live reloading of the docs (#36) (@Higgcz)
0.3.0
New
Vectorized gym support: Added a new problem class, `evotorch.neuroevolution.VecGymNE`, to solve vectorized gym environments. This new problem class can work with Brax environments and can exploit GPU acceleration (#20).
PicklingLogger: Added a new logger, `evotorch.logging.PicklingLogger`, which periodically pickles and saves the current solution to the disk (#20).
Python 3.7 support: The minimum supported Python version was lowered from 3.8 to 3.7. Therefore, EvoTorch can now be imported from within a Google Colab notebook (#16).
API Changes
@pass_info decorator: When working with `GymNE` (or with the newly introduced `VecGymNE`), if one uses a manual policy class and wishes to receive environment-related information via keyword arguments, that manual policy now needs to be decorated via `@pass_info`, as follows (#27):

```python
from torch import nn
from evotorch.decorators import pass_info

@pass_info
class CustomPolicy(nn.Module):
    def __init__(self, **kwargs):
        ...
```

Recurrent policies: When defining a manual recurrent policy (as a subclass of `torch.nn.Module`) for `GymNE` or for `VecGymNE`, the user is now required to define the `forward` method of the module according to the following signature:

```python
def forward(self, x: torch.Tensor, h: Any = None) -> Tuple[torch.Tensor, Any]:
    ...
```

Note: The optional argument `h` is the current state of the network, and the second value of the output tuple is the updated state of the network. A `reset()` method is not required anymore, and it will be ignored (#20).
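For illustration, a minimal recurrent policy obeying this signature could look like the following sketch (the class name and layer sizes are hypothetical, chosen only for the example):

```python
from typing import Optional, Tuple

import torch
from torch import nn


class CustomRecurrentPolicy(nn.Module):
    """Hypothetical recurrent policy following the required signature."""

    def __init__(self, obs_dim: int = 4, act_dim: int = 2, hidden_dim: int = 16):
        super().__init__()
        self.rnn = nn.RNNCell(obs_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, act_dim)

    def forward(
        self, x: torch.Tensor, h: Optional[torch.Tensor] = None
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        # At the start of an episode h is None; RNNCell then uses a zero state.
        h = self.rnn(x, h)
        return self.head(h), h  # (action, updated state)
```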
Fixes
Fixed a performance issue caused by the undesired cloning of the entire storages of tensor slices (#21).
Fixed the signature and the docstrings of the overridable method `_do_cross_over(...)` of the class `evotorch.operators.CrossOver` (#30).
Docs
Added more example scripts and updated the related README file (#19).
Updated the documentation related to GPU usage with ray (#28).
0.2.0
Fixes
- Fix docstrings in `gaussian.py` (#11) (@engintoklu)
- Fix for `str_to_net(...)` (#12) (@engintoklu)
- Hard-code the `network_device` property to CPU for `GymNE` (#6) (@NaturalGradient)
Docs
- Fix comment in the Gym experiments notebook (#5) (@engintoklu)
- Improve code formatting in docstrings (#3) (@flukeskywalker)
- Add documentation of the `NeptuneLogger` class (#15) (@NaturalGradient)
0.1.1
0.1.0
We are excited to release the first public version of EvoTorch - an evolutionary computation library created at NNAISENSE.
With EvoTorch, one can solve various optimization problems without having to worry about whether or not the problems at hand are differentiable. Among the problem types solvable with EvoTorch are:
- Black-box optimization problems (continuous or discrete)
- Reinforcement learning tasks
- Supervised learning tasks
- etc.
Various evolutionary computation algorithms are available in EvoTorch:
- Distribution-based search algorithms:
- PGPE: Policy Gradients with Parameter-based Exploration.
- XNES: Exponential Natural Evolution Strategies.
- SNES: Separable Natural Evolution Strategies.
- CEM: Cross-Entropy Method.
- Population-based search algorithms:
- SteadyStateGA: A fully elitist genetic algorithm implementation. It also supports multiple objectives, in which case it behaves like NSGA-II.
- CoSyNE: Cooperative Synapse Neuroevolution.
All of the algorithms mentioned above are implemented in PyTorch and can therefore benefit from PyTorch's vectorization and GPU capabilities. In addition, with the help of the Ray library, EvoTorch can scale these algorithms further by splitting the workload across:
- multiple CPUs
- multiple GPUs
- multiple computers over a Ray cluster
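As a minimal end-to-end sketch of this workflow, in the spirit of the project's quickstart (the sphere objective and the hyperparameters below are arbitrary choices for illustration):

```python
import torch
from evotorch import Problem
from evotorch.algorithms import SNES
from evotorch.logging import StdOutLogger

# An objective that needs no gradients: minimize the sphere function.
def sphere(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(x**2)

problem = Problem("min", sphere, solution_length=10, initial_bounds=(-1.0, 1.0))
searcher = SNES(problem, stdev_init=0.5)
StdOutLogger(searcher)  # print the status of the search at every generation
searcher.run(100)
```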