Releases: nnaisense/evotorch

0.6.1

14 May 17:10
cebcac4

Maintenance

  • chore: Update Python versions and dependencies across project configuration files by @Higgcz in #121

0.6.0

14 May 13:17
8189f76

This release of EvoTorch introduces new functional programming capabilities, updates to reinforcement learning components, and a new data structure, along with several improvements and bug fixes.

New Features

  • Functional API for Optimization (#98 by @engintoklu):

    • Introduces an alternative functional API for EvoTorch, compatible with torch.func.vmap, which makes it possible to optimize a single population or a whole batch of populations at once.
    • Functional Algorithms: Includes functional versions of Cross Entropy Method (CEM) and Policy Gradients with Parameter-based Exploration (PGPE). These can be used with vmap or by providing batched initial centers (center_init).
    • Functional Optimizers: Adds functional counterparts for Adam, ClipUp, and SGD optimizers. Their interfaces are similar to the functional CEM and PGPE, facilitating easier switching between evolutionary and gradient-based approaches.
    • @expects_ndim Decorator: A new decorator to declare the expected number of dimensions for each positional argument of a function. If input tensors have more dimensions than expected, the function automatically applies vmap to operate across the batch dimensions.
    • @rowwise Decorator: A new decorator for functions written under the assumption of a vector input. If a tensor with 2 or more dimensions is received, the function automatically applies vmap to operate across batch dimensions. (A usage sketch of the functional API follows this list item.)
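
As an illustration, here is a minimal sketch of the functional ask-and-tell pattern combined with @rowwise. The keyword arguments passed to cem below are assumptions based on the notes above; consult the EvoTorch documentation for the exact signatures.

import torch
from evotorch.algorithms.functional import cem, cem_ask, cem_tell
from evotorch.decorators import rowwise

@rowwise
def sphere(x: torch.Tensor) -> torch.Tensor:
    # Written for a single solution vector; @rowwise applies vmap
    # automatically when a batched population is passed in.
    return torch.sum(x ** 2)

# The keyword arguments below are assumptions for illustration.
state = cem(
    center_init=torch.zeros(10),
    stdev_init=1.0,
    parenthood_ratio=0.5,
    objective_sense="min",
)
for _ in range(100):
    population = cem_ask(state, popsize=50)
    fitnesses = sphere(population)
    state = cem_tell(state, population, fitnesses)

Because center_init can itself carry a batch dimension (or the whole loop can be wrapped in torch.func.vmap), the same code can advance several searches at once.
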
  • Functional Genetic Algorithm Operators (#109 by @engintoklu):

    • Provides alternative implementations for genetic algorithm (GA) operators that follow functional programming principles.
    • These operators are batchable, either by adding a leftmost dimension to the population or by using torch.func.vmap.
    • Users can combine these operators to implement custom GAs (see the sketch below).
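
For example, a custom GA step might be composed as in the following sketch. The operator names (two_point_cross_over, take_best) and their keyword arguments are assumptions; see evotorch.operators.functional for what is actually provided.

import torch
from evotorch.operators import functional as func_ops  # assumed module path

def evaluate(population: torch.Tensor) -> torch.Tensor:
    # Row-wise sphere fitness; also works on a batch of populations.
    return (population ** 2).sum(dim=-1)

def ga_step(population: torch.Tensor, fitnesses: torch.Tensor):
    # Operator names and keyword arguments are assumptions for illustration.
    children = func_ops.two_point_cross_over(
        population, fitnesses, tournament_size=4, objective_sense="min"
    )
    ext_pop = torch.cat([population, children], dim=0)
    ext_fit = torch.cat([fitnesses, evaluate(children)], dim=0)
    return func_ops.take_best(
        ext_pop, ext_fit, len(population), objective_sense="min"
    )
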
  • TensorFrame Data Structure (#120 by @engintoklu):

    • Introduces TensorFrame, a new tabular data structure.
    • It is inspired by pandas.DataFrame but is designed to work with PyTorch tensors.
    • TensorFrame is compatible with torch.vmap, enabling it to be batched and used within fitness functions (a brief sketch follows).
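
A small sketch of the idea (the import location and construction API shown here are assumptions; see the TensorFrame documentation):

import torch
from evotorch.tools import TensorFrame  # assumed import location

# A tabular container whose columns are PyTorch tensors.
table = TensorFrame({
    "x": torch.tensor([1.0, 2.0, 3.0]),
    "y": torch.tensor([4.0, 5.0, 6.0]),
})
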
  • Notebook Demonstrating Object Evolution (#102 by @engintoklu): Added a Jupyter notebook to illustrate how to evolve arbitrary Python objects using EvoTorch.

  • Jupyter Notebook for Visualizing Brax Agents (#105 by @engintoklu): Added a notebook for visualizing agents trained with Brax.

Improvements

  • Updated Vectorized Reinforcement Learning (#104 by @engintoklu): Vectorized RL functionalities are now compatible with the Gymnasium 1.0.x API, while maintaining compatibility with Gymnasium 0.29.x. Key updates include an EvoTorch-specific SyncVectorEnv, performance enhancements, and refactored Brax notebook examples.

  • Updated Hyperparameters for Brax Example (#108 by @engintoklu): Hyperparameters in the Brax example were updated.

  • Updated general_usage.md (#107 by @engintoklu): The general usage documentation was updated.

  • Improved Logging Documentation (#116 by @flukeskywalker): Documentation for logging was improved.

Bug Fixes

  • CMAES with Bounded Problems (#100 by @flukeskywalker): CMAES will now correctly indicate failure if the problem is bounded.

  • VecGymNE with Adaptive Popsize (#106 by @engintoklu): VecGymNE is now compatible with adaptive population sizes.

  • CMAES Center Dimensionality (#111 by @engintoklu): The "center" of CMAES is now correctly treated as 1-dimensional.

Maintenance

  • Updated GitHub Actions (#112 by @Higgcz): GitHub Actions workflows were updated.

0.5.1

02 Nov 11:09
5c58566

Fixes

0.5.0

02 Nov 10:09
02484da

New Features

  • Allow the user to reach the search algorithm's internal optimizer by @engintoklu in #89
  • Make EvoTorch future-proof by @engintoklu in #77
    • Ensure compatibility with PyTorch 2.0 and Brax 0.9
    • Migrate from old Gym interface to Gymnasium
  • Inform the user when device is not set correctly by @engintoklu in #90

Fixes

0.4.1

08 Mar 10:29
e8060ff

Fixes

Docs

0.4.0

17 Jan 14:10
5d4bb1e

New Features

Fixes

  • Fix all the mkdocstrings warnings (#39) (@Higgcz)
  • Fix infinite live reloading of the docs (#36) (@Higgcz)

Docs

  • Update the logging page and add WandbLogger section (#37) (@Higgcz)

0.3.0

24 Oct 20:03
a232d04

New

Vectorized gym support: Added a new problem class, evotorch.neuroevolution.VecGymNE, to solve vectorized gym environments. This new problem class can work with brax environments and can exploit GPU acceleration (#20).
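
For instance, a vectorized RL problem might be declared as follows (the argument names and the string-based policy specification are assumptions for illustration; see the VecGymNE documentation):

from evotorch.neuroevolution import VecGymNE

problem = VecGymNE(
    env="brax::humanoid",                       # assumed brax task name
    network="Linear(obs_length, act_length)",   # assumed policy specification
    device="cuda:0",                            # exploit GPU acceleration
)
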

PicklingLogger: Added a new logger, evotorch.logging.PicklingLogger, which periodically pickles and saves the current solution to the disk (#20).
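
Continuing from the sketch above, the new logger attaches to a searcher like the existing loggers do (the interval argument name is an assumption):

from evotorch.algorithms import PGPE
from evotorch.logging import PicklingLogger

searcher = PGPE(
    problem,                    # the VecGymNE problem from the sketch above
    popsize=200,
    center_learning_rate=0.01,
    stdev_learning_rate=0.1,
    radius_init=0.27,
)
# Pickle and save the current solution every 10 generations.
logger = PicklingLogger(searcher, interval=10)
searcher.run(100)
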

Python 3.7 support: The Python dependency was lowered from 3.8 to 3.7. Therefore, EvoTorch can now be imported from within a Google Colab notebook (#16).

API Changes

@pass_info decorator: When working with GymNE (or with the newly introduced VecGymNE), if one uses a manual policy class and wishes to receive environment-related information via keyword arguments, that manual policy now needs to be decorated with @pass_info (#27), as follows:

from torch import nn
from evotorch.decorators import pass_info

@pass_info
class CustomPolicy(nn.Module):
    def __init__(self, **kwargs):
        # Environment-related information (e.g. observation and action
        # space details) arrives here via keyword arguments.
        ...

Recurrent policies: When defining a manual recurrent policy (as a subclass of torch.nn.Module) for GymNE or for VecGymNE, the user is now required to define the forward method of the module according to the following signature:

def forward(self, x: torch.Tensor, h: Any = None) -> Tuple[torch.Tensor, Any]:
    ...

Note: The optional argument h is the current state of the network, and the second value of the output tuple is the updated state of the network. A reset() method is not required anymore, and it will be ignored (#20).
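
A minimal recurrent policy following this signature might look as follows (the module internals are illustrative, not prescribed by EvoTorch):

from typing import Any, Tuple

import torch
from torch import nn

class RecurrentPolicy(nn.Module):
    def __init__(self, obs_size: int, act_size: int, hidden_size: int = 16):
        super().__init__()
        self.rnn = nn.RNNCell(obs_size, hidden_size)
        self.head = nn.Linear(hidden_size, act_size)

    def forward(self, x: torch.Tensor, h: Any = None) -> Tuple[torch.Tensor, Any]:
        if h is None:
            # No state received: this is the beginning of an episode.
            h = torch.zeros(self.rnn.hidden_size)
        h = self.rnn(x, h)
        # Return the action and the updated state; the updated state
        # is fed back in at the next timestep.
        return self.head(h), h
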

Fixes

Fixed a performance issue caused by the undesired cloning of the entire storages of tensor slices (#21).

Fixed the signature and the docstrings of the overridable method _do_cross_over(...) of the class evotorch.operators.CrossOver (#30).

Docs

Added more example scripts and updated the related README file (#19).

Updated the documentation related to GPU usage with ray (#28).

0.2.0

31 Aug 18:15
6efb628

Fixes

Docs

0.1.1

09 Aug 09:55
3bb5996

What's Changed

  • Re-arrange pip dependencies to make the default installation of EvoTorch runnable in most scenarios
  • Add docs badge and landing page link to the README
  • Fix broken links in PyPI

0.1.0

08 Aug 21:06
1691060

We are excited to release the first public version of EvoTorch - an evolutionary computation library created at NNAISENSE.

With EvoTorch, one can solve various optimization problems without having to worry about whether or not the problems at hand are differentiable. Among the problem types solvable with EvoTorch are:

  • Black-box optimization problems (continuous or discrete)
  • Reinforcement learning tasks
  • Supervised learning tasks
  • ...and more

Various evolutionary computation algorithms are available in EvoTorch:

  • Distribution-based search algorithms:
    • PGPE: Policy Gradients with Parameter-based Exploration.
    • XNES: Exponential Natural Evolution Strategies.
    • SNES: Separable Natural Evolution Strategies.
    • CEM: Cross-Entropy Method.
  • Population-based search algorithms:
    • SteadyStateGA: A fully elitist genetic algorithm implementation. Also supports multiple objectives, in which case it behaves like NSGA-II.
    • CoSyNE: Cooperative Synapse Neuroevolution.
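
As a small taste of the API, here is a minimal example in the spirit of the project's documentation: define a fitness function, wrap it in a Problem, and evolve it with SNES.

import torch
from evotorch import Problem
from evotorch.algorithms import SNES
from evotorch.logging import StdOutLogger

def sphere(x: torch.Tensor) -> torch.Tensor:
    # A classic continuous minimization benchmark.
    return torch.sum(x ** 2)

problem = Problem("min", sphere, solution_length=10, initial_bounds=(-1, 1))
searcher = SNES(problem, stdev_init=5)
logger = StdOutLogger(searcher)  # report the status of each generation
searcher.run(100)
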

All of the algorithms mentioned above are implemented in PyTorch and can therefore benefit from its vectorization and GPU capabilities. In addition, with the help of the Ray library, EvoTorch can scale these algorithms further by splitting the workload across:

  • multiple CPUs
  • multiple GPUs
  • multiple computers over a Ray cluster