Composable code transformation framework for R, allowing you to run numerical programs at the speed of light. It currently implements JIT compilation for very fast execution and reverse-mode automatic differentiation. Programs can run on various hardware backends, including CPU and GPU.
To install from source, you need a C++20 compiler, as well as libprotobuf and the protobuf compiler.
pak::pak("r-xla/anvil")You can also install from
r-universe, by adding the code
below to your .Rprofile.
options(repos = c(
rxla = "https://r-xla.r-universe.dev",
CRAN = "https://cloud.r-project.org/"
))
```
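With the repository configured, the package can then be installed the usual way (this assumes the package is published on r-universe under the name `anvil`):

``` r
install.packages("anvil")
```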
Below, we create a standard R function. We cannot call this function directly, but must first wrap it in a `jit()` call. If the resulting function is then called on AnvilTensors – the primary data type in {anvil} – it is JIT compiled and subsequently executed.

``` r
library(anvil)
f <- function(a, b, x) {
a * x + b
}
f_jit <- jit(f)
a <- nv_scalar(1.0)
b <- nv_scalar(-2.0)
x <- nv_scalar(3.0)
f_jit(a, b, x)
#> AnvilTensor
#> 1.0000
#> [ CPUf32{} ]
```

Through automatic differentiation, we can also obtain the gradient of the above function:

``` r
g_jit <- jit(gradient(f, wrt = c("a", "b")))
g_jit(a, b, x)
#> $a
#> AnvilTensor
#> 3.0000
#> [ CPUf32{} ]
#>
#> $b
#> AnvilTensor
#> 1.0000
#> [ CPUf32{} ]
```

The main features of {anvil} are:

- Automatic Differentiation:
  - Gradients for functions with scalar outputs are supported (see the sketch after this list).
- Fast:
  - Code is JIT compiled into a single kernel.
  - Runs on different hardware backends, including CPU and GPU.
- Easy to extend and contribute to:
  - The package is written almost entirely in R.
  - It is easy to add new primitives.
  - It will be possible to add new transformations, also known as interpretation rules.
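As a small illustration of the scalar-output gradients listed above, here is a minimal sketch that differentiates a composite function. It reuses only the `jit()`, `gradient()`, and `nv_scalar()` calls from the example earlier; everything else is plain R, and the exact output format may differ.

``` r
library(anvil)

# A composite scalar function built from the operations used above (* and +).
h <- function(a, x) {
  y <- a * x
  y * y + a # h(a, x) = (a * x)^2 + a
}

# Analytically, dh/da = 2 * a * x^2 + 1, which at a = 2, x = 3 equals 37.
h_grad <- jit(gradient(h, wrt = "a"))
h_grad(nv_scalar(2.0), nv_scalar(3.0))
```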
While {anvil} can run certain types of programs extremely fast, it only applies to a specific category of problems. It is suitable for numerical algorithms, such as optimizing Bayesian models, training neural networks, or numerical optimization more generally. Another restriction is that {anvil} needs to re-compile the code for each new unique input shape. This has the advantage that the compiler can perform memory optimizations, but the compilation overhead might be a problem for fast-running programs, as sketched below.
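To make the shape restriction concrete, here is a sketch of the resulting compile-and-reuse behavior. It assumes that compiled kernels are cached per input shape, as the paragraph above implies; the commented-out `nv_tensor()` constructor is hypothetical and shown only to illustrate the idea.

``` r
library(anvil)

f_jit <- jit(function(a, b, x) a * x + b)

# First call with scalar inputs: compiles a kernel for the scalar shape.
f_jit(nv_scalar(1.0), nv_scalar(-2.0), nv_scalar(3.0))

# Further calls with already-seen input shapes should reuse the compiled
# kernel, so the compilation cost is paid once per shape.
f_jit(nv_scalar(2.0), nv_scalar(0.0), nv_scalar(5.0))

# A call with a new input shape would trigger a fresh compilation.
# (nv_tensor() is hypothetical and may not match the actual API.)
# f_jit(nv_tensor(1:4), nv_tensor(5:8), nv_tensor(9:12))
```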
Acknowledgments:

- This work is supported by MaRDI.
- The design of this package was inspired by and borrows from:
  - JAX, especially the autodidax tutorial.
  - The microjax project.
- For JIT compilation, we leverage the OpenXLA project.