🧪 Implemented Algorithms
The core concept in `adaptive` is the *learner*.
A learner samples a function at the most interesting locations within its parameter space, so that the function is evaluated where it matters most.
As the function is evaluated at more points, the learner improves its understanding of the best locations to sample next.
The definition of the "best locations" depends on your application domain.
While `adaptive` provides sensible default choices, the adaptive sampling process can be fully customized.
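To make the idea concrete, here is a hypothetical standalone sketch (deliberately *not* using the adaptive API) of the greedy loop a 1D learner runs: assign each interval a loss, here the Euclidean length of the line segment between its endpoints, and bisect the interval with the largest loss.

```python
import math


def sample_adaptively(f, a, b, n_points=40):
    """Greedily bisect the interval whose segment between sampled
    endpoints is longest -- a crude stand-in for a 1D loss function."""
    xs = [a, b]
    ys = [f(a), f(b)]
    while len(xs) < n_points:
        # Loss of interval i: Euclidean length of the segment
        # between consecutive sampled points.
        losses = [
            math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
            for i in range(len(xs) - 1)
        ]
        i = max(range(len(losses)), key=losses.__getitem__)
        x_new = (xs[i] + xs[i + 1]) / 2  # bisect the worst interval
        xs.insert(i + 1, x_new)
        ys.insert(i + 1, f(x_new))
    return xs, ys


def peak(x, a=0.01):  # sharp feature near x = 0
    return x + a**2 / (a**2 + x**2)


xs, ys = sample_adaptively(peak, -1, 1, n_points=40)
```

Because the segments crossing the narrow peak are long, the loop keeps subdividing there, and the sample points cluster around the feature instead of being spread uniformly.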
The following learners are implemented:
- `Learner1D`, for 1D functions f: ℝ → ℝ^N,
- `Learner2D`, for 2D functions f: ℝ^2 → ℝ^N,
- `LearnerND`, for ND functions f: ℝ^N → ℝ^M,
- `AverageLearner`, for random variables where you want to average the result over many evaluations,
- `AverageLearner1D`, for stochastic 1D functions where you want to estimate the mean value of the function at each point,
- `IntegratorLearner`, for when you want to integrate a 1D function f: ℝ → ℝ.
Meta-learners (to be used with other learners):

- `BalancingLearner`, for when you want to run several learners at once, selecting the "best" one each time you get more points,
- `DataSaver`, for when your function doesn't just return a scalar or a vector.
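As a hypothetical sketch (not adaptive's actual `DataSaver` implementation), the idea behind the second meta-learner can be illustrated with a wrapper that forwards one field of a rich result to the learner and stores the rest on the side:

```python
import operator


class DataSaverSketch:
    """Illustrative wrapper: forward one scalar field of a rich result,
    keep the full result for later inspection."""

    def __init__(self, fn, arg_picker):
        self.fn = fn
        self.arg_picker = arg_picker  # extracts the value the learner sees
        self.extra_data = {}  # full results, keyed by input point

    def __call__(self, x):
        result = self.fn(x)
        self.extra_data[x] = result  # keep everything
        return self.arg_picker(result)  # learner only sees this


def measure(x):
    # A function that returns more than a scalar.
    return {"y": x**2, "metadata": f"run at {x}"}


wrapped = DataSaverSketch(measure, operator.itemgetter("y"))
```

Calling `wrapped(2.0)` returns the scalar `4.0` for the learner, while `wrapped.extra_data[2.0]` still holds the full dictionary.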
In addition to the learners, `adaptive` also provides primitives for running the sampling across several cores and even several machines, with built-in support for `concurrent.futures`, `mpi4py`, `loky`, `ipyparallel`, and `distributed`.
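The division of labour these runners build on can be sketched with the standard library alone: an executor evaluates a batch of points in parallel while the main thread collects results as they complete. This is a simplified illustration of the pattern, not adaptive's `Runner` API:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def expensive(x):
    # Stand-in for a costly function evaluation.
    return x + 0.01**2 / (0.01**2 + x**2)


points = [i / 10 - 1 for i in range(21)]  # candidate points in [-1, 1]

results = {}
with ThreadPoolExecutor(max_workers=4) as ex:
    # Submit all evaluations, then gather them in completion order.
    futures = {ex.submit(expensive, x): x for x in points}
    for fut in as_completed(futures):
        results[futures[fut]] = fut.result()
```

A real runner repeats this cycle: ask the learner for the next batch of points, farm them out to the executor, and feed the results back so the learner can refine its choices.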
💡 Examples
Here are some examples of how Adaptive samples vs. homogeneous sampling. Click on the Play button or move the sliders.
```python
import itertools

import holoviews as hv
import numpy as np

import adaptive
from adaptive.learner.learner1D import default_loss, uniform_loss

adaptive.notebook_extension()
hv.output(holomap="scrubber")
```
adaptive.Learner1D
The `Learner1D` class is designed for adaptively learning 1D functions of the form f: ℝ → ℝ^N. It focuses on sampling points where the function is less well understood, to improve the overall approximation.
This learner is well suited for functions with localized features or varying degrees of complexity across the domain.
Adaptively learning a 1D function (the plot below) and live-plotting the process in a Jupyter notebook is as easy as
```python
from adaptive import notebook_extension, Runner, Learner1D

notebook_extension()  # enables notebook integration


def peak(x, a=0.01):  # function to "learn"
    return x + a**2 / (a**2 + x**2)


learner = Learner1D(peak, bounds=(-1, 1))


def goal(learner):
    return learner.loss() < 0.01  # continue until the loss is small enough


runner = Runner(learner, goal)  # start the calculation on all CPU cores
runner.live_info()  # shows a widget with status information
runner.live_plot()
```
```python
from bokeh.models import WheelZoomTool

wheel_zoom = WheelZoomTool(zoom_on_axis=False)


def f(x, offset=0.07357338543088588):
    a = 0.01
    return x + a**2 / (a**2 + (x - offset) ** 2)


def plot_loss_interval(learner):
    if learner.npoints >= 2:
        x_0, x_1 = max(learner.losses, key=learner.losses.get)
        y_0, y_1 = learner.data[x_0], learner.data[x_1]
        x, y = [x_0, x_1], [y_0, y_1]
    else:
        x, y = [], []
    return hv.Scatter((x, y)).opts(size=6, color="r")


def plot_interval(learner, npoints):
    adaptive.runner.simple(learner, npoints_goal=npoints)
    return (learner.plot() * plot_loss_interval(learner))[:, -1.1:1.1]


def get_hm(loss_per_interval, N=101):
    learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=loss_per_interval)
    plots = {n: plot_interval(learner, n) for n in range(N)}
    return hv.HoloMap(plots, kdims=["npoints"])


plot_homo = get_hm(uniform_loss).relabel("homogeneous sampling")
plot_adaptive = get_hm(default_loss).relabel("with adaptive")
layout = plot_homo + plot_adaptive
layout.opts(hv.opts.Scatter(active_tools=["box_zoom", wheel_zoom]))
```
adaptive.Learner2D
The `Learner2D` class is tailored for adaptively learning 2D functions of the form f: ℝ^2 → ℝ^N. Similar to `Learner1D`, it concentrates sampling on points with higher uncertainty to provide a better approximation.
This learner is ideal for functions with complex features or varying behavior across a 2D domain.
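`Learner2D` itself works on a triangulation of the sampled points; as a hypothetical standalone illustration of the same principle (not the adaptive API), here is a quadtree-style refinement that repeatedly splits the rectangular cell whose corner values vary the most, weighted by cell area:

```python
import math


def ring(xy):
    # Same ring-shaped test function as in the example below.
    x, y = xy
    return x + math.exp(-((x**2 + y**2 - 0.75**2) ** 2) / 0.2**4)


def cell_loss(fn, cell):
    # Crude 2D "loss": spread of the corner values, weighted by cell area.
    xa, xb, ya, yb = cell
    vals = [fn((x, y)) for x in (xa, xb) for y in (ya, yb)]
    return (max(vals) - min(vals)) * (xb - xa) * (yb - ya)


def refine(fn, bounds, n_splits=50):
    (x0, x1), (y0, y1) = bounds
    cells = [(x0, x1, y0, y1)]
    for _ in range(n_splits):
        # Split the worst cell into four quadrants.
        worst = max(cells, key=lambda c: cell_loss(fn, c))
        cells.remove(worst)
        xa, xb, ya, yb = worst
        xm, ym = (xa + xb) / 2, (ya + yb) / 2
        cells += [
            (xa, xm, ya, ym), (xm, xb, ya, ym),
            (xa, xm, ym, yb), (xm, xb, ym, yb),
        ]
    return cells


cells = refine(ring, [(-1, 1), (-1, 1)])
```

After refinement the small cells concentrate along the ring, where the function varies most, while smooth regions are covered by a few large cells, which is the qualitative behavior the plots below show for `Learner2D` versus a homogeneous grid.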
```python
def ring(xy):
    import numpy as np

    x, y = xy
    a = 0.2
    return x + np.exp(-((x**2 + y**2 - 0.75**2) ** 2) / a**4)


def plot_compare(learner, npoints):
    adaptive.runner.simple(learner, npoints_goal=npoints)
    learner2 = adaptive.Learner2D(ring, bounds=learner.bounds)
    xs = ys = np.linspace(*learner.bounds[0], int(learner.npoints**0.5))
    xys = list(itertools.product(xs, ys))
    learner2.tell_many(xys, map(ring, xys))
    return (
        learner2.plot().relabel("homogeneous grid")
        + learner.plot().relabel("with adaptive")
        + learner2.plot(tri_alpha=0.5).relabel("homogeneous sampling")
        + learner.plot(tri_alpha=0.5).relabel("with adaptive")
    ).cols(2)


learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)])
plots = {n: plot_compare(learner, n) for n in range(4, 1010, 20)}
plot = hv.HoloMap(plots, kdims=["npoints"]).collate()
plot.opts(hv.opts.Image(active_tools=[wheel_zoom]))
```