Willie Neiswanger · Jul 7, 2021 · 9 tweets
(1/9) Presenting: Bayesian Algorithm Execution (BAX) and the InfoBAX algorithm.

Bayesian optimization finds global optima of expensive black-box functions. But what about other function properties?

w/ @KAlexanderWang @StefanoErmon at #ICML2021

URL: willieneis.github.io/bax-website
(2/9) Methods like Bayesian optimization / quadrature can be viewed as estimating properties of a black-box function (e.g. global optima, integrals). But in many applications we also care about local optima, level sets, top-k optima, boundaries, integrals, roots, graph properties, and more.

[Figure: example properties of black-box functions]
(3/9) For a given property, we can often find an algorithm that computes it, in the absence of any budget constraint. We then reframe the task as: how do we adaptively sample to estimate *the output of this algorithm*?

We call this task “Bayesian algorithm execution” or BAX.

[Figure: the BAX task]
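To make the framing concrete, here is a minimal sketch (my illustration, not the paper's implementation): given a posterior belief over f on a discrete grid, the posterior over an algorithm's output can be approximated by running the algorithm on posterior samples. The independent-Gaussian belief and the superlevel-set "algorithm" below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy posterior belief over f on a discrete grid: independent Gaussians.
# (Illustrative stand-in for a GP posterior, not the paper's setup.)
grid = np.linspace(0.0, 1.0, 20)
mean = np.sin(2 * np.pi * grid)      # posterior mean at each grid point
std = 0.3 * np.ones_like(grid)       # posterior std at each grid point

def level_set(f_values, tau=0.5):
    """The property-computing algorithm: the superlevel set {x : f(x) > tau}."""
    return f_values > tau

# Posterior over the algorithm's output: run the algorithm on samples of f.
n_samples = 2000
samples = mean + std * rng.standard_normal((n_samples, grid.size))
membership = np.stack([level_set(s) for s in samples])
prob_in_set = membership.mean(axis=0)  # per-point membership probability

print(np.round(prob_in_set, 2))
```

The same loop works for any property-computing algorithm: swap `level_set` for an argmax, a top-k routine, a root finder, a shortest-path algorithm, etc.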
(4/9) Given an algorithm that computes the property we desire, our procedure InfoBAX sequentially chooses queries to the black-box function that maximally inform us of this computable property.

[Figure: the InfoBAX procedure]
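As a rough sketch of the acquisition idea (again my illustration, under simplifying assumptions: a toy 1-D GP with fixed kernel hyperparameters, and a greedy hill-climb standing in for the algorithm), one can sample posterior functions, run the algorithm on each sample to record its execution path, and score a candidate query by how much conditioning on those paths would shrink the predictive entropy there:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """GP posterior mean and covariance (zero prior mean, RBF kernel)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    sol = np.linalg.solve(K, Ks)
    return sol.T @ y_train, Kss - Ks.T @ sol

def hill_climb(f_values):
    """Toy 'algorithm': greedy hill-climb over grid indices from index 0.
    Returns the indices it queried, i.e. its execution path."""
    path, i = [0], 0
    while i + 1 < len(f_values) and f_values[i + 1] > f_values[i]:
        i += 1
        path.append(i)
    return path

# Observed data D and a grid of candidate query points.
x_train = np.array([0.1, 0.5, 0.9])
y_train = np.sin(3 * x_train)
grid = np.linspace(0.0, 1.0, 50)

mean, cov = gp_posterior(x_train, y_train, grid)
var_prior = np.clip(np.diag(cov), 1e-10, None)

# Sample posterior functions, run the algorithm, record execution paths.
L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(grid)))
n_paths = 30
paths = [hill_climb(mean + L @ rng.standard_normal(len(grid)))
         for _ in range(n_paths)]

# EIG(x) = H[y_x | D] - E_path[ H[y_x | D + path] ]  (Gaussian entropies).
eig = 0.5 * np.log(var_prior)
for path in paths:
    x_aug = np.concatenate([x_train, grid[path]])
    y_aug = np.concatenate([y_train, mean[path]])  # y values don't affect variance
    _, cov_aug = gp_posterior(x_aug, y_aug, grid)
    var_post = np.clip(np.diag(cov_aug), 1e-10, None)
    eig -= 0.5 * np.log(var_post) / n_paths

next_query = grid[int(np.argmax(eig))]
print("next query:", next_query)
```

Because Gaussian predictive variance depends only on input locations, the sampled y values along each path never enter the entropy terms here; the paper's estimator and hyperparameter handling differ in the details.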
(5/9) Example 1: Bayesian local optimization.

If we run InfoBAX with a local optimization algorithm (e.g. evolution strategies, Nelder-Mead method, gradient descent), we get a variant of Bayesian optimization that targets local, rather than global, optima.

[Figure: InfoBAX with a local optimization algorithm]
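A toy illustration of what a local optimizer looks like as a BAX algorithm: finite-difference gradient descent on a smooth random function (standing in for one posterior sample of f; the function, learning rate, and step count are illustrative assumptions). The points it queries form its execution path.

```python
import numpy as np

rng = np.random.default_rng(2)

# A smooth random function standing in for one posterior sample of f.
# (Random Fourier features; illustrative, not a GP sample from the paper.)
w = rng.normal(0.0, 2.0, size=8)
b = rng.uniform(0.0, 2 * np.pi, size=8)
a = rng.normal(0.0, 1.0, size=8)

def f(x):
    return float(np.sum(a * np.sin(w * x + b)))

def gradient_descent_path(x0, lr=0.005, steps=300, h=1e-4):
    """Finite-difference gradient descent; returns every point it queried
    (its execution path) and the final iterate (the algorithm output)."""
    x, queried = x0, []
    for _ in range(steps):
        g = (f(x + h) - f(x - h)) / (2 * h)
        queried += [x + h, x - h]
        x = x - lr * g
    return queried, x

path, x_local = gradient_descent_path(x0=0.5)
print("local minimum estimate:", x_local)
```

InfoBAX would choose real queries to f so as to be informative about this execution path and output, rather than executing the optimizer on the true f directly.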
(6/9) Here’s an animation of InfoBAX with an evolution strategy.

Notice that, unlike (standard) Bayesian optimization, this algorithm only spends budget converging to a single local optimum. It shows good performance in high dimensions.
(7/9) Example 2: Estimating shortest paths in graphs.

In graphs where it is expensive to access edge weights (e.g. transportation networks) we can use InfoBAX with Dijkstra’s algorithm to estimate shortest paths.

[Figure: InfoBAX for shortest-path estimation]
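A sketch of the idea on a hypothetical 4-node graph with independent Gaussian beliefs over edge weights (illustrative assumptions, not the paper's setup): sampling weights and running Dijkstra on each sample gives a posterior over shortest paths.

```python
import heapq
import random
from collections import Counter

random.seed(0)

# Toy graph with uncertain edge weights: each weight has a (mean, std),
# standing in for a posterior belief over expensive-to-query edge costs.
edges = {
    ("A", "B"): (1.0, 0.3), ("B", "D"): (1.0, 0.3),
    ("A", "C"): (1.5, 0.3), ("C", "D"): (0.4, 0.3),
}

def dijkstra(weights, source, target):
    """Standard Dijkstra; returns the shortest path as a tuple of nodes."""
    graph = {}
    for (u, v), w in weights.items():
        graph.setdefault(u, []).append((v, w))
    heap, seen = [(0.0, source, (source,))], set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == target:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (dist + w, nxt, path + (nxt,)))
    return None

# Posterior over the shortest path: run Dijkstra on sampled edge weights.
counts = Counter()
for _ in range(1000):
    sampled = {e: max(1e-6, random.gauss(m, s)) for e, (m, s) in edges.items()}
    counts[dijkstra(sampled, "A", "D")] += 1

print(counts.most_common())
```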
(8/9) Example 3: Top-k estimation.

Given a set of points X, we may wish to estimate the subset of k points that have the highest value under a black-box function f. This generalizes standard Bayesian optimization (i.e. top-1 estimation) to top-k estimation.
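A minimal sketch of top-k as a BAX property, again with an illustrative independent-Gaussian belief over f on a finite set of points (not the paper's GP setup):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

# Belief over f at a finite set of n points: independent Gaussians.
n_points, k = 10, 3
mean = rng.normal(size=n_points)
std = 0.2 * np.ones(n_points)

# Posterior over the top-k subset: run the top-k 'algorithm' on samples of f.
counts = Counter()
for _ in range(2000):
    f_sample = mean + std * rng.standard_normal(n_points)
    top_k = tuple(sorted(np.argsort(f_sample)[-k:].tolist()))
    counts[top_k] += 1

best_subset, freq = counts.most_common(1)[0]
print("most probable top-3 subset:", best_subset, "frequency:", freq / 2000)
```

With k = 1 this collapses to a posterior over the global maximizer, i.e. the standard Bayesian optimization target.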
(9/9) Our method has connections to stats/ML areas such as Bayesian optimal experimental design, information theory, optimal sensor placement, stepwise uncertainty reduction, likelihood-free Bayesian inference, and more.

See more discussion in our paper arxiv.org/abs/2104.09460
