Joseph Salmon
IMAG, Univ Montpellier, CNRS, Inria, Montpellier, France
A citizen-science platform using machine learning to help people identify plants with their phone


A conversation with Alex (Gramfort):
We talked about launching a benchmarking platform for optimization algorithms, but were too busy for a while.
Then Thomas (Moreau) arrived, soon backed up by Mathurin (Massias).


Choosing the best algorithm to solve an optimization problem often depends on many factors.
An impartial selection requires a time-consuming benchmark!
The goal of benchopt is to make this step as easy as possible.
Running a benchmark for \(\ell_2\)-regularized logistic regression with multiple solvers and datasets is now as easy as calling:
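As a sketch of what that call looks like (the benchmark repository name and exact flags are assumptions; check `benchopt --help` for the current interface):

```shell
# Install the benchopt client, fetch a published benchmark, and run it.
pip install benchopt
git clone https://github.com/benchopt/benchmark_logreg_l2
benchopt run ./benchmark_logreg_l2
```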

benchopt: Language Comparison
benchopt can also compare the same algorithm in different languages.
Here is an example comparing PGD in Python, R, and Julia.
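For reference, PGD here is proximal gradient descent (ISTA). A minimal NumPy version for the Lasso, which is the kind of objective such a comparison could target (the choice of the Lasso and all names below are illustrative, not the benchmark's actual code), might look like:

```python
import numpy as np


def soft_threshold(x, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)


def pgd_lasso(X, y, reg, n_iter=100):
    # Proximal gradient descent for
    #   min_beta 0.5 * ||y - X @ beta||^2 + reg * ||beta||_1
    n_samples, n_features = X.shape
    # Lipschitz constant of the smooth part's gradient.
    L = np.linalg.norm(X, ord=2) ** 2
    beta = np.zeros(n_features)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta = soft_threshold(beta - grad / L, reg / L)
    return beta
```

Porting this loop to R or Julia is direct, which is what makes a cross-language timing comparison meaningful.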

benchopt: Publishing Results
benchopt also allows easy publishing of benchmark results:

A benchmark is a directory with:
- an objective.py file with an Objective
- a solvers/ folder with one file per Solver
- a datasets/ folder with Dataset generators/fetchers
The benchopt client runs a cross product and generates a CSV file + convergence plots like above.
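The cross product simply pairs every solver with every dataset; a toy sketch (solver and dataset names below are hypothetical placeholders):

```python
from itertools import product

# Hypothetical stand-ins for the files found in solvers/ and datasets/.
solvers = ["PGD", "coordinate-descent", "sklearn"]
datasets = ["simulated", "leukemia"]

# The client runs every (solver, dataset) pair and records a convergence
# curve for each, then aggregates everything into one CSV file.
runs = list(product(solvers, datasets))
print(len(runs))  # 3 solvers x 2 datasets = 6 runs
```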
class Objective(BaseObjective):
    name = "Benchmark Name"

    def set_data(self, X, y):
        # Store data

    def compute(self, beta):
        return dict(obj1=.., obj2=..)

    def to_dict(self):
        return dict(X=.., y=.., reg=..)

class Solver(BaseSolver):
    name = "Solver Name"

    def set_objective(self, X, y, reg):
        # Store objective info

    def run(self, n_iter):
        # Run computations for n_iter

    def get_result(self):
        return beta

Flexible API
get_data and set_objective allow compatibility between packages.
n_iter can be replaced with a tolerance or a callback.
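As an illustration of the idea (this is a generic sketch, not benchopt's internals), a solver loop driven by a tolerance instead of a fixed iteration count:

```python
def run_with_tol(grad, x0, step, tol=1e-8, max_iter=10_000):
    # Gradient descent on a 1-D function that stops once the update is
    # smaller than `tol`, instead of running for a fixed n_iter.
    x = x0
    for _ in range(max_iter):
        x_new = x - step * grad(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

The same pattern works with a callback: the driver calls the solver repeatedly and decides itself when the curve has converged.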
benchopt: Making Tedious Tasks Easy
Automating tasks:
One and a half:
Mostly Thomas; Mathurin could revive it if needed.


and more people, but those were the only ones with pictures at hand!