
Robyn: Continuous & Semi-Automated MMM

September 6, 2025
4 min read

What is Robyn?

Robyn is an open-source Marketing Mix Modeling (MMM) package developed by Meta Marketing Science. It leverages AI/ML algorithms to improve marketing optimization and decision-making.

  • Designed for large-scale, granular datasets, making it particularly useful for advertisers with many independent variables and complex marketing environments.
  • Through AI-powered automation, Robyn supports efficient, near real-time marketing decisions.
  • Originally written in R, but a Python API is also available (Robyn's R functions exposed over HTTP via plumber), enabling use across different environments.
Note (How to use the Robyn API in Python)

Overall workflow:

[Robyn hosted on R] → [API exposed via R/plumber] → [Python client]
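
Under the hood, the Python side is just an HTTP client talking to a local plumber server. A minimal sketch of what a client helper like robyn_api might look like, assuming the server runs on localhost:9999 (the endpoint URL and helper details here are illustrative, not Robyn's official API):

import json
from typing import Optional

import requests

API_URL = "http://127.0.0.1:9999/{}"  # assumed local plumber endpoint

def robyn_api(endpoint: str, payload: Optional[dict] = None) -> dict:
    # POST the payload to the named Robyn endpoint and decode the JSON reply
    response = requests.post(API_URL.format(endpoint), data=payload)
    response.raise_for_status()
    return json.loads(response.content)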

Prerequisites:

  • Install R
  • Install R packages: Robyn, plumber, and supporting dependencies
  • Install Python
  • Install Nevergrad (Meta’s gradient-free optimization library)

Steps:

# Install R (macOS example, via Homebrew)
brew install r
# Install Robyn (R package from GitHub)
Rscript -e 'install.packages("remotes")'
Rscript -e 'remotes::install_github("facebookexperimental/Robyn/R")'
# Install required dependencies
Rscript -e 'install.packages(c("arrow", "dplyr", "plumber", "ggplot2", "jsonlite", "tibble"))'
# Follow setup instructions for Nevergrad:
# https://github.com/facebookexperimental/Robyn/blob/main/demo/install_nevergrad.R
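
With the packages in place, the plumber API must be running before the Python client can reach it. A minimal sketch of launching it from Python, assuming a plumber script (here a hypothetical robyn_api.R) that wraps the Robyn functions, on the same port as the client sketch above:

import subprocess

# Start the R plumber server that exposes Robyn over HTTP.
# "robyn_api.R" is a hypothetical plumber script; port 9999 is an assumption.
server = subprocess.Popen([
    "Rscript", "-e",
    'pr <- plumber::plumb("robyn_api.R"); pr$run(port = 9999)',
])
# ... call the API from Python while the server is up ...
# server.terminate()  # shut the server down when finished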

Key Features

  • Automation & AI Integration Uses multi-objective evolutionary algorithms for hyperparameter optimization, automating key modeling processes.

  • Time-Series Decomposition Decomposes trends and seasonality in time-series data, essential for long-term marketing strategy.

  • Ridge Regression Employs regularized regression for model fitting, improving stability and interpretability in high-dimensional datasets.

  • Gradient-Based Optimizer Automatically optimizes budget allocation across multiple marketing channels based on ROI objectives.
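
The alphas, gammas, and thetas hyperparameter ranges set in Code Example 2 below bound Robyn's two media transforms: geometric adstock (carry-over, controlled by theta) and Hill saturation (diminishing returns, controlled by alpha and gamma). Here is a minimal sketch of both transforms; note it simplifies how Robyn scales gamma to the spend range:

import numpy as np

def geometric_adstock(x, theta):
    # each period carries over a fraction theta of the accumulated past effect
    out, carry = np.zeros_like(x, dtype=float), 0.0
    for t, spend in enumerate(x):
        carry = spend + theta * carry
        out[t] = carry
    return out

def hill_saturation(x, alpha, gamma):
    # S-shaped response: alpha sets the steepness, gamma the inflection point
    # (scaled here to the max of x; Robyn derives it from the spend range)
    gamma_trans = gamma * x.max()
    return x**alpha / (x**alpha + gamma_trans**alpha)

spend = np.array([100.0, 0.0, 0.0, 50.0, 50.0])
print(hill_saturation(geometric_adstock(spend, theta=0.3), alpha=2.0, gamma=0.5))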


Code Example 1

Here’s a minimal example showing how to use Nevergrad with Pyomo models, similar to how Robyn integrates optimization into MMM workflows:

Note (What is Nevergrad?)

Nevergrad is an open-source Python library developed by Meta for derivative-free (gradient-free) optimization. Robyn relies on it for hyperparameter optimization, which is why it must be installed even when Robyn is driven from Python.

import pyomo.environ as pyo
import nevergrad as ng
import nevergrad.functions.pyomo as ng_pyomo

# Knapsack data: items, benefit values, weights, and the capacity limit
A = ['hammer', 'wrench', 'screwdriver', 'towel']
b = {'hammer': 8, 'wrench': 3, 'screwdriver': 6, 'towel': 11}  # benefit values
w = {'hammer': 5, 'wrench': 7, 'screwdriver': 4, 'towel': 3}  # weights
w_max = 14  # capacity constraint

# Define the Pyomo model
model = pyo.ConcreteModel()
model.x = pyo.Var(A, domain=pyo.Binary)  # binary decision variables

# Objective function: maximize benefit (Nevergrad minimizes, so minimize negative benefit)
model.obj = pyo.Objective(expr=-sum(b[i] * model.x[i] for i in A))

# Constraint: total weight ≤ capacity
model.constr = pyo.Constraint(expr=sum(w[i] * model.x[i] for i in A) <= w_max)

# Convert the Pyomo model into a Nevergrad-compatible function
func = ng_pyomo.Pyomo(model)

# Choose an optimizer (a BFGS-CMA hybrid here) and set the evaluation budget
optimizer = ng.optimizers.BFGSCMA(parametrization=func.parametrization, budget=100)

# Run the optimization and inspect the recommended solution
recommend = optimizer.minimize(func.function)
print(recommend.kwargs)
# Example output: {'hammer': 1.0, 'wrench': 0.0, 'screwdriver': 1.0, 'towel': 1.0}
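
Since Nevergrad only minimizes, the objective is written as the negative of the total benefit. The recommended solution packs the hammer, screwdriver, and towel (weight 5 + 4 + 3 = 12 ≤ 14) for a total benefit of 8 + 6 + 11 = 25; adding the wrench would exceed the capacity.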

Code Example 2

Here’s a more complete example showing how to run a full Robyn MMM analysis using the Python API:

import json

from robyn import (  # helper functions from Robyn's Python API
    robyn_api,
    pandas_builder,
    asSerialisedFeather,
    plot_outputgraphs,
    load_onepager,
)

robyn_directory = "~/Desktop"  # output folder for plots and exported files (adjust as needed)
create_files = True  # whether robyn_outputs should export files to robyn_directory

# load the simulated demo dataset and the Prophet holiday table
dt_simulated_weekly = pandas_builder(robyn_api("dt_simulated_weekly"))
dt_prophet_holidays = pandas_builder(robyn_api("dt_prophet_holidays"))
# specify input variables
inputArgs = {
    "date_var": "DATE",  # date format must be "2020-01-01"
    "dep_var": "revenue",  # there should be only one dependent variable
    "dep_var_type": "revenue",  # "revenue" (ROI) or "conversion" (CPA)
    "prophet_vars": ["trend", "season", "holiday"],  # "trend", "season", "weekday" & "holiday"
    "prophet_country": "DE",  # input country code. Check: dt_prophet_holidays
    "context_vars": ["competitor_sales_B", "events"],  # e.g. competitors, discount, unemployment etc.
    "paid_media_spends": ["tv_S", "ooh_S", "search_S", "social_S"],  # media spend variables
    "paid_media_vars": ["tv_exposure", "ooh_exposure", "search_exposure", "social_exposure"],
    # paid_media_vars must have the same order as paid_media_spends. Use media exposure metrics like
    # impressions, GRP etc. If not applicable, use spend instead.
    "organic_vars": ["newsletter"],  # marketing activity without media spend
    # "factor_vars": ["events"],  # force variables in context_vars or organic_vars to be categorical
    "window_start": "2016-01-01",
    "window_end": "2018-12-31",
    "adstock": "geometric"  # geometric, weibull_cdf or weibull_pdf
}
# build the payload for robyn_inputs()
payload = {
    'dt_input': asSerialisedFeather(dt_simulated_weekly),
    'dt_holidays': asSerialisedFeather(dt_prophet_holidays),
    'jsonInputArgs': json.dumps(inputArgs)
}
InputCollect = robyn_api('robyn_inputs', payload=payload)
# define hyperparameters
payload = {
    'adstock': InputCollect['adstock'],
    'all_media': json.dumps(InputCollect['all_media'])
}
hyper_names = robyn_api('hyper_names', payload=payload)  # lists the expected hyperparameter names

# one alphas/gammas/thetas triple per variable in all_media
# (here: tv_S, ooh_S, search_S, social_S, newsletter);
# gammas must stay within [0.3, 1] and thetas below 1
inputArgs = {
    "hyper_parameters": {
        "tv_S_alphas": [0.5, 3],
        "tv_S_gammas": [0.3, 1],
        "tv_S_thetas": [0.1, 0.4],
        "ooh_S_alphas": [0.5, 3],
        "ooh_S_gammas": [0.3, 1],
        "ooh_S_thetas": [0.1, 0.4],
        "search_S_alphas": [0.5, 3],
        "search_S_gammas": [0.3, 1],
        "search_S_thetas": [0.1, 0.4],
        "social_S_alphas": [0.5, 3],
        "social_S_gammas": [0.3, 1],
        "social_S_thetas": [0.1, 0.4],
        "newsletter_alphas": [0.5, 3],
        "newsletter_gammas": [0.3, 1],
        "newsletter_thetas": [0.1, 0.4],
        "train_size": [0.5, 0.8]
    }
}
payload = {
    'InputCollect': json.dumps(InputCollect),
    'jsonInputArgs': json.dumps(inputArgs)
}
InputCollect = robyn_api('robyn_inputs', payload=payload)
# build initial models
runArgs = {
    "iterations": 2000,  # nevergrad iterations per trial
    "trials": 5,  # nevergrad trials
    "ts_validation": True,
    "add_penalty_factor": False
}
payload = {
    'InputCollect': json.dumps(InputCollect),
    'jsonRunArgs': json.dumps(runArgs)
}
OutputModels = robyn_api('robyn_run', payload=payload)

# inspect convergence and time-series validation plots
plot_outputgraphs(OutputModels, graphtype='moo_distrb_plot', max_size=(1000, 1500))
plot_outputgraphs(OutputModels, graphtype='moo_cloud_plot', max_size=(1000, 1500))
plot_outputgraphs(OutputModels, graphtype='ts_validation_plot', max_size=(1000, 1500))
# evaluate models - robyn_outputs
outputArgs = {
    "pareto_fronts": "auto",
    "csv_out": "pareto",
    "cluster": True,
    "export": create_files,
    "plot_folder": robyn_directory,
    "plot_pareto": create_files
}
payload = {
    'InputCollect': json.dumps(InputCollect),
    'OutputModels': json.dumps(OutputModels),
    'jsonOutputArgs': json.dumps(outputArgs)
}
OutputCollect = robyn_api('robyn_outputs', payload=payload)

# list the clustered candidate model IDs
for i in OutputCollect['clusters']['models']:
    print(i['solID'])

# select & save model one-pagers
load_onepager(top_pareto=True, sol='all', InputJson=InputCollect, OutputJson=OutputCollect, path=robyn_directory)
# budget allocation
InputCollect['paid_media_spends']  # ['tv_S', 'ooh_S', 'search_S', 'social_S']
select_model = '2_143_11'  # one of the solIDs printed above
allocatorArgs = {
    'select_model': select_model,
    # 'date_range': InputCollect['date_range'],
    # 'total_budget': InputCollect['total_budget'],
    'channel_constr_low': 0.7,
    'channel_constr_up': 1.2,
    'channel_constr_multiplier': 3,
    'scenario': 'max_response'
}
payload = {
    'InputCollect': json.dumps(InputCollect),
    'OutputCollect': json.dumps(OutputCollect),
    'jsonAllocatorArgs': json.dumps(allocatorArgs),
    'dpi': 100,
    'width': 15,
    'height': 15
}
allocator = robyn_api('robyn_allocator', payload=payload)
# plot the graphs again...