y ~ normal(f(x), sigma)
with a first-order Taylor approximation of f,
f(x) = f(x_0) + df/dx(x_0) * (x - x_0)
f(x) = alpha + beta * (x - x_0).
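As a concrete sanity check, here is a sketch of this first-order approximation; the choices f = sin and x_0 = 1 are illustrative assumptions, not from the text above:

```python
import numpy as np

# Hypothetical example: f = sin, expansion point x_0 = 1.0.
f = np.sin
df = np.cos  # exact derivative of sin

x0 = 1.0
alpha = f(x0)   # baseline value of f at the expansion point
beta = df(x0)   # local gradient at the expansion point

def f_lin(x):
    # First-order Taylor approximation, centered at x0.
    return alpha + beta * (x - x0)

# The approximation error shrinks quadratically as x approaches x0.
for dx in (0.5, 0.05, 0.005):
    err = abs(f(x0 + dx) - f_lin(x0 + dx))
    print(f"dx = {dx:7.3f}  |f - f_lin| = {err:.2e}")
```

In this centered form both parameters have direct interpretations, which is exactly what makes principled priors feasible.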
beta_{nm} * (x_{n} - x_{n,0}) * (x_{m} - x_{m,0}),
that capture the cross derivatives in the Taylor expansion, but we can't forget about the regular quadratic terms!
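A sketch of that second-order expansion in the multivariate case, where the beta_{nm} collect both the cross derivatives and the diagonal quadratic terms; the two-dimensional f, the expansion point, and the hand-computed gradient and Hessian below are all illustrative assumptions:

```python
import numpy as np

# Hypothetical multivariate example: f(x) = exp(x[0]) * sin(x[1]).
def f(x):
    return np.exp(x[0]) * np.sin(x[1])

x0 = np.array([0.5, 1.0])

# Gradient and Hessian at x0, worked out analytically for this f.
grad = np.array([np.exp(x0[0]) * np.sin(x0[1]),
                 np.exp(x0[0]) * np.cos(x0[1])])
hess = np.array([[ np.exp(x0[0]) * np.sin(x0[1]),  np.exp(x0[0]) * np.cos(x0[1])],
                 [ np.exp(x0[0]) * np.cos(x0[1]), -np.exp(x0[0]) * np.sin(x0[1])]])

def f_quad(x):
    d = x - x0
    # beta_{nm} = 1/2 * d^2f / dx_n dx_m (x0): off-diagonal entries are the
    # cross derivatives, diagonal entries the regular quadratic terms.
    return f(x0) + grad @ d + 0.5 * d @ hess @ d

x = x0 + np.array([0.01, -0.02])
print(f(x), f_quad(x))  # agree up to a third-order remainder
```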
In the centered parameterization the intercept is the baseline value of f at the expansion point and the slope is the local gradient. If we instead expand everything out in x,

f(x_0) + df/dx(x_0) * (x - x_0) = alpha + beta * x,

then

alpha = f(x_0) - df/dx(x_0) * x_0
beta = df/dx(x_0).

In this case the intercept becomes much less interpretable, and in particular harder to build principled prior models for!
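A quick numerical check that the two parameterizations agree, with a hypothetical f = log and x_0 = 2:

```python
import numpy as np

# Hypothetical example: f = log, expansion point x_0 = 2.0.
x0 = 2.0
f0 = np.log(x0)
d1 = 1.0 / x0          # df/dx at x0

# Coefficients of the expanded-out form alpha + beta * x.
alpha = f0 - d1 * x0   # note the minus sign: not simply the baseline f(x0)
beta = d1

x = 2.3
centered = f0 + d1 * (x - x0)
expanded = alpha + beta * x
print(centered, expanded)  # the two parameterizations agree exactly
```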
Expanding the second-order approximation the same way gives

f(x_0) + df/dx(x_0) * (x - x_0) + 1/2 * d^2f/dx^2(x_0) * (x - x_0)^2
=
  [f(x_0) - df/dx(x_0) * x_0 + 1/2 * d^2f/dx^2(x_0) * x_0^2]
+ [df/dx(x_0) - d^2f/dx^2(x_0) * x_0] * x
+ 1/2 * d^2f/dx^2(x_0) * x^2.
Yuck, good luck interpreting that.
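Still, we can at least verify numerically that the centered and expanded-out forms describe the same polynomial; here with a hypothetical f = exp, so that every derivative at x_0 equals exp(x_0):

```python
import numpy as np

# Hypothetical example: f = exp, so f, df/dx, d^2f/dx^2 at x0 all equal exp(x0).
x0 = 0.7
f0 = d1 = d2 = np.exp(x0)

# Centered second-order Taylor approximation.
def f_centered(x):
    return f0 + d1 * (x - x0) + 0.5 * d2 * (x - x0) ** 2

# The same polynomial with everything expanded out in powers of x.
a = f0 - d1 * x0 + 0.5 * d2 * x0 ** 2   # "intercept": three tangled terms
b = d1 - d2 * x0                         # "slope": two tangled terms
c = 0.5 * d2                             # quadratic coefficient

def f_expanded(x):
    return a + b * x + c * x ** 2

x = np.linspace(-1.0, 2.0, 7)
print(np.max(np.abs(f_centered(x) - f_expanded(x))))  # zero up to float error
```

The polynomials match, but only the centered coefficients have any direct scientific meaning.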