#sciml #machinelearning in chemical engineering using prior scientific knowledge of chemical processes? New paper: we dive deep into universal differential equation hybrid models and see how well gray boxes can recover the dynamics. arxiv.org/abs/2303.13555 #julialang
To learn these cases, we combined neural networks with known physical dynamics, and discretized with orthogonal collocation on finite elements (OCFEM) to get a stable simulation and estimation process.
We looked into learning reaction functions embedded within diffusion-advection equations. This is where you have spatial data associated with a chemical reaction and generally know some properties of the spatial transport, but need to learn the (nonlinear) reaction dynamics.
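For a flavor of what that looks like in code, here's a minimal sketch. The grid size, coefficients, network architecture, and the simple finite-difference discretization (standing in for the paper's OCFEM) are all illustrative assumptions, not the paper's setup:

```julia
using OrdinaryDiffEq, Lux, Random

# Toy 1D grid; D, v, and the architecture are made up for this sketch.
n, dx = 32, 1 / 32
D, v = 0.01, 0.1

rng = Random.default_rng()
NN = Chain(Dense(1 => 8, tanh), Dense(8 => 1))  # stands in for the unknown reaction r(u)
ps, st = Lux.setup(rng, NN)

function ude!(du, u, p, t)
    for i in 2:(n - 1)
        diff = D * (u[i + 1] - 2u[i] + u[i - 1]) / dx^2  # known diffusion
        adv  = -v * (u[i] - u[i - 1]) / dx               # known advection
        r    = first(NN([u[i]], p, st))[1]               # learned reaction term
        du[i] = diff + adv + r
    end
    du[1] = du[n] = 0.0  # crude frozen boundaries, just for the sketch
end

u0 = exp.(-100 .* (range(0, 1, length = n) .- 0.5) .^ 2)
prob = ODEProblem(ude!, u0, (0.0, 1.0), ps)
sol = solve(prob, Tsit5())  # ps would then be trained against spatial data
```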
We found that in multiple instances the method could extrapolate to non-trivial behaviors outside of the training data.
We paired this with sparse regression to automate the discovery of the missing reaction terms. @MilesCranmer's SymbolicRegression.jl outperformed the sparse regression techniques commonly used with SINDy.
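As a rough sketch of that step (the data, operators, and iteration count below are illustrative assumptions, not the paper's configuration), you can point SymbolicRegression.jl at the network's input/output samples to propose closed-form candidates:

```julia
using SymbolicRegression

# Illustrative only: X holds sampled concentrations, y the learned reaction
# values at those points (faked here with a known polynomial for the demo).
X = reshape(collect(0.0:0.01:1.0), 1, :)  # 1 feature × 101 samples
y = vec(1.5 .* X .- 0.8 .* X .^ 2)

options = SymbolicRegression.Options(binary_operators = [+, -, *])
hof = equation_search(X, y; options = options, niterations = 30)
pareto = calculate_pareto_frontier(hof)  # complexity-vs-loss front to inspect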
In cases where the exact polynomial was not found, we showed that the discovered missing terms matched the first two Taylor-series terms of the original model. So not an exact recovery, but clearly recovering behavior of note!
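To make that concrete with a made-up example (not the paper's actual reaction term): suppose the true missing rate were Michaelis-Menten-like and the regression returned a low-order polynomial.

```latex
% Hypothetical illustration, not the paper's recovered term:
% the true rate and its Taylor expansion at c = 0,
r(c) = \frac{c}{1 + c} = c - c^2 + c^3 - \cdots,
\qquad
% versus a polynomial the regression might return:
\hat{r}(c) = c - c^2
% \hat{r} reproduces the first two Taylor terms of r, so it captures the
% local kinetics even without the exact closed form.
```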
Thus the symbolic regression was finding simplified models of the phenomena! Together, this shows that in some non-trivial chemical engineering cases, autocompleting models via Universal Differential Equations leads to some nice results.
If you want more information on the general method, check out the original UDE paper.
In this, we detail how #julialang's core compute model gives faster code, with a detailed calculation of how the #python interpreter and kernel-launch costs affect simulation performance. It's pretty cool that one can calculate the expected 100x difference with pen and paper.
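Here's the flavor of that back-of-envelope. The numbers below are illustrative assumptions for the sketch, not the paper's measurements:

```julia
# Hypothetical back-of-envelope: if every RHS evaluation in Python pays a
# fixed interpreter/kernel-launch overhead that dwarfs the actual flops,
# the overhead ratio is roughly the expected slowdown.
launch_overhead = 10e-6   # assume ~10 μs per dispatched vectorized kernel
kernel_work     = 0.1e-6  # assume ~0.1 μs of real work for a small ODE RHS
slowdown = (launch_overhead + kernel_work) / kernel_work  # ≈ 101x
```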
Differentiable programming (dP) is great: train neural networks to match anything w/ gradients! ODEs? Neural ODEs. Physics? Yes. Agent-based models? Nope, not differentiable... or are they? Check out our new paper at NeurIPS on Stochastic dP! 🧵
Problem: if you flip a coin with probability p of being heads, how do you write code that takes the derivative with respect to that p? Of course that's not well-defined: the coin gives a 0 or 1, so it cannot have "small changes". Is there a better definition?
Its mean (or in math words, its "expectation") can be differentiable! So let's change the question: is there a form of automatic differentiation that generates a program which directly calculates the derivative of the mean?
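For contrast, here's one classic answer to that question: the score-function (REINFORCE) estimator, a standard baseline and not the method in our paper, applied to d/dp E[f(X)] for X ~ Bernoulli(p):

```julia
# Score-function (REINFORCE) baseline:
#   d/dp E[f(X)] = E[f(X) * ∂/∂p log P(X; p)]  for X ~ Bernoulli(p).
# Unbiased, but often high-variance — part of what motivates better approaches.
score(x, p) = x == 1 ? 1 / p : -1 / (1 - p)  # ∂/∂p log P(X = x; p)

function dmean_estimate(f, p; n = 10^6)
    s = 0.0
    for _ in 1:n
        x = rand() < p ? 1 : 0  # sample the coin flip
        s += f(x) * score(x, p)
    end
    s / n
end

dmean_estimate(identity, 0.3)  # ≈ 1.0, since E[X] = p ⇒ d/dp E[X] = 1
```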