"Guiding Future STEM Leaders through Innovative Research Training" ~ thinkingbeyond.education
Week 14: Neural Ordinary Differential Equations
Khurshid Fayzullayev
1. Solvers
1-1. Euler
1-2. Runge-Kutta4 (RK4)
2. Neural ODEs
2-1. Implementing Neural ODEs
3. Experiments
3-1. Check our Neural ODE
3-2. Spiral dataset
4. Comparison
4-1. Comparison with adaptive solvers
4-2. Comparison with adjoint backpropagation method
0. Import Packages
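A minimal sketch of the packages the rest of this notebook assumes (NumPy, Matplotlib, and PyTorch); the exact list in the original code cell may differ.

```python
# Core numerical and plotting packages, plus PyTorch for the Neural ODE parts.
import math

import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
```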
1. Solvers
A solver applies a numerical method to solve the set of ordinary differential equations that represent the model. There are two kinds of solvers: fixed-step-size solvers and adaptive-step-size solvers.
Fixed-step-size solvers use the same step size from the beginning to the end of the simulation. Generally, a smaller step size increases both the accuracy of the results and the time required to simulate the system.
Adaptive-step-size solvers vary the step size during the simulation. These solvers reduce the step size to increase accuracy at certain events during the simulation of the model, such as rapid state changes, zero-crossing events, etc. They also increase the step size to avoid taking unnecessary steps when the states of the model change slowly.
Euler solver
Euler's method is one of the oldest and simplest algorithms. The core idea is to approximate the solution function step by step using tangent lines.
Here, we are going to write a new function, euler_step, that performs a single Euler update. Please refer to the equation below.
Euler's Method
$$y_{n+1} = y_n + h\, f(t_n, y_n)$$
When $t_{n+1} = t_n + h$ and $y_n \approx y(t_n)$, then $y_{n+1} \approx y(t_{n+1})$.
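A minimal sketch of such a function, assuming the dynamics f is a callable f(t, y) and h is the step size (the names are illustrative, not necessarily the notebook's own):

```python
def euler_step(f, t, y, h):
    """One explicit Euler step: y_{n+1} = y_n + h * f(t_n, y_n)."""
    return y + h * f(t, y)
```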
Runge-Kutta method
The Runge-Kutta method (also known as RK) is one of the most popular methods for solving ordinary differential equations numerically.
Here, we are going to write a new function that performs a single step of the fourth-order Runge-Kutta (RK4) method. Please refer to the equation below.
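The standard fourth-order Runge-Kutta update is

$$
\begin{aligned}
k_1 &= f(t_n,\, y_n),\\
k_2 &= f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2}\,k_1\right),\\
k_3 &= f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2}\,k_2\right),\\
k_4 &= f\!\left(t_n + h,\; y_n + h\,k_3\right),\\
y_{n+1} &= y_n + \tfrac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right).
\end{aligned}
$$

A single-step function sketching this update, under the same assumptions as euler_step above:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta (RK4) step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```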
2. Neural ODEs
In this section, I am going to implement Neural ODEs.
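The actual implementation lives in the notebook's code cells; the sketch below shows one common way to structure it on top of the rk4_step function above. The names ODEFunc and odeint_fixed are illustrative assumptions, not necessarily the notebook's own.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Small network parameterizing the dynamics dy/dt = f_theta(t, y)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, t, y):
        return self.net(y)

def odeint_fixed(f, y0, t, step_fn=rk4_step):
    """Integrate dy/dt = f(t, y) on the time grid t with a fixed-step solver
    (euler_step or rk4_step defined earlier)."""
    ys = [y0]
    y = y0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        y = step_fn(f, t[i], y, h)
        ys.append(y)
    return torch.stack(ys)
```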
3. Experiments
3-1. Check our Neural ODEs
We are going to check our Neural ODE by giving it the known dynamics dy/dt = cos(t). The resulting trajectory should be sin(t).
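A minimal sketch of this check, built on the odeint_fixed/rk4_step sketch above (assumed names, not necessarily the notebook's actual cell):

```python
import math
import torch

t = torch.linspace(0.0, 2 * math.pi, 200)
y0 = torch.zeros(1)

# Known dynamics: dy/dt = cos(t); with y(0) = 0 the exact solution is sin(t).
def cos_dynamics(t, y):
    return torch.cos(t).reshape(1)

y_num = odeint_fixed(cos_dynamics, y0, t)
max_err = (y_num.squeeze() - torch.sin(t)).abs().max().item()
print(f"max |y_numeric - sin(t)| = {max_err:.2e}")
```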
3-2. Spiral dataset
Here, we are going to test our Neural ODE on a toy spiral dataset.
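A sketch of one simple way to build such a toy spiral; the notebook's actual generator and parameters may differ.

```python
import numpy as np

def make_spiral(n_points=500, noise_std=0.02):
    """2-D spiral trajectory: radius grows linearly with the angle."""
    theta = np.linspace(0.0, 4 * np.pi, n_points)
    r = 0.5 + 0.1 * theta
    x = r * np.cos(theta) + noise_std * np.random.randn(n_points)
    y = r * np.sin(theta) + noise_std * np.random.randn(n_points)
    return np.stack([x, y], axis=1)   # shape (n_points, 2)

spiral = make_spiral()
```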
4. Comparison
4-1. Comparison with adaptive solvers
We are going to use an adaptive solver from the torchdiffeq package. The torchdiffeq library provides the main interface odeint, which contains general-purpose algorithms for solving initial value problems (IVPs).
To solve an IVP using the default solver:
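For example (a minimal usage sketch; the toy decay dynamics below is only an illustration):

```python
import torch
from torchdiffeq import odeint

# Toy setup: exponential decay dy/dt = -y, evaluated at 50 time points.
def func(t, y):
    return -y

y0 = torch.tensor([1.0])
t = torch.linspace(0.0, 5.0, 50)

solution = odeint(func, y0, t)                        # default adaptive solver (dopri5)
solution = odeint(func, y0, t, rtol=1e-6, atol=1e-8)  # with explicit error tolerances
```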
Arguments
rtol: optional float64 Tensor specifying an upper bound on relative error
atol: optional float64 Tensor specifying an upper bound on absolute error
4. Comparison
4-2. Comparison with adjoint backpropagation method
We are going to implement adjoint backpropagation using the torchdiffeq package.
Backpropagation through odeint goes through the internals of the solver. Note that this is not numerically stable for all solvers (but should probably be fine with the default dopri5 method). Instead, we encourage the use of the adjoint method explained in [1], which will allow solving with as many steps as necessary due to O(1) memory usage.
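A minimal sketch of switching to the adjoint method: torchdiffeq exposes odeint_adjoint, which requires the dynamics to be an nn.Module (the toy decay module below is only an illustration).

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint

class Decay(nn.Module):
    """Toy dynamics dy/dt = -a * y with a learnable rate a."""
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(1.0))

    def forward(self, t, y):
        return -self.a * y

func = Decay()
y0 = torch.tensor([1.0], requires_grad=True)
t = torch.linspace(0.0, 5.0, 50)

solution = odeint(func, y0, t)   # gradients are computed with the adjoint method
solution.sum().backward()        # memory cost does not grow with the number of solver steps
print(y0.grad, func.a.grad)
```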