
Bálint Kaszás and George Haller, 'Capturing the Edge of Chaos as a Spectral Submanifold in Pipe Flows', Journal of Fluid Mechanics, submitted, Figure 5. Keywords: spectral submanifold, reduced-order model, edge of chaos, pipe flow

Image: ubuntu2204
Kernel: Python 3 (system-wide)


Fitting the SSM-reduced model

In this notebook we use the training data located in the folder data/ to obtain an SSM-reduced model. The data was generated by Openpipeflow, developed by Ashley Willis (see https://openpipeflow.org/).

We use the SSMLearn library to obtain a parametrization of the two-dimensional mixed-mode SSM connecting the lower branch traveling wave with the laminar state. The restriction of the dynamics onto this SSM then yields a two-dimensional SSM-reduced model.

  • We load several trajectories, each containing a time series of the energy input ($I$) and energy dissipation ($D$) values, as computed by Openpipeflow. We also load the time-dependent velocity fields $\mathbf{u}(r, \varphi, z, t)$ in a compressed format: instead of every Fourier coefficient returned by Openpipeflow, we retain only the leading 100 PCA components of the full dataset.

  • We then determine the optimal polynomial order and the regularization parameter $\alpha$ via cross validation.

  • We visualize the two-dimensional SSM-reduced model as a vector field and explicitly calculate the stable manifold of the lower branch traveling wave. Since the edge of chaos intersects the mixed-mode SSM transversely, this intersection is precisely the stable manifold of the lower branch traveling wave within the SSM-reduced model.

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
def transform(x):
    # square-root coordinates used for the reduced state
    return np.sqrt(1 + x) - 1

def untransform(x):
    # inverse of transform
    return (x + 1)**2 - 1
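A quick sanity check that the two maps are indeed inverses of each other (a minimal sketch):

# sanity check: untransform inverts transform
x = np.linspace(0.0, 1.0, 5)
assert np.allclose(untransform(transform(x)), x)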

Load the data from the pre-compiled dictionary

This file contains

  • Reduced coordinates of the trajectories (as computed by Openpipeflow)

  • Compressed version of the velocity fields. The compression is the projection onto the first 100 PCA modes

training_data = np.load('data/Re2400_train.npy', allow_pickle=True).item()
n_trajs = 8
reduced_state = []
full_state = []
for i in range(n_trajs):
    reduced_state.append(training_data['reduced_state_%s' % i][:])
    full_state.append(training_data['full_state_%s' % i][:])
time_array = []
for i in range(n_trajs):
    time = np.arange(0, reduced_state[i].shape[1], 1)*0.1
    time_array.append(time)
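A quick way to inspect what was loaded (a sketch; the exact shapes depend on the dataset):

# inspect the loaded arrays and the remaining dictionary entries
for i in range(n_trajs):
    print('trajectory %d: reduced %s, full %s' % (i, reduced_state[i].shape, full_state[i].shape))
print('available keys:', sorted(training_data.keys()))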
from ssmlearnpy import SSMLearn
INFO 2023-05-30 20:25:17 utils NumExpr defaulting to 4 threads.
from ssmlearnpy.reduced_dynamics.advector import advect

We initialize an SSMLearn object from the training trajectories.

ssm = SSMLearn(
    t = time_array,
    x = full_state,
    reduced_coordinates = reduced_state,
    ssm_dim = 2,
    dynamics_type = 'flow',
)

Fit the reduced dynamics in the form

$$\frac{d}{dt}\begin{pmatrix}J \\ K \end{pmatrix} = \mathbf{f}(J,K) = \sum_{n+m\leq M_r} \begin{pmatrix} R^{(J)}_{nm} K^m J^n \\ R^{(K)}_{nm} K^m J^n\end{pmatrix}$$

We enforce that the lower branch traveling wave remains a fixed point of the reduced dynamics. Its coordinates are $J = 0.101187$, $K = 0.1011201$.

lb_jk_coords = [0.101187, 0.1011201]

To determine the coefficients $R^{(J)}_{nm}$ and $R^{(K)}_{nm}$, we minimize the cost function

$$L = \sum_{i}\left[ \left(\dot{J} - \sum_{n,m} R^{(J)}_{nm}K^mJ^n\right)^2 +\left(\dot{K} - \sum_{n,m} R^{(K)}_{nm}K^mJ^n\right)^2 + \alpha \sum_{n,m}\left((R^{(J)}_{nm})^2 + (R^{(K)}_{nm})^2 \right)\right],$$

where $\alpha$ is a ridge-type regularization parameter.

The parametrization, i.e. the mapping $\mathbf{W}$

with

$$\mathbf{u}(t) = \mathbf{W}(J(t), K(t))$$

is also determined via ridge regression.
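SSMLearn carries out both regressions internally (via get_parametrization and get_reduced_dynamics below). As an illustrative sketch of this kind of fit — not SSMLearn's actual implementation, and with the hypothetical helper name fit_polynomial_ridge — one can combine polynomial features with ridge regression. Note that the fixed-point constraint on the reduced dynamics requires a constrained solver, which plain Ridge does not provide.

# Illustrative sketch only -- NOT SSMLearn's implementation.
# Ridge regression of polynomial features of (J, K) onto a target,
# e.g. the parametrization W: (J, K) -> u.
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

def fit_polynomial_ridge(jk, targets, degree, alpha):
    # jk: (n_samples, 2) array of (J, K) values
    # targets: (n_samples, n_outputs) array, e.g. PCA coefficients of u
    features = PolynomialFeatures(degree=degree, include_bias=False).fit_transform(jk)
    model = Ridge(alpha=alpha, fit_intercept=False)
    model.fit(features, targets)
    return model

# e.g. W fitted on the first training trajectory:
# W_model = fit_polynomial_ridge(reduced_state[0].T, full_state[0].T, degree=5, alpha=1e-5)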

Determining the optimal model order by cross validation

The maximal polynomial order $M_r$ of the regression and the parameter $\alpha$ can be determined by cross-validating against a trajectory withheld from the training set.

Given the true trajectory $\mathbf{u}_{true}$, we define the prediction error as

$$\text{Error} = \frac{1}{N}\sum_{i=1}^{N} \left\Vert \mathbf{u}_{true}(t_i) - \mathbf{W} \circ F^{t_i}\left(J_{true}(0), K_{true}(0)\right)\right\Vert,$$

i.e. the average error between the image of the predicted reduced trajectory under the parametrization $\mathbf{W}$ and the true trajectory, where $F^{t}$ denotes the flow map of the SSM-reduced dynamics and $N$ is the number of snapshots.

def crossval(degree, alpha, true_reduced_state, true_full_state, indexStart):
    # fit the two mappings:
    ssm.get_reduced_dynamics(poly_degree = degree, alpha = alpha, constraints=[[lb_jk_coords],[[0.,0.]]])
    ssm.get_parametrization(poly_degree = degree, alpha = alpha, do_scaling = False)
    # helper functions for solve_ivp
    predictor = lambda x: ssm.reduced_dynamics.predict(x)
    def tocallfunDynamics(t, x, dyn):
        dy = dyn(np.array(x).reshape(1,-1))
        return np.squeeze(np.array(dy))
    # prediction from the initial condition of the withheld trajectory
    ic = true_reduced_state[:,indexStart]
    teval = np.arange(indexStart, true_reduced_state.shape[1], 1)*0.1
    tspan = [teval[0], teval[-1]]
    sol = solve_ivp(tocallfunDynamics, tspan, ic, t_eval = teval, method = 'DOP853', rtol = 1e-6, atol = 1e-6, args=(predictor, ))
    if not sol.success:
        return np.nan
    validation_pred = ssm.decoder.predict(sol.y.T).T
    err = true_full_state[:, indexStart:] - validation_pred
    errnorm = np.linalg.norm(err, axis = 0)
    return np.mean(errnorm)
import logging
# disable logging in the cross-validation loop
logging.disable(logging.CRITICAL)
orders = np.arange(2, 10, 1)
alphas = np.logspace(-12, -1, 20)
indexStart = 150
# suppress ill-conditioned warnings;
# we select the best model by cross-validation anyway
import warnings
warnings.filterwarnings('ignore')
errors = []
for o in orders:
    errors_for_alpha = []
    for a in alphas:
        errors_for_alpha.append(crossval(o, a, transform(training_data['validation_parameters']), training_data['validation_full_state'], indexStart))
    errors.append(errors_for_alpha)

After calculating the errors, we can visualize them over a grid of $\alpha$ values and polynomial orders.

errors = np.array(errors)
O, A = np.meshgrid(orders, alphas)
%matplotlib inline
f = plt.figure(figsize = (8, 4))
ax = f.add_subplot(121, projection = '3d')
ax2 = f.add_subplot(122)
ax.plot_surface(O, A, errors.T, alpha = 0.3)
ax.set_title('Validation error')
ax.set_xlabel('Polynomial order')
ax.set_ylabel(r'$\alpha$')
ax.set_zlabel('Error')
# orders[3] == 5, so index 3 selects the order-5 slice
ax.plot(alphas*0 + 5, alphas, errors[3, :], '-', linewidth = 5, label = 'Order 5')
ax2.loglog(alphas, errors[3, :], '.')
ax2.set_title('Order 5 validation error')
ax2.set_xlabel(r'$\alpha$')
ax2.set_ylabel('Error')
ax2.plot(alphas[-8], errors[3, -8], '.', markersize = 10, c='Crimson', label = r'$\alpha$ = 1e-5')
ax.legend()
ax2.legend()
f.tight_layout()
Image in a Jupyter notebook

Based on the $\alpha$-dependence of the validation error, we select $\alpha$ = 1e-5, a relatively large value that provides strong regularization.
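The selection can also be made programmatically from the errors grid computed above (a minimal sketch; np.nanargmin skips the failed integrations):

# locate the (order, alpha) pair with the smallest validation error
i_best, j_best = np.unravel_index(np.nanargmin(errors), errors.shape)
print('best order: %d, best alpha: %.1e' % (orders[i_best], alphas[j_best]))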

Predictions of the optimal SSM-reduced model

First fit the parametrization and then the reduced dynamics.

ssm.get_parametrization(poly_degree = 5, alpha = 1e-5)
ssm.get_reduced_dynamics(poly_degree = 5, alpha = 1e-5, constraints=[[lb_jk_coords],[[0.,0.]]])

We can check that the constraint is satisfied:

ssm.reduced_dynamics.predict(np.array(lb_jk_coords).reshape(1,-1))
array([[ 2.92799639e-14, -9.85154666e-13]])
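The residual is at machine-precision level; an explicit tolerance check could read (a minimal sketch):

# verify the fixed-point constraint to a tight tolerance
residual = ssm.reduced_dynamics.predict(np.array(lb_jk_coords).reshape(1, -1))
assert np.allclose(residual, 0.0, atol=1e-10)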

Visualizing the prediction error along the validation trajectory

def tocallfunDynamics(t, x, dyn):
    dy = dyn(np.array(x).reshape(1,-1))
    return np.squeeze(np.array(dy))
predictor = lambda x: ssm.reduced_dynamics.predict(x)
from scipy.integrate import solve_ivp

Visualizing the validation prediction

This is the trajectory we used to select $\alpha$ and the polynomial order. We now compute the SSM-predicted trajectory by integrating its initial reduced coordinates forward in time.

ic = transform(training_data['validation_parameters'][:,indexStart]) # need to take the square root
dt = 0.1 # Openpipeflow saves the I-D coordinates 10 times per unit time
teval = np.arange(indexStart, training_data['validation_parameters'].shape[1], 1)*dt
tspan = [teval[0], teval[-1]]
sol = solve_ivp(tocallfunDynamics, tspan, ic, t_eval = teval, method = 'DOP853', rtol = 1e-6, atol = 1e-12, args=(predictor, ))
validation_pred = ssm.decoder.predict(sol.y.T).T

We now plot the time evolution of the predicted reduced coordinate and the time-dependent relative prediction error.

# maximal value of the norm of the true velocity field
tonorm = np.linalg.norm(training_data['validation_full_state'], axis = 0).max()
err = training_data['validation_full_state'][:, indexStart:] - validation_pred
f = plt.figure(figsize = (8, 3))
ax = f.add_subplot(121)
ax2 = f.add_subplot(122)
ax.plot(teval, transform(training_data['validation_parameters'][0, indexStart:]), '-', c='black', label = 'True trajectory')
ax.plot(teval, sol.y[0,:], '--', c='crimson', label = 'Predicted')
ax2.plot(teval, 100*np.linalg.norm(err, axis = 0)/tonorm, '-')
ax.legend()
ax.set_xlabel('Time')
ax2.set_xlabel('Time')
ax.set_ylabel('$J(t)$')
ax2.set_ylabel('Relative error [%]')
f.tight_layout()
ax.set_title('Validation trajectory')
Text(0.5, 1.0, 'Validation trajectory')
Image in a Jupyter notebook

Reduced phase portrait

Finally, we visualize the phase portrait generated by the vector field

$$\frac{d}{dt}\begin{pmatrix}J \\ K \end{pmatrix} = \mathbf{f}(J,K) = \sum_{n+m\leq M_r} \begin{pmatrix} R^{(J)}_{nm} K^m J^n \\ R^{(K)}_{nm} K^m J^n\end{pmatrix},$$

identified above, i.e. the SSM-reduced model.

linecolor = [0.5111196874841155, 0.573802174897223, 0.1737010491405684]
plt.figure()
x_linear = np.linspace(0.0, 0.13, 150)
y_linear = np.linspace(0.0, 0.13, 150)
X, Y = np.meshgrid(x_linear, y_linear)
points_to_eval = np.vstack((X.ravel(), Y.ravel()))
U = ssm.reduced_dynamics.predict(points_to_eval.T) # call the reduced vector field on the grid of points
UU = U[:,0].reshape(X.shape)
VV = U[:,1].reshape(Y.shape)
plt.streamplot(X, Y, UU, VV, density = 3, linewidth = 0.9, color='grey')
# draw the training trajectories:
flag = True
for x in ssm.emb_data['reduced_coordinates']:
    if flag:
        plt.plot(x[0,:], x[1,:], '-', c=linecolor, label = 'Training data')
    else:
        plt.plot(x[0,:], x[1,:], '-', c=linecolor)
    flag = False
plt.xlabel('$J$')
plt.ylabel('$K$')
plt.xlim(-0.001, 0.13)
plt.ylim(-0.001, 0.13)
plt.plot(0, 0, 'o', c='black', label = 'Laminar state')
plt.plot(*lb_jk_coords, 'o', c='crimson', label = 'Lower branch traveling wave')
plt.legend()
<matplotlib.legend.Legend at 0x7f267d1c7a00>
Image in a Jupyter notebook

Computing the stable manifold of the lower branch traveling wave

To find the stable manifold of the lower branch traveling wave, we rely on information about its spectrum, as computed from the SSM-reduced model.

import sympy as sy
from sympy import latex
from IPython.display import display_latex
from scipy.optimize import root

# helper function to display a computed quantity as LaTeX
def disp(idx, symObj):
    eqn = '$$' + idx + ' ' + latex(symObj) + '$$'
    display_latex(eqn, raw=True)
    return
predictor = lambda x: ssm.reduced_dynamics.predict(x)

def tocallfunDynamics(t, x, dyn):
    dy = dyn(np.array(x).reshape(1,-1))
    return np.squeeze(np.array(dy))

def fixedPt(x, dyn):
    xx = np.array(x)
    odefun = lambda t, x: tocallfunDynamics(t, x, dyn)
    return odefun(0, xx)

We have constrained the SSM-reduced model to have a fixed point at the $(J, K)$ coordinates of the lower branch traveling wave. The following calculation verifies that this constraint is satisfied: we use scipy's built-in root finder to locate the fixed point.

res = root(fixedPt, [0.101185, 0.101185], method = 'lm', tol = 1e-8, args = (predictor, ))

Given the precise location of the lower branch traveling wave in the SSM-reduced model, we can compute its spectrum. The stable manifold can then be well approximated by integrating a small vector tangent to the stable subspace of the Jacobian.

def computeSpectrum(fixedPoint, delta, dyn):
    # explicit central finite-difference approximation of the Jacobian
    xpdx = fixedPoint + np.array([delta, 0])
    xmdx = fixedPoint - np.array([delta, 0])
    xpdy = fixedPoint + np.array([0, delta])
    xmdy = fixedPoint - np.array([0, delta])
    fxpdx = dyn(xpdx)[0]
    fxmdx = dyn(xmdx)[0]
    fxpdy = dyn(xpdy)[0]
    fxmdy = dyn(xmdy)[0]
    dfxdx = (fxpdx[0] - fxmdx[0])/(2*delta)
    dfxdy = (fxpdy[0] - fxmdy[0])/(2*delta)
    dfydx = (fxpdx[1] - fxmdx[1])/(2*delta)
    dfydy = (fxpdy[1] - fxmdy[1])/(2*delta)
    A = np.array([[dfxdx, dfxdy], [dfydx, dfydy]])
    w, v = np.linalg.eig(A)
    return A, w, v
fixedPoint = np.array(res.x).reshape(1,-1)
Df, w, v = computeSpectrum(fixedPoint, 1e-6, predictor)
disp('D\mathbf{f}(J_{LB}, K_{LB})=', sy.Matrix(Df))
$$D\mathbf{f}(J_{LB}, K_{LB}) = \left[\begin{matrix}-0.0242595899881213 & 0.0259222212191639\\-0.0231544808187989 & 0.0334948928246148\end{matrix}\right]$$

This matrix has the eigenvalues and eigenvectors

$$D\mathbf{f}\, \mathbf{v}_{\pm} = \lambda_{\pm} \mathbf{v}_{\pm},$$

with

disp('\mathbf{v}_{-}=', sy.Matrix(v[:,0]))
disp('\mathbf{v}_{+}=', sy.Matrix(v[:,1]))
disp('\lambda_{-}=', w[0])
disp('\lambda_{+}=', w[1])
$$\mathbf{v}_{-} = \left[\begin{matrix}-0.885657953323221\\-0.464338227712649\end{matrix}\right]$$
$$\mathbf{v}_{+} = \left[\begin{matrix}-0.506200261033513\\-0.862415964445002\end{matrix}\right]$$
$$\lambda_{-} = -0.0106689275725499$$
$$\lambda_{+} = 0.0199042304090434$$

To approximate its stable manifold, we integrate a small segment of the stable subspace in backward time, i.e. the segment spanned by the two points

$$\begin{pmatrix} J_{LB} \\ K_{LB} \end{pmatrix} \pm \varepsilon\, \mathbf{v}_{-},$$

with $\varepsilon \approx 10^{-6}$.

ic = res.x + 1e-6*v[:,0]
ic2 = res.x - 1e-6*v[:,0]
teval = np.linspace(0, -1200, 5000)
solStab1 = solve_ivp(tocallfunDynamics, [0, -1200], ic, method = 'DOP853', t_eval = teval, rtol = 1e-12, atol = 1e-12, args=(predictor, ))
solStab2 = solve_ivp(tocallfunDynamics, [0, -1200], ic2, method = 'DOP853', t_eval = teval, rtol = 1e-12, atol = 1e-12, args=(predictor, ))
f = plt.figure(figsize = (12, 5))
ax = f.add_subplot(121)
ax2 = f.add_subplot(122)
ax.plot(solStab2.t, solStab2.y[0,:], label = ' - Initial condition')
ax.plot(solStab1.t, solStab1.y[0,:], label = ' + Initial condition')
ax.set_xlabel('Time')
ax.set_ylabel('$J$')
ax2.plot(solStab2.y[0,:], solStab2.y[1,:], label = ' - Initial condition')
ax2.plot(solStab1.y[0,:], solStab1.y[1,:], label = ' + Initial condition')
ax2.set_xlabel('$J$')
ax2.set_ylabel('$K$')
ax.legend()
<matplotlib.legend.Legend at 0x7f2675d1d0f0>
Image in a Jupyter notebook
linecolor = [0.5111196874841155, 0.573802174897223, 0.1737010491405684]
# The predicted stable manifold quickly leaves the domain covered by the training data,
# so we specify indices along the integrated trajectories up to which the manifold is displayed
index_high_confidence = 3500
index_lower_confidence = 4000
plt.figure(figsize = (8, 6))
plt.streamplot(X, Y, UU, VV, density = 3, linewidth = 0.4, color='grey')
plt.plot(solStab1.y[0,:index_lower_confidence], solStab1.y[1,:index_lower_confidence], '--', linewidth = 1.5, c='black')
plt.plot(solStab2.y[0,:index_lower_confidence], solStab2.y[1,:index_lower_confidence], '--', linewidth = 1.5, c='black')
plt.plot(solStab1.y[0,:index_high_confidence], solStab1.y[1,:index_high_confidence], '-', linewidth = 1.5, c='black', label = 'Stable manifold (predicted)')
plt.plot(solStab2.y[0,:index_high_confidence], solStab2.y[1,:index_high_confidence], '-', linewidth = 1.5, c='black')
flag = True
for x in ssm.emb_data['reduced_coordinates']:
    if flag:
        # extend the first trajectory to the origin (laminar state) for display
        xx = np.hstack((x, np.array([0,0]).reshape(-1,1)))
        plt.plot(xx[0,:], xx[1,:], '-', c=linecolor, linewidth = 2, label = 'Training data')
    else:
        plt.plot(x[0,:], x[1,:], '-', c=linecolor, linewidth = 2)
    flag = False
plt.plot(0, 0, 'o', c='black', label = 'Laminar flow')
plt.xlabel('$J$')
plt.ylabel('$K$')
plt.xlim(-0.001, 0.13)
plt.ylim(-0.001, 0.13)
plt.plot(*lb_jk_coords, 'o', c='crimson', label = 'Lower branch traveling wave')
plt.legend(loc = 'upper left')
<matplotlib.legend.Legend at 0x7f26724bfca0>
Image in a Jupyter notebook

We may then save the SSM-reduced model for later use:

from joblib import load, dump
!mkdir saved_models
dump(ssm, 'saved_models/ssm_model.model')
['saved_models/ssm_model.model']
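In a later session, the model can be restored with joblib's load, e.g.:

# reload the fitted SSM-reduced model
from joblib import load
ssm = load('saved_models/ssm_model.model')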