Burgers’ equation is a simplified form of the Navier-Stokes equation used to understand fundamental fluid behaviour.
If we consider one-dimensional flow and keep only the diffusion term on the right-hand side, we get Burgers’ equation.
This way we assume that the flow is driven purely by the fluid’s bulk motion and viscosity.
For small values of the viscosity, Burgers’ equation can lead to shock formation that is notoriously hard to resolve with classical numerical methods.
We will try to solve it using PINNs (Physics-Informed Neural Networks).
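For reference, the setup solved here (with the viscosity, initial condition, and boundary conditions matching the constants used in the code below) is:

$$\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = \frac{0.01}{\pi}\,\frac{\partial^{2} u}{\partial x^{2}}, \qquad x \in [-1, 1],\; t \in [0, 1],$$

$$u(0, x) = -\sin(\pi x), \qquad u(t, -1) = u(t, +1) = 0.$$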
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from matplotlib import cm
from scipy.stats import qmc

tf.keras.backend.set_floatx('float64')
First, we need to generate some random data points.
# number of boundary and initial data points
# Nd = number_of_ic_points + number_of_bc1_points + number_of_bc2_points
number_of_ic_points = 50
number_of_bc1_points = 25
number_of_bc2_points = 25

# Latin Hypercube Sampling (LHS) engine; used to sample random points in the domain
engine = qmc.LatinHypercube(d=1)
# temporal data points
t_d = engine.random(n=number_of_bc1_points + number_of_bc2_points)  # for boundary conditions
temp = np.zeros([number_of_ic_points, 1])  # for initial condition t=0
t_d = np.append(temp, t_d, axis=0)
t_d.shape
would give (100, 1), which means we now have 100 data points for time.
# spatial data points
x_d = engine.random(n=number_of_ic_points)
x_d = 2 * (x_d - 0.5)  # scale the values from [0, 1] to [-1, 1]
temp1 = -1 * np.ones([number_of_bc1_points, 1])  # for BC1; x = -1
temp2 = +1 * np.ones([number_of_bc2_points, 1])  # for BC2; x = +1
x_d = np.append(x_d, temp1, axis=0)
x_d = np.append(x_d, temp2, axis=0)
x_d.shape
would also give (100, 1), which means we have 100 data points for x.
A scatter plot of the generated data (t_d against x_d) looks like this:
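A quick way to reproduce that scatter (a sketch using the t_d and x_d arrays defined above; the marker style is my choice):

# scatter of the initial and boundary data points in the (t, x) plane
plt.scatter(t_d, x_d, marker='x', c='k')
plt.xlabel('t')
plt.ylabel('x')
plt.show()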
The array y_d stores the output of the boundary and initial conditions.
# initialize an array for the outputs of the boundary and initial conditions
y_d = np.zeros(x_d.shape)

# update it according to the initial condition
y_d[:number_of_ic_points] = -np.sin(np.pi * x_d[:number_of_ic_points])

# all boundary conditions are also set to zero
y_d[number_of_ic_points : number_of_ic_points + number_of_bc1_points] = 0
y_d[number_of_ic_points + number_of_bc1_points : number_of_ic_points + number_of_bc1_points + number_of_bc2_points] = 0
Before we move on, we need some additional points against which the neural network's predictions are checked: collocation points.
These points act as “checkpoints” for enforcing the constraints of the equation.
# number of collocation points for the physics loss function
Nc = 10000

# LHS for collocation points
engine = qmc.LatinHypercube(d=2)
points = engine.random(n=Nc)

# set the second column between -1 and +1; we will assign it to x in the next step
points[:, 1] = 2 * (points[:, 1] - 0.5)
t_c = np.expand_dims(points[:, 0], axis=1)
x_c = np.expand_dims(points[:, 1], axis=1)
When d=2 in LatinHypercube, we get points as pairs, e.g.
(t, x) = [(0.2, 0.1), (0.5, 0.7), ...]
where the first column is used for t and the second for x.
np.expand_dims reshapes the collocation points into 2-D arrays of shape (Nc, 1).
Many TensorFlow functions expect 2-D arrays as input.
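As a toy illustration of that reshaping (hypothetical values):

a = np.array([0.2, 0.5, 0.7])    # shape (3,)
b = np.expand_dims(a, axis=1)    # shape (3, 1)
print(a.shape, b.shape)          # (3,) (3, 1)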
# convert all data and collocation points to tf.Tensor
x_d, t_d, y_d, x_c, t_c = map(tf.convert_to_tensor, [x_d, t_d, y_d, x_c, t_c])
Now, we can specify the design of our neural network.
### model design
neuron_per_layer = 20

# activation function for all hidden layers
actfn = "tanh"

# input layer
input_layer = tf.keras.layers.Input(shape=(2,))
num_hidden_layers = 8

# start with the input layer
hidden = input_layer

# create the hidden layers in a loop
for i in range(num_hidden_layers):
    hidden = tf.keras.layers.Dense(neuron_per_layer, activation=actfn)(hidden)

# output layer
output_layer = tf.keras.layers.Dense(1, activation=None)(hidden)

model = tf.keras.Model(input_layer, output_layer)
model.summary()
@tf.function
def u(t, x):
    u = model(tf.concat([t, x], axis=1))
    return u
We will approximate u(t,x) with the neural network. Since I used a concatenated input of shape (2,) in the network architecture, I also need to concatenate the inputs inside my u(t,x) function.
Now, we can apply the physics-informed loss function.
# this loss function is used for the collocation points
@tf.function
def F(t, x):
    u0 = u(t, x)
    u_t = tf.gradients(u0, t)[0]
    u_x = tf.gradients(u0, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    F = u_t + u0 * u_x - (0.01 / np.pi) * u_xx
    return tf.reduce_mean(tf.square(F))
where the residual F(t, x) = ∂u/∂t + u ∂u/∂x − (0.01/π) ∂²u/∂x² should be driven to zero at the collocation points.
The second loss function is for the boundary and initial conditions.
@tf.function
def mse(y, y_pred):
    return tf.reduce_mean(tf.square(y - y_pred))
The model computes the physics-informed loss (L1) and the MSE loss (L2).
Total loss = L1 + L2.
The gradients of the loss with respect to the model parameters are calculated.
The optimiser (Adam) updates the model parameters to minimise the loss.
epochs = 15000
loss_list = []
opt = tf.keras.optimizers.Adam(learning_rate=5e-4)

for epoch in range(epochs):
    with tf.GradientTape() as tape:
        # model output/prediction
        y_pred = u(t_d, x_d)
        # physics-informed loss for the collocation points
        L1 = F(t_c, x_c)
        # MSE loss for the data points
        L2 = mse(y_d, y_pred)
        loss = L1 + L2
    # compute gradients
    g = tape.gradient(loss, model.trainable_weights)
    loss_list.append(loss)
    # apply gradients
    opt.apply_gradients(zip(g, model.trainable_weights))
The plot of the loss against epochs:
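A minimal sketch of that plot (using the loss_list collected in the loop above; converting the tensors to floats and using a log scale on the y-axis are my choices):

plt.plot([float(l) for l in loss_list])
plt.xlabel('epoch')
plt.ylabel('total loss (L1 + L2)')
plt.yscale('log')  # the loss typically spans several orders of magnitude
plt.show()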
Later, I realised that using the L-BFGS optimiser instead of Adam would likely lead to better convergence. However, L-BFGS is not native to TensorFlow and I have not yet found a way to implement it.
After some plotting code, the output is:
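As a sketch of what such plotting code could look like (evaluating the trained network u(t, x) on a regular grid; the grid resolution and colormap are assumptions):

# evaluate the trained network on a regular (t, x) grid
t_flat = np.linspace(0, 1, 100)
x_flat = np.linspace(-1, 1, 256)
T, X = np.meshgrid(t_flat, x_flat)
u_pred = u(tf.convert_to_tensor(T.reshape(-1, 1)),
           tf.convert_to_tensor(X.reshape(-1, 1)))
U = np.reshape(u_pred, T.shape)

plt.contourf(T, X, U, levels=100, cmap=cm.rainbow)
plt.colorbar(label='u(t, x)')
plt.xlabel('t')
plt.ylabel('x')
plt.show()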
From
I found the exact solution of Burgers’ equation.
Soon, I will try to explore the effect of using the L-BFGS optimiser.