#!/usr/bin/env python
# coding: utf-8
# # Tutorial: Reduced Order Modeling with POD-RBF and POD-NN Approaches for Fluid Dynamics
#
# [Open in Colab](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial8/tutorial.ipynb)
# The goal of this tutorial is to demonstrate how to use the **PINA** library to apply reduced-order modeling techniques, as outlined in [1]. These methods share several similarities with machine learning approaches, since they aim to predict the solution of differential equations (often parametric PDEs) in real time.
#
# In particular, we will utilize **Proper Orthogonal Decomposition** (POD) in combination with two different regression techniques: **Radial Basis Function** interpolation (POD-RBF) and **Neural Networks** (POD-NN) [2]. The idea is to reduce the dimensionality of the parametric solution manifold through POD and then approximate it in the reduced space using a regression model (either a neural network or an RBF interpolant). In this example, we'll use a simple multilayer perceptron (MLP) as the regression model, but other architectures can easily be substituted.
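#
# Concretely, given $N$ POD modes $\phi_i$ (an orthonormal basis extracted from the snapshots), the parametric solution is approximated as
#
# $$u(\mu) \approx \sum_{i=1}^{N} a_i(\mu)\, \phi_i,$$
#
# where the modal coefficients $a_i(\mu)$ are the output of the regression model: the MLP in POD-NN, or the RBF interpolant in POD-RBF.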
#
# Let's start with the necessary imports.
# In[1]:
## routine needed to run the notebook on Google Colab
try:
    import google.colab

    IN_COLAB = True
except ImportError:
    IN_COLAB = False

if IN_COLAB:
    get_ipython().system('pip install "pina-mathlab[tutorial]"')
import matplotlib
import matplotlib.pyplot as plt
import torch
import numpy as np
import warnings
from pina import Trainer
from pina.model import FeedForward
from pina.solver import SupervisedSolver
from pina.optim import TorchOptimizer
from pina.problem.zoo import SupervisedProblem
from pina.model.block import PODBlock, RBFBlock
warnings.filterwarnings("ignore")
# We utilize the [Smithers](https://github.com/mathLab/Smithers) library to gather the parametric snapshots. Specifically, we use the `NavierStokesDataset` class, which contains a collection of parametric solutions to the Navier-Stokes equations in a 2D L-shaped domain. The parameter in this case is the inflow velocity.
#
# The dataset comprises 500 snapshots of the velocity fields (along the $x$, $y$ axes, and the magnitude), as well as the pressure fields, along with their corresponding parameter values.
#
# To visually inspect the snapshots, let's also plot the data points alongside the reference solution. This reference solution represents the expected output of our model.
# In[2]:
from smithers.dataset import NavierStokesDataset
dataset = NavierStokesDataset()
fig, axs = plt.subplots(1, 4, figsize=(14, 3))
for ax, p, u in zip(axs, dataset.params[:4], dataset.snapshots["mag(v)"][:4]):
    ax.tricontourf(dataset.triang, u, levels=16)
    ax.set_title(rf"$\mu$ = {p[0]:.2f}")
# The *snapshots*—i.e., the numerical solutions computed for several parameters—and the corresponding parameters are the only data we need to train the model, enabling us to predict the solution for any new test parameter. To properly validate the accuracy, we will split the 500 snapshots into the training dataset (90% of the original data) and the testing dataset (the remaining 10%) inside the `Trainer`.
#
# It is now time to define the problem!
# In[3]:
u = torch.tensor(dataset.snapshots["mag(v)"]).float()
p = torch.tensor(dataset.params).float()
problem = SupervisedProblem(input_=p, output_=u)
# We can then build a `POD-NN` model (using an MLP architecture as approximation) and compare it with a `POD-RBF` model (using a Radial Basis Function interpolation as approximation).
#
# ## POD-NN reduced order model
# Let's build the `PODNN` class
# In[4]:
class PODNN(torch.nn.Module):
    def __init__(self, pod_rank, layers, func):
        super().__init__()

        self.pod = PODBlock(pod_rank)
        self.nn = FeedForward(
            input_dimensions=1,
            output_dimensions=pod_rank,
            layers=layers,
            func=func,
        )

    def forward(self, x):
        coefficients = self.nn(x)
        return self.pod.expand(coefficients)

    def fit_pod(self, x):
        self.pod.fit(x)
# We highlight that the POD modes are computed directly via singular value decomposition (SVD) of the snapshot data, not trained by backpropagation. Only the weights of the MLP are updated during the optimization loop.
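# To make this concrete, here is a minimal, self-contained sketch of extracting a POD basis via SVD from a synthetic snapshot matrix. The variable names below are illustrative, not part of the PINA API:

```python
import torch

torch.manual_seed(0)
snapshots = torch.randn(100, 50)  # 100 snapshots, 50 spatial dofs each (rows = snapshots)

# SVD of the snapshot matrix: the leading right-singular vectors are the POD modes
_, _, Vh = torch.linalg.svd(snapshots, full_matrices=False)
rank = 10
basis = Vh[:rank]  # (rank, 50): orthonormal POD basis

# project onto the modes (reduce) and map back to full space (expand)
coeffs = snapshots @ basis.T     # (100, rank) modal coefficients
reconstruction = coeffs @ basis  # (100, 50) rank-10 approximation
```

# In POD-NN, `basis` stays fixed after the SVD; the network only learns to map parameters to the modal coefficients.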
# In[5]:
pod_nn = PODNN(pod_rank=20, layers=[10, 10, 10], func=torch.nn.Tanh)
pod_nn_stokes = SupervisedSolver(
    problem=problem,
    model=pod_nn,
    optimizer=TorchOptimizer(torch.optim.Adam, lr=0.0001),
    use_lt=False,
)
# Before training, we need to fit the POD basis on the training dataset. This can easily be done in **PINA** as well:
# In[ ]:
trainer = Trainer(
    solver=pod_nn_stokes,
    max_epochs=1000,
    batch_size=None,
    accelerator="cpu",
    train_size=0.9,
    val_size=0.0,
    test_size=0.1,
)
# fit the pod basis
trainer.data_module.setup("fit") # set up the dataset
train_data = trainer.data_module.train_dataset.get_all_data()
x_train = train_data["data"]["target"] # extract data for training
pod_nn.fit_pod(x=x_train)
# now train
trainer.train()
# Done! Now that the computationally expensive part is over, we can load the model in the future to infer new parameters (simply by loading the checkpoint file automatically created by `Lightning`) or test its performance. We measure the relative error on both the training and test datasets.
# In[7]:
# extract train and test data
trainer.data_module.setup("test") # set up the dataset
p_train = trainer.data_module.train_dataset.conditions_dict["data"]["input"]
u_train = trainer.data_module.train_dataset.conditions_dict["data"]["target"]
p_test = trainer.data_module.test_dataset.conditions_dict["data"]["input"]
u_test = trainer.data_module.test_dataset.conditions_dict["data"]["target"]
# compute statistics
u_test_nn = pod_nn_stokes(p_test)
u_train_nn = pod_nn_stokes(p_train)
relative_error_train = torch.norm(u_train_nn - u_train) / torch.norm(u_train)
relative_error_test = torch.norm(u_test_nn - u_test) / torch.norm(u_test)
print("Error summary for POD-NN model:")
print(f" Train: {relative_error_train.item():e}")
print(f" Test: {relative_error_test.item():e}")
# ## POD-RBF Reduced Order Model
#
# Next, we define the model we want to use, incorporating the `PODBlock` and `RBFBlock` objects.
# In[8]:
class PODRBF(torch.nn.Module):
    def __init__(self, pod_rank, rbf_kernel):
        super().__init__()

        self.pod = PODBlock(pod_rank)
        self.rbf = RBFBlock(kernel=rbf_kernel)

    def forward(self, x):
        coefficients = self.rbf(x)
        return self.pod.expand(coefficients)

    def fit(self, p, x):
        self.pod.fit(x)
        self.rbf.fit(p, self.pod.reduce(x))
# We can now fit the model and use it to predict the required field for unseen parameter values. Note that this model does not require a `Trainer` since it does not include any neural networks or learnable parameters.
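# For intuition, the regression step above can be sketched with SciPy's `RBFInterpolator` standing in for PINA's `RBFBlock` (an illustrative substitute, not the tutorial's actual implementation): the interpolant maps parameters to reduced coefficients and is exact at the training points.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
params = rng.uniform(0.0, 1.0, size=(50, 1))                  # training parameters
coeffs = np.hstack([np.sin(2 * np.pi * params), params**2])   # stand-in reduced coefficients

# thin-plate-spline interpolant, same kernel used below
rbf = RBFInterpolator(params, coeffs, kernel="thin_plate_spline")

new_params = np.array([[0.25], [0.75]])
predicted = rbf(new_params)  # (2, 2) predicted coefficients for unseen parameters
```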
# In[9]:
pod_rbf = PODRBF(pod_rank=20, rbf_kernel="thin_plate_spline")
pod_rbf.fit(p_train, u_train)
# Compute errors
# In[10]:
u_test_rbf = pod_rbf(p_test)
u_train_rbf = pod_rbf(p_train)
relative_error_train = torch.norm(u_train_rbf - u_train) / torch.norm(u_train)
relative_error_test = torch.norm(u_test_rbf - u_test) / torch.norm(u_test)
print("Error summary for POD-RBF model:")
print(f" Train: {relative_error_train.item():e}")
print(f" Test: {relative_error_test.item():e}")
# ## POD-RBF vs POD-NN
#
# We can compare the solutions predicted by the `POD-RBF` and the `POD-NN` models with the original reference solution. By plotting these predicted solutions against the true solution, we can observe how each model performs.
#
# ### Observations:
# - **POD-RBF**: The solution predicted by the `POD-RBF` model typically offers a smooth approximation for the parametric solution, as RBF interpolation is well-suited for capturing smooth variations.
# - **POD-NN**: The `POD-NN` model, while more flexible due to the neural network architecture, may show some discrepancies—especially for low velocities or in regions where the training data is sparse. However, with longer training times and adjustments in the network architecture, we can improve the predictions.
# In[11]:
idx = torch.randint(0, len(u_test), (4,))
u_idx_rbf = pod_rbf(p_test[idx])
u_idx_nn = pod_nn_stokes(p_test[idx])

fig, axs = plt.subplots(4, 5, figsize=(14, 9))

relative_error_rbf = np.abs(u_test[idx] - u_idx_rbf.detach())
relative_error_rbf = np.where(
    u_test[idx] < 1e-7, 1e-7, relative_error_rbf / u_test[idx]
)

relative_error_nn = np.abs(u_test[idx] - u_idx_nn.detach())
relative_error_nn = np.where(
    u_test[idx] < 1e-7, 1e-7, relative_error_nn / u_test[idx]
)

axs[0, 0].set_title("Real Snapshots")
axs[0, 1].set_title("POD-RBF")
axs[0, 2].set_title("POD-NN")
axs[0, 3].set_title("Error POD-RBF")
axs[0, 4].set_title("Error POD-NN")

for i, (idx_, rbf_, nn_, rbf_err_, nn_err_) in enumerate(
    zip(idx, u_idx_rbf, u_idx_nn, relative_error_rbf, relative_error_nn)
):
    cm = axs[i, 0].tricontourf(dataset.triang, u_test[idx_].flatten())  # Truth
    plt.colorbar(cm, ax=axs[i, 0])
    cm = axs[i, 1].tricontourf(dataset.triang, rbf_.detach())  # POD-RBF prediction
    plt.colorbar(cm, ax=axs[i, 1])
    cm = axs[i, 2].tricontourf(dataset.triang, nn_.detach())  # POD-NN prediction
    plt.colorbar(cm, ax=axs[i, 2])
    cm = axs[i, 3].tripcolor(
        dataset.triang, rbf_err_, norm=matplotlib.colors.LogNorm()
    )  # Error for POD-RBF
    plt.colorbar(cm, ax=axs[i, 3])
    cm = axs[i, 4].tripcolor(
        dataset.triang, nn_err_, norm=matplotlib.colors.LogNorm()
    )  # Error for POD-NN
    plt.colorbar(cm, ax=axs[i, 4])

plt.show()
# ## What's Next?
#
# Congratulations on completing this tutorial using **PINA** to apply reduced order modeling techniques with **POD-RBF** and **POD-NN**! There are several directions you can explore next:
#
# 1. **Extend to More Complex Problems**: Try using more complex parametric domains or PDEs. For example, you can explore Navier-Stokes equations in 3D or more complex boundary conditions.
#
# 2. **Combine POD with Deep Learning Techniques**: Investigate hybrid methods, such as combining **POD-NN** with convolutional layers or recurrent layers, to handle time-dependent problems or more complex spatial dependencies.
#
# 3. **Evaluate Performance on Larger Datasets**: Work with larger datasets to assess how well these methods scale. You may want to test on datasets from simulations or real-world problems.
#
# 4. **Hybrid Models with Physics Informed Networks (PINN)**: Integrate **POD** models with PINN frameworks to include physics-based regularization in your model and improve predictions for more complex scenarios, such as turbulent fluid flow.
#
# 5. **...and many more!**: The potential applications of reduced order models are vast, ranging from material science simulations to real-time predictions in engineering applications.
#
# For more information and advanced tutorials, refer to the [PINA Documentation](https://mathlab.github.io/PINA/).
#
# ### References
# 1. Rozza G., Stabile G., Ballarin F. (2022). Advanced Reduced Order Methods and Applications in Computational Fluid Dynamics, Society for Industrial and Applied Mathematics.
# 2. Hesthaven, J. S., & Ubbiali, S. (2018). Non-intrusive reduced order modeling of nonlinear problems using neural networks. Journal of Computational Physics, 363, 55-78.