Machine Learning To Predict Aerodynamic Stall
response of the airfoil pressure distribution to changes in the angle of attack. After a sensi-
tivity analysis on the learning infrastructure, we investigate the latent space identified by the
autoencoder targeting extreme compression rates, i.e. very low-dimensional reconstructions.
We also propose a strategy to use the decoder to generate new synthetic airfoil geometries and
aerodynamic solutions by interpolation and extrapolation in the latent representation learned
by the autoencoder.
I. Introduction
Experimental and computational techniques to investigate aerodynamic performance have matured in the last decades
providing increasingly large amounts of quantitative information. These extensive datasets and the recent success of
machine learning (ML) techniques in a variety of applications have spurred interest in developing data-driven techniques
for fluid mechanics. Artificial neural networks and other supervised ML algorithms have demonstrated
promising potential to improve turbulence models [1–4], to accelerate shape optimization [5, 6] and to investigate flow
control strategies [7, 8].
In this work we investigate the ability of an unsupervised ML strategy, convolutional autoencoders, to predict
aerodynamic characteristics and specifically the stall of classic wing cross-sections.
Airfoil stall is a strongly non-linear phenomenon corresponding to the loss of lift force and a primary design
consideration for airplanes and rotorcraft. Fig. 1 illustrates the lift curves (variation of the lift coefficient 𝐶𝑙 as a function of the angle of attack 𝛼) for the Boeing VR12 airfoil in the steady (1a) and unsteady (1b) regimes, from the experiments of Matalanis et al. [9]. The figure also shows the capability of CFD simulations, as obtained by the present authors (SU2 RANS and URANS solver [10]) and by Matalanis et al. (CFL3D), to predict these phenomena.
The stall condition corresponds to the maximum lift. For fixed-wing aircraft at low angles of attack, classical aerodynamic theories predict a lift coefficient 𝐶𝑙 that varies linearly with 𝛼; for 𝛼 > 10◦, on the other hand, a strongly non-linear behavior is observed, with the lift decreasing beyond the stall angle mainly due to the presence of turbulent separated flow on the upper surface of the airfoil. The aerodynamics of rotating-wing aircraft is even more complex because of the periodic change of the wing position with respect to the incoming flow. In this case, a sinusoidal variation of the angle of attack in time (pitching airfoil) corresponds to an unsteady flow and the maximum-lift condition is referred to as dynamic stall. In Fig. 1b a pitching airfoil with a reduced frequency of 𝑘 = 0.05 and 𝛼 ∈ [0◦, 20◦] is reported for the same VR12 airfoil. Dynamic stall occurs at higher angles of attack than its steady counterpart, with flow separation persisting while the angle of attack decreases (hysteresis effect).
The agreement between the numerical simulations and the experiments in the figure is quite satisfactory. The small shift between numerical and experimental results is very likely due to an imperfect angle-of-attack correction of the wind tunnel data. The present limits of RANS and URANS methods appear, however, in the post-stall and deep-stall conditions, which are dominated by massively separated flow.
The static and dynamic stall characteristics play a critical role in defining the performance of aircraft, helicopters and
wind turbines. Experimental studies are typically limited by the ability to achieve flight conditions in laboratory settings;
numerical simulations, on the other hand, require the representation of the thin turbulent boundary layer developing
on the airfoil surfaces, and several complex physical processes such as unsteadiness, laminar-turbulent transition and
flow separation. The corresponding predictions are computationally expensive and extensive research continues to be
∗ PhD student, Department of Industrial Engineering, University of Naples Federico II.
† Associate Professor, Department of Industrial Engineering, University of Naples Federico II.
‡ Professor, Department of Mechanical Engineering, Stanford University.
devoted to improving and assessing these simulations. A summary of the state of the art for static stall is provided by the AIAA High-Lift Prediction Workshop [11].
As mentioned earlier, in this work we concentrate on static stall predictions and approach the problem using a
machine learning technique.
(a) Static stall (b) Dynamic stall, 𝑘 = 0.07, 𝛼 ∈ [0◦ , 20◦ ]
Fig. 1 VR12 airfoil lift curves, 𝑀∞ = 0.3, 𝑅𝑒∞ = 2.6 × 10⁶. Comparison between the CFD simulations obtained by the present authors using SU2, the CFD results of Matalanis et al. using CFL3D, and the experimental data [9].
Autoencoders (AEs) are unsupervised deep-learning algorithms whose fundamental purpose is data compression, i.e. reducing the dimensionality of the input data [12]. The architecture of an autoencoder consists of two main elements, the encoder and the decoder. Given an input dataset, an AE constructs (learns) an equivalent low-dimensional representation, termed the latent space, using the encoder portion of the network; the decoder then rebuilds the input from the latent space. A simplified scheme of an autoencoder is reported in Fig. 2 and summarized mathematically as:

𝑽 = 𝑒(𝒙), 𝒙ˆ = 𝑑(𝑽) = 𝑑(𝑒(𝒙)) = 𝑓 (𝒙), (1)

where 𝑒() is the encoder function, 𝑑() is the decoder function, 𝑓 () is the complete encoder+decoder transformation, 𝑽 is the vector of latent variables (also referred to as the latent space), and 𝒙 and 𝒙ˆ are respectively the input and its reconstruction, with

𝒙, 𝒙ˆ ∈ R^(𝑛×𝑚×𝑐) ; 𝑽 ∈ R^𝑝 ,

where 𝑛 × 𝑚 × 𝑐 is the dimension of each input sample and 𝑝 is the dimension of the latent space.
In a convolutional autoencoder, at least one of the layers of the network performs a convolution operation (denoted with ∗), which, in discrete form, is given by:
𝑠(𝑡) = (𝑥 ∗ 𝑤)(𝑡) = Σ_{𝑎=−∞}^{+∞} 𝑥(𝑎) 𝑤(𝑡 − 𝑎). (2)
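As a concrete illustration of Eq. (2), the short NumPy sketch below evaluates the discrete convolution of a signal with a 3-tap smoothing kernel; the numerical values are purely illustrative and not taken from the present datasets.

```python
# Minimal numerical illustration of Eq. (2): discrete convolution of a short
# signal x(a) with a 3-tap kernel w. Values are illustrative only.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])  # input signal x(a)
w = np.array([0.25, 0.5, 0.25])                    # kernel w

# np.convolve evaluates s(t) = sum_a x(a) w(t - a) over the finite support of x and w
s = np.convolve(x, w, mode="same")
print(s)  # smoothed signal, same length as x
```

In a convolutional layer the kernel weights 𝑤 are not prescribed but learned during training, and the operation is applied in two dimensions over the input mappings described below.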
Convolutional autoencoders have already been used for aerodynamic predictions. For instance, Bhatnagar et al. [13] showed their capability to reproduce the flow around airfoils with different shapes, angles of attack and Reynolds numbers. In their work they used a signed distance function (SDF) as input to the encoder and provided the velocity and pressure fields as output of the decoder. In this way it is possible to obtain the flow-field around a new geometry in real time by using only geometrical information (the SDF). The same methodology was adopted by Tangsali et al. [14], who used a much richer training database (more than 11,000 CFD simulations) to demonstrate the generalizability of this approach across a wide range of airfoil geometries, Reynolds numbers and Mach numbers.
Rather than focusing on overall accuracy, Agostini [15] concentrated on the interpretation of the latent space for the flow around a cylinder, showing that a low-dimensional dynamical model (in the latent space) can be derived by applying a clustering algorithm.
The performance of autoencoders in predicting the fluid flow around airfoils shown in [14] appears quite promising. The present approach, however, is quite different: instead of constructing the autoencoder to generate the flow given a geometrical representation of the airfoil, we trained the autoencoder to reproduce the flow-field itself (as sketched in Fig. 2, input and output are the same) and then focused on the latent space representation to generate new airfoils and flow-fields using only the decoder.
The article is organized as follows: first we describe the construction of the training dataset; then we analyze the latent variables of the autoencoder trained in the linear regime (airfoil operating in the linear part of the lift curve) for both image and raw-data inputs. We also report a sensitivity analysis of the autoencoder architecture, hyperparameters and regularization method, and an investigation of the latent space for different values of its dimension to assess interpretability. Finally, we investigate how to use the latent space, through the decoder only, to generate new synthetic airfoils and flow-fields by interpolation and extrapolation beyond the trained latent space.
Fig. 3 𝑖 𝑗 mapping.
The SU2 computations were carried out using a structured O-mesh (as reported in Fig. 3); the field variables are organized in matrices using the 𝑖 and 𝑗 indexes of the mesh. The 𝑖 index is a curvilinear coordinate along the airfoil surface and corresponds to the horizontal axis of the encoded mapping, while the 𝑗 index tracks the radial direction away from the airfoil and corresponds to the vertical axis of the encoded mapping. This input representation also has the advantage of amplifying the boundary layer, since 𝑗 does not follow the mesh clustering at the airfoil surface (see Fig. 3); this in turn enables us to highlight the boundary layer during training. In summary, the input to the convolutional autoencoder consists of 2D matrices (𝑖 𝑗-mappings), each with 5 channels (𝑐𝑝, 𝜔, 𝑀, 𝑥 and 𝑦), with the variables scaled between 0 and 1. The datasets are composed of either images (RGB channels) or raw solutions (matrices of floating-point values). Therefore, the number of input channels is 15 for the images (3 RGB × 5 variables) and 5 for the raw data (1 channel × 5 variables).
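A minimal sketch of how one raw-data sample could be assembled is given below; the function name, the use of NumPy/PyTorch and the per-channel min-max scaling are assumptions consistent with the description above, not code from the original implementation.

```python
# Hedged sketch: stack the five O-mesh fields (cp, omega, M, x, y) of one RANS
# solution into a single (5, n_j, n_i) tensor with each channel scaled to [0, 1].
import numpy as np
import torch

def build_sample(cp, omega, mach, x, y):
    """Stack five (n_j, n_i) fields into a (5, n_j, n_i) tensor scaled to [0, 1]."""
    channels = []
    for field in (cp, omega, mach, x, y):
        fmin, fmax = field.min(), field.max()
        channels.append((field - fmin) / (fmax - fmin + 1e-12))  # min-max scaling
    return torch.tensor(np.stack(channels), dtype=torch.float32)

# Example with random placeholders shaped like the 170 x 512 ij-mapping used here.
fields = [np.random.rand(170, 512) for _ in range(5)]
sample = build_sample(*fields)
print(sample.shape)  # torch.Size([5, 170, 512])
```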
A. Architecture
As a preliminary step of the present investigation we carried out an extensive analysis to assess the effect of the network architecture (number of layers, activation functions, etc.), the training strategy and the related hyperparameters. As a trade-off between ease of training and accuracy, we focused on the following structure (a minimal code sketch follows the list):
• Encoder:
- 4 convolutional layers.
- 2 linear layers.
- ReLU activation functions.
• Latent space dimension: 𝑝 = 1, 2, 3 .
• Decoder:
- 2 linear layers.
- 4 transposed convolutional layers.
- ReLU activation functions; Sigmoid for the last layer.
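The PyTorch sketch below illustrates one possible realization of this architecture; the channel counts, kernel sizes, strides and hidden-layer widths are assumptions, since the list above only fixes the number and type of layers, the activation functions and the latent dimension.

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    """Sketch of the convolutional autoencoder described in the text: a 4-conv /
    2-linear encoder with ReLU activations, a latent space of dimension p, and a
    mirrored 2-linear / 4-transposed-conv decoder ending with a Sigmoid.
    Layer widths and strides are illustrative assumptions."""

    def __init__(self, in_channels=5, p=3):
        super().__init__()
        ch = [in_channels, 16, 32, 64, 128]
        # Encoder: 4 strided convolutions followed by 2 linear layers.
        conv = []
        for cin, cout in zip(ch[:-1], ch[1:]):
            conv += [nn.Conv2d(cin, cout, kernel_size=3, stride=2, padding=1), nn.ReLU()]
        self.encoder_conv = nn.Sequential(*conv)
        # For a 170 x 512 input, four stride-2 convolutions give an 11 x 32 feature map.
        self.feat_shape = (ch[-1], 11, 32)
        flat = ch[-1] * 11 * 32
        self.encoder_fc = nn.Sequential(nn.Linear(flat, 256), nn.ReLU(), nn.Linear(256, p))
        # Decoder: 2 linear layers followed by 4 transposed convolutions.
        self.decoder_fc = nn.Sequential(nn.Linear(p, 256), nn.ReLU(),
                                        nn.Linear(256, flat), nn.ReLU())
        # output_padding values are chosen so that a 170 x 512 input is reconstructed exactly.
        self.decoder_conv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=(0, 1)), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=(0, 1)), nn.ReLU(),
            nn.ConvTranspose2d(16, in_channels, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),  # outputs in [0, 1], matching the scaled input channels
        )

    def encode(self, x):
        return self.encoder_fc(self.encoder_conv(x).flatten(1))

    def decode(self, v):
        h = self.decoder_fc(v).view(-1, *self.feat_shape)
        return self.decoder_conv(h)

    def forward(self, x):
        return self.decode(self.encode(x))
```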
We trained the autoencoder using a completely randomized dataset with a batch size equal to 2, the Adam optimizer and the Mean Squared Error (MSE) as the loss function. The algorithm is implemented using the PyTorch library [16] and trained on a single GeForce GTX Titan X GPU.
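A corresponding training-loop sketch, consistent with this setup (batch size 2, Adam optimizer, MSE reconstruction loss) and based on the ConvAE sketch above, is shown below; the learning rate, number of epochs and placeholder data are assumptions.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the dataset of scaled ij-mapping samples (randomized order, batch size 2).
data = torch.rand(32, 5, 170, 512)
loader = DataLoader(TensorDataset(data), batch_size=2, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = ConvAE(in_channels=5, p=3).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate is an assumption
criterion = torch.nn.MSELoss()

for epoch in range(100):
    total = 0.0
    for (batch,) in loader:
        batch = batch.to(device)
        optimizer.zero_grad()
        loss = criterion(model(batch), batch)  # reconstruction error (input == target)
        loss.backward()
        optimizer.step()
        total += loss.item()
    print(f"epoch {epoch:3d}  MSE {total / len(loader):.3e}")
```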
The latent space dimension 𝑝 is the most important hyperparameter of an autoencoder. By increasing 𝑝 it is possible to reach a given tolerance in the training process with fewer epochs. However, a high-dimensional latent space can lead to overfitting and, ultimately, poor performance on new, unseen cases. For our purposes, another critical design goal is interpretability: a very high-dimensional latent space limits our ability to extract physical understanding from the latent variables. In the results sections we focus on extremely low dimensions (𝑝 ≤ 3), and in many cases we achieved error tolerances that are comparable to richer latent representations.
Fig. 4 Comparison of the training convergence for 𝑝 = 1, 2 and 3 using the non-linear dataset.
As an example, in Fig. 4 we report the convergence of the training procedure for the same autoencoder and dataset
(the non-linear regime) for 𝑝 = 1, 2 and 3. The results illustrate that 𝑝 = 2 and 𝑝 = 3 reach a reasonably close level of
accuracy in the training, while 𝑝 = 1 is clearly insufficient in this case.
In Fig. 5 we compare the convergence of the training procedure for the two datasets (linear and non-linear) using the same autoencoder with 𝑝 = 3. The two cases show the same rate of decay of the loss function. The comparison is carried out using the same training parameters (e.g. learning rate) for both the linear and the non-linear case, without any tuning to speed up the training process.
Fig. 5 Comparison of the training convergence for the non-linear and linear datasets (𝑝 = 3).
In our specific case, 2 parameters vary in the database: the airfoil thickness and the angle of attack. For this reason, 2 latent variables are in principle necessary and sufficient to describe our dataset. For our analysis we decided to use 3 latent variables, which is a good compromise between latent space interpretability and training speed.
in the angle-of-attack.
(a) Image, contour levels: 50 (b) Image, contour levels: 500 (c) Image, contour levels: 5000 (d) Raw data
Fig. 6 Latent variables extracted by the AE (normalized between 0 and 1). Linear case: NACA 6412.
Fig. 7 shows the same sensitivity analysis carried out in the non-linear regime. As before, we notice a sensitivity of the latent space to the number of contour levels. Moreover, the raw data in input produce latent variables that are linear at low angles of attack, consistently with what was shown previously. All the results reported in what follows are obtained using raw data as input.
(a) Image, contour levels: 50 (b) Image, contour levels: 500 (c) Image, contour levels: 5000 (d) Raw data
Fig. 7 Latent variables provided by the AE (scaled between 0 and 1). Non-linear case: NACA 0014.
C. Regularization
As described by Newson et al. [17], different regularization methods can lead to different behavior of the latent variables and different reconstruction performance of the autoencoder. We studied the sensitivity of the present algorithm to 𝐿2-norm regularization applied to the encoder and/or decoder weights and to the latent variables. We found that 𝐿2-norm regularization applied only to the encoder weights leads to a tidy and compact latent space which is amenable to interpretation. As a comparison of the various types of regularization, we report the latent variables obtained using the non-linear-regime dataset and 𝑝 = 3 (Fig. 8): the results are not vastly different, although it is clear that regularizing the encoder weights leads to the most compact latent space.
(a) No regularization (b) Latent space regularization (c) Encoder weights regularization
Fig. 8 Comparison of latent spaces for 𝑝 = 3 changing the regularization technique. Autoencoder trained with
the non-linear data-set.
The latent space regularization in Fig. 8b is obtained by applying a penalization on the 𝐿2 norm of the latent variables, while the encoder weights regularization in Fig. 8c penalizes the 𝐿2 norm of the weights of the encoder.
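A hedged sketch of how the two penalties compared in Fig. 8 could be added to the reconstruction loss is given below, reusing the ConvAE sketch introduced earlier; the penalty coefficients and the mode names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

lambda_latent, lambda_enc = 1e-4, 1e-4  # penalty coefficients (assumptions)

def regularized_loss(model, batch, mode="encoder_weights"):
    """MSE reconstruction loss plus an optional L2 penalty on either the latent
    variables (Fig. 8b) or the encoder weights only (Fig. 8c)."""
    v = model.encode(batch)
    loss = F.mse_loss(model.decode(v), batch)
    if mode == "latent":
        loss = loss + lambda_latent * v.pow(2).sum(dim=1).mean()
    elif mode == "encoder_weights":
        enc_params = list(model.encoder_conv.parameters()) + list(model.encoder_fc.parameters())
        loss = loss + lambda_enc * sum(p.pow(2).sum() for p in enc_params)
    return loss
```

A similar effect for the encoder-weights variant can be obtained by passing only the encoder parameters to the optimizer with a non-zero weight_decay, while keeping the decoder parameters in a second parameter group with weight_decay set to zero.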
Fig. 9 𝑝 = 1, 2, 3 latent spaces learned by the convolutional autoencoder trained with the non-linear dataset.
Fig. 9a shows that 𝑝 = 1 is not sufficient to describe the entire dataset; for 𝛼 < 10◦ , one value of the latent variable
corresponds to multiple airfoils. However, this case illustrates an important finding that connects the latent variables to
the physics of the phenomenon. The autoencoder infers that symmetrical airfoils have the same behaviour at low angles of attack, while in the non-linear regime the thickness plays an important, differentiating role. This is confirmed by the
analysis of the lift curves extracted from the RANS solutions (Fig. 10a), which completely overlap for 𝛼 < 10◦ .
(a) Lift curves, 𝑅𝑒∞ = 5.0 × 10⁶, 𝑀∞ = 0.15, RANS simulations (b) Latent space for 𝑝 = 1
Fig. 10 Comparison between the lift curves of the airfoils of the non-linear dataset (NACA 0012, 0014, 0016 and 0018) and the latent variable learned by the autoencoder trained on the same dataset with 𝑝 = 1.
The existence of a strong correlation between the lift coefficient 𝐶𝑙 and the latent variable 𝑉1 for 𝑝 = 1 is also clearly visible in Fig. 11, where 𝐶𝑙 is reported as a function of 𝑉1. The curves for the 4 airfoils collapse almost perfectly. In particular, the correlation between 𝐶𝑙 and 𝑉1 is greater than 90% in the linear regime.
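As a side note, this correlation can be quantified with a standard Pearson coefficient; the snippet below shows the computation on placeholder arrays, since the actual lift-curve and latent-variable data are not reproduced here.

```python
import numpy as np

# Placeholder arrays restricted to the linear regime; in practice cl comes from the
# RANS lift curves and v1 from the trained encoder.
alpha = np.arange(0, 11, dtype=float)                     # angles of attack (deg)
cl = 0.11 * alpha                                         # placeholder linear lift curve
v1 = -0.25 * alpha + 0.02 * np.random.randn(alpha.size)   # placeholder latent variable

r = np.corrcoef(cl, v1)[0, 1]
print(f"Pearson correlation between Cl and V1: {r:.2f}")
```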
Fig. 11 Lift coefficient 𝐶𝑙 as a function of the latent variable learned by the autoencoder for 𝑝 = 1.
Fig. 12 Translation of the latent variables as a function of the angle of attack 𝛼. Comparison with the latent variables of the NACA 0012 airfoil. 𝑉1: —; 𝑉2: —; 𝑉3: —.
In Fig. 13, the output of the decoder with the new latent variables is compared with the NACA 0012 baseline in
terms of pressure distributions and airfoil geometry for 4 selected angles of attack (𝛼 = 0◦ , 10◦ , 20◦ and 30◦ ). As a first
observation, it is clear that the synthetic geometry changes with the angle of attack, which can be interpreted as an
indication that changes in the latent space variables cannot be arbitrary.
In the 𝐶𝑝 plots (Fig. 13) the continuous lines refer to the lower surface of the airfoil, while the dashed lines refer to the upper surface. The decoder produces an airfoil and a 𝐶𝑝 distribution for 𝛼 = 0◦ that correspond to a negative angle of attack for a symmetric airfoil.
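A minimal sketch of this latent-space translation, reusing the ConvAE sketch from the architecture section and a placeholder tensor for the NACA 0012 samples, is given below; only the shift value Δ𝑉3 = −0.15 is taken from the text.

```python
import torch

# Placeholder standing in for the scaled NACA 0012 ij-mappings at each angle of attack.
naca0012_samples = torch.rand(31, 5, 170, 512)
delta_v3 = -0.15  # translation applied to the third latent variable (from the text)

model.eval()
with torch.no_grad():
    v = model.encode(naca0012_samples)   # latent variables, shape (n_alpha, 3)
    v_shifted = v.clone()
    v_shifted[:, 2] += delta_v3          # translate V3 only, keep V1 and V2 unchanged
    synthetic = model.decode(v_shifted)  # synthetic (cp, omega, M, x, y) fields
```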
Fig. 13 Comparison between the decoder output obtained from the translated latent variables and the NACA 0012 baseline for 𝛼 = 0◦, 10◦, 20◦ and 30◦.
The 𝐶𝑝 curves and the airfoil geometries reported in Fig. 13 are obtained by extracting the first row of the reconstructed channels (𝐶𝑝, 𝑥 and 𝑦). The AE is trained on 170 × 512 × 5 input tensors, so the airfoil geometry is the reconstruction of 2 vectors (the 𝑥 and 𝑦 coordinates) of 512 elements each. This means that the airfoil geometry represents only 0.23% of the information that the AE is reproducing. In addition, the poor quality of the leading- and trailing-edge reconstructions is due to the mesh clustering in these regions, so the values
of the coordinates are very close to each other and a small error influences the reconstruction quality.
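Continuing the sketch above, the surface quantities plotted in Fig. 13 could be extracted from the decoder output as follows; the channel ordering (𝑐𝑝, 𝜔, 𝑀, 𝑥, 𝑦) follows the dataset description, and mapping back to physical units (not shown) would require the per-channel minima and maxima stored during preprocessing.

```python
# First j-row of the reconstructed channels: Cp and the (x, y) airfoil coordinates,
# each a vector of 512 points along the surface (still in the [0, 1] scaled units).
cp_surface = synthetic[:, 0, 0, :]  # shape (n_alpha, 512)
x_surface = synthetic[:, 3, 0, :]
y_surface = synthetic[:, 4, 0, :]
```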
The translation of 𝑉3 seems to correspond to a translation of 𝛼. In particular, considering the slope of 𝑉3 in its linear part (Δ𝑉3/Δ𝛼 ≈ 0.037/◦), the translation we applied (Δ𝑉3 = −0.15) corresponds to a shift of approximately 0.15/0.037 ≈ 4◦ in the angle of attack, i.e. a new zero-lift angle of attack 𝛼𝑧𝑙 = 4◦. Fig. 14 shows that this is consistent with the results extracted by the autoencoder for 𝛼 = 4◦: the pressure coefficients on the upper and lower surfaces of the synthetic airfoil overlap, returning 𝐶𝑙 = 0.
The behaviour of 𝑉3 resembles the lift curve, and the results corroborate this observation.
The resulting latent variables as a function of the angle of attack 𝛼 are reported in Fig. 16 and compared with the latent variables of the NACA 0012 (dashed black lines) and NACA 0014 (dashed red lines) airfoils.
Fig. 16 Interpolated latent variables at 𝑡 = 13% as a function of the angle of attack 𝛼. Comparison with the latent variables of the NACA 0012 and NACA 0014 airfoils. 𝑉1: —; 𝑉2: —; 𝑉3: —.
Starting from these interpolated latent variables, the decoder generates solutions that can be compared with the RANS solutions computed for the corresponding expected profile (NACA 0013).
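A hedged sketch of this interpolation, reusing the model and placeholder sample tensors assumed in the earlier sketches, is shown below; averaging the two latent vectors at each angle of attack is one simple way to realize the interpolation at 𝑡 = 13%.

```python
import torch

# Placeholders standing in for the scaled ij-mappings of the two training airfoils.
naca0012_samples = torch.rand(31, 5, 170, 512)
naca0014_samples = torch.rand(31, 5, 170, 512)

model.eval()
with torch.no_grad():
    v_0012 = model.encode(naca0012_samples)      # (n_alpha, 3)
    v_0014 = model.encode(naca0014_samples)      # (n_alpha, 3)
    v_0013 = 0.5 * (v_0012 + v_0014)             # linear interpolation at t = 13%
    synthetic_0013 = model.decode(v_0013)        # synthetic "NACA 0013" flow-fields
```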
Fig. 17 Mach number contours of the NACA 0012 and 0014 RANS solutions and of the synthetic NACA 0013 computed by the decoder for 𝛼 = 0◦, 10◦, 20◦ and 30◦.
Fig. 17 shows the Mach number contours at 4 angles of attack for the NACA 0012, the NACA 0014 and the synthetic NACA airfoil generated by the decoder. In order to obtain a more quantitative assessment of the result, it is interesting to look at the pressure coefficient 𝐶𝑝 on the airfoil surface, as reported in Fig. 18.
Fig. 18 Comparison of the 𝐶 𝑝 on the airfoil surfaces for 𝛼 = 0◦ , 10◦ , 20◦ and 30◦ .
The yellow curves refer to the NACA 0013 RANS solution, while the red curves refer to the decoder output (the synthetic NACA 0013); the two match almost perfectly both in the linear regime (attached flow, first two plots of Fig. 18) and in the non-linear regime and stall conditions (last two plots of Fig. 18). The geometry computed by the decoder also closely matches the expected NACA 0013 airfoil and does not change with the angle of attack, showing that a coherent latent space manipulation leads to a consistent result.
Fig. 19 Extrapolated contour mapping of the latent variables on the database parameters.
We extrapolated the values of the three latent variables at constant thickness 𝑡 = 20%, represented by the continuous black line in Fig. 19. These values are plotted as a function of 𝛼 in Fig. 20.
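A possible implementation of this extrapolation is sketched below; the linear fit of each latent variable versus thickness (over the training airfoils, 12% to 18%) and the variable names are assumptions, since the text does not detail the extrapolation procedure.

```python
import numpy as np
import torch

thickness = np.array([12.0, 14.0, 16.0, 18.0])  # training thicknesses (%)
# Placeholder: latent variables of the four training airfoils at each angle of attack,
# shape (n_thickness, n_alpha, p); in practice these come from the trained encoder.
v_all = np.random.rand(4, 31, 3)

# Fit each latent variable linearly in thickness and evaluate the fit at t = 20%.
v_20 = np.empty(v_all.shape[1:])
for k in range(v_all.shape[2]):
    for a in range(v_all.shape[1]):
        coeffs = np.polyfit(thickness, v_all[:, a, k], deg=1)
        v_20[a, k] = np.polyval(coeffs, 20.0)

model.eval()
with torch.no_grad():
    synthetic_0020 = model.decode(torch.tensor(v_20, dtype=torch.float32))
```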
Fig. 20 Extrapolated latent variables at 𝑡 = 20% as a function of the angle of attack 𝛼. Comparison with the latent variables of the NACA 0018 airfoil. 𝑉1: —; 𝑉2: —; 𝑉3: —.
Fig. 21 shows the Mach number contours (for 𝛼 = 0◦, 10◦, 20◦ and 30◦) extracted from the RANS solutions of the NACA 0018 and 0020 and those generated by the decoder with the extrapolated latent variables as input.
Fig. 21 Comparison of the Mach number contours of the NACA 0018, the NACA 0020 and the synthetic NACA airfoil for 𝛼 = 0◦, 10◦, 20◦ and 30◦.
The Mach number contours of the synthetic flow-field appear to be in reasonable agreement with the RANS solution; however, in order to obtain a better evaluation of the results, we report the 𝐶𝑝 on the airfoil surface in Fig. 22.
Fig. 22 Comparison of the 𝐶 𝑝 on the airfoil surfaces for 𝛼 = 0◦ , 10◦ , 20◦ and 30◦ .
Once again, the output of the decoder is in agreement with the RANS solution for the NACA 0020 in both the linear and non-linear regimes. In this extrapolation case the accuracy is somewhat degraded with respect to the interpolation case; specifically, the maximum expansion region (the negative 𝐶𝑝 peak) is not exactly captured by the decoder.
Conclusions
The results we presented in this work show that convolutional autoencoders are a powerful tool for the prediction of
the aerodynamic performance of airfoils in both linear and non-linear aerodynamic regimes. We studied the sensitivity of the latent space of the trained autoencoder, showing that in order to pursue a physical interpretation it is necessary to use low-dimensional latent variables and preferable to use a database composed of raw data instead of images.
We showed that by using a randomized training dataset in which airfoil thickness and angle of attack vary, the
autoencoder is able to automatically learn these parameters by organizing the latent space.
The autoencoder is also able to extract other physical information about the stall phenomenon: the latent variables
are linear in the linear part of the phenomenon, and non-linear when stall occurs. Moreover, by using a latent space
dimension 𝑝 = 1, the autoencoder learns that symmetric airfoils have the same behaviour at low angles of attack and respond differently, depending on their thickness, as stall is approached.
It is possible to interpolate and extrapolate in the latent space learned by the autoencoder in order to generate
accurate aerodynamic predictions and flow-fields for synthetic airfoils not seen in the training process.
References
[1] Duraisamy, K., Iaccarino, G., and Xiao, H., “Turbulence Modeling in the Age of Data,” Annual Review of Fluid Mechanics,
Vol. 51, 2019, pp. 357–377. https://doi.org/10.1146/annurev-fluid-010518-040547.
[2] Duraisamy, K., Zhang, Z. J., and Singh, A. P., “New Approaches in Turbulence and Transition Modeling Using Data-driven
Techniques,” AIAA SciTech, 2015. https://doi.org/10.2514/6.2015-1284, 53rd AIAA Aerospace Sciences Meeting,
5-9 January 2015, Kissimmee, Florida.
[3] Tracey, B., Duraisamy, K., and Alonso, J. J., “A Machine Learning Strategy to Assist Turbulence Model Development,” AIAA
SciTech, 2015. https://doi.org/10.2514/6.2015-1287, 53rd AIAA Aerospace Sciences Meeting, 5-9 January 2015, Kissimmee,
Florida.
[4] Milano, M., and Koumoutsakos, P., “Neural Network Modeling for Near Wall Turbulent Flow,” Journal of Computational
Physics, Vol. 182, 2002, pp. 1–26. https://doi.org/10.1006/jcph.2002.7146.
[5] Yan, X., Zhu, J., Kuang, M., and Wang, X., “Aerodynamic shape optimization using a novel optimizer based on machine
learning techniques,” Aerospace Science and Technology, Vol. 86, 2019, pp. 826–835. https://doi.org/10.1016/j.ast.2019.02.003.
[6] Li, J., Zhang, M., Martins, J. R. R. A., and Shu, C., “Efficient Aerodynamic Shape Optimization with Deep-learning-based
Geometric Filtering,” AIAA Journal, Vol. 58, No. 10, 2020. https://doi.org/10.2514/1.J059254.
[7] Noack, B. R., “Closed-Loop Turbulence Control-From Human to Machine Learning (and Retour),” Proceedings of the 4th Symposium on Fluid Structure-Sound Interactions and Control (FSSIC), edited by Zhou, Y., Kimura, M., Peng, G., Lucey, A. D., and Huang, L., Springer, 2019, pp. 23–32. URL https://hal.archives-ouvertes.fr/hal-02398734.
[8] Verma, S., Novati, G., and Koumoutsakos, P., “Efficient collective swimming by harnessing vortices through deep reinforcement
learning,” Proceedings of the National Academy of Sciences, Vol. 115, No. 23, 2018, pp. 5849–5854. https://doi.org/10.1073/
pnas.1800923115, URL https://www.pnas.org/doi/abs/10.1073/pnas.1800923115.
[9] Matalanis, C. G., Bowles, P. O., Jee, S., Min, B.-Y., Kuczek, A. E., Croteau, P. F., Wake, B. E., Crittenden, T., Glezer, A., and
Lorber, P. F., “Dynamic Stall Suppression Using Combustion-Powered Actuation (COMPACT),” Tech. rep., 2016.
[10] Economon, T. D., Palacios, F., Copeland, S. R., Lukaczyk, T. W., and Alonso, J. J., “SU2: An Open-Source Suite for
Multiphysics Simulation and Design,” AIAA Journal, Vol. 54, No. 3, 2016, pp. 828–846. https://doi.org/10.2514/1.J053813.
[11] Rumsey, C. L., Slotnick, J. P., and Sclafani, A. J., “Overview and Summary of the Third AIAA High Lift Prediction
Workshop,” Journal of Aircraft, Vol. 56, No. 2, 2019, pp. 621–644. https://doi.org/10.2514/1.C034940, URL https:
//doi.org/10.2514/1.C034940.
[12] Lee, K., and Carlberg, K. T., “Model reduction of dynamical systems on nonlinear manifolds using deep convolutional
autoencoders,” Journal of Computational Physics, Vol. 404, 2020, p. 108973. https://doi.org/10.1016/j.jcp.2019.108973, URL https://www.sciencedirect.com/science/article/pii/S0021999119306783.
[13] Bhatnagar, S., Afshar, Y., Pan, S., Duraisamy, K., and Kaushik, S., “Prediction of aerodynamic flow fields using convolutional
neural networks,” Computational Mechanics, Vol. 64, No. 2, 2019, pp. 525–545. https://doi.org/10.1007/s00466-019-01740-0,
URL https://doi.org/10.1007/s00466-019-01740-0.
[14] Tangsali, K., Krishnamurthy, V. R., and Hasnain, Z., “Generalizability of Convolutional Encoder–Decoder Networks for
Aerodynamic Flow-Field Prediction Across Geometric and Physical-Fluidic Variations,” Journal of Mechanical Design, Vol.
143, No. 5, 2020. https://doi.org/10.1115/1.4048221, URL https://doi.org/10.1115/1.4048221, 051704.
[15] Agostini, L., “Exploration and prediction of fluid dynamical systems using auto-encoder technology,” Physics of Fluids, Vol. 32,
No. 6, 2020, p. 067103. https://doi.org/10.1063/5.0012906, URL https://doi.org/10.1063/5.0012906.
[16] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison,
A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S.,
“PyTorch: An Imperative Style, High-Performance Deep Learning Library,” Advances in Neural Information Processing Systems
32, Curran Associates, Inc., 2019, pp. 8024–8035. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-
performance-deep-learning-library.pdf.
[17] Newson, A., Almansa, A., Gousseau, Y., and Ladjal, S., “Processing Simple Geometric Attributes with Autoencoders,” Journal
of Mathematical Imaging and Vision, Vol. 62, No. 3, 2020, pp. 293–312. https://doi.org/10.1007/s10851-019-00924-w, URL
https://doi.org/10.1007/s10851-019-00924-w.