MIN-400B Report (Final Evaluation)
Improvement of Temporal Resolution of Schlieren Measurements in Supersonic Jet Flow using Data-Driven Methods
by
HARSHIT SOLANKI (20117058)
ARPIT PRITHU (20117031)
BHOOS ANUP SATISH (20117039)
ADITYA SHANKAR SHUKLA (20117012)
MAY 2024
DECLARATION
We hereby declare that the work presented in this report entitled "Improvement of Temporal Resolution of Schlieren Measurements in Supersonic Jet Flow using Data-Driven Methods", submitted for the course MIN-400 A/B (B.Tech. Project) to the Department of Mechanical and Industrial Engineering, Indian Institute of Technology Roorkee (India), is an authentic record of our work carried out under the supervision of Dr. Neetu Tiwari and Prof. B.K. Gandhi, MIED, IIT Roorkee.
We have not submitted the record embodied in this report for the award of any other degree or diploma at any other institute.
May 2024
IIT Roorkee
Harshit Solanki (20117058), Arpit Prithu (20117031), Bhoos Anup Satish (20117039), Aditya Shankar Shukla (20117012)
CERTIFICATE
This is to certify that the report submitted by Mr Harshit Solanki, Mr Arpit Prithu, Mr
Bhoos Anup Satish and Mr Aditya Shankar Shukla on “Improvement of Temporal
Resolution of Schlieren Measurements in Supersonic Jet Flow using Data-Driven
Methods” is an authentic record of their project work which they have satisfactorily
completed under my supervision.
This report is submitted in partial fulfilment of the requirements of the MIN-400 B.Tech. Project, which is necessary for the conferment of the bachelor's degree.
May 2024
ACKNOWLEDGEMENTS
This project would not have been possible without the kind support and help of our
supervisor, Dr. Neetu Tiwari. She supported us with her time, knowledge and
encouragement throughout the course of this project.
We also want to express our sincere gratitude to the thermal group committee
members Dr. Kirti Bhushan Mishra, Dr. Ajit Kumar Dubey and Dr. Alankrita Singh
for their edifying insights and feedback during the evaluations.
May 2024
IIT Roorkee
Harshit Solanki (20117058), Arpit Prithu (20117031), Bhoos Anup Satish (20117039), Aditya Shankar Shukla (20117012)
TABLE OF CONTENTS
3.1.1 Layered structure with interconnected artificial neurons
3.1.2 Linear transformation followed by the non-linear activation function
3.1.3 Forward pass
3.1.4 Backward pass or backpropagation
3.1.5 Learning rate
3.1.6 Regularisation
3.1.7 Optimisation
3.2 Linear autoencoders
3.2.1 Encoding
3.2.2 Decoding
3.2.3 Reconstruction
4 Convolutional Neural Networks
4.1 Architecture of convolutional neural networks
4.1.1 Convolutional layers
4.1.2 Filters
4.1.3 Feature maps
4.1.4 Pooling layers
4.1.5 Activation layers
4.1.6 Fully connected layers
4.1.7 Network architecture flow
4.2 U-Net convolutional neural network
4.2.1 Contracting path (encoder)
4.2.2 Expansive path (decoder)
4.2.3 Skip connections
4.2.4 Benefits of U-Net
4.2.5 Implementation of U-Net
5 Alternative Models for Better Reconstruction
5.1 Generative Adversarial Network
5.1.1 Architecture of GAN
5.2 Variational autoencoder
5.2.1 Working of VAE
5.2.2 Architecture of VAE
6 Results and Discussion
6.1 Reconstructed flow snapshots
6.2 Root mean square error (RMSE)
7 Conclusions
7.1 Extended proper orthogonal decomposition
7.2 Commentary on other models
8 Future Prospects
8.1 Combining CNN and LSTM networks
8.1.1 Capturing spatial features with CNN
8.1.2 Handling temporal dependencies with LSTM
8.1.3 Benefits of this approach
8.1.4 Challenges
9 References
LIST OF FIGURES
Fig. 17 Schematic of GAN architecture
Fig. 18 Original fluid flow snapshot
Fig. 19 Reconstructed fluid flow snapshot using EPOD
Fig. 20 Reconstructed fluid flow snapshot using ANN
Fig. 21 Reconstructed fluid flow snapshot using CNN
Fig. 22 Reconstructed fluid flow snapshot using linear autoencoder
Fig. 23 Reconstructed fluid flow snapshot using U-Net CNN
Fig. 24 Reconstructed fluid flow snapshot using CNN-VAE
Fig. 25 Reconstructed fluid flow snapshot using GAN
Fig. 26 Schematic representation of CNN-LSTM hybrid network
LIST OF TABLES
Table 1 RMSE of the reconstruction for each technique employed
GLOSSARY
Abbreviations Meaning
EPOD Extended Proper Orthogonal Decomposition
CNN Convolutional Neural Network
ANN Artificial Neural Network
ReLU Rectified Linear Unit
GAN Generative Adversarial Network
VAE Variational Autoencoder
LIST OF SYMBOLS
Symbol Description
U Snapshot matrix
𝛹 Orthogonal matrix of dimensions (𝑚, 𝑚), where 𝑚 represents the
number of spatial resolution points of a particular snapshot
𝛴 Singular diagonal matrix of dimensions (𝑚, 𝑛) wherein the
non-zero elements along its diagonal explain the proportion of
energy inherent in the original snapshots
𝛷 Orthogonal matrix of dimensions (𝑛, 𝑛) where n represents the
number of snapshots
X Snapshot matrix after truncation
𝛯 Temporal correlations matrix
L_N The N-th layer of a neural network
J The cost function of a neural network
W Weights associated with neurons in the layers of a neural network
α Hyperparameter that controls the learning rate
λ Hyperparameter used for weight decay, also called regularisation
Superscript Description
’ Reconstructed matrix
T Transpose of matrix
i ith entity
Subscript Description
tr Time-resolved matrix
ntr Non-time-resolved matrix
pr Predicted time-resolved matrix
est Estimated matrix
pred Value predicted by the neural network
true True value
Chapter 1
INTRODUCTION
Accurate fluid flow simulation is crucial in various engineering fields, including fluid
dynamics, aerospace, automotive, and civil engineering. Traditional experimental
methods for flow simulation are often costly, time-consuming, and limited in
capturing complex flow phenomena accurately. In recent years, data-driven
techniques have emerged as promising tools for improving flow simulation by
leveraging computational power and large datasets.
1.1 Motivation
Capturing flow dynamics is a complex task that requires precise and rapid data
acquisition. A critical factor in this process is the sampling rate, which refers to the
frequency at which data points are recorded over time. A high sampling rate is crucial
for flows involving extremely high-speed phenomena to accurately capture the flow
field's dynamic behaviour.
A sampling rate of 100,000 samples per second (10⁵ Hz) is typically necessary to
adequately capture the rapid changes and intricate details in supersonic flow
phenomena. This high sampling rate ensures the camera can record enough data
points within a given time frame to represent the flow dynamics accurately.
However, despite their advanced capabilities, modern high-speed cameras often
fall short of the required sampling rates. For example, a common resolution for
high-speed cameras is 1024 x 1024 pixels, i.e. images composed of 1,048,576
individual pixels. At this resolution, the sampling rate of such cameras is
typically around 10,000 samples per second (10 kHz), significantly lower than
the rate needed to capture supersonic flow phenomena.
This discrepancy between the required sampling rate for flow capture and the
practical limitations of modern high-speed cameras arises from various design
constraints inherent to these devices. One crucial constraint is the trade-off between
sampling rate and field of view. Increasing the sampling rate typically requires
sacrificing the captured images' field of view or spatial resolution. Conversely,
maximising the field of view often comes at the expense of the sampling rate.
This trade-off exists due to limitations in camera sensor technology, data processing
capabilities, and physical constraints related to capturing and storing high-speed data.
Another challenge associated with using sensors is their intrusive nature: their
installation can perturb the flow around the point of installation, potentially
leading to inaccurate property measurements. Additionally, sensors are expensive
and cannot be deployed densely enough throughout the flow to study its properties.
1.3 Objectives
The goal is to develop a method that increases the temporal resolution to the
desired value while maintaining high spatial resolution, without relying on
extensive deployment of sensors. The project strives to advance fluid flow
simulation techniques by harnessing the power of various data-driven methods.
The primary objectives of this project work are listed below:
This research presents an innovative approach for estimating fluid velocities over
time, termed Extended Proper Orthogonal Decomposition (EPOD). EPOD leverages
data from synchronised cameras observing fluid flow dynamics to analyse temporal
changes, providing insights into fluid behaviour beyond spatial patterns. Its
demonstrated efficacy in diverse scenarios, including jet flows and flows with
obstacles, underscores EPOD's robustness as a foundational tool in fluid dynamics
analysis.
In this study, we introduce a novel methodology that amalgamates two distinct types
of measurements to enhance understanding of fluid behaviour: (a) low-resolution,
time-resolved images capturing fluid flow dynamics, and (b) high-resolution,
non-time-resolved camera data providing detailed spatial information. By integrating
these measurements, we aim to improve the accuracy of fluid density gradient
estimation. Notably, our approach incorporates a mechanism to address time delays
inherent in the data fusion process. This enables a more comprehensive
characterisation of temporal evolution in fluid density, facilitating more profound
insights into fluid dynamics.
Chapter 2
EXTENDED PROPER ORTHOGONAL DECOMPOSITION
To effectively capture the essential patterns within the dataset and facilitate
dimensionality reduction, we leverage an economy-size Singular Value
Decomposition (SVD) on the snapshot matrix 𝑈. This SVD operation dissects 𝑈 into
three core constituent matrices: 𝛹, 𝛴, and 𝛷.
$U = \Psi\,\Sigma\,\Phi^{T}$  (2.1)
𝛹 represents an orthogonal matrix of dimensions (𝑚, 𝑚), where 𝑚 characterises the
spatial resolution points encapsulated within each snapshot. 𝛴 manifests as a singular
diagonal matrix of dimensions (𝑚, 𝑛), wherein the non-zero elements along its
diagonal delineate the proportion of inherent energy within the original snapshots. 𝛷,
conversely, manifests as another orthogonal matrix of dimensions (𝑛, 𝑛), with 𝑛
signifying the total number of snapshots.
We discern and retain the most pertinent modes of interest to streamline the data's
dimensionality. This involves selecting the top 𝑘 modes by extracting the first 𝑘
columns from matrices 𝛹 and 𝛷. Such a selective approach enables the capture of
predominant variability modes within the pressure gradient field, thereby reducing
the overall data size.
$U' = \Psi'\,\Sigma'\,(\Phi^{T})'$  (2.2)
This process yields a reconstructed matrix, denoted as 𝑈’, with dimensions (𝑚, 𝑛),
while matrices 𝛹’, 𝛴’, and (𝛷T)’ possess dimensions (𝑚, 𝑘), (𝑘, 𝑘), and (𝑘, 𝑛)
respectively. This systematic reduction in dimensionality is instrumental as it
furnishes a concise yet informative representation of the original snapshot data. Such
a methodological framework finds applicability across diverse domains, including
fluid dynamics, structural mechanics, and signal processing, wherein extracting
salient features from high-dimensional datasets is paramount.
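For illustration, the decomposition and truncation above can be sketched in NumPy as follows; the matrix sizes and random entries are placeholders rather than the experimental data.

```python
import numpy as np

# Placeholder sizes: m spatial points, n snapshots, k retained modes.
m, n, k = 500, 400, 4
U = np.random.rand(m, n)          # snapshot matrix, one snapshot per column

# Economy-size SVD, eq. (2.1): Psi is (m, n), sigma holds the singular
# values, PhiT is (n, n).
Psi, sigma, PhiT = np.linalg.svd(U, full_matrices=False)

# Keep the first k modes to obtain the truncated matrix U', eq. (2.2).
U_trunc = Psi[:, :k] @ np.diag(sigma[:k]) @ PhiT[:k, :]

# Fraction of the snapshot energy captured by the retained modes.
energy_fraction = np.sum(sigma[:k] ** 2) / np.sum(sigma ** 2)
```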
Fig. 1 Proper orthogonal decomposition through an economy-sized
matrix singular value decomposition (David W. Ashmore et al. (2022))
The experimental setup consists of two high-speed Photron cameras that capture
snapshots of supersonic jet flows within a defined time frame, with data stored in
MRAW format. From these files, matrix data corresponding to both high-resolution
and low-resolution snapshots was extracted. Notably, the frequencies of
time-resolved and non-time-resolved snapshots inherently differed. In both cases,
the first two dimensions of the matrix correspond to the x and y directions of a
snapshot, while the third (z) dimension captures temporal variation. Subsequently,
this three-dimensional matrix was reshaped into a two-dimensional matrix, where
each column represented a snapshot at a specific time.
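A minimal sketch of this reshaping step, with placeholder dimensions:

```python
import numpy as np

# Placeholder snapshot stack: ny x nx pixels per snapshot, nt time steps,
# with time along the third (z) axis.
ny, nx, nt = 64, 48, 100
snapshots = np.random.rand(ny, nx, nt)

# Flatten each snapshot into a column: column j holds the flow at time j.
U = snapshots.reshape(ny * nx, nt)
```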
Fig. 2(a) Non-time-resolved fluid flow snapshot and (b) Time-resolved fluid flow
snapshot
The snapshots were visualised by plotting images derived from their corresponding
matrices. Reconstruction efforts focused on capturing the maximum density gradient
variation. Time averaging was computed at each spatial position and subtracted for
time-resolved and non-time-resolved flow data.
The figure below presents time-resolved snapshots originating from three distinct
spatial locations.
Fig. 3(a) Original non-time-resolved fluid flow snapshot and (b) Fluid flow snapshot
after contraction along the horizontal direction
Fig. 4 Fluid flow snapshot following mean subtraction
$X_{tr} = U_{tr} - \overline{U}_{tr}$  (2.3)

$X_{ntr} = U_{ntr} - \overline{U}_{ntr}$  (2.4)

where the overbar denotes the time average at each spatial position.
The subscripts tr and ntr denote time-resolved and non-time-resolved flow fields,
respectively. Using the proper orthogonal decomposition (POD), we represent the
matrices using ‘k’ dominant modes.
$X_{tr} = \Psi_{tr}\,\Sigma_{tr}\,\Phi_{tr}^{T}$  (2.5)

$X_{ntr} = \Psi_{ntr}\,\Sigma_{ntr}\,\Phi_{ntr}^{T}$  (2.6)

$X'_{tr} = \Psi'_{tr}\,\Sigma'_{tr}\,(\Phi_{tr}^{T})'$  (2.7)

$X'_{ntr} = \Psi'_{ntr}\,\Sigma'_{ntr}\,(\Phi_{ntr}^{T})'$  (2.8)
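The mean subtraction and truncated POD of both snapshot matrices can be sketched as below; the snapshot counts (9600 time-resolved, 400 non-time-resolved) match those stated in the conclusions, while the spatial sizes are placeholders.

```python
import numpy as np

def pod_truncate(U, k):
    """Mean-subtract (eqs. 2.3-2.4), decompose (eqs. 2.5-2.6), keep k modes (eqs. 2.7-2.8)."""
    X = U - U.mean(axis=1, keepdims=True)
    Psi, sigma, PhiT = np.linalg.svd(X, full_matrices=False)
    return Psi[:, :k], np.diag(sigma[:k]), PhiT[:k, :]

U_tr = np.random.rand(200, 9600)    # placeholder time-resolved data
U_ntr = np.random.rand(500, 400)    # placeholder non-time-resolved data

Psi_tr, Sig_tr, PhiT_tr = pod_truncate(U_tr, k=4)
Psi_ntr, Sig_ntr, PhiT_ntr = pod_truncate(U_ntr, k=4)
```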
Fig. 5(a) Time-resolved truncated snapshot and (b) Non-time-resolved truncated
snapshot
Now, we finalise the extended modes (extra snapshots) taken with each common
snapshot for reconstruction. For each index i, we take the snapshots of X'_tr from
index i to (i + deltr), where deltr is the number of extra snapshots considered,
combine them into a single column vector, and store that vector in the ith column
of a matrix named PP. PP is initialised as a zero matrix of size
(ps﹡(deltr + 1), nt – deltr), where ps is the number of spatial points per
time-resolved snapshot and nt is the number of time-resolved snapshots.
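A sketch of the construction of PP, using the deltr value stated later in the report and placeholder sizes:

```python
import numpy as np

ps, nt, deltr = 200, 1000, 23            # placeholder sizes; deltr = 23 as used later
X_tr = np.random.rand(ps, nt)            # stand-in for the truncated matrix X'_tr

# Each column of PP stacks snapshot i together with the next deltr snapshots.
PP = np.zeros((ps * (deltr + 1), nt - deltr))
for i in range(nt - deltr):
    # Column-major ravel keeps the snapshots in order, one after another.
    PP[:, i] = X_tr[:, i : i + deltr + 1].ravel(order="F")
```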
$\Xi = (\Phi'_{ntr})^{T}\,\Phi'_{tr}$  (2.10)

These two terms contain information about the temporal correlation between the
non-time-resolved and time-resolved modes.
Knowing the POD spatial modes and singular values of the density gradient and of
the time-resolved snapshot matrix, as well as the temporal correlations matrix 𝛯, it is
possible to estimate the density gradient at a generic instant from a time-resolved data
snapshot PP sampled at that instant:
$X'_{est} = \Psi'_{ntr}\,\Sigma'_{ntr}\,\Xi\,(\Sigma'_{tr})^{-1}\,(\Psi'_{tr})^{T}\,PP$  (2.11)
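A sketch of this estimation step, consistent with the form of Eq. (2.11) given above; every array here is a random stand-in for the corresponding POD factor.

```python
import numpy as np

k, ps_ext, m_ntr = 4, 4800, 500          # placeholder sizes
Psi_ntr = np.random.rand(m_ntr, k)       # spatial modes of the ntr field
Sig_ntr = np.diag(np.random.rand(k) + 0.1)
Psi_tr = np.random.rand(ps_ext, k)       # spatial modes of the stacked matrix PP
Sig_tr = np.diag(np.random.rand(k) + 0.1)
Xi = np.random.rand(k, k)                # temporal correlations matrix, eq. (2.10)
pp = np.random.rand(ps_ext)              # stacked time-resolved snapshot at one instant

# Project pp onto the time-resolved modes, transfer through Xi, and expand
# the result on the non-time-resolved spatial basis.
x_est = Psi_ntr @ Sig_ntr @ Xi @ np.linalg.inv(Sig_tr) @ Psi_tr.T @ pp
```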
Fig. 7 Reconstruction difference
2.3 Results
Fig. 11 Percentage cumulative energy vs number of modes considered
Chapter 3
SHALLOW NEURAL NETWORKS
Shallow neural networks are a widely used deep learning technique for fluid flow
reconstruction. These networks are attractive due to their ease of use, fast training
times, and ability to avoid overfitting the data. Unlike deeper networks, shallow
neural networks typically have three layers: an input layer, a single hidden layer, and
an output layer. This more straightforward structure offers several benefits. Firstly, it
keeps the number of parameters in the network low. This translates to less
computational power and memory needed for training, making them ideal for
working with limited datasets. The following section will explore how a shallow
neural network can be a viable alternative to existing reconstruction methods.
3.1.1 Layered structure with interconnected artificial neurons
The network is organised as layers of interconnected artificial neurons, where the
output of each neuron becomes the input to others in the next layer. This chain
reaction allows the network to progressively extract features and higher-level
patterns from the data.
Our shallow neural network consists of 4 neural layers: an input layer, an output
layer, and two hidden layers consisting of 100 and 400 neurons, respectively. The
input layer has 128 x 120 neurons, while the output layer consists of 1024 x 401
neurons.
3.1.2 Linear transformation followed by the non-linear activation function
Layers are denoted by L_N, where L_0 is the input layer, L_H is the output layer,
and L_1 to L_{H-1} are the hidden layers. Each layer applies a linear
transformation to the previous layer's outputs, followed by a non-linear
activation:
$z_{N} = W_{N}\,a_{N-1} + b_{N}$  (2.12)

$a_{N} = \mathrm{ReLU}(z_{N}) = \max(0,\,z_{N})$  (2.13)
We have selected ReLU (rectified linear unit) as the activation function.
3.1.3 Forward pass
This refers to how information travels through the network during training. The input
vector (array of data points) enters the network at the input layer. Each neuron in the
layer performs the weighted sum and applies its activation function. The resulting
outputs become the inputs for the next layer, and so on, until the final output layer
produces the network's prediction. It's like passing a baton in a relay race, with each
layer progressively transforming the information.
The cost function J measures the error between the network's predictions and the
true values, and each backpropagation step updates the weights against the
gradient of J, scaled by the learning rate α:

$J = \frac{1}{n}\sum_{i=1}^{n}\left(y_{pred}^{i} - y_{true}^{i}\right)^{2}$  (2.14)

$W := W - \alpha\,\frac{\partial J}{\partial W}$  (2.15)
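A toy illustration of Eqs. (2.14)-(2.15) for a single linear layer with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((8, 5))                   # 8 samples, 5 features (synthetic)
y_true = rng.random((8, 1))
W = rng.random((5, 1))
alpha = 0.001                            # learning rate

y_pred = x @ W                                    # forward pass
J = np.mean((y_pred - y_true) ** 2)               # cost, eq. (2.14)
grad = 2.0 * x.T @ (y_pred - y_true) / len(x)     # dJ/dW by the chain rule
W = W - alpha * grad                              # gradient step, eq. (2.15)
```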
3.1.6 Regularisation
In machine learning, regularisation encompasses techniques employed to combat
model overfitting. Overfitting occurs when a model excessively aligns with the
training data, memorising specific details and noise rather than capturing the
underlying generalisable patterns. This can lead to poor performance on unseen data.
Regularisation techniques achieve their goal by introducing a penalty term to the cost
function. The cost function quantifies the model's prediction error on the training
data. Regularisation modifies this function by adding a term penalising the model's
complexity. This complexity penalty can be based on the model's parameters (weights
and biases) or the overall structure.
L2 regularisation shrinks all weights towards zero but does not necessarily
eliminate them. This reduces the overall magnitude of the weights, making the
model less sensitive to specific features and promoting smoother decision
boundaries.
3.1.7 Optimisation
Optimisers are algorithms that iteratively update the weights and biases within
the network to minimise a cost function (also known as a loss function). We have
used Adam as our optimiser with a learning rate of 0.001 and a decay rate of 0.9.
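A minimal Keras sketch of the shallow network described in this chapter; the layer sizes and learning rate come from the text, while the MSE loss and the interpretation of the 0.9 decay as Adam's first-moment decay are assumptions.

```python
import tensorflow as tf

# Flattened 128x120 input, hidden layers of 100 and 400 ReLU neurons,
# and a 1024x401 output (reshaped downstream).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128 * 120,)),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(400, activation="relu"),
    tf.keras.layers.Dense(1024 * 401),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9),
    loss="mse",
)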
3.2.1 Encoding
The input image is fed into the encoder part of the autoencoder. The encoder uses
linear functions (activations) to compress the image data into a lower-dimensional
representation called the latent space. In our study, we used ReLU as the
activation function; the input layer has 15,360 neurons, and the layer sizes
decrease through the encoder.
3.2.2 Decoding
The compressed representation from the latent space is passed to the decoder. The
decoder uses linear functions again to try and reconstruct the original image based on
the information in the latent space. Its output layer has 410,624 neurons,
reshaped to a matrix of dimensions 1024 x 401 to reconstruct the original flow.
3.2.3 Reconstruction
The decoder aims to recreate the image by leveraging the learned linear relationships
between pixels in the original image.
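A minimal Keras sketch of such an autoencoder; the input and output sizes come from the text, while the intermediate and latent sizes are assumptions.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(15_360,)),           # 128 x 120 flattened input
    tf.keras.layers.Dense(1024, activation="relu"),   # encoder
    tf.keras.layers.Dense(128, activation="relu"),    # latent space (assumed size)
    tf.keras.layers.Dense(1024, activation="relu"),   # decoder
    tf.keras.layers.Dense(1024 * 401),                # 410,624 output neurons
    tf.keras.layers.Reshape((1024, 401)),             # back to snapshot shape
])
model.compile(optimizer="adam", loss="mse")
```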
Chapter 4
CONVOLUTIONAL NEURAL NETWORK
4.1.2 Filters
Each filter learns to detect specific features, like edges, shapes, or colours. Multiple
filters are used within a convolutional layer to capture various features at different
scales and orientations.
4.1.5 Activation layers
These layers introduce a crucial element of nonlinearity into the network. Activation
functions like ReLU (Rectified Linear Unit) and sigmoid are frequently employed for
this purpose. This nonlinearity is essential as it empowers the network to grasp
intricate connections between features, enabling it to make precise predictions.
4.1.7 Network architecture flow
3. Pooling layers: Pooling layers are often inserted between convolutional layers
to reduce dimensionality and control overfitting.
6. Flattening: After the convolutional and pooling stages, the data is typically
flattened into a single vector before feeding into fully connected layers.
7. Fully connected layers: These layers perform classification or regression tasks
based on the extracted features. The final output layer produces the network's
prediction. The fully connected output layer has 410,624 (1024 x 401) neurons,
which are later reshaped to a matrix of dimensions 1024 x 401 to get the
desired reconstructed flow.
Fig. 15 Architecture schematic of the convolutional neural network used in this study
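A rough Keras sketch of this convolution-pooling-flatten-dense flow; only the 410,624-neuron output layer comes from the text, and the input size, filter counts, and kernel sizes are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(128, 120, 1)),                        # assumed input size
    layers.Conv2D(16, 3, padding="same", activation="relu"),  # convolution + ReLU
    layers.MaxPooling2D(2),                                   # pooling
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),                                         # flattening
    layers.Dense(1024 * 401),                                 # fully connected output
    layers.Reshape((1024, 401)),                              # reconstructed flow
])
```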
Each skip connection combines high-level features from the encoder with localised
features from earlier stages. This combination helps maintain precise object
boundaries in the segmentation output.
4.2.5 Implementation of U-Net
In our study, we employed a U-Net network to further refine the reconstruction of
non-time-resolved data obtained using the EPOD method. This step aims to achieve a
lower reconstruction error.
First, the input (fluid flow reconstructed using EPOD) passes through multiple
convolutional layers that are responsible for dimensionality reduction and feature
extraction. As shown in the above figure, the input dimensions are reduced as they
pass through the encoder.
Then, the encoded data passes through the decoder part, which consists of the
deconvolution or upsampling layer. In this study, we have used a transposed
convolution layer, also known as a Conv2DTranspose. Unlike a standard
convolutional layer that extracts features from an input, a Conv2DTranspose layer
upsamples the input feature maps and learns filters to increase the data's spatial
resolution (width and height). This makes them particularly useful for tasks needing
to recover or even generate high-resolution outputs.
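A minimal Keras sketch of this encoder-decoder pattern with a Conv2DTranspose upsampling stage and one skip connection; all sizes and filter counts are assumptions rather than the network actually trained.

```python
import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input(shape=(128, 128, 1))                          # assumed input size
e1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
p1 = layers.MaxPooling2D(2)(e1)                                  # contracting path
b = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)

# Conv2DTranspose doubles the spatial resolution on the expansive path.
u1 = layers.Conv2DTranspose(16, 3, strides=2, padding="same",
                            activation="relu")(b)
c1 = layers.Concatenate()([u1, e1])                              # skip connection
out = layers.Conv2D(1, 1, padding="same")(c1)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
```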
Chapter 5
ALTERNATIVE MODELS FOR BETTER RECONSTRUCTION
5.1 Generative Adversarial Network
A GAN pits two networks against each other. The generator tries to generate data
indistinguishable from real data, while the discriminator tries to accurately
distinguish between real and generated data. This adversarial training process
continues until the generator learns to produce synthetic data that is highly
realistic and can fool the discriminator.
Generator model
The generator network is a deep neural network designed to take in a random noise
vector and produce a synthetic data sample, such as an image, audio, or text.
Typically, the architecture of the generator network comprises multiple layers of
upsampling or transposed convolutional layers, followed by non-linear activation
functions.
The generator's objective in a GAN is to produce synthetic samples that are realistic
enough to fool the discriminator. The generator achieves this by minimising its loss
function J_G. The loss is minimised when the log probability is maximised.
$J_{G} = -\frac{1}{m}\sum_{i=1}^{m}\log D\!\left(G(z^{i})\right)$  (2.16)
Discriminator model
During training, the generator and discriminator networks compete against each other.
As the discriminator becomes better at distinguishing between real and fake samples,
the generator must improve its ability to generate more realistic synthetic data to fool
the discriminator.
$J_{D} = -\frac{1}{m}\sum_{i=1}^{m}\left[\log D(x^{i}) + \log\!\left(1 - D(G(z^{i}))\right)\right]$  (2.17)
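A sketch of these two losses using binary cross-entropy, which reduces to the log-probability terms above:

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()

def generator_loss(d_fake):
    # Minimised when the discriminator assigns high probability to fakes, eq. (2.16).
    return bce(tf.ones_like(d_fake), d_fake)

def discriminator_loss(d_real, d_fake):
    # Real samples pushed towards 1, generated samples towards 0, eq. (2.17).
    return (bce(tf.ones_like(d_real), d_real) +
            bce(tf.zeros_like(d_fake), d_fake))
```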
The "variational" part of VAE comes from how it learns the latent space. Instead of
forcing the encoder to produce a single fixed point in the latent space for each input,
VAEs learn to create a probability distribution (like a range or spread) for each input.
This approach makes the model more flexible and robust.
5.2.1 Working of VAE
1. Encoder: The VAE starts with an encoder network that takes an input (like an
image) and converts it into a compact representation called a "latent space."
Think of this latent space as a compressed version of the input data.
2. Latent space: This compressed representation in the latent space captures the
essential features of the input data but in a more concise form. It's like
summarising a long story into a short paragraph.
3. Decoder: The decoder network then takes this compact representation from the
latent space and tries to reconstruct the original input data (like an image). It's
like expanding that short paragraph back into the whole story.
5.2.2 Architecture of VAE
Encoding side
1. Input layer: This layer takes the input data (e.g., an image) and passes it
through the subsequent layers.
2. Hidden layers: These process the input and produce the mean and variance
that define the latent space's probability distribution.
3. Sampling layer: Takes the mean and variance from the encoder to sample a
point from the latent space distribution. This sampled point represents the
compact representation of the input data.
Decoding side
1. Input layer: This layer takes the sampled latent vector (z) as input.
2. Dense/upsampling layer: Depending on the input data type, the decoder may
use dense layers or upsampling layers (or a combination of both) to gradually
increase the latent representation's dimensionality.
4. Output layer: The final layer of the decoder produces the reconstructed
output, which ideally should match the original input data as closely as
possible.
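A sketch of the sampling step (the reparameterisation trick), with placeholder sizes:

```python
import tensorflow as tf

def sample_latent(mean, log_var):
    # Draw z from the distribution defined by the encoder's mean and
    # log-variance: z = mean + std * eps, with eps ~ N(0, 1).
    eps = tf.random.normal(tf.shape(mean))
    return mean + tf.exp(0.5 * log_var) * eps

mean = tf.zeros((1, 16))        # placeholder latent size
log_var = tf.zeros((1, 16))
z = sample_latent(mean, log_var)   # compact representation fed to the decoder
```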
Chapter 6
RESULTS AND DISCUSSION
Fig. 22 Reconstructed fluid flow snapshot using linear autoencoder
Fig. 23 Reconstructed fluid flow snapshot using U-Net CNN
Fig. 24 Reconstructed fluid flow snapshot using CNN-VAE
Fig. 25 Reconstructed fluid flow snapshot using GAN
Table 1 RMSE of the reconstruction for each technique employed

Technique/method employed                      RMSE value
Extended proper orthogonal decomposition       4.197
Artificial neural network                      11.521
Convolutional neural network                   5.043
Linear autoencoder                             8.423
U-Net CNN                                      1.893
Generative adversarial network                 2.037
CNN-VAE                                        2.178
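The RMSE values above follow the usual definition, sketched below with placeholder arrays:

```python
import numpy as np

def rmse(original, reconstructed):
    # Root of the mean squared difference between the two snapshots.
    return np.sqrt(np.mean((original - reconstructed) ** 2))

# Placeholder usage with arrays shaped like the 1024 x 401 snapshots.
original = np.random.rand(1024, 401)
reconstructed = np.random.rand(1024, 401)
print(rmse(original, reconstructed))
```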
Chapter 7
CONCLUSIONS
7.1 Extended proper orthogonal decomposition
2. The reconstruction RMSE (root mean square error) per snapshot is found to vary
between 0 and 0.03.
3. For this project, we have taken deltr = 23, number of modes = 4, time-resolved
snapshots = 9600 and non-time-resolved snapshots = 400.
4. The best overall reconstruction RMSE obtained is 4.197.
7.2 Commentary on other models
GAN, U-Net CNN, and CNN-VAE are the best-performing models: U-Net
achieved the best reconstruction accuracy (RMSE = 1.893, i.e. MSE = 3.584). This
suggests its potential for fluid flow reconstruction tasks.
Chapter 8
FUTURE PROSPECTS
8.1 Combining CNN and LSTM networks
8.1.1 Capturing spatial features with CNN
The CNN processes each snapshot by identifying patterns and relationships between
neighbouring data points. This allows the network to learn the underlying spatial
structure of the flow.
By combining the CNN's ability to learn spatial features with the LSTM's capability
for handling temporal dependencies, the CNN-LSTM model can effectively
reconstruct the current state of the flow field based on the history provided by the
previous snapshots.
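A minimal Keras sketch of such a CNN-LSTM hybrid; all sizes here are placeholders:

```python
import tensorflow as tf
from tensorflow.keras import layers

# A TimeDistributed CNN extracts spatial features from each of 10 past
# snapshots, and an LSTM models their temporal evolution to predict the
# next flow field (all shapes are assumptions).
model = tf.keras.Sequential([
    layers.Input(shape=(10, 64, 64, 1)),               # 10 previous snapshots
    layers.TimeDistributed(layers.Conv2D(8, 3, padding="same",
                                         activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D(2)),
    layers.TimeDistributed(layers.Flatten()),
    layers.LSTM(64),                                    # temporal dependencies
    layers.Dense(64 * 64),                              # next snapshot, flattened
    layers.Reshape((64, 64)),
])
model.compile(optimizer="adam", loss="mse")
```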
8.1.4 Challenges
Training CNN-LSTM models for fluid flow reconstruction can nevertheless be
challenging.
Overall, CNN-LSTM offers a promising technique for fluid flow reconstruction using
previous snapshots. With continued research and development, this approach is
expected to become even more powerful and versatile in various applications.
REFERENCES
1. Bo Liu, Jiupeng Tang, Haibo Huang, and Xi-Yun Lu (2020). Deep learning methods for super-resolution reconstruction of turbulent flows. Physics of Fluids. https://pubs.aip.org/aip/pof/article/32/2/025105/1060593/Deep-learning-methods-for-super-resolution
2. Weisheng Dong, Lei Zhang, Guangming Shi, and Xiaolin Wu (2020). Image Deblurring and Super-Resolution by Adaptive Sparse Domain Selection and Adaptive Regularization. https://ieeexplore.ieee.org/document/5701777
5. Junwei Chen, Marco Raiola, and Stefano Discetti (2022). Pressure from data-driven estimation of velocity fields using snapshot PIV and fast probes. Experimental Thermal and Fluid Science. https://www.sciencedirect.com/science/article/pii/S0894177722000498
10. Hamed Alqahtani, Manolya Kavakli, and Gulshan Kumar Ahuja (2019). The general architecture of GAN. https://www.researchgate.net/figure/The-general-architecture-of-GAN_fig5_338050169