
MNRAS 522, 1504–1520 (2023) https://doi.org/10.1093/mnras/stad937
Advance Access publication 2023 March 29

Predicting light curves of RR Lyrae variables using artificial neural network based interpolation of a grid of pulsation models

Downloaded from https://academic.oup.com/mnras/article/522/1/1504/7093416 by University of Delhi,Central Science Library user on 01 November 2023
Nitesh Kumar ,1 ‹ Anupam Bhardwaj ,2 Harinder P. Singh,1 Susmita Das ,3 Marcella Marconi ,2
Shashi M. Kanbur4 and Philippe Prugniel5
1 Department of Physics and Astrophysics, University of Delhi, Delhi 110007, India
2 INAF-Osservatorio Astronomico di Capodimonte, Salita Moiariello 16, I-80131 Naples, Italy
3 Konkoly Observatory, Research Centre for Astronomy and Earth Sciences, Eötvös Loránd Research Network (ELKH), Konkoly-Thege Miklós út 15-17, H-1121 Budapest, Hungary
4 Department of Physics and Earth Science, State University of New York at Oswego, Oswego, NY 13126, USA
5 Université de Lyon, Université Lyon 1, 69622 Villeurbanne; CRAL, Observatoire de Lyon, CNRS UMR 5574, F-69561 Saint-Genis Laval, France

Accepted 2023 March 23. Received 2023 March 23; in original form 2022 October 17

ABSTRACT
We present a new technique to generate the light curves of RRab stars in different photometric bands (I and V bands) using
artificial neural networks (ANN). A pre-computed grid of models was used to train the ANN, and the architecture was tuned
using the I-band light curves. The best-performing network was adopted to make the final interpolators in the I and V bands. The
trained interpolators were used to predict the light curve of RRab stars in the Magellanic Clouds, and the distances to the Large
Magellanic Cloud and Small Magellanic Cloud were determined based on the reddening independent Wesenheit index. The
estimated distances are in good agreement with the literature. The comparison of the predicted and observed amplitudes and Fourier amplitude ratios showed good agreement, but the Fourier phase parameters displayed a few discrepancies. To showcase
the utility of the interpolators, the light curve of the RRab star EZ Cnc was generated and compared with the observed light
curve from the Kepler mission. The reported distance to EZ Cnc was found to be in excellent agreement with the updated
parallax measurement from Gaia EDR3. Our ANN interpolator provides a fast and efficient technique to generate a smooth grid
of model light curves for a wide range of physical parameters, which is computationally expensive and time-consuming using
stellar pulsation codes.
Key words: methods: data analysis – techniques: photometric – stars: variables: RR Lyrae.

1 INTRODUCTION

RR Lyrae stars (RRLs) are old (≥10 Gyr), Population II stars in the core helium-burning evolutionary phase of low-mass stars. They are located at the intersection of the horizontal branch (HB) and the instability strip (IS) in the HR diagram. They are low-mass (∼0.5 M☉), metal-poor stars that pulsate primarily in the fundamental mode (RRab) and in the first-overtone (RRc) mode, with periods ranging from ∼0.2 to 1.0 d (see e.g. Bhardwaj 2022). Many RRL variables are also classified as double-mode pulsators (RRd) because they pulsate simultaneously in the fundamental and first-overtone modes (Sandage, Katem & Sandage 1981; Jurcsik et al. 2015; Soszyński et al. 2017). RRLs exhibit a well-defined period–luminosity–metallicity (PLZ) relation, especially in the near-infrared bands (Longmore, Fernley & Jameson 1986; Bono et al. 2001; Catelan, Pritzl & Smith 2004; Sollima et al. 2006; Muraveva et al. 2015; Bhardwaj et al. 2021), and thus constitute primary standard candles of the cosmic distance ladder. RRLs are also excellent probes for gaining deeper insights into the theory of stellar evolution and pulsation (e.g. Marconi et al. 2015; Das et al. 2018; Marconi et al. 2018). Owing to their age, RRLs are excellent tracers of old stellar populations and have been detected in Galactic (Vivas & Zinn 2006; Drake et al. 2013; Zinn et al. 2014; Pietrukowicz et al. 2015) and extragalactic (Moretti et al. 2009; Soszyński et al. 2009; Fiorentino et al. 2010; Cusano et al. 2013) environments. They have also been identified in several globular clusters (Coppola et al. 2011; Di Criscienzo et al. 2011; Kuehn et al. 2013; Kunder et al. 2013).

Advancements in the theoretical modelling of stellar pulsation have made it possible to generate grids of light curves corresponding to a set of physical parameters of the variables using non-linear 1D hydrodynamical codes, such as those described by Stellingwerf (1984), Bono et al. (1997), Bono, Marconi & Stellingwerf (1999), Marconi et al. (2015), and De Somma et al. (2020, 2022). A grid of non-linear convective hydrodynamical models was produced by Marconi et al. (2015) to study the pulsation properties of RRLs. This grid was created using horizontal-branch (HB) evolutionary models (for more information, see Pietrinferni et al. 2006, and references therein) and takes into consideration different chemical compositions, luminosity levels, and stellar masses. The models were calculated using the hydrodynamical codes of Stellingwerf (1982; updated later by Bono, Caputo & Marconi 1998; Bono et al. 1999) and were based on the numerical and physical assumptions detailed in Bono et al. (1998, 1999) and Marconi et al. (2003, 2011).

Additionally, the radial stellar pulsation code of Smolec & Moskalik (2008), available with the Modules for Experiments in Stellar Astrophysics (MESA; Paxton et al. 2011, 2013, 2015, 2018, 2019), can also be used to generate the light curves of radially pulsating variable stars.

 E-mail: [email protected]

© 2023 The Author(s)
Published by Oxford University Press on behalf of Royal Astronomical Society
The theoretical light curves generated from the available pulsation models have been utilized to investigate pulsation properties, derive PLZ relations, and provide quantitative comparisons with the observed light curves of Cepheid and RR Lyrae variables (Bono, Castellani & Marconi 2000b; Keller & Wood 2002; Caputo et al. 2004; Marconi & Clementini 2005; Marconi & Degl'Innocenti 2007; Natale, Marconi & Bono 2008; Marconi et al. 2013, 2015, 2017; Bhardwaj et al. 2017a; Das et al. 2018; Ragosta et al. 2019; Bellinger et al. 2020; Das et al. 2020).
However, generating a model light curve from given input parameters is still a computationally expensive problem, since it involves solving complex hydrodynamical equations. The processing time has been reduced significantly with modern computational facilities, but computing theoretical light curves for a fine grid of pulsation models covering the entire parameter space is still not feasible. A smooth grid of pulsation models of RR Lyrae and Cepheids is crucial to predict the physical parameters of these variables, either through a model-fitting approach (Marconi et al. 2013) or using an automated comparison with observed light curves (Bellinger et al. 2020). Bellinger et al. (2020) obtained a catalogue of physical parameters for observed stars by applying machine-learning methods to the available Cepheid and RR Lyrae models, but the accuracy of the physical parameters was limited due to the small number of models in the grid. These authors trained a neural network with light-curve structure parameters such as I- and V-band amplitudes, acuteness, skewness, and Fourier coefficients A_1, A_2, A_3 (see e.g. Deb & Singh 2009; Bhardwaj et al. 2015), along with the period, as input to predict the physical parameters of the model, including mass (M), luminosity (L), radius (R), and effective temperature (T_eff). The theoretical light curves computed by Marconi et al. (2015) and Das et al. (2018) were used for this analysis. They chose the input parameters based on a feature-importance study using another ML algorithm, namely Random Forest (RF). The error on the derived parameters was estimated by perturbing the light-curve parameters with random noise 100 times and passing them through the ANN.

In this study, we utilized a previously generated grid of models and employed modern automated methods to infer light curves. This was done by training an artificial neural network (ANN) using models from Marconi et al. (2015) and Das et al. (2018) to predict light curves based on a set of input parameters. This approach eliminates the need to solve complex time-dependent equations and can generate predictions much more efficiently.

The paper is organized as follows. The introduction of ANNs and quantitative Fourier analysis is presented in Section 2. The input and observational data sets used for network training and comparison are described in Section 3. The tuning of the network architecture and the training of the final interpolators in both I and V bands are discussed in Section 4. The validity of the trained interpolators is tested by comparing newly generated model light curves with the ANN-predicted models in Section 4.5. In Section 5, the comparison of Fourier parameters between observed and predicted light curves of the LMC and SMC in both I and V bands is discussed, and the distances to these galaxies are estimated. The applications of the trained interpolators are explored in Section 6, including a comparison of the observed and predicted V band light curve of a variable star (EZ Cnc) and the determination of its distance in Section 6.1. A smooth grid of light-curve templates in I and V bands is generated using the ANN interpolators in Section 6.2. The results of the study are summarized in Section 7.

Figure 1. A representation of an ANN with n inputs, two hidden layers having n_1 and n_2 neurons, respectively, and an output layer with m neurons. a_0 represents the bias terms in the respective layers. The summation and non-linearity nodes are omitted for the sake of clarity.

2 METHODOLOGY

The theoretical grid of models can be thought of as being generated via a function f defined as

f(x; w) = [light-curve],

where x is a combination of physical parameters of the stars, such as M, L, T_eff, X (hydrogen abundance ratio) and Z (metal abundance ratio), the period, and a few other parameters including the mixing length and the radiative cooling, convective, and turbulent flux parameters. Interested readers may refer to Marconi et al. (2015) and references therein for a better understanding of the input parameters required for the generation of light curves. Here, w are the fitting parameters. If we assume that the function f exists and is continuous and differentiable in the parameter space, an ANN with one or more hidden layers can reproduce this function. Theoretically, Cybenko (1989), Hornik, Stinchcombe & White (1989), and Hornik (1991) showed that an ANN with a non-linearly activated hidden layer can approximate any continuous function with arbitrary accuracy when provided with enough neurons in the hidden layer. In addition, ANNs are flexible in the choice of architecture and optimization algorithms, and are simple to implement. They also provide the possibility of transfer learning and are therefore useful for re-training the model with updated data.

2.1 Artificial neural network

We employ the simplest neural network, which is a feed-forward, fully connected neural network. The smallest unit of the network, the neuron (or perceptron), is a mathematical unit that calculates the weighted sum of all the neurons connected to it in the previous layer and feeds this output to all the neurons in the next layer after applying a non-linear activation function (σ) (see Fig. 1). The value of the ith neuron in the kth layer is calculated by

a_i^(k) = σ^(k) ( Σ_{j=0}^{n_{k−1}} w_ij^(k) a_j^(k−1) ),  for 1 ≤ i ≤ n_k,   (1)

with

a_0^(k) = 1,   (2)

where σ^(k) is the activation function for the kth layer, which is usually a non-linear function. A few examples of widely used activation functions are the rectified linear unit (relu), the sigmoid, and the hyperbolic tangent (tanh). The weights of the jth neuron connected to the ith neuron in the kth layer are represented by w_ij^(k) and are optimized to obtain a minimum (or maximum) value of a certain objective function during the training process. The weights are updated in an iterative process in which, at each iteration, the errors obtained at the output layer are propagated back to the previous layers, thus making the network 'better' at every iteration. This algorithm was first conceptualized by Rumelhart, Hinton & Williams (1986) and is known as the backpropagation algorithm. Backpropagation is the backbone of modern stochastic gradient descent (SGD) algorithms (for a review of modern SGD algorithms, interested readers may go through Ruder 2016), which optimize the weights of the network based on the quantity one wants to optimize in order to train the network. Fig. 1 represents a schematic of a network with two hidden layers. We have assumed that for the input layer k ≡ 0 and a^(0) ≡ x, and that at the output layer k ≡ L and a^(L) ≡ ŷ (ŷ is the output of the network).

If the predicted and given absolute magnitude values of the ith model corresponding to the jth phase are ŷ_ij and y_ij, the mean square error (MSE) for the ith model is calculated using

E_i ≡ (MSE)_i = (1/N_s) Σ_{j=1}^{N_s} (y_ij − ŷ_ij)²,   (3)

where N_s is the total number of magnitude bin values for each model. The average MSE over all models in the grid can be calculated using

E ≡ Avg. MSE = (1/N) Σ_{i=1}^{N} (MSE)_i = (1/N) Σ_{i=1}^{N} [ (1/N_s) Σ_{j=1}^{N_s} (y_ij − ŷ_ij)² ],   (4)

where N is the total number of models in the training data set. At every training iteration, for the simplest case of gradient descent learning, the weights are updated according to the following rule:

w_ij^(k) = w_ij^(k) − η × ∂E/∂w_ij^(k),   (5)

where η is a scaling factor, typically referred to as the learning parameter, that determines the size of the gradient descent steps. However, we used a modified version of the gradient descent algorithm known as the adaptive moment optimization algorithm (adam; Kingma & Ba 2014), in which the learning parameter is also updated at each iteration to reach the minimum of the objective function efficiently.

2.2 Fourier parameters

The analysis of light curves using Fourier analysis has been discussed extensively in the literature (Deb & Singh 2009; Bhardwaj et al. 2015; Das et al. 2018; and references therein). A Fourier series of sines is fitted to the theoretical and observed light curves and the parameters are deduced:

m = m_0 + Σ_{k=1}^{N} A_k sin(2πkΦ + φ_k).   (6)

Here, A_k and φ_k are the Fourier amplitude and phase coefficients, respectively, and Φ is the pulsation phase, which ranges from 0 to 1 and is calculated using the following equation:

Φ = (t − t_0)/P − Int[(t − t_0)/P],   (7)

where t_0 is the epoch of maximum brightness and P is the pulsation period of the star/model.

A_k and φ_k are used to calculate the Fourier amplitude ratios (R_k1) and phase differences (φ_k1):

R_k1 = A_k / A_1,   (8)

φ_k1 = φ_k − kφ_1,   (9)

where k > 1 and 0 ≤ φ_k1 ≤ 2π.
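The phase folding of equation (7) and the Fourier decomposition of equations (6), (8), and (9) can be sketched in a few lines of NumPy. This is an illustrative least-squares implementation, not the authors' pipeline, and the function names are ours:

```python
import numpy as np

def fold_phase(t, t0, period):
    """Pulsation phase of equation (7): fractional part of (t - t0)/P."""
    x = (t - t0) / period
    return x - np.floor(x)

def fit_fourier(phase, mag, order=4):
    """Least-squares fit of m = m0 + sum_k A_k sin(2*pi*k*phase + phi_k).

    Fits the equivalent linear form a_k*sin + b_k*cos, then converts the
    coefficients to amplitudes A_k and phases phi_k (equation 6), and
    forms R_k1 and phi_k1 (equations 8 and 9)."""
    cols = [np.ones_like(phase)]
    for k in range(1, order + 1):
        cols.append(np.sin(2 * np.pi * k * phase))
        cols.append(np.cos(2 * np.pi * k * phase))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, mag, rcond=None)
    m0, ab = beta[0], beta[1:].reshape(-1, 2)
    A = np.hypot(ab[:, 0], ab[:, 1])       # A_k
    phi = np.arctan2(ab[:, 1], ab[:, 0])   # a*sin(x) + b*cos(x) = A*sin(x + phi)
    R = A[1:] / A[0]                       # R_k1, equation (8)
    ks = np.arange(2, order + 1)
    phi_k1 = np.mod(phi[1:] - ks * phi[0], 2 * np.pi)  # equation (9)
    return m0, A, phi, R, phi_k1
```

Because the model is linear in the sine/cosine coefficients, an ordinary least-squares solve recovers the amplitudes and phases exactly for noise-free input on an evenly sampled phase grid.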
3 DATA

3.1 Training data

To train the neural network, we adopted the theoretical light curves of fundamental-mode RR Lyrae (RRab) models as described in Das et al. (2018). We used a total of 274 RRab light curves corresponding to a grid of physical parameters, of which 166 were initially computed by Marconi et al. (2015) and the additional 108 by Das et al. (2018) using the same non-linear, time-dependent convective hydrodynamical models. The model light curves were generated in bolometric magnitudes and later transformed into Johnson-Cousins photometric bands. A summary of the theoretical RRab models is presented in Fig. 2. The models contain seven distinct chemical compositions ranging from Z = 0.0001 to Z = 0.02, with a primordial He abundance Y = 0.245 and a constant helium-to-metals enrichment ratio of 1.4, chosen to replicate the initial helium abundance of the Sun (Serenelli & Basu 2010). The pulsation models were constructed with different sets of stellar masses and luminosities, which were fixed according to detailed central He-burning horizontal-branch evolutionary models. The range of Z is broad enough for the comparison with the observed RRLs in the satellite galaxies of the Milky Way, the Small Magellanic Cloud (SMC) and the Large Magellanic Cloud (LMC; Clementini et al. 2003). A few models have pulsation periods greater than one day, thereby including the possibility of evolved RRLs in the training data set. Of the 274 models, six overlapping models were removed from the input data set, leaving 268 unique RRab models to train the ANN. The ANN models are described in detail in Section 4.

3.2 Observational data used for comparison

We used the observed light curves for a comparison with the ANN-predicted light curves. Since the training light curves are in the Johnson-Cousins bandpasses, we need the observed light curves in the same filters. We used the I and V band light curves from the fourth phase of the Optical Gravitational Lensing Experiment (OGLE¹) catalogue of RR Lyrae variables in the LMC and SMC (Soszyński et al. 2016).

However, the observed magnitudes suffer from interstellar extinction and reddening due to the interaction of light with interstellar dust, so we need to account for interstellar extinction and perform extinction corrections in magnitudes at a given wavelength. Using the positions (RA/Dec.), we obtain the colour excess values, E(V − I), for RRLs in the LMC and SMC from the reddening maps of Haschke, Grebel & Duffau (2011).

¹ http://ogle.astrouw.edu.pl/
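Given the colour excess E(V − I) from these maps, the band corrections of equations (10) and (11) amount to a subtraction per band; a minimal sketch (the function name is ours):

```python
def deredden(mag_v, mag_i, e_vi):
    """Apply the extinction corrections of equations (10) and (11):
    A_V = 2.4 * E(V - I) and A_I = 1.41 * E(V - I)."""
    return mag_v - 2.4 * e_vi, mag_i - 1.41 * e_vi
```

For example, an RRL with E(V − I) = 0.10 is brightened by 0.24 mag in V and 0.141 mag in I.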


Figure 2. A visual representation of the input data set of RRL models described in Table 1. A few additional models were computed in this work for the validation of the trained ANN interpolators and are marked with A, B, and C.

We apply extinction corrections using Schlegel's list of conversion factors²:

A_V = 2.4 × E(V − I) = 3.32 × E(B − V),   (10)

and

A_I = 1.41 × E(V − I) = 1.94 × E(B − V).   (11)

² https://dc.zah.uni-heidelberg.de/mcextinct/q/cone/info

4 ANN INTERPOLATOR FOR RR LYRAE MODELS

We used the ANN to interpolate the light curve for input parameters within the grid. We built a feed-forward, fully connected neural network and optimized its architecture. The input layer of this network takes six parameters,

x ≡ { M/M☉, log(L/L☉), T_eff, X, Z, log(P) },

where M and L are the mass and luminosity of the model, T_eff is the effective temperature of the model in Kelvin, X and Z are the hydrogen and metal abundances of the model, and P is the period in days. We have included the period as one of the inputs to facilitate users who might want to generate a model corresponding to a specific period.

The hydrodynamic models presented in Marconi et al. (2015) were generated using the same physical and numerical assumptions employed in earlier works, such as Bono et al. (1998, 1999) and Marconi et al. (2003, 2011). For instance, the radiative opacities used in their models were taken from the OPAL radiative opacities provided by the Lawrence Livermore National Laboratory (Iglesias & Rogers 1996), while the molecular opacities were obtained from Alexander & Ferguson (1994). Since the physical conditions, including the radiative and molecular opacities, were kept constant, they were not considered as inputs to the ANN.

Since each input parameter has a different numerical range, we need to transform the parameters to a common numerical range. The input parameters are scaled in such a way that each has zero mean and unit standard deviation over the whole grid. At the output layer, we provide the corresponding (I or V band) light curve of the model, which consists of the absolute magnitudes at a given series of phases. We have a total of 1000 magnitude values per model, corresponding to phase values from 0 to 1 in steps of 0.001. We used a neural network that has one input layer with six input neurons, a few hidden layers (not more than three, to keep the network small), and one output layer with 1000 output neurons containing the absolute magnitudes corresponding to phase values between 0 and 1. We used a linear activation function (σ^(L) ≡ 1) between the last hidden layer and the output layer because we do not want to constrain the output values to any particular range. We have a total of 268 models, and hence the training matrix at the input layer has shape [268 × 6] and at the output layer [268 × 1000].

4.1 Network architecture and hyperparameter optimization

To have a generalized network that does not overfit or underfit the given data set, we need to choose a suitable architecture (the number of hidden layers, the number of neurons in each hidden layer, the activation functions) as well as the learning hyperparameters, such as the loss optimization algorithm and its parameters (Elsken, Metzen & Hutter 2019). Each individual hyperparameter has a significant role in the training process, and a different value of a hyperparameter can significantly affect the result of the training. However, there are no explicit 'rules' for selecting these attributes in such a way that the ANN model does not become trapped in a local solution. This is a crucial problem in the field of machine learning (Guo et al. 2008).

The choice of architecture and hyperparameters usually depends on the intuition of the expert and on hand tuning. Typically, trial-and-error approaches such as grid search and random search (Bergstra & Bengio 2012) are used to determine these characteristics. We created a grid of possible hyperparameters, based on choice and the intuition gained from working with the data set, which is tabulated in Table 2.
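The network shape described above (six standardized inputs, at most three hidden layers, 1000 linear outputs) can be sketched as a plain NumPy forward pass of equation (1). The weights below are random placeholders rather than trained values, and the layer sizes follow the best architecture reported in Table 3:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layers(sizes):
    """Placeholder (untrained) weights for a fully connected network."""
    return [(rng.normal(0.0, 0.1, (n_out, n_in)), np.zeros(n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(x, layers):
    """Feed-forward pass of equation (1): relu on the hidden layers,
    linear activation on the 1000-neuron output layer."""
    a = np.asarray(x, dtype=float)
    for i, (W, b) in enumerate(layers):
        z = W @ a + b
        a = z if i == len(layers) - 1 else np.maximum(z, 0.0)
    return a

# Six standardized inputs [M/Msun, log(L/Lsun), Teff, X, Z, log P]
# mapped to 1000 magnitudes at phases 0, 0.001, ..., 0.999.
layers = init_layers([6, 64, 128, 128, 1000])
light_curve = forward(np.zeros(6), layers)
```

In the actual study the weights are learned by backpropagation (Section 2.1); this sketch only shows how one input vector is mapped to a full light curve in a single pass.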

Out of the various possible combinations, we chose a set of 100 hyperparameter combinations at random. We then trained the network for a fixed 1000 epochs with the L2 norm (MSE) as the objective function using the adaptive moment SGD algorithm (adam; Kingma & Ba 2014) with a default batch size of 32 samples. The network architecture was optimized using the I-band light curves. For this procedure, we utilized the Keras tuner (O'Malley et al. 2019).

We show the results of the top 10 performing network architectures in Table 3. We observe that a network with three hidden layers of 64, 128, and 128 neurons in successive hidden layers, with an initial learning rate (η) ∼ 1.8 × 10⁻³, reaches the minimum MSE in 1000 epochs of learning. Fig. 3 depicts the learning curves for the top-10 network architectures listed in Table 3.

Table 1. A summary of 268 fundamental mode RR Lyrae models.

Z        X        M/M☉    log L/L☉    T_eff (K)     No. of RRab stars
0.02     0.71     0.51    1.69        5700–6800     6
                          1.78        5600–6800     7
                  0.54    1.49        6000–6800     5
                          1.94        5200–6600     8
0.008    0.736    0.55    1.62        6000–7000     10
                          1.72        5800–6900     12
                  0.56    1.60        5900–6900     11
                          1.70        5800–6900     10
                  0.57    1.58        6000–6800     5
                          2.02        5400–6680     6
0.004    0.746    0.53    1.81        5700–6800     7
                  0.55    1.71        6000–6900     9
                          1.81        5700–6900     13
                  0.56    1.65        6000–6900     10
                          1.75        5800–6800     10
                  0.57    1.63        6000–6900     10
                          1.73        5900–6800     10
                  0.59    1.61        6000–6900     10
                          2.02        5700–6700     7
0.001    0.754    0.58    1.87        5900–6900     7
                  0.64    1.67        6000–6800     9
                          1.99        5700–6800     10
0.0006   0.7544   0.60    1.89        5700–6900     9
                  0.67    1.69        6000–6800     9
                          2.01        5800–6800     9
0.0003   0.7547   0.65    1.92        5800–6800     6
                  0.72    1.72        6000–6700     8
                          1.99        5700–6700     10
0.0001   0.7549   0.72    1.96        5800–6800     7
                  0.80    1.76        6000–6700     8
                          1.97        5800–6700     10

Table 2. The hyperparameter search space for the network.

S. no.   Name of hyperparameter               Possible values
1        No. of hidden layers                 [1, 2, 3]
2        No. of neurons in one hidden layer   [16, 32, 64, 128]
3        Optimizer                            'adam' (Kingma & Ba 2014)
4        Learning rate                        [10⁻² – 10⁻⁴] (log sampling)
5        Activation function                  ['relu', 'tanh']
6        Weights initialization               'GlorotUniform'

4.2 Training of the I-band Interpolator

To construct the final interpolator for the I band, we used the architecture that performed best in the architecture optimization step, i.e. architecture 1. In the previous step, we noted that the training loss (MSE) decreases globally at large epochs, but the individual updates are quite big and the loss oscillates rapidly (see Fig. 3). High learning rates cause such oscillations in the learning curve. The oscillations can be reduced by lowering the learning parameter; however, we do not want to start the training with a low η because it impairs the performance of the network (see the comparison between architectures 1 and 4: the low learning rate causes the network to return a comparatively higher MSE). Hence, we employ a 'piece-wise decay' of the learning rate to mitigate these oscillations of the loss at higher epochs. To do this, we begin with the best-performing network of the previous step and re-train it with a decreasing learning rate. We reduce the current learning rate by dividing it by a constant number (δ) whenever the epoch crosses successive powers of 10. We chose δ = 5, based on the intuition gained after experimenting with a few values of δ. This guarantees that the loss diminishes gradually as the training epochs increase. We trained this network for 11 000 epochs and obtained a minimum MSE of 9.07 × 10⁻⁶. The final learning curve (epoch versus loss) is shown in Fig. 4, along with the learning parameter (η) in the top panel. This final network is adopted as the I-band ANN interpolator for fundamental mode RR Lyrae models.

4.3 Training of the V-band Interpolator

Training the network to interpolate the light curve in a different band is a problem similar to the one tackled in the previous step, and hence we used the same network architecture that performed best during the I-band interpolator training to train the V-band interpolator. We started with a neural network with three hidden layers containing 64, 128, and 128 neurons, respectively, and trained it with the same adam optimization algorithm and the same initial learning rate parameter (η = 1.7941 × 10⁻³). After the initial training for 1000 epochs, we encountered the same problem of loss oscillations, and hence we treated the learning rate in the same manner as in the case of the I-band interpolator. The learning curve for the V-band interpolator, along with the adopted learning rate parameter, is shown in the right-hand panel of Fig. 4.

4.4 Training statistics for interpolators

We calculated the statistics between the original model light curves and the ANN-generated/predicted light curves. We determined the average mean squared error (MSE), mean absolute error (MAE), and coefficient of determination (R²) between the original and predicted magnitude values for a quantitative comparison (Steel, Torrie et al. 1960; Draper & Smith 1998; Glantz & Slinker 2001). We have already discussed the MSE in Section 2.1. The MAE for one model is defined by

MAE = (1/N_s) Σ_{j=1}^{N_s} |y_j − ŷ_j|,

and R² is defined by

R² = 1 − Σ_j (y_j − ŷ_j)² / Σ_j (y_j − ȳ)²,

where y_j and ŷ_j are the original and predicted magnitude values corresponding to the jth phase and ȳ is the mean of the y_j. N_s is the total number of sample points for the light curve. We quote the average of each quantity over the whole data set.
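These three statistics can be computed directly; a small NumPy helper (ours, not the authors' code) implementing the MSE of equation (3) together with the MAE and R² definitions above:

```python
import numpy as np

def light_curve_metrics(y, y_hat):
    """MSE, MAE and R^2 between a model light curve y and the ANN
    prediction y_hat, following Sections 2.1 and 4.4."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    mse = np.mean((y - y_hat) ** 2)
    mae = np.mean(np.abs(y - y_hat))
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
    return mse, mae, r2
```

A perfect prediction gives MSE = MAE = 0 and R² = 1; R² can be negative when the prediction is worse than simply quoting the mean magnitude.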

Table 3. The performance of different neural network architectures trained with different combinations of hyperparameters.

Architecture no.   No. of hidden layers   H layer 1   H layer 2   H layer 3   Activation   Learning rate (η)   Min MSE
1                  3                      64          128         128         relu         1.7941 × 10⁻³       8.7585 × 10⁻⁵
2                  3                      32          64          128         relu         2.5664 × 10⁻³       1.3403 × 10⁻⁴
3                  3                      64          32          128         relu         3.3848 × 10⁻³       1.8869 × 10⁻⁴
4                  3                      64          128         128         relu         8.2402 × 10⁻⁴       2.1454 × 10⁻⁴
5                  3                      128         64          128         relu         1.1231 × 10⁻³       2.1862 × 10⁻⁴
6                  2                      128         128         –           relu         3.2495 × 10⁻³       2.5605 × 10⁻⁴
7                  3                      32          32          128         relu         6.1855 × 10⁻³       2.7093 × 10⁻⁴
8                  3                      32          128         32          relu         6.9244 × 10⁻³       4.3660 × 10⁻⁴
9                  2                      128         128         –           relu         8.5086 × 10⁻⁴       4.4778 × 10⁻⁴
10                 2                      64          128         –           relu         1.2916 × 10⁻³       4.6174 × 10⁻⁴
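The random search behind Table 3 can be sketched as follows. The study itself used the Keras tuner (O'Malley et al. 2019), so this stand-alone version, with a stand-in `evaluate` function in place of a full 1000-epoch training run, is illustrative only; the search space is taken from Table 2:

```python
import random

# Hyperparameter space from Table 2 (learning rate sampled log-uniformly).
SPACE = {
    "n_hidden": [1, 2, 3],
    "n_neurons": [16, 32, 64, 128],
    "activation": ["relu", "tanh"],
}

def sample_architecture(rng):
    """Draw one candidate: a layer-size list plus training hyperparameters."""
    n_hidden = rng.choice(SPACE["n_hidden"])
    return {
        "layers": [rng.choice(SPACE["n_neurons"]) for _ in range(n_hidden)],
        "activation": rng.choice(SPACE["activation"]),
        "learning_rate": 10 ** rng.uniform(-4, -2),  # log-uniform in [1e-4, 1e-2]
    }

def random_search(evaluate, n_trials=100, seed=42):
    """Try n_trials random candidates and keep the one with the lowest loss.

    `evaluate` stands in for training the candidate network for 1000 epochs
    and returning its minimum MSE."""
    rng = random.Random(seed)
    trials = [sample_architecture(rng) for _ in range(n_trials)]
    return min(trials, key=evaluate)
```

With 100 trials, as in the text, each candidate corresponds to one row of the search that produced Table 3.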

indicate that the light curves predicted by the ANN are consistent
with the model light curves. The validation models produced an
average MSE of 2.316 × 10−3 in the I band, corresponding to an
average R2 value of 0.95, and an MSE of 2.658 × 10−3 in the V
band, corresponding to an average R2 value of 0.97. This agreement
between the ANN generated light curves and the new model light
curves demonstrates the validity of our ANN models and confirms
that they can be used to generate light curves in both V and I bands.

5 C O M PA R I S O N O F O B S E RV E D L I G H T
C U RV E S W I T H T H E G E N E R AT E D L I G H T
C U RV E S U S I N G I N T E R P O L AT O R S
Theoretical models are used to complement observational data and
are tested on the observed light curves of stars for which the
physical parameters are already known. To accurately determine
the physical parameters of RRLs, a detailed spectroscopic and
photometric analysis is often required (see e.g. Wang et al. 2021).
However, in some cases, it is not feasible to obtain spectroscopic
measurements of such stars, so data-driven methods are necessary
to infer their physical properties. One such method is presented in
the study by Bellinger et al. (2020). The authors have predicted the
physical parameters (mass, luminosity, effective temperature, and
Figure 3. Training curves of the network architectures described in Table 3. We seek an architecture that converges to the minimum rapidly in the first 1000 epochs. We find that the network with architecture 1 reaches the minimum fastest.

where yj and ŷj are the original and predicted magnitude values corresponding to the jth phase, and ȳ = (Σj yj)/Ns, where Ns is the total number of sample points for the light curve. We quote the average of each quantity over the whole data set.

The training statistics for both the I and V bands are given in Table 4. For the I-band interpolator, we achieve a minimum MSE of ∼9.076 × 10−6, with a corresponding MAE of ∼2.162 × 10−3 and R2 ∼ 0.9987. For the V-band interpolator, we achieve a minimum MSE of 1.175 × 10−5, with a corresponding MAE of ∼2.503 × 10−3 and R2 ∼ 0.9994.

4.5 Validation of interpolators

To validate our method, we computed three additional models using the same hydrodynamical code and compared the light curves predicted by the ANN with the ones obtained from these models. The comparison between the ANN generated light curves and the model light curves in the I and V bands can be seen in Figs 5 and 6, respectively. The physical parameters for the validation models were selected from a sparsely populated region of the parameter space. The results, shown in Table 5,

radius) of stars in the LMC and SMC using the OGLE-IV survey, which provides the I and V band light curves of various types of variable stars, including RR Lyrae. The authors used an ANN to derive the parameters by training it with the relationship between light-curve structure (including amplitudes, acuteness, skewness, and coefficients of the Fourier series) and the physical parameters of the models. By comparing the ANN generated light curve based on these parameters to the observed light curves, the validity of the derived parameters can be evaluated. This comparison was done by comparing the amplitudes, the Fourier parameters, and their distribution with the period of pulsation for the ANN generated and the observed light curves. We stress that the ANN used by Bellinger et al. (2020) to estimate the physical parameters based on the light curves and the one we use to predict the light curves based on the physical parameters were trained on the exact same set of hydrodynamical models.

The ANN model requires a total of six input parameters, M/M☉, log(L/L☉), Teff, X, Z, and log(P), to predict the light curve. We used the Lomb–Scargle algorithm (Lomb 1976; Scargle 1982) to determine the period from the observed magnitudes in unevenly spaced observations, and we calculated the Z values for the Bellinger et al. (2020) stars from the photometric metallicities ([Fe/H]) provided by Skowron et al. (2016). The steps for the transformation from [Fe/H] to Z are as follows: if the composition of a star is solar
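For reference, the three statistics quoted here follow their standard definitions; the sketch below is an illustrative NumPy implementation of those definitions, not the code used to train the interpolators:

```python
import numpy as np

def light_curve_metrics(y, y_hat):
    """MSE, MAE, and R^2 between an original (y) and ANN-predicted
    (y_hat) light curve sampled at Ns phase points."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    mse = np.mean((y - y_hat) ** 2)
    mae = np.mean(np.abs(y - y_hat))
    y_bar = y.sum() / y.size  # ȳ = Σ y_j / Ns
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y_bar) ** 2)
    return mse, mae, r2
```

Averaging these per-curve values over the whole training set gives the numbers reported in Table 4.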

MNRAS 522, 1504–1520 (2023)
Figure 4. The learning curve for both the I and V bands is shown in the respective blocks along with the learning rate parameter. The blue curve in each plot represents the training curve for the initial 1000 epochs, trained with the best-performing network architecture obtained using a random search. We then transfer the trained weights and re-train the network with a piece-wise decay of the learning rate. The orange curve represents the training curve in this phase.
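The piece-wise learning-rate decay mentioned in the caption can be sketched as a simple step-decay schedule. All constants below (initial rate, drop factor, drop interval) are illustrative placeholders, not the values used for the published interpolators; a function of this form could, for instance, be passed to a callback such as tf.keras.callbacks.LearningRateScheduler:

```python
def piecewise_decay(epoch, initial_lr=1e-3, drop=0.5, epochs_per_drop=200):
    """Step decay: multiply the learning rate by `drop` once every
    `epochs_per_drop` epochs (all constants illustrative)."""
    return initial_lr * drop ** (epoch // epochs_per_drop)
```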

Table 4. The statistics of the interpolators on training data.

Band   No. of models   MSE            MAE            R2
I      268             9.076 × 10−6   2.162 × 10−3   0.99879
V      268             1.175 × 10−5   2.503 × 10−3   0.99940

scaled, the following relation holds (Piersanti, Straniero & Cristallo 2007):

[Fe/H] = log(Z/X) − log(Z☉/X☉).   (12)

For the Sun, we adopted X☉ = 0.7392, Y☉ = 0.2486, Z☉ = 0.0122 from Asplund, Grevesse & Sauval (2005). Also, we fixed Y = 0.245
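Under these assumptions (solar-scaled composition, fixed Y), equation (12) can be inverted for Z in closed form: writing r = Z/X = (Z☉/X☉) × 10^[Fe/H] and X = 1 − Y − Z gives Z = r(1 − Y)/(1 + r). A minimal sketch of this conversion, using the solar values quoted above:

```python
import math

Z_SUN, X_SUN = 0.0122, 0.7392  # Asplund, Grevesse & Sauval (2005)

def feh_to_z(feh, y=0.245):
    """Invert [Fe/H] = log(Z/X) - log(Z_sun/X_sun) for a solar-scaled
    mixture with fixed helium abundance Y and X = 1 - Y - Z."""
    r = (Z_SUN / X_SUN) * 10.0 ** feh  # r = Z/X
    return r * (1.0 - y) / (1.0 + r)

def z_to_x(z, y=0.245):
    """Hydrogen mass fraction from X = 1 - Y - Z."""
    return 1.0 - y - z
```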


Figure 5. The comparison of the original I band light curve, represented by the blue background line, and the ANN reconstructed light curve, shown as the
foreground red line, is displayed for three models. These light curves were generated using the same hydrodynamical code for validation purposes. The input
parameters for each light-curve plot can be found in the upper panel. The difference between the two light curves is depicted in the bottom panel with a black
line, with the magenta line representing the 3σ bounds and the green line indicating the mean deviation between the predicted and actual light curves. The
goodness of fit parameter R2 is also calculated for each plot and displayed.

Figure 6. Same as Fig. 5 but for V band.

for the RRL stars and determined the values of X and Z for each star by cross-matching Bellinger et al. and Skowron et al.

We managed to compile the required input parameters for 7789 RRab stars of the LMC and 676 stars of the SMC. With the adopted input parameters, we generated the light curves and determined the peak-to-peak amplitude (A) and the Fourier parameters (R21, R31, R41, R51, φ21, φ31, φ41, φ51) by fitting a Fourier series defined by equation (6) with N = 5. The calculated mean magnitudes, amplitudes, Fourier amplitude ratios, and phase parameters for the ANN predicted and observed light curves are provided in Table 6.

Fig. 7 displays the peak-to-peak amplitudes for the observed and predicted light curves of RR Lyrae in the Magellanic Clouds in the I and V bands, respectively. We find that the predicted amplitudes are in good agreement with the observed amplitudes of RR Lyrae variables. For the LMC variables, there seem to be two amplitude sequences for the longer period RRab (log P ≳ −0.22) stars. The theoretical models reproduce relatively larger amplitudes for Cepheids and RR Lyrae than the observations in optical bands (Bhardwaj et al. 2017b; Das et al. 2018), and the higher amplitude sequence can be attributed to this systematic in the models for specific mass–luminosity levels. The discrepancy is known to be related to the treatment of superadiabatic convection, as the considered pulsation models (Marconi et al. 2015) assume a single value for the mixing length parameter that is used in the code to close the non-linear equation system. Nevertheless, the majority of the predicted and the observed amplitudes are in good agreement.

5.1 Comparison of the Fourier parameters of models with observations

Figs 8 and 9 display the Fourier parameters of the predicted and observed light curves of the LMC and SMC in both the I and V bands, respectively. For RRab in both the Clouds, the Fourier amplitude ratio values from the predicted light curves agree well with the observations, but the predicted phase parameters show a systematic offset and a larger dispersion. We also note that the phase parameters exhibit a strong correlation with the metallicity (Jurcsik & Kovacs 1996; Nemec et al. 2013; Mullen et al. 2021), and such a systematic may also arise from the uncertainties in the input photometric metallicities.

It should be noted that the predicted physical parameters for these variables from Bellinger et al. (2020) are not highly precise, and their accuracy is limited by the lack of a fine grid of models. The derived physical parameters are used to generate the light curve using the trained interpolators. A good match between the ANN generated and observed light curves is expected, since the same theoretical models were employed to train the ANN used by us and the ANN trained by Bellinger et al. (2020). However, it should be noted that the phase parameters were not included in the training input used to derive the physical parameters in Bellinger et al. (2020). Despite including convection in the hydrodynamical models, it remains difficult to match the observed Fourier phase parameters of RRLs with theoretical models (Feuchtinger 1999; Paxton et al. 2019). Comparative studies of the theoretical RR Lyrae models generated by Marconi et al. (2015) with the observations show an offset in the value of the Fourier phase parameters. The models predict higher Fourier phases than the observations (Das et al. 2018). The same effect gets propagated through the ANN, and the phase parameters derived from the light curves generated using the ANN interpolator are slightly higher than the observed values for both the LMC and SMC.

Table 5. The statistics of the interpolators on validation data.

Band   No. of models   MSE            MAE             R2
I      3               2.316 × 10−3   3.375 × 10−2    0.95123
V      3               2.658 × 10−3   3.5001 × 10−2   0.96911

5.2 Distance modulus to the Magellanic Clouds

The predicted light curves generated by the ANN model can be employed to estimate the distance modulus of the Magellanic Clouds.



Figure 7. Peak-to-peak amplitude against log (P) for observed and ANN generated light curves of LMC and SMC stars. The extended panel in each plot shows the histograms of periods on the x-axis and amplitudes on the y-axis.

We adopted the reddening-independent Wesenheit index (Madore 1982) to determine the distances, which is defined as W = I − 1.55 × (V − I). The Wesenheit index is a commonly used method for estimating distances (Soszyński et al. 2018). We calculated the Wesenheit index-based distance moduli (Wm − WM) to estimate the distances to the Magellanic Clouds.

The distance moduli of individual RRab stars in the Magellanic Clouds are computed based on the Wesenheit index and are depicted in Fig. 10. By removing outliers beyond 5σ, the average distance moduli of RRab stars in the LMC and SMC are determined to be μLMC = 18.567 ± 0.135 mag and μSMC = 18.93 ± 0.17 mag, respectively. These estimates are consistent with previously published distances of the Magellanic Clouds based on eclipsing binaries (μLMC = 18.476 ± 0.024 mag and μSMC = 18.95 ± 0.07 mag; Graczyk et al. 2014; Pietrzyński et al. 2019). It is important to note that the methodology employed in this study does not presuppose any prior period–magnitude relationship, and the obtained distance estimates are simply a result of accurately predicted light curves of observed stars in the Magellanic Clouds.

6 APPLICATIONS OF ANN INTERPOLATORS

6.1 Light curve comparison of EZ Cnc

As an application of the ANN interpolator, we generated and compared the light curve of the RRab star EZ Cnc (EPIC 212182292) with the observed light curve. It is a non-Blazhko RRab variable star which has been observed extensively in both the photometric and spectroscopic observing regimes. Wang et al. (2021) used 55 high-quality Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST; Luo et al. 2015) spectra of medium resolution to determine the atmospheric parameters (Teff, log g, and [M/H]). Starting from these parameters, they generated a grid of theoretical models using MESA, applying the time-dependent turbulent convective

Figure 8. This plot displays the comparison between the Fourier amplitude ratios and phase differences of the light curves predicted by ANN and the actual
observations for LMC, in both the I and V bands, as a function of period.

models. They searched for the optimum parameters for which the modelled light curve matched the observed light curve from the K2 mission (Howell et al. 2014) of the Kepler spacecraft, for which the light curve was processed by the EPIC Variability Extraction and Removal for Exoplanet Science Targets pipeline (EVEREST; Luger et al. 2016). The estimated parameters of the star are given in Table 7. The given flux was converted to the Kepler magnitude (Kp) by the formula given by Nemec et al. (2011),

Kp = m0 − 2.5 log(Flux),

where m0 = 25.4 is derived by taking the difference between the instrument magnitude and the mean of Kp (Wang et al. 2021). We converted the Kp to the V-band magnitude using the relation given by Nemec et al. (2011),

V = (1.45 ± 0.24) Kp − (5.97 ± 3.20).

After extracting the V-band light curve, we applied the extinction correction using the reddening maps of Schlegel, Finkbeiner & Davis (1998) and Schlafly & Finkbeiner (2011), and found E(B − V) = 0.0272 ± 0.0006 and the corresponding AV = 3.32 × (0.0272 ± 0.0006) = 0.0903 ± 0.0020 using equation (10).3 We generated a light curve using the V-band interpolator by providing the parameters given in Table 7 as input. The error bars for the predicted magnitude are derived from the given uncertainties in the physical parameters. The resulting ANN generated light curve is plotted against the observed V-band light curve from the Kepler telescope in Fig. 11. For the purpose of comparison, both light curves are normalized to mean magnitudes. We also did a quantitative analysis by determining and comparing the Fourier amplitude and

3 The values are determined from the NASA/IPAC INFRARED SCIENCE ARCHIVE (https://irsa.ipac.caltech.edu/applications/DUST/).
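The chain of conversions above (flux → Kp → V → dereddened V), together with the distance step used later in this section, can be sketched as below. The relations and constants (m0 = 25.4, the Nemec et al. 2011 slope and zero-point, AV = 3.32 E(B − V)) are those quoted in the text; uncertainties are omitted for brevity:

```python
import math

M0 = 25.4          # Kepler zero-point from Wang et al. (2021)
R_V_COEFF = 3.32   # A_V = 3.32 * E(B-V), as in equation (10)

def flux_to_kp(flux):
    """Kepler magnitude from the EVEREST flux: Kp = m0 - 2.5 log10(Flux)."""
    return M0 - 2.5 * math.log10(flux)

def kp_to_v(kp):
    """Kp to V (Nemec et al. 2011), central values of the relation."""
    return 1.45 * kp - 5.97

def deredden(v, ebv):
    """Extinction-corrected V magnitude."""
    return v - R_V_COEFF * ebv

def distance_pc(mu):
    """Distance in pc from the distance modulus mu = 5 log10(d) - 5."""
    return 10.0 ** (mu / 5.0 + 1.0)
```

For example, a distance modulus of 11.284 mag corresponds to a distance of roughly 1806 pc, matching the value derived for EZ Cnc.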

Figure 9. Same as Fig. 8 but for SMC.
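The Fourier decomposition used throughout these comparisons (a sine series with N = 5, amplitude ratios Rk1 = Ak/A1, and phase differences φk1 = φk − k φ1) can be sketched via linear least squares. This is one standard route to such a fit, not the authors' own implementation:

```python
import numpy as np

def fit_fourier_sine(phase, mag, order=5):
    """Least-squares fit of m(x) = m0 + sum_k A_k sin(2*pi*k*x + phi_k),
    returning the amplitudes A_k, ratios R_k1 = A_k/A_1, and phase
    differences phi_k1 = phi_k - k*phi_1 (wrapped to [0, 2*pi))."""
    phase, mag = np.asarray(phase, float), np.asarray(mag, float)
    cols = [np.ones_like(phase)]
    for k in range(1, order + 1):
        cols += [np.sin(2 * np.pi * k * phase), np.cos(2 * np.pi * k * phase)]
    coef, *_ = np.linalg.lstsq(np.stack(cols, axis=1), mag, rcond=None)
    # a*sin(th) + b*cos(th) = A*sin(th + phi), A = hypot(a, b), phi = atan2(b, a)
    a, b = coef[1::2], coef[2::2]
    amp = np.hypot(a, b)
    phi = np.arctan2(b, a)
    r_k1 = amp[1:] / amp[0]
    phi_k1 = np.mod(phi[1:] - np.arange(2, order + 1) * phi[0], 2 * np.pi)
    return amp, r_k1, phi_k1
```

Applied to a well-sampled light curve, the same routine recovers the observed and predicted parameters that are compared in Figs 8 and 9.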

phase parameter ratios for the Kepler V band and the ANN predicted V-band light curve. The derived parameters, including the amplitudes and Fourier parameter ratios, are given in Table 8. The peak-to-peak amplitude (A) predicted by the ANN V-band interpolator is higher than that of the observed V-band light curve. This is due to a rather low mixing length parameter adopted in the considered grid of models. For successful model fitting applications to RRLs (see e.g. Marconi & Clementini 2005; Marconi & Degl'Innocenti 2007), an increased mixing length value is required to match the light curves of fundamental pulsators. The Fourier amplitude and phase ratios of the K2 and ANN generated light curves match each other within the 1σ errors.

A direct application of this analysis is to estimate the distance to the star. Since we have calculated the mean absolute magnitude from the ANN generated light curve and the apparent magnitude from the Kepler telescope, the distance modulus is calculated as μEZ Cnc = 11.988(2) − 0.703(2) = 11.284(3) mag, and hence the distance d (in pc), calculated using μ = 5 log(d) − 5, gives d = 1806 ± 2 pc. The calibrated Gaia DR2 distance for this star is 1840 (+192/−161) pc (Bailer-Jones et al. 2018), which was recently updated by Gaia EDR3 to 1775 (+70/−70) pc (Bailer-Jones et al. 2021). The estimated distance to EZ Cnc remarkably matches the published distance estimates in the literature.

6.2 Generating a grid of models using the trained ANN interpolators

To get precise physical parameters of pulsating variable stars, a grid of model light curves is compared with the observed light curve. However, a pre-computed grid of models is usually very coarse and non-uniform in the parameter space. This is due to the high computational cost and the time-consuming process of solving the time-dependent hydrodynamical equations of the stellar atmosphere. Moreover, to constrain parameters like mass, surface gravity, and metallicity, we need to rely on spectroscopic data, which is usually

Figure 10. The graph presents a histogram of the distance modulus calculated through the Wesenheit index. The left side shows the distribution of the distance
modulus for stars in the LMC, while the right side displays the distribution for the SMC. In each plot, on the top left corner, a box lists the weighted average
distance modulus and the number of stars (N) considered after outliers were removed using a 5σ threshold in sigma clipping.
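The distance pipeline summarized in this figure (W = I − 1.55 (V − I), per-star distance moduli Wm − WM, outlier removal, weighted average) can be sketched as follows. The clipping shown is a simplified one-pass version of sigma clipping, not the authors' implementation:

```python
import numpy as np

def wesenheit(i_mag, v_mag):
    """Reddening-free Wesenheit index W = I - 1.55*(V - I) (Madore 1982)."""
    return i_mag - 1.55 * (v_mag - i_mag)

def clipped_weighted_mean(mu, sigma, nsigma=5.0):
    """Weighted mean of distance moduli after removing outliers beyond
    nsigma standard deviations (one-pass sigma-clipping sketch)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    keep = np.abs(mu - mu.mean()) < nsigma * mu.std()
    w = 1.0 / sigma[keep] ** 2
    return np.sum(w * mu[keep]) / np.sum(w), int(keep.sum())
```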

not available with photometric data. Hence, a fine grid of models is required to pin down the physical and atmospheric parameters of the star.

We generated a fine grid of light curve templates in both the I and V bands using the trained interpolators. The choice of input parameters for the new grid is limited by the parameter space of the original grid of models. We choose a finer and more uniform grid than the original models. We generated the grid for three helium abundance ratios, Y = 0.245, 0.25, and 0.265, and four different Z values ranging from metal-poor to metal-rich stars. For any combination of Y and Z, the hydrogen abundance ratio (X) can be calculated using X = 1 − Y − Z. Mass (M) varies from 0.52 to 0.79 M☉, with a constant step size of 0.03 M☉. The luminosity parameter log(L/L☉) varies from 1.54 to 2.02 dex with a step size of 0.04 dex, and the effective temperature (Teff) ranges from 5300 to 7000 K with a step size of 100 K. The period of an RRL is closely related to its temperature, luminosity, and mass (van Albada & Baker 1971). The van Albada–Baker (vAB) relation describes this relationship. We have used a modern version of the vAB relation, which includes the effect of metallicity on the pulsation period, from Marconi et al. (2015). We used the relation for the fundamental mode RRLs:

log P = −(0.58 ± 0.02) log(M/M☉) + (0.860 ± 0.003) log(L/L☉) − (3.40 ± 0.03) log(Teff) + (0.013 ± 0.002) log(Z) + (11.347 ± 0.006).   (13)

We end up with a grid of 37 800 individual parameter combinations for which the template light curves are generated in the I and V bands using the interpolators. Fig. 12 represents the distribution of the Marconi et al. (2015) parameter space along with the new grid parameters that we have computed (see Table 9 for the parameter ranges). A complete distribution of all parameters is shown in Fig. A1. The light curve templates of six random models of the new grid are shown in Fig. 13 in both the I and V bands. We observe that the predicted light curves exhibit the same structure and features as an RRab light curve. However, for certain combinations of the input parameters, the predicted light curve may not resemble an RRab light curve. The reason for this can be traced back to the scarcity of models in this region of the training data set or the lack of stable RRab stars with these combinations of physical parameters.

7 SUMMARY

We present a new technique to generate the light curve of RRab models in different photometric bands using ANN. We built and trained an artificial neural network for interpolating the light curve within a pre-computed grid of models. The ANN has been trained with the physical parameters–light curve grid. We used the models generated by Marconi et al. (2015) and used in Das et al. (2018), which were computed by solving the hydrodynamical conservation equations simultaneously with a nonlocal, time-dependent treatment of convective transport (Stellingwerf 1982; Bono & Stellingwerf 1994; Bono, Castellani & Marconi 2000a; Marconi 2009). For the validation of the trained interpolators, light curves for a few new models were generated and then compared with the ANN predicted light curves.

The architecture of the neural network is tuned using the I-band light curves. A random search for the hyperparameters is performed within a grid of inferred hyperparameter combinations. The architecture and trained weights of the best-performing network are then adopted for making the final interpolators in the I and V bands. As an application of the trained interpolators, we generated and compared the light curves of the RRab stars in the Magellanic Clouds (LMC/SMC). The physical parameters [M/M☉, log(L/L☉), Teff] of RRab stars in the LMC and SMC are adopted from Bellinger et al. (2020). Z is calculated from the metallicity estimates provided by Skowron et al. (2016), and X = 1 − Y − Z is calculated using a fixed primordial helium abundance (Y = 0.245). Lastly, the period is determined from the observed light curve using the Lomb–Scargle method. The interpolators are then used to predict the light curve from the given physical parameters. Both observed and predicted light curves are then fitted with a Fourier sine series (see equation 6) with N = 5. The comparison of ANN predicted amplitudes, Fourier
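The construction of the fine grid, with the period closed by the vAB relation of equation (13), can be sketched as below. This is an illustrative reconstruction using the central coefficient values only; the axes follow Table 9, but since the paper's final grid contains 37 800 combinations, the exact selection evidently differs from this naive Cartesian product:

```python
import itertools
import math

def vab_log_period(m, logl, teff, z):
    """Fundamental-mode vAB relation, equation (13), central values only."""
    return (-0.58 * math.log10(m) + 0.860 * logl
            - 3.40 * math.log10(teff) + 0.013 * math.log10(z) + 11.347)

# Grid axes following Table 9 (mass in solar units, log luminosity, Teff, Y, Z).
masses = [round(0.52 + 0.03 * i, 2) for i in range(10)]  # 0.52-0.79 Msun
logls = [round(1.54 + 0.04 * i, 2) for i in range(13)]   # 1.54-2.02 dex
teffs = range(5300, 7001, 100)                           # 5300-7000 K
ys = [0.245, 0.25, 0.265]
zs = [0.00011, 0.00668, 0.01324, 0.01980]

grid = [
    {"M": m, "logL": logl, "Teff": t, "X": 1.0 - y - z, "Z": z,
     "logP": vab_log_period(m, logl, t, z)}
    for m, logl, t, y, z in itertools.product(masses, logls, teffs, ys, zs)
]
```

Each dictionary in `grid` is a complete six-parameter input for the trained interpolators.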

Table 6. The table displays the calculated ANN predicted mean magnitudes, amplitudes, and Fourier parameters, along with the observations. The complete table can be accessed online in a machine-readable format. For each star (OGLE ID) and band (λ = I or V), the columns list the adopted physical parameters M/M☉, log(L/L☉), Teff (K), X, Z, and the period P (d), followed by the observed and ANN values of the mean magnitude, the peak-to-peak amplitude A (mag), the Fourier amplitude ratios R21, R31, R41, R51, and the Fourier phase differences φ21, φ31, φ41, φ51. Only one example row is reproduced here: for LMC-00008 in the I band, M/M☉ = 0.605 ± 0.054, log(L/L☉) = 1.778 ± 0.044, Teff = 6490 ± 110 K, X = 0.753971, Z = 0.001029, P = 0.7877152 ± 3.1 × 10−6 d, mobs = 18.679 ± 0.002 mag, MANN = −0.155 ± 0.001 mag, Aobs = 0.462 ± 0.076 mag, and AANN = 0.663 ± 0.234 mag.
Table 7. The stellar parameters of EZ Cnc (EPIC 212182292) adopted from Wang et al. (2021).

Parameter                        Value
Mass (M)                         0.48 ± 0.03 M☉
Luminosity (L)                   42 ± 2 L☉
Effective temperature (Teff)     6846 ± 50 K
Period (P)a                      0.545740 ± 0.000007 d
X                                0.741 ± 0.004
Z                                0.006 ± 0.002

Notes. a The period is determined using the Kepler light-curve with PERIOD04 (Lenz & Breger 2005).

Table 8. Comparison of the Fourier parameters of the EZ Cnc star derived using the K2 and ANN predicted light curves.

Parameter         K2 light curve (Kp converted to V band)   ANN predicted (V band)
mean mag (mag)    11.988 ± 0.002                            0.703 ± 0.002
A (mag)           1.263 ± 0.148                             1.584 ± 0.107
R21               0.516 ± 0.002                             0.542 ± 0.005
R31               0.327 ± 0.002                             0.325 ± 0.005
R41               0.153 ± 0.002                             0.223 ± 0.005
R51               0.104 ± 0.002                             0.178 ± 0.005
φ21               2.761 ± 0.005                             2.783 ± 0.013
φ31               5.719 ± 0.007                             5.718 ± 0.020
φ41               2.214 ± 0.013                             2.170 ± 0.027
φ51               5.024 ± 0.018                             5.119 ± 0.034

Figure 11. This plot represents the observed light curve from the K2 survey (converted to V band) along with the ANN generated light curve using the trained V-band interpolator.

amplitude ratios and Fourier phase differences with the observations is shown in Figs 7–9. We observe that the ANN predicted amplitudes are a bit larger than the observed amplitudes, and the reason for this can be traced back to the low mixing length that was used to compute the original models. The Fourier amplitude ratios (Rk1, as defined in equation 8) of the light curves predicted by the ANN are in great agreement with observations, except for a few exceptions where R41 and R51 exhibit an additional feature in the ANN predicted models at around log(P) ≈ −0.22. The Fourier phase differences (φk1, defined in equation 9) are consistently shifted, particularly φ21 and φ31. The

Downloaded from https://academic.oup.com/mnras/article/522/1/1504/7093416 by University of Delhi,Central Science Library user on 01 November 2023
Figure 12. The parameter space of the new grid along with original grid from Marconi et al. (2015). A more detailed description of both grids can be found in
Fig. A1. The ANN generated light curves of the labelled models are shown in Fig. 13.

Table 9. The parameter space of the new grid generated using the trained each light curve. Additionally, the size of the trained interpolator file
ANN Interpolators. is much smaller, making it easy to store and access. To complement
existing theoretical model grids in the literature, a smooth grid of
Parameter Range Step model light curves was generated using the trained interpolator. The
M 0.52–0.79 M 0.03 M grid of templates can be used in techniques such as template fitting
[Table fragment: parameter ranges and step sizes of the new model grid]

Parameter    Range                                  Step
log(L/L⊙)    1.54–2.02 dex                          0.04 dex
Teff         5300–7000 K                            100 K
Y            0.245, 0.25, 0.265                     -
Z            0.00011, 0.00668, 0.01324, 0.01980     -

cause of this discrepancy is not understood, and it should be noted that even the state-of-the-art hydrodynamical code, MESA-RSP, is unable to accurately reproduce the Fourier phase differences (Paxton et al. 2019). We also determine the distances to the LMC and SMC based on the reddening-independent Wesenheit index. The distances found (μLMC = 18.567 ± 0.135 mag and μSMC = 18.93 ± 0.17 mag) are in excellent agreement with the published distances based on eclipsing binaries (μLMC = 18.476 ± 0.024 mag and μSMC = 18.95 ± 0.07 mag; Graczyk et al. 2014; Pietrzyński et al. 2019).

To showcase the utility of the interpolators, we generated and compared the light curve of the RRab star EZ Cnc. The physical parameters of this star were determined by Wang et al. (2021) using medium-resolution spectroscopic observations from LAMOST and time-series photometric data from the Kepler mission. We transformed the Kepler light curve into the V-band light curve and then compared it with the light curve predicted by the ANN, both qualitatively and quantitatively. The reported distance to this star, μEZ Cnc = 11.284(3) mag, or d = 1806 ± 2 pc, is in excellent agreement with the recently updated parallax-based distance from Gaia EDR3 of 1775 ± 70 pc (Bailer-Jones et al. 2021).

The generation of a grid of model light curves using traditional methods can be computationally expensive and time-consuming, but by using the trained ANN interpolators, it is possible to generate a much denser grid of model light curves far more efficiently. The trained interpolators can generate a light curve given the input parameters, and the process is fast, taking only a few milliseconds for [...] to estimate the parameters of observed light curves. We generated over 30 000 model light curves in both the I and V bands, resulting in approximately 2 GB of data. However, if one has access to the trained interpolator file, which is much smaller in size (around 3.7 MB for each interpolator file in our case), it is also possible to generate a light curve simply by inputting the parameters. Generating each light curve takes only a few milliseconds (approximately 55 ms) for both the I and V bands.

It is worth noting that our approach is dependent on the models used, and any errors or uncertainties in the models will be reflected in our results. However, this analysis will provide valuable insights into the stellar population models and has the potential to improve our understanding of these stars. The results can be improved by expanding the number of models or by using a more comprehensive grid of models. Additionally, the trained ANN models can be retrained on new or additional models to enhance the accuracy of the predicted light curves. In this way, our approach can be continuously refined and improved as more data and models become available.

SOFTWARE

We utilized various PYTHON libraries in our study, including NUMPY (Harris et al. 2020), PANDAS (McKinney 2010; pandas development team 2020), ASTROPY (http://www.astropy.org; Astropy Collaboration 2013, 2018), TENSORFLOW (Martín Abadi et al. 2015), MATPLOTLIB (Hunter 2007) and SEABORN (Waskom 2021). NUMPY and PANDAS were used for data manipulation, while ASTROPY, a community-developed package for astronomy, was also utilized. TENSORFLOW was employed to implement the ANN, while MATPLOTLIB and SEABORN were used for creating visual plots.
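As a rough illustration of what querying such an interpolator involves, the sketch below builds a feed-forward network of the same overall shape (a few physical parameters in, a phase-sampled light curve out) and evaluates it for one set of inputs. This is a minimal sketch, not our trained model: the layer sizes, the input scaling, the 100-point phase sampling, and the random (untrained) weights are all placeholders; the actual trained interpolators are available through the web interface listed under Data Availability.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_mlp(sizes):
    """Random (untrained) weights for a fully connected network.
    A stand-in for the trained interpolator's weights."""
    return [(rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def predict(layers, x):
    """Forward pass: ReLU hidden layers, linear output layer."""
    for W, b in layers[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = layers[-1]
    return x @ W + b

# Hypothetical architecture: 4 scaled physical parameters in,
# 100 phase points of an I-band light curve out.
layers = make_mlp([4, 64, 64, 100])

# One RRab model: mass, log(L/L_sun), Teff, Z (scaled to order unity here).
params = np.array([[0.60, 1.70, 0.63, 0.67]])

light_curve = predict(layers, params)               # shape (1, 100)
phase = np.linspace(0.0, 1.0, 100, endpoint=False)  # pulsation phase grid
```

In practice the placeholder weights are replaced by the trained TENSORFLOW model (loaded, for example, with tf.keras.models.load_model), and a single forward pass of this size costs only milliseconds, consistent with the timings quoted in the text.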

Figure 13. The ANN-generated I- and V-band light curves for the six models labelled in Fig. 12.
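The Wesenheit-based distance arithmetic used for the LMC, SMC, and EZ Cnc comparisons is easy to reproduce. The sketch below assumes the commonly adopted reddening-free combination W(I, V − I) = I − 1.55 (V − I) (Madore 1982) and the standard distance-modulus relation; the magnitudes passed to wesenheit are illustrative values, not the paper's photometry.

```python
def wesenheit(I, V, R=1.55):
    """Reddening-free Wesenheit magnitude W(I, V-I) = I - R * (V - I)."""
    return I - R * (V - I)

def distance_pc(mu):
    """Distance in parsecs from a distance modulus mu = 5 log10(d / 10 pc)."""
    return 10.0 ** (mu / 5.0 + 1.0)

# Illustrative apparent magnitudes (not from the paper's photometry).
W = wesenheit(I=18.0, V=18.6)

# Consistency check of the EZ Cnc values quoted in the text:
# mu = 11.284 mag corresponds to a distance of roughly 1806 pc.
d = distance_pc(11.284)
```

Because W is constructed so that the reddening term cancels for a standard extinction law, distance moduli derived from Wesenheit period-luminosity relations do not require an explicit reddening correction.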

ACKNOWLEDGEMENTS

NK acknowledges the financial assistance from the Council of Scientific and Industrial Research (CSIR), New Delhi, as the Senior Research Fellowship (SRF) file no. 09/45(1651)/2019-EMR-I. AB acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 886298. SD acknowledges the KKP-137523 'SeismoLab' Élvonal grant of the Hungarian Research, Development and Innovation Office (NKFIH). HPS acknowledges a grant from the Council of Scientific and Industrial Research (CSIR), India, file no. 03(1428)/18-EMR-II.

DATA AVAILABILITY

Interested readers can use the trained interpolator to generate predicted light curves of RRab stars from their input physical parameters. The generated grid of light curves is available on request and also through a web interface at https://ann-interpolator.web.app/.

REFERENCES

Alexander D. R., Ferguson J. W., 1994, ApJ, 437, 879
Asplund M., Grevesse N., Sauval A. J., 2005, in Barnes Thomas G. I., Bash F. N., eds, ASP Conf. Ser. Vol. 336, Cosmic Abundances as Records of Stellar Evolution and Nucleosynthesis. Astron. Soc. Pac., San Francisco, p. 25
Astropy Collaboration, 2013, A&A, 558, A33
Astropy Collaboration, 2018, AJ, 156, 123
Bailer-Jones C. A. L., Rybizki J., Fouesneau M., Mantelet G., Andrae R., 2018, AJ, 156, 58
Bailer-Jones C. A. L., Rybizki J., Fouesneau M., Demleitner M., Andrae R., 2021, AJ, 161, 147
Bellinger E. P., Kanbur S. M., Bhardwaj A., Marconi M., 2020, MNRAS, 491, 4752
Bergstra J., Bengio Y., 2012, J. Mach. Learn. Res., 13, 281
Bhardwaj A., 2022, Universe, 8, 122
Bhardwaj A., Kanbur S. M., Singh H. P., Macri L. M., Ngeow C.-C., 2015, MNRAS, 447, 3342
Bhardwaj A., Macri L. M., Rejkuba M., Kanbur S. M., Ngeow C.-C., Singh H. P., 2017a, AJ, 153, 154
Bhardwaj A., Kanbur S. M., Marconi M., Rejkuba M., Singh H. P., Ngeow C.-C., 2017b, MNRAS, 466, 2805
Bhardwaj A. et al., 2021, ApJ, 909, 200
Bono G., Stellingwerf R. F., 1994, ApJS, 93, 233
Bono G., Caputo F., Castellani V., Marconi M., 1997, A&AS, 121, 327
Bono G., Caputo F., Marconi M., 1998, ApJ, 497, L43
Bono G., Marconi M., Stellingwerf R. F., 1999, ApJS, 122, 167
Bono G., Castellani V., Marconi M., 2000a, ApJ, 529, 293
Bono G., Castellani V., Marconi M., 2000b, ApJ, 532, L129
Bono G., Caputo F., Castellani V., Marconi M., Storm J., 2001, MNRAS, 326, 1183
Caputo F., Castellani V., Degl'Innocenti S., Fiorentino G., Marconi M., 2004, A&A, 424, 927
Catelan M., Pritzl B. J., Smith H. A., 2004, ApJS, 154, 633
Clementini G., Gratton R., Bragaglia A., Carretta E., Fabrizio L. D., Maio M., 2003, AJ, 125, 1309
Coppola G. et al., 2011, MNRAS, 416, 1056
Cusano F. et al., 2013, ApJ, 779, 7
Cybenko G., 1989, Math. Cont. Sign. Syst., 2, 303

Das S., Bhardwaj A., Kanbur S. M., Singh H. P., Marconi M., 2018, MNRAS, 481, 2000
Das S. et al., 2020, MNRAS, 493, 29
De Somma G., Marconi M., Molinaro R., Cignoni M., Musella I., Ripepi V., 2020, ApJS, 247, 30
De Somma G., Marconi M., Molinaro R., Ripepi V., Leccia S., Musella I., 2022, ApJS, 262, 25
Deb S., Singh H. P., 2009, A&A, 507, 1729
Di Criscienzo M. et al., 2011, AJ, 141, 81
Drake A. J. et al., 2013, ApJ, 763, 32
Draper N. R., Smith H., 1998, Applied Regression Analysis, Vol. 326. John Wiley and Sons, New York
Elsken T., Metzen J. H., Hutter F., 2019, J. Mach. Learn. Res., 20, 1997
Feuchtinger M. U., 1999, A&AS, 136, 217
Fiorentino G. et al., 2010, ApJ, 708, 817
Glantz S. A., Slinker B. K., 2001, Primer of Applied Regression and Analysis of Variance. McGraw-Hill, Inc., New York
Graczyk D. et al., 2014, ApJ, 780, 59
Guo X., Yang J., Wu C., Wang C., Liang Y., 2008, Neurocomputing, 71, 3211
Harris C. R. et al., 2020, Nature, 585, 357
Haschke R., Grebel E. K., Duffau S., 2011, AJ, 141, 158
Hornik K., 1991, Neural Netw., 4, 251
Hornik K., Stinchcombe M., White H., 1989, Neural Netw., 2, 359
Howell S. B. et al., 2014, PASP, 126, 398
Hunter J. D., 2007, Comput. Sci. Eng., 9, 90
Iglesias C. A., Rogers F. J., 1996, ApJ, 464, 943
Jurcsik J., Kovacs G., 1996, A&A, 312, 111
Jurcsik J. et al., 2015, ApJS, 219, 25
Keller S. C., Wood P. R., 2002, ApJ, 578, 144
Kingma D. P., Ba J., 2014, preprint (arXiv:1412.6980)
Kuehn C. A. et al., 2013, preprint (arXiv:1310.0553)
Kunder A. et al., 2013, AJ, 146, 119
Lenz P., Breger M., 2005, Commun. Asteroseismol., 146, 53
Lomb N. R., 1976, Ap&SS, 39, 447
Longmore A. J., Fernley J. A., Jameson R. F., 1986, MNRAS, 220, 279
Luger R., Agol E., Kruse E., Barnes R., Becker A., Foreman-Mackey D., Deming D., 2016, AJ, 152, 100
Luo A.-L. et al., 2015, Res. Astron. Astrophys., 15, 1095
Madore B. F., 1982, ApJ, 253, 575
Marconi M., 2009, in Guzik J. A., Bradley P. A., eds, Am. Inst. Phys. Conf. Ser. Vol. 1170, Stellar Pulsation: Challenges for Theory and Observation. Am. Inst. Phys., New York, p. 223
Marconi M., Clementini G., 2005, AJ, 129, 2257
Marconi M., Degl'Innocenti S., 2007, A&A, 474, 557
Marconi M., Caputo F., Di Criscienzo M., Castellani M., 2003, ApJ, 596, 299
Marconi M., Bono G., Caputo F., Piersimoni A. M., Pietrinferni A., Stellingwerf R. F., 2011, ApJ, 738, 111
Marconi M., Molinaro R., Ripepi V., Musella I., Brocato E., 2013, MNRAS, 428, 2185
Marconi M. et al., 2015, ApJ, 808, 50
Marconi M. et al., 2017, MNRAS, 466, 3206
Marconi M., Bono G., Pietrinferni A., Braga V. F., Castellani M., Stellingwerf R. F., 2018, ApJ, 864, L13
Martín A. et al., 2015, TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems
McKinney W., 2010, in van der Walt S., Millman J., eds, Proceedings of the 9th Python in Science Conference. p. 56
Moretti M. I. et al., 2009, ApJ, 699, L125
Mullen J. P. et al., 2021, ApJ, 912, 144
Muraveva T. et al., 2015, ApJ, 807, 127
Natale G., Marconi M., Bono G., 2008, ApJ, 674, L93
Nemec J. M. et al., 2011, MNRAS, 417, 1022
Nemec J. M., Cohen J. G., Ripepi V., Derekas A., Moskalik P., Sesar B., Chadid M., Bruntt H., 2013, ApJ, 773, 181
O'Malley T. et al., 2019, KerasTuner. Available at: https://github.com/keras-team/keras-tuner
Paxton B., Bildsten L., Dotter A., Herwig F., Lesaffre P., Timmes F., 2011, ApJS, 192, 3
Paxton B. et al., 2013, ApJS, 208, 4
Paxton B. et al., 2015, ApJS, 220, 15
Paxton B. et al., 2018, ApJS, 234, 34
Paxton B. et al., 2019, ApJS, 243, 10
Piersanti L., Straniero O., Cristallo S., 2007, A&A, 462, 1051
Pietrinferni A., Cassisi S., Salaris M., Castelli F., 2006, ApJ, 642, 797
Pietrukowicz P. et al., 2015, ApJ, 811, 113
Pietrzyński G. et al., 2019, Nature, 567, 200
Ragosta F. et al., 2019, MNRAS, 490, 4975
Ruder S., 2016, An Overview of Gradient Descent Optimization Algorithms, preprint (arXiv:1609.04747)
Rumelhart D. E., Hinton G. E., Williams R. J., 1986, Nature, 323, 533
Sandage A., Katem B., Sandage M., 1981, ApJS, 46, 41
Scargle J. D., 1982, ApJ, 263, 835
Schlafly E. F., Finkbeiner D. P., 2011, ApJ, 737, 103
Schlegel D. J., Finkbeiner D. P., Davis M., 1998, ApJ, 500, 525
Serenelli A. M., Basu S., 2010, ApJ, 719, 865
Skowron D. M. et al., 2016, AcA, 66, 269
Smolec R., Moskalik P., 2008, AcA, 58, 193
Sollima A., Borissova J., Catelan M., Smith H. A., Minniti D., Cacciari C., Ferraro F. R., 2006, ApJ, 640, L43
Soszyński I. et al., 2009, AcA, 59, 1
Soszyński I. et al., 2016, AcA, 66, 131
Soszyński I. et al., 2017, AcA, 67, 297
Soszyński I. et al., 2018, AcA, 68, 89
Steel R. G. D. et al., 1960, Principles and Procedures of Statistics. McGraw-Hill Book Company, Inc., New York
Stellingwerf R. F., 1982, ApJ, 262, 339
Stellingwerf R. F., 1984, ApJ, 284, 712
The pandas development team, 2020, Zenodo, pandas-dev/pandas: Pandas. Available at: https://doi.org/10.5281/zenodo.3509134
van Albada T. S., Baker N., 1971, ApJ, 169, 311
Vivas A. K., Zinn R., 2006, AJ, 132, 714
Wang J., Fu J.-N., Zong W., Wang J., Zhang B., 2021, MNRAS, 506, 6117
Waskom M. L., 2021, J. Open Source Softw., 6, 3021
Zinn R., Horowitz B., Vivas A. K., Baltay C., Ellman N., Hadjiyska E., Rabinowitz D., Miller L., 2014, ApJ, 781, 22

APPENDIX: INPUT DISTRIBUTION

A detailed distribution of the input parameters is shown in Fig. A1. We have also shown the detailed distribution of parameters of the new grid for which the template light curves are generated.

Figure A1. A pair plot of six variables: mass, luminosity, effective temperature, X, Z, and log(P). Each off-diagonal subplot shows a pair-wise scatter plot of two variables, with KDE (kernel density estimation) contours in the upper triangle tracing the 2D parameter space for that pair. The diagonal subplots show the distribution of each variable as a KDE plot. Red crosses ('×') mark the original models of Marconi et al. (2015), and blue pluses ('+') mark the models generated in this work.

This paper has been typeset from a TEX/LATEX file prepared by the author.
