indoor monitoring applications [11]–[13]. The essential functionality in all these works has been classifying human activities based on their micro-Doppler signatures. More recently, there has been spurred growth in the use of deep learning-assisted solutions in radar signal processing due to the greater availability of memory capacity and the ever-increasing processing speeds of computers [14]–[16]. The performance of these algorithms is generally tied to large amounts of high-quality training data. However, the volumes of data captured are often limited and unbalanced for the following reasons. First, collecting real-world micro-Doppler data can be laborious and costly. Second, various environmental conditions, sensor parameters, and target characteristics affect the quality of the captured data, which in turn affects the deep learning algorithms' performance. Therefore, it becomes necessary to simulate radar returns in indoor sensing scenarios that would generate large volumes of training data. The simulation data can be used to preliminarily evaluate different algorithms and study the effects of radar phenomenology. It can provide a form of virtual prototyping and move in stages toward a real system. The benefits of this incremental "divide and conquer" approach include the fact that the performance can be investigated safely for a range of operating conditions.

There exist multiple methods to simulate human micro-Doppler data. The earliest method models the human leg as a double pendulum structure [17]. However, this model does not simulate radar returns from other human body parts, such as the torso and arms, which contribute significantly to the micro-Doppler returns. The second method uses a human walking model derived from biomechanical experiments [18]. Here, 12 analytical expressions govern the motion trajectories of 17 reference points on the human body as a function of the human's height and relative velocity. Finally, Manfredi et al. [19] presented a similar analytical model-driven simulation tool to characterize the radar cross section of a pedestrian in the near field. However, this approach is based on a constant velocity model. Therefore, it cannot capture variations in more complex motions, such as falling, sitting, and jumping.

The third technique uses animation data from motion capture systems to model more realistic and complex human motions. There are two types of motion capture technology available: 1) marker-based and 2) marker-less. The real advantage of using marker-based technology is capturing more accurate, more realistic, and complex human motions. In a marker-based motion capture system, several markers are placed on the live actor's body parts, such as the head, torso, arms, and legs, to capture their 3D time-varying positions in space. Although the marker-based system is accurate, the hardware restriction limits it to a single laboratory environment. In addition, wearing a bodysuit fitted with LED markers further increases the overhead in the data collection process, limiting the massive collection of data and the variety of investigation scenarios. Ram and Ling [20] first presented a complete end-to-end active radar simulator of humans using a marker-based motion capture technique. The radar scatterings were simulated by integrating the animation data of humans with primitive-shape-based electromagnetic modeling. Alternatively, Erol and Gurbuz [21] and Singh et al. [22] gathered animation data using marker-less motion capture technology based on Microsoft's Kinect. The Kinect sensors are relatively compact and easy to carry and set up for data capture compared to the marker-based system. Freely available databases of motion capture data from CMU, the University of Pennsylvania, and Ohio State are available at [23]–[25].

While previous works presented active radar simulators for generating target returns, no simulation tool exists for generating radar returns in passive WiFi scenarios. Moreover, none of the simulators presented in the prior works are publicly available. Therefore, there are no means of generating synthetic databases that can augment otherwise limited measurement data and address the cold-start problem in radars. In response to open science practices accelerating, improving, and accumulating scientific knowledge for others to reuse and build upon, we have publicly released an animation data-driven simulation tool capable of modeling human radar signatures in passive WiFi radar (PWR) scenarios. It will assist the sensing communities in generating large volumes of high-quality and diverse radar datasets and benchmarking future algorithms. The simulator's development will reduce the labor and expense of field testing by imitating a real-world system under different operating conditions, such as environmental conditions, sensor parameters, and human
characteristics. More importantly, the human micro-Doppler data generated using the simulator can be used to augment limited experimental data. The simulator's reliability is improved by validating it against experimental data gathered using an in-house-built hardware prototype. The standalone executable file for the simulator is available at https://uwsl.co.uk/.

To summarize, our contributions in this article are the following.

… comprising high-quality micro-Doppler spectrograms for activity recognition applications.

7) In addition, the study demonstrates that the synthesized signatures can be used for data augmentation purposes to solve the practical problem associated with insufficient or unbalanced micro-Doppler training data.
Figure 1.
SimHumalator’s distribution package.
target, and radar signal processing parameters is briefly presented in the following sections.

RADAR SIMULATION FRAMEWORK

A typical passive WiFi sensing setup is shown in Figure 2. It comprises a reference antenna, a surveillance antenna, and a signal processing unit. The reference antenna is a directional antenna that captures the direct signal from the WiFi AP. The surveillance antenna, on the other hand, is omnidirectional, so that it captures the signals reflected off human targets present anywhere in the sensing area. The signals reflected off the targets are time-delayed and Doppler-shifted versions of the direct signal. The radar signal processing unit aims to estimate these parameters using the direct and reflected signals.

SimHumalator integrates the IEEE 802.11 Standard-compliant WiFi signal and the human animation data to generate the radar scatterings off humans. The hybridization methodology is depicted in Figure 3. The target reflections are simply time-delayed and Doppler-shifted replicas of the direct WiFi transmissions. The time delay is directly proportional to the target range, the Doppler shift to the target's velocity, and the complex reflectivity to the target's size, shape, and material. The direct and reflected signals are then cross-correlated in the delay-Doppler plane to generate the cross-ambiguity function (CAF).

WIFI SIGNAL MODEL

SimHumalator uses MATLAB's WLAN Toolbox to simulate three IEEE Standard-compliant waveforms: 1) IEEE 802.11g at 2.4 GHz, 2) IEEE 802.11ax at 5.8 GHz, and 3) IEEE 802.11ad at 60 GHz. The physical layer of the IEEE 802.11 Standards uses a packet-based protocol where each transmission packet [a physical layer convergence procedure protocol data unit (PPDU)] comprises a preamble and data, as shown in Figure 4.
Figure 2.
Typical PWR scenario comprising transmissions from WiFi access points (APs) and targets undergoing motion in the same propagation
environment.
Figure 3.
Radar signal model after integration of the target model with WiFi transmissions.
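As a quick illustration of these relations (not part of SimHumalator, which is MATLAB-based), the following Python sketch computes the bistatic delay and Doppler of a single point target for an AP-target-receiver geometry like the one in Figure 2. The positions, velocity, and carrier frequency below are illustrative assumptions.

```python
import numpy as np

C = 3e8          # speed of light (m/s)
FC = 2.4e9       # carrier frequency (Hz), IEEE 802.11g band

def bistatic_delay_doppler(tx_pos, rx_pos, tgt_pos, tgt_vel):
    """Delay (s) and Doppler (Hz) of a point target for one transmitter/receiver pair."""
    r_tx = tgt_pos - tx_pos                      # transmitter-to-target vector
    r_rx = tgt_pos - rx_pos                      # receiver-to-target vector
    bistatic_range = np.linalg.norm(r_tx) + np.linalg.norm(r_rx)
    delay = bistatic_range / C
    # Doppler = -(f_c / c) * d(bistatic range)/dt: target velocity projected onto
    # the transmitter and receiver lines of sight.
    range_rate = (np.dot(tgt_vel, r_tx / np.linalg.norm(r_tx)) +
                  np.dot(tgt_vel, r_rx / np.linalg.norm(r_rx)))
    doppler = -FC / C * range_rate
    return delay, doppler

# Example (assumed geometry): AP at the origin, surveillance receiver 3 m away,
# target about 5 m in front, walking toward the sensors at 1 m/s.
tau, f_d = bistatic_delay_doppler(np.array([0.0, 0.0, 1.0]),
                                  np.array([3.0, 0.0, 1.0]),
                                  np.array([1.5, 5.0, 1.0]),
                                  np.array([0.0, -1.0, 0.0]))
print(f"delay = {tau * 1e9:.1f} ns, Doppler = {f_d:.1f} Hz")
```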
The preamble field is embedded with three subfields, each comprising several time-domain samples: the legacy short training field (L-STF), legacy long training field (L-LTF), and legacy signal field (L-SIG). The L-STF possesses excellent correlation properties and is therefore used to detect the start of the packet, the L-LTF field is used for communication channel estimation, and the third preamble field, L-SIG, indicates the amount of data transmitted (in octets). The data field, on the other hand, contains the user payload, medium access control headers, and cyclic redundancy check bits. We use only the preamble bits to form our discrete-time sequence $y_{\text{Ref}}[n]$. The simulated IEEE 802.11g packet structures mimic real WiFi transmission formats at the 2.4 GHz band with a channel bandwidth of $BW = 20$ MHz. In comparison, the IEEE 802.11ax and IEEE 802.11ad packet structures mimic real WiFi transmission formats at 5.8 and 60 GHz. The channel bandwidth for IEEE 802.11ad WiFi transmissions is fixed at 1.76 GHz, while the bandwidth for IEEE 802.11ax transmissions can be chosen among $BW = 20, 40, 80, 160$ MHz.

A general formulation of the WiFi transmission signal $Y_{\text{Ref}}(t)$ is shown in (1). It comprises a continuous stream of $P$ transmission packets, each of duration $T_P$ seconds, at a carrier frequency of $f_C$, as shown in Figure 5:

$$Y_{\text{Ref}}(t) = \frac{1}{\sqrt{P}} \sum_{p=1}^{P} \sum_{n=1}^{N} y_{\text{Ref}}[n]\, e^{j 2\pi\, \delta\! f\, (t - pT_P)}\, e^{j 2\pi f_c t} \tag{1}$$

where $T_P$ corresponds to the packet repetition interval, which is equal to the pulse repetition interval in our case, $N$ is the total number of time-domain samples in one transmission packet, $t_s = (1/BW)$ s is the sampling period, and $\delta\! f$ is the OFDM subcarrier spacing. We have used a uniform packet repetition interval for our simulations. However, future development of the simulator would include the transmission of packets with a staggered packet repetition interval, that is, the transmission of packets with different data payloads.

DYNAMIC TARGET MODEL

SimHumalator uses animation files gathered using two motion capture technologies: 1) a marker-based motion capture system called PhaseSpace and 2) a marker-less Microsoft Kinect v2 sensor system. The real advantage of using motion capture technology is capturing more accurate, more realistic, and complex human motions. The data include 3D time-varying skeletal information of the human body comprising 25 joints, such as the head center location, knee joints, elbow joints, and shoulder joints. SimHumalator assumes that the radar scattering centers lie at the centers of the bones, resulting in 19 scatterers on the human body. The adopted simulation methodology is presented in Figure 6.

The human skeleton is embodied with elementary shapes to model different body parts, such as the torso, arms, and legs using ellipsoids and the head using a sphere. The radar scattering centers are assumed to lie approximately at the centers of these primitive shapes. SimHumalator uses these data to gather the time-varying range $r_b(t)$ and Doppler information $f_{D_b}(t)$ of each scatterer. It then computes the reflectivity $a_b(t)$ of each of these $B$ primitive shapes, taking into account various factors, such as the aspect angle $\theta_b(t)$ and the relative position $r_b(t)$ of the scattering center on the primitive shape with respect to the radar. The reflectivity of a primitive $b$ at any time instant $t$ is given by

$$a_b(t) = \frac{\sqrt{\zeta(t)\, \sigma_b(t)}}{r_b^2(t)}. \tag{2}$$

Here, $\zeta(t)$ subsumes propagation effects, such as attenuation, antenna directivity, and processing gains, and $\sigma_b(t)$ is the radar cross section of the primitive. The radar cross section (RCS) of primitive shapes is well characterized at microwave frequencies [26]. The RCS of an ellipsoid of length $L_b$ and radius $R_b$ is given by

$$\sigma_b(t) = \Gamma\, \frac{\frac{\sqrt{\pi}}{4}\, R_b^4 L_b^2}{\left(R_b^2 \sin^2\theta_b(t) + \frac{1}{4} L_b^2 \cos^2\theta_b(t)\right)^2}. \tag{3}$$

Figure 4.
IEEE 802.11g Standard-compliant OFDM transmission packet structure.
Figure 5.
IEEE 802.11g Standard-compliant continuous packet transmission.
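The packet-stream structure of (1) and Figure 5 can be sketched in a few lines of Python. This is only an illustrative stand-in: the true preamble samples $y_{\text{Ref}}[n]$ would come from MATLAB's WLAN Toolbox, so the random complex samples, packet repetition interval, and packet length below are all assumptions.

```python
import numpy as np

BW = 20e6                 # channel bandwidth (Hz), used as the sampling rate (IEEE 802.11g)
T_P = 1e-3                # packet repetition interval (s), illustrative value
P = 64                    # number of packets in the stream
N = 320                   # samples per packet (stand-in for the preamble length)

rng = np.random.default_rng(0)
# Stand-in for the preamble samples y_Ref[n]; SimHumalator would use the
# L-STF/L-LTF/L-SIG samples generated by the WLAN Toolbox instead.
y_ref = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

fs = BW
samples_per_pri = int(round(T_P * fs))
stream = np.zeros(P * samples_per_pri, dtype=complex)
for p in range(P):
    # Place one packet at the start of every packet repetition interval,
    # mirroring the sum over p in (1); the remainder of each PRI stays silent.
    stream[p * samples_per_pri: p * samples_per_pri + N] = y_ref / np.sqrt(P)
```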
Figure 6.
Simulation framework for generating an electromagnetic human primitive scattering model from motion capture animation data.
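The per-primitive reflectivity of (2) and the ellipsoid RCS of (3), as reconstructed above, translate directly into code. The sketch below assumes a unit propagation factor and a unit Fresnel coefficient, and the torso dimensions in the example are illustrative, not SimHumalator defaults.

```python
import numpy as np

def ellipsoid_rcs(R_b, L_b, theta, gamma=1.0):
    """RCS of an ellipsoid primitive per (3); gamma stands in for the Fresnel term."""
    num = (np.sqrt(np.pi) / 4.0) * R_b**4 * L_b**2
    den = (R_b**2 * np.sin(theta)**2 + 0.25 * L_b**2 * np.cos(theta)**2) ** 2
    return gamma * num / den

def primitive_reflectivity(r_b, theta, R_b, L_b, zeta=1.0, gamma=1.0):
    """Amplitude reflectivity of one primitive per (2); zeta subsumes propagation effects."""
    return np.sqrt(zeta * ellipsoid_rcs(R_b, L_b, theta, gamma)) / r_b**2

# Example: a torso-like ellipsoid (radius 0.15 m, length 0.6 m) seen broadside at 3 m.
print(primitive_reflectivity(r_b=3.0, theta=np.pi / 2, R_b=0.15, L_b=0.6))
```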
The dielectric properties of human skin are incorporated into the RCS estimation through the Fresnel reflection coefficient $\Gamma$. The human is assumed to be a single-layer dielectric with a dielectric constant of 80 and a conductivity of 2 S/m.

HYBRID ELECTROMAGNETIC RADAR SCATTERING FROM DYNAMIC HUMANS

The received signal is simply the attenuated, time-delayed ($\tau_b$), and Doppler-shifted ($f_{D_b}$) version of the direct transmitted signal $Y_{\text{Ref}}(t)$. Ignoring multipath, the baseband received signal on the surveillance channel can be represented as

$$Y_{\text{Sur}}(t) = \sum_{p=1}^{P} \sum_{b=1}^{B} a_b(t)\, Y_{\text{Ref}}(t - \tau_b - pT_P)\, e^{j 2\pi f_{D_b} p T_P} + z(t). \tag{4}$$

Here, $c = 3 \times 10^8$ m/s is the speed of light, and $z(t)$ is the additive circular-symmetric white noise.

CAF PROCESSING

The CAF processing is implemented over the received data and the direct reference signal data to compute the delay $\tau_b$ and Doppler information $f_{D_b}$ of the target. The adopted CAF processing is shown in Figure 7. Matched filtering is performed along the fast-time samples and a fast Fourier transform along the slow-time samples to generate the CAFs. The CAF processing is implemented as

$$\mathrm{CAF}(\tau, f_d) = \int_{0}^{T_i} Y_{\text{Sur}}(t)\, Y_{\text{Ref}}^{*}(t - \tau)\, e^{-j 2\pi f_d t}\, dt \tag{5}$$

where $*$ denotes complex conjugation, $\tau$ is the time delay, $f_d$ is the Doppler shift, and $T_i$ is the integration time.

Note that the direct signal component is always strong and can mask the target returns present in the CAFs. Therefore, the CLEAN algorithm is used to remove the direct signal interference [9]. The CLEAN algorithm subtracts the self-ambiguity function (generated using only the reference signal) from the CAF calculated in (5), thereby suppressing the stronger direct signal.
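A compact Python sketch of (4) and (5) is given below: it builds a surveillance signal from delayed, Doppler-shifted copies of a reference stream, evaluates a discretized CAF on a delay-Doppler grid, and applies a simplified CLEAN-style subtraction of the scaled self-ambiguity surface. The circular delays, scatterer parameters, noise level, and grid sizes are illustrative assumptions rather than SimHumalator's MATLAB implementation.

```python
import numpy as np

def surveillance_signal(y_ref, scatterers, fs, snr_db=20.0):
    """Build Y_Sur per (4): attenuated, delayed, Doppler-shifted copies plus noise.
    scatterers: list of (amplitude, delay in samples, Doppler in Hz)."""
    n = np.arange(len(y_ref))
    y_sur = np.zeros_like(y_ref, dtype=complex)
    for amp, delay_samples, f_D in scatterers:
        # np.roll gives a circular delay, acceptable for a short illustrative sketch.
        y_sur += amp * np.roll(y_ref, delay_samples) * np.exp(2j * np.pi * f_D * n / fs)
    noise_power = np.mean(np.abs(y_sur) ** 2) / 10 ** (snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (np.random.randn(len(y_ref)) +
                                        1j * np.random.randn(len(y_ref)))
    return y_sur + noise

def caf(y_sur, y_ref, fs, delays, dopplers):
    """Discretized cross-ambiguity function per (5)."""
    n = np.arange(len(y_ref))
    out = np.zeros((len(delays), len(dopplers)), dtype=complex)
    for i, d in enumerate(delays):
        prod = y_sur * np.conj(np.roll(y_ref, d))
        for k, f_d in enumerate(dopplers):
            out[i, k] = np.sum(prod * np.exp(-2j * np.pi * f_d * n / fs))
    return out

def clean(caf_sur, caf_self):
    """Simplified CLEAN: subtract the self-ambiguity surface scaled to the
    dominant (direct-path) peak, suppressing the direct signal."""
    scale = (caf_sur.flat[np.argmax(np.abs(caf_sur))] /
             caf_self.flat[np.argmax(np.abs(caf_self))])
    return caf_sur - scale * caf_self

# Usage sketch (y_ref as in the previous example, fs = 20e6):
# y_sur = surveillance_signal(y_ref, [(0.1, 40, 30.0)], fs=20e6)
# grid_d, grid_f = range(0, 64), np.linspace(-100, 100, 41)
# A = clean(caf(y_sur, y_ref, 20e6, grid_d, grid_f),
#           caf(y_ref, y_ref, 20e6, grid_d, grid_f))
```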
Figure 7.
CAF generation through the cross-correlation between the direct WiFi transmission and the reflected signals off the targets.
Multiple CAFs spanning a duration of $T_{\text{Total}}$ are processed to generate the Doppler-time spectrogram, as shown in Figure 8. Here, for each CPI, the peaks along the range axis are coherently added for each Doppler bin.

GRAPHICAL USER INTERFACE (GUI)

SimHumalator allows the user to select many input parameters to specify the target motion characteristics, sensor operating conditions, and signal processing parameters. The simulator receives the simulation inputs with the help of a GUI window, as shown in Figure 9. In addition, the user's manual helps explain what the parameters within the simulation inputs window represent and how changing these parameters affects the radar signatures.

SimHumalator can generate a massive library of radar signature profiles of various human activities, including CAFs and Doppler-time plots. Since we are simulating IEEE 802.11g WiFi transmissions, the signal bandwidth is limited to 20 MHz, which is insufficient to locate targets in most indoor scenarios. Therefore, we focus on extracting only the time-varying micro-Doppler information in the joint time–frequency space. Moreover, the standard waveforms IEEE 802.11ax and IEEE 802.11ad, which offer good range resolution, do not comprise complementary codes that hold perfect autocorrelation properties. Therefore, the cross-correlation operation results in high sidelobe levels, significantly affecting the quality of range-time signatures. Even if the waveforms have perfect autocorrelation properties, the sidelobe levels are zero only for static target scenarios. As soon as the target starts moving, high sidelobes appear in the range domain.

SimHumalator offers users the freedom to select animation data files from 11 healthcare-related human motion classes. These include a human rotating his body (HBR) while standing in a fixed position, human kicking (HK), human punching (HP), human grabbing an object (HG), human walking back and forth in front of the radar (HW), human standing up from a chair, human sitting down on a chair, human standing up from a chair to walk, human walking to sit on a chair, human walking to fall on the ground, and a human standing up from the ground to walk. The number of animation files for each of these activities is presented in Table 1.

Some files of the first five activities, 1) HBR, 2) HK, 3) HP, 4) HG, and 5) HW, were captured before the COVID-19 pandemic using a marker-based active tracking PhaseSpace system [27]. The remaining six activities and some data for the first five activities were captured using the Kinect v2 sensor during the COVID-19 pandemic [28]. We will continue to build our database and provide the user community with a platform to generate substantial radar databases.
Figure 8.
Simulation methodology to generate micro-Doppler spectrograms using multiple CAFs spanning the entire duration of motion.
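A minimal version of this CAF-to-spectrogram step, under the assumption that one CAF is available per CPI, is sketched below; the coherent collapse along the range (delay) axis follows the description above, and the dB normalization is an illustrative choice.

```python
import numpy as np

def doppler_time_spectrogram(caf_stack):
    """caf_stack: complex array of shape (num_cpi, num_delay_bins, num_doppler_bins).
    Coherently collapse each CPI's CAF along the delay (range) axis into one Doppler
    profile, stack the profiles over time, and return a (Doppler x time) map in dB."""
    profiles = np.abs(caf_stack.sum(axis=1))        # coherent sum over range bins per Doppler bin
    spec = 20 * np.log10(profiles.T + 1e-12)        # rows: Doppler bins, columns: CPIs (time)
    return spec - spec.max()                        # normalize so the peak sits at 0 dB
```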
Figure 9.
SimHumalator’s GUI for choosing PWR system parameters.
Table 1.
Table 2.
Figure 10.
(a)–(d) Radar micro-Doppler signatures of human walking at four aspect angles 0°, 60°, 120°, and 180° with respect to the monostatic radar, respectively. (e)–(h) Radar micro-Doppler signatures of human walking at aspect angle 0° to the radar for four bistatic radar configurations with bistatic angles 0°, 60°, 120°, and 180°, respectively.
Figure 11.
Micro-Doppler spectrograms of a human walking motion for (a) IEEE 802.11g Standard at 2.4 GHz, (b) IEEE 802.11ax Standard at
5.8 GHz, and (c) IEEE 802.11ad Standard at 60 GHz, respectively.
Figure 12.
Radar micro-Doppler signatures for a human undergoing (a) body rotation motion, (b) kicking motion, (c) punching motion, (d) grabbing an object motion, and (e) walking in the direction of the radar, for the monostatic PWR configuration at the 2.4 GHz band.
Table 3.
These motions are periodic and thus have alternating positive and negative micro-Doppler features. Therefore, it becomes challenging for any classifier to discern the correct motion class.

This section studies the robustness of different classification algorithms in classifying micro-Dopplers in more complex scenarios, such as varying aspect angles and bistatic angles. Note that the human motions in these repeated measurements were unrestricted; therefore, the micro-Doppler signatures vary due to differences in gait patterns in every simulation. The duration of each measurement was 4.5 s. A sliding window of duration 1.5 s with an overlap of 0.5 s was applied over the full signature of duration 4.5 s. This resulted in nine spectrograms, each of duration 1.5 s, from every motion capture file. Table 3 summarizes the entire simulation database.

We use handpicked features [31], cadence velocity diagram (CVD) features [32], and automatically extracted sparse features [33], [34] from the micro-Doppler signatures to test the performance of a classical machine learning-based support vector machine classifier [35]. We then compared their performances with a deep convolutional neural network (DCNN) that has a joint feature extraction and classification framework within the same network.
Table 4.
Classification Accuracies of Multiple Algorithms for a Simulation Database (Captured for a Fixed Aspect Angle of the
Target)
Table 5.
Classification Accuracies for a Simulation Database (Captured for Varying Aspect Angle of the Target With Respect to the
Radar Receiver)
We designed a 24-layered deep neural network comprising three components (convolution layers, pooling layers, and activation functions). We also tested some pretrained deep neural networks, such as AlexNet, GoogLeNet, and ResNet18. We used 70% of each target's spectrograms as the training dataset, 15% as the validation set, and the remaining 15% as the test dataset.

The following four classification scenarios were considered to give readers a better understanding of the sensitivity of the algorithms' performance to the simulation database.

Case 1a—Train using data from a fixed zero aspect angle: The algorithms' performances were evaluated using a simulation database generated for a fixed 0° aspect angle of the target. Note that the five target classes considered in the study share common features in the micro-Doppler feature space because of the proximity between the different motion categories. The resulting classification accuracies are presented in Table 4. The results demonstrate that all the deep neural networks outperform the classical machine learning-based methods and achieve an average classification accuracy of 99%. This is because the deep neural networks, a cascaded structure of neurons, can learn any complex function to create a decision boundary, even for the nonlinear data considered in the study. The classical machine learning algorithms, on the other hand, are not capable of learning these complex discerning boundaries, resulting in poorer classification performance.
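For readers who want to reproduce a baseline of this kind, the sketch below sets up the 70/15/15 stratified split and a classical SVM classifier on flattened spectrograms using scikit-learn. The random placeholder data, feature dimension, and SVM hyperparameters are assumptions; the 24-layer DCNN and the pretrained networks used in the study are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: flattened micro-Doppler spectrograms, y: activity labels (placeholders here).
rng = np.random.default_rng(1)
X = rng.standard_normal((450, 64 * 64))
y = rng.integers(0, 5, size=450)

# 70% train, 15% validation, 15% test, stratified by class.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("validation accuracy:", clf.score(X_val, y_val))
print("test accuracy:", clf.score(X_test, y_test))
```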
Table 6.
Classification Accuracies for a Simulation Database (Captured Under Varying Bistatic Circular Configurations).
Figure 14.
Experimental setup comprising two synchronized systems: 1) a motion capture Kinect v2 sensor and 2) a PWR system for monitoring human
activities in indoor scenarios.
Figure 15.
Velocity–time profile using motion capture data, and simulated and measured Doppler spectrograms of a human undergoing (a)–(c) sitting followed by standing up motion, (d)–(f) standing up from a chair and starting to walk, and (g)–(i) walk-to-fall motion, respectively.
infrared motion capture Kinect v2 sensor and 2) a noncontact physical activity monitoring PWR system.

The PWR system was set up using two Yagi antennas, each with a gain of 14 dBi, two National Instruments (NI) USRP-2921 units [37], and a Netgear R6300 transmitter acting as the WiFi AP. The WiFi AP was configured to transmit an 802.11g Standard-compliant waveform at a center frequency of 2.472 GHz. The PWR system used one antenna, the reference antenna, to capture the direct WiFi transmissions from the AP. Simultaneously, the second was used as a surveillance antenna to capture signals reflected off targets in the propagation environment. The reference WiFi signal and the surveillance signal were then cross-correlated to generate the radar micro-Doppler signatures in real time.

Figure 15 shows the qualitative comparison between the micro-Doppler spectrograms generated using the PWR measurement system and SimHumalator. Figure 15(a)–(c) shows the velocity–time profile (generated using motion capture data) and the simulated and measured spectrograms, respectively, for a human first sitting down on a chair and then standing up from the chair. The qualitative similarity between all the signatures is evident from the figure. As the human sits down, there is a negative Doppler due to the bulk body motion. The positive micro-Doppler arises due to arm motion and the legs moving slightly in the radar direction while sitting down. However, the strength of this micro-Doppler is low compared to the bulk body motion. After a 5 s delay, the human subject stands up from the chair, resulting in primarily positive Dopplers. Similarly, Figure 15(d)–(f) presents spectrograms corresponding to a human standing up from a chair before walking. Figure 15(g)–(i) presents spectrograms corresponding to a human transitioning from walking to falling. The strengths of the signals in the spectrograms shown in Figure 15(a)–(i) may not be the same; however, the envelope of the velocity–time profile is visually very similar across all of them. The simulations do not synthesize environmental factors such as noise, propagation loss, occlusions, and multipath. Therefore, the simulated spectrograms are clean compared to the measured signatures.

EXPERIMENTAL DATA AUGMENTATION

SimHumalator can effectively generate high-quality micro-Doppler spectrograms from motion capture data. Therefore, it can be used to synthesize signatures for data augmentation purposes to solve the practical problem associated with insufficient or unbalanced micro-Doppler training data [36], [38]. This section presents classification results from various data augmentation schemes. The
SimHumalator: An Open-Source End-to-End Radar Simulator for Human Activity Recognition
Figure 16.
Augmentation study: The training dataset changes with the percentage of simulated data (s) augmented with the measurement training data.
The percentage of simulation data is varied to study the impact of data augmentation on classification performance. s=0 is a special case
where training and test data only comprise measured spectrograms.
Authorized licensed use limited to: NATIONAL INSTITUTE OF TECHNOLOGY DURGAPUR. Downloaded on October 20,2022 at 16:38:46 UTC from IEEE Xplore. Restrictions apply.
Vishwakarma et al.
Authorized licensed use limited to: NATIONAL INSTITUTE OF TECHNOLOGY DURGAPUR. Downloaded on October 20,2022 at 16:38:46 UTC from IEEE Xplore. Restrictions apply.
SimHumalator: An Open-Source End-to-End Radar Simulator for Human Activity Recognition
[29] F. Restuccia, "IEEE 802.11bf: Toward ubiquitous Wi-Fi sensing," 2021, arXiv:2103.14918.
[30] D. Dhulashia, M. Ritchie, S. Vishwakarma, and K. Chetty, "Human micro-Doppler signature classification in the presence of a selection of jamming signals," in Proc. IEEE Radar Conf., 2021, pp. 1–6.
[31] Y. Kim and H. Ling, "Human activity classification based on micro-Doppler signatures using a support vector machine," IEEE Trans. Geosci. Remote Sens., vol. 47, no. 5, pp. 1328–1337, May 2009.
[32] R. Ricci and A. Balleri, "Recognition of humans based on radar micro-Doppler shape spectrum features," IET Radar, Sonar Navigation, vol. 9, no. 9, pp. 1216–1223, 2015.
[33] G. Li, R. Zhang, M. Ritchie, and H. Griffiths, "Sparsity-driven micro-Doppler feature extraction for dynamic hand gesture recognition," IEEE Trans. Aerosp. Electron. Syst., vol. 54, no. 2, pp. 655–665, Apr. 2018.
[34] S. Vishwakarma and S. S. Ram, "Dictionary learning with low computational complexity for classification of human micro-Dopplers across multiple carrier frequencies," IEEE Access, vol. 6, pp. 29793–29805, 2018.
[35] S. S. Keerthi, S. K. Shevade, C. Bhattacharyya, and K. R. Murthy, "A fast iterative nearest point algorithm for support vector machine classifier design," IEEE Trans. Neural Netw., vol. 11, no. 1, pp. 124–136, Jan. 2000.
[36] C. Tang, S. Vishwakarma, W. Li, R. Adve, S. Julier, and K. Chetty, "Augmenting experimental data with simulations to improve activity classification in healthcare monitoring," in Proc. IEEE Radar Conf., 2021, pp. 1–6.
[37] NI USRP-2921. [Online]. Available: http://sine.ni.com/nips/cds/view/p/lang/en/nid/212995
[38] S. Vishwakarma, C. Tang, W. Li, K. Woodbridge, R. Adve, and K. Chetty, "GAN based noise generation to aid activity recognition when augmenting measured WiFi radar data with simulations," in Proc. IEEE Int. Conf. Commun., 2021, pp. 1–6.