Papers by Edmond Nurellari
26th European Signal Processing Conference (EUSIPCO 2018), 2018
Small-scale fading makes the wireless channel gain vary significantly over small distances, and in classical communication systems it can be detrimental to performance. In the context of mobile robot (MR) wireless communications, however, we can exploit the fading by using a mobility diversity algorithm (MDA) to deliberately position the MR at a point where the channel gain is high. There are two classes of MDAs. In the first class, the MR explores various points, stopping at each one to collect channel measurements, and then moves to the best position to establish communications. In the second class, the MR moves without stopping along a continuous path while collecting channel measurements, stops at the end of the path, and then determines the best point to establish communications. Until now, the shape of the continuous path for such MDAs has been selected arbitrarily, and there is currently no method to optimize it. In this paper, we propose a method to optimize such a path. Simulation results show that the optimized paths improve the performance of the MDAs, enabling them to reach higher channel gains while using less mechanical energy for the MR motion.
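As an illustrative sketch (not the paper's actual algorithm), the core selection step of the second class of MDAs can be reduced to: sample the channel gain along a path and stop at the best observed point. The Rayleigh-fading model, the path, and all function names below are assumptions; in particular, this toy draws gains independently, ignoring the spatial correlation that makes the path shape worth optimizing in the first place.

```python
import math
import random

def rayleigh_gain(rng):
    """Small-scale fading power gain |h|^2 for h ~ CN(0, 1)."""
    re, im = rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5))
    return re * re + im * im

def best_stopping_point(path_points, rng):
    """Second-class MDA sketch: measure the channel at sample points
    along a continuous path, then return the point with the highest
    observed gain as the place to establish communications."""
    gains = [rayleigh_gain(rng) for _ in path_points]
    i = max(range(len(gains)), key=gains.__getitem__)
    return path_points[i], gains[i]

rng = random.Random(1)
path = [(0.02 * k, 0.0) for k in range(50)]  # 50 samples along a 1 m line
point, gain = best_stopping_point(path, rng)
```

Because the samples here are i.i.d., any path of equal length performs the same in this sketch; the paper's contribution is precisely to shape the path when gains are spatially correlated.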
26th European Signal Processing Conference (EUSIPCO 2018), 2018
We consider the problem of energy balancing in a clustered wireless sensor network (WSN) deployed randomly in a large field and aided by a mobile robot (MR). The sensor nodes (SNs) are tasked to monitor a region of interest (ROI) and report their test statistics to the cluster heads (CHs), which subsequently report to the fusion center (FC) over a wireless fading channel. To maximize the lifetime of the WSN, the MR is deployed to act as an adaptive relay between a subset of the CHs and the FC. To achieve this, we develop a multiple-link mobility diversity algorithm (MDA), executed by the MR, that compensates simultaneously for the small-scale fading at all the established wireless links (i.e., the MR-to-FC link as well as the various CH-to-MR links). Simulation results show that the proposed MR-aided technique significantly reduces the required transmission power and thus extends the operational lifetime of the WSN. We also show how the effect of small-scale fading at the various wireless links is mitigated by the proposed multiple-link MDA even though the MR is equipped with a single antenna.
We address the problem of centralized detection of a binary event in the presence of a fraction β of falsifiable sensor nodes (SNs) (i.e., SNs controlled by an attacker) in a bandwidth-constrained, spatially uncorrelated distributed wireless sensor network (WSN) under attack. The SNs send their one-bit test statistics over orthogonal channels to the fusion center (FC), which linearly combines them to reach a final decision. Adopting the modified deflection coefficient as the objective to be optimized, we first derive the optimal FC combining weights in closed form. However, these optimal weights require a-priori knowledge that cannot be obtained in practice, so the optimal weighted linear FC rule is not implementable. We also derive closed-form expressions for the attacker's "flipping probability" (defined in the paper) and for the minimum fraction of compromised SNs that renders the FC incapable of detection. Next, based on the insights gained from these expressions, we propose a novel, low-complexity reliability-based strategy to identify the compromised SNs and adapt the combining weights in proportion to their assigned reliability metric. In this way, the FC identifies the compromised SNs and decreases their weights so as to reduce their contribution to its final decision. Finally, simulation results illustrate that the proposed strategy significantly outperforms existing compromised-SN identification and mitigation schemes in terms of the FC's detection capability.
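A minimal sketch of the reliability-weighted combining idea (the update rule, step size, and function names are hypothetical illustrations, not the paper's actual metric): the FC rewards SNs that agree with its final decision and gradually down-weights those that repeatedly disagree, so persistent bit-flippers lose influence.

```python
def update_reliability(reliability, reports, fc_decision, step=0.1):
    """Hypothetical reliability update: SNs agreeing with the FC's final
    decision gain credit; disagreeing SNs lose it (clipped to [0, 1])."""
    return [min(1.0, r + step) if b == fc_decision else max(0.0, r - step)
            for r, b in zip(reliability, reports)]

def fuse(reports, reliability):
    """Reliability-weighted linear combining of one-bit reports,
    mapping bits {0, 1} to {-1, +1} before weighting."""
    stat = sum(r * (2 * b - 1) for r, b in zip(reliability, reports))
    return 1 if stat > 0 else 0

# Toy run: 8 honest SNs report the true event (1); 2 compromised SNs flip it.
reliability = [1.0] * 10
for _ in range(10):
    reports = [1] * 8 + [0] * 2
    decision = fuse(reports, reliability)
    reliability = update_reliability(reliability, reports, decision)
```

After a few rounds the compromised SNs' reliability decays toward zero while the honest majority keeps full weight, which is the qualitative behavior the paper's scheme formalizes.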
IEEE Sensors Journal
We address the problem of centralized detection of a binary event in the presence of falsifiable sensor nodes (SNs) (i.e., SNs controlled by an attacker) in a bandwidth-constrained, spatially uncorrelated distributed wireless sensor network (WSN) under attack. The SNs send their quantized test statistics over orthogonal channels to the fusion center (FC), which linearly combines them to reach a final decision. First, assuming that the FC and the attacker do not act strategically, we derive (i) the optimal FC combining weights, (ii) the optimal SN-to-FC transmit power, and (iii) the number of test-statistic quantization bits that maximize the probability of detection (Pd). We also derive an expression for the attacker strategy that causes the maximum possible degradation at the FC. In these expressions, however, both the optimal FC strategy and the attacker strategy require a-priori knowledge that cannot be obtained in practice. We therefore characterize the performance of sub-optimal FC strategies and, based on the willingness of the (compromised) SNs to collaborate, analytically derive sub-optimal attacker strategies. Then, considering that the FC and the attacker act strategically, we re-cast the problem as a minimax game between the FC and the attacker and prove that a Nash equilibrium (NE) exists. Finally, we find this NE numerically in the simulation results, which gives insight into the detection performance of the proposed strategies.
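The minimax formulation can be illustrated on a toy payoff matrix (the matrix values and function name below are invented for illustration; the paper's game is over richer FC/attacker strategy spaces): when the FC's security value (maximin) equals the attacker's cap on it (minimax), the game has a pure-strategy saddle point, i.e., a Nash equilibrium.

```python
def game_values(payoff):
    """Pure-strategy values of a zero-sum game whose rows are FC
    strategies and columns are attacker strategies; entries are the
    FC's detection utility."""
    maximin = max(min(row) for row in payoff)        # FC's guaranteed value
    minimax = min(max(row[j] for row in payoff)      # attacker's best cap
                  for j in range(len(payoff[0])))
    return maximin, minimax

# Toy matrix with a saddle point at (row 2, column 2): value 2.
payoff = [[3, 1],
          [2, 2]]
```

Here `game_values(payoff)` returns equal maximin and minimax values, certifying a pure-strategy NE; when they differ, an equilibrium exists only in mixed strategies (as guaranteed for finite zero-sum games by the minimax theorem).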
EURASIP Journal on Advances in Signal Processing, 2016
We consider the problem of distributed soft decision fusion in a bandwidth-constrained, spatially uncorrelated wireless sensor network (WSN). The WSN is tasked with the detection of an intruder transmitting an unknown signal over a fading channel. Existing distributed consensus-based fusion algorithms only support equal combining of local data, and we show that in bandwidth-constrained WSNs their performance is poor and does not converge across the sensor nodes (SNs). Motivated by this fact, we propose a two-step distributed quantized fusion rule algorithm: in the first step, the SNs collaborate with their neighbors through error-free, orthogonal channels (the SNs exchange quantized information matched to the channel capacity of each link); in the second step, the local 1-bit decisions generated in the first step are shared among neighbors to reach a consensus. A binary hypothesis test is then performed at any arbitrary SN to optimally declare the global decision. Simulations show that our proposed quantized two-step distributed detection algorithm approaches the performance of the unquantized centralized detector (with a fusion center), while consuming 50% less power than the existing (unquantized) conventional algorithm.
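A minimal sketch of the first (consensus) step under coarse link quantization (the topology, step size, and bit budget below are illustrative assumptions, not the paper's matched-to-capacity design): each SN nudges its local statistic toward the quantized values received from its neighbors, which preserves the network average on a symmetric graph while shrinking disagreement.

```python
def consensus_step(values, neighbors, eps=0.3, bits=4):
    """One iteration of quantized average consensus. Each SN i receives
    b-bit quantized copies of its neighbors' statistics and moves toward
    them; on a symmetric graph the exact network mean is preserved."""
    q = 2 ** bits
    sent = [round(v * q) / q for v in values]  # what each SN transmits
    return [v + eps * sum(sent[j] - sent[i] for j in neighbors[i])
            for i, v in enumerate(values)]

# 4 SNs on a ring, each starting from a different local test statistic.
ring = {0: [3, 1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
vals = [0.0, 1.0, 2.0, 3.0]
for _ in range(50):
    vals = consensus_step(vals, ring)
```

After the iterations, all SNs sit near the network mean (up to the quantization resolution), at which point each can threshold its value into the 1-bit local decision exchanged in the second step.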
The problem of fully distributed detection of an unknown deterministic signal observed by a wireless sensor network (WSN) is addressed. We propose a two-step distributed consensus-based detection algorithm: in the first step, the sensor nodes (SNs) collaborate with their neighbors through error-free, orthogonal channels (the SNs exchange quantized information matched to the channel capacity of each link); in the second step, the local 1-bit decisions generated in the first step are shared among neighbors to reach a consensus. Simulations show that our proposed quantized two-step distributed detection algorithm approaches the performance of the unquantized centralized (fusion center) detector.
IEEE VTC2015, May 10, 2015
We address the optimal transmit power allocation problem (from the sensor nodes (SNs) to the fusion center (FC)) for the decentralized detection of an unknown, deterministic, spatially uncorrelated signal observed by a distributed wireless sensor network. We propose a novel fully distributed algorithm to calculate, for each sensor node (SN), the optimal transmit power allocation and the optimal number of quantization bits for the test statistic so as to match the channel capacity. The SNs send their quantized information over orthogonal uncorrelated channels to the FC, which linearly combines it and makes a final decision. What makes this scheme attractive is that the SNs share with their neighbors only their individual transmit powers at the current state. As a result, the SN processing complexity is further reduced.
IEEE Sensor Signal Processing for Defence SSPD2014, Sep 8, 2014
We consider the problem of soft decision fusion in a bandwidth-constrained wireless sensor network (WSN). The WSN is tasked with the detection of an intruder transmitting an unknown signal over a fading channel. A binary hypothesis test is performed using the soft decisions of the sensor nodes (SNs). Using the likelihood ratio test, the optimal soft fusion rule at the fusion center (FC) is shown to be the weighted distance from the soft decision mean under the null hypothesis. But as the optimal rule requires a-priori knowledge that is difficult to attain in practice, we propose suboptimal fusion rules that are realizable in practice. We show how the effect of quantizing the test statistic can be mitigated by increasing the number of SN samples, i.e., bandwidth can be traded off against increased latency. The optimal power and bit allocation for the WSN is also derived. Simulation results show that SNs with good channels are allocated more bits, while SNs with poor channels are censored.
IEEE Proc. EUSIPCO 2014, Sep 1, 2014
We consider the decentralized detection of an unknown deterministic signal in a spatially uncorrelated distributed wireless sensor network. N samples from the signal of interest are gathered by each of the M spatially distributed sensors, and the energy is estimated by each sensor. The sensors send their quantized information over orthogonal channels to the fusion center (FC), which linearly combines them and makes a final decision. We show how, by maximizing the modified deflection coefficient, we can calculate the optimal transmit power allocation for each sensor and the optimal number of quantization bits to match the channel capacity.
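To make the fusion objective concrete, a deflection-style metric for a linear combiner can be sketched as follows (function and variable names are mine; the paper's modified deflection coefficient additionally folds in quantization and channel effects). For independent sensors, weights proportional to (mean shift)/(variance) maximize this ratio by the Cauchy-Schwarz inequality, which is why sensors with strong observations over good channels earn larger weights.

```python
def deflection(weights, shift, var):
    """Deflection-style metric for the linear statistic T = sum_i w_i y_i:
    (change in mean of T between hypotheses)^2 / variance of T,
    assuming independent per-sensor observations."""
    num = sum(w * d for w, d in zip(weights, shift)) ** 2
    den = sum(w * w * v for w, v in zip(weights, var))
    return num / den

shift = [1.0, 2.0, 3.0]   # per-sensor mean shift under H1 (illustrative)
var = [1.0, 4.0, 9.0]     # per-sensor noise variance (illustrative)
w_opt = [d / v for d, v in zip(shift, var)]  # Cauchy-Schwarz optimum
w_eq = [1.0, 1.0, 1.0]    # naive equal combining
```

With these numbers the optimal weights achieve a deflection of sum(shift_i^2 / var_i) = 3, strictly better than equal combining, mirroring how maximizing the deflection coefficient drives the power and bit allocation in the paper.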
IEEE SIU 2012, May 2012
In this work, a concatenated forward error correction (FEC) scheme together with Orthogonal Frequency Division Multiplexing (OFDM) is used for effective transmission of data/images over additive-noise and fading channels. With a Bose-Chaudhuri-Hocquenghem (BCH) code as the outer code and a Low-Density Parity-Check (LDPC) code as the inner code, the transmission is simulated over both the Gilbert-Elliott and ITU Rayleigh fading channels. The FEC parameters assumed throughout the simulations were obtained from the DVB-T2 standard, and the baseband (BB) frames were created using shortening and zero-padding. The results, presented in terms of BER and psycho-visual performance, show the resilience of the FEC schemes and OFDM to channel impairments. The BER performance attained over the Gilbert-Elliott channel (a channel that introduces burst errors when in the bad state) using LDPC-only and concatenated BCH-LDPC coding indicates that the outer BCH code starts to achieve a much lower BER above an SNR of 5 dB. Over the ITU-A Rayleigh fading channel, the performance gain due to the outer BCH encoder only becomes apparent after 6 dB when compared with the BER performance of the rate-1/4 LDPC-only coded system. Over the Gilbert-Elliott channel, a BCH-LDPC coded QPSK-OFDM system provides a BER of 3×10^-4 at 6 dB, while the same BER over the ITU Vehicular-A channel is reached at 6.6 dB.
Lambert Academic Publishing, Jun 2012
Since the invention of information theory by Shannon in 1948, coding theorists have been trying to come up with coding schemes that achieve the capacity dictated by Shannon's theorem. The two most successful coding schemes among many are LDPC and Turbo codes. In this thesis we focus on LDPC codes and in particular their use in second-generation terrestrial digital video broadcasting (DVB-T2), second-generation satellite digital video broadcasting (DVB-S2), and IEEE 802.16e mobile WiMAX. Low-Density Parity-Check (LDPC) block codes were invented by Gallager in 1962, and they can achieve near-Shannon-limit performance on a wide variety of fading channels. LDPC codes are included in the DVB-T2 and DVB-S2 standards because of their excellent error-correcting capabilities, and LDPC coding has also been adopted as an optional error-correcting scheme in IEEE 802.16e mobile WiMAX. This thesis focuses on the bit error rate (BER) and PSNR performance analysis of DVB-T2, DVB-S2, and IEEE 802.16e transmission using LDPC coding under additive white Gaussian noise (AWGN) and Rayleigh fading channel scenarios. The power delay profile for all transmissions was adopted from the ITU channel model. For modelling the fading environment, the Jakes fading channel model [7] together with the ITU Vehicular-A and ITU Pedestrian-B [13] power delay profile parameters was used, also taking the Doppler effect into account. The three scenarios presented in this thesis are the following: (i) simulation of LDPC coding for the DVB-S2 standard, (ii) optional LDPC coding as suggested by the WiMAX standard, and (iii) simulation of DVB-T2 using LDPC with and without the outer BCH encoder. Throughout the simulations, the encoding algorithm used was the forward substitution algorithm. Even though the second-generation DVB standards and the WiMAX standard have been out since 2009, few comparative results have been published for concatenated BCH-LDPC coding schemes using either a normal FEC frame or a shortened FEC frame. The work presented here aims to contribute towards this end. Throughout the simulations, we considered two images of different sizes as the source of information to transmit. Performance analysis is given by comparing BER and PSNR values and psycho-visually.
Keywords: Low-Density Parity-Check coding; BCH coding; OFDM; WiMAX; Digital Video Broadcasting; Rayleigh fading channel; Shortening; Zero-padding; Digital image processing; Iterative decoding.
In this paper we present effective means of digital image transmission using forward error correcting (FEC) schemes and Orthogonal Frequency Division Multiplexing (OFDM). The transmission was simulated over the AWGN channel and a Rayleigh fading channel whose power delay profile was adopted from the ITU channel model. The FEC and OFDM parameters were adopted from the DVB-T, WiMAX, and DVB-T2 standards. The results presented herein are in terms of BER, PSNR, and visual performance. It is evident from the presented results that effective FEC schemes are necessary for reliable transmission of digital media in a mobile wireless scenario.
Thesis Chapters by Edmond Nurellari
This thesis addresses the problem of detection of an unknown binary event. In particular, we consider centralized detection, distributed detection, and network security in wireless sensor networks (WSNs). The communication links among the sensor nodes (SNs) are subject to limited SN transmit power and limited bandwidth (BW), and are modeled as orthogonal channels with path loss, flat fading, and additive white Gaussian noise (AWGN). We propose algorithms for resource allocation, fusion rules, and network security.
In the first part of this thesis, we consider centralized detection and calculate the optimal transmit power allocation and the optimal number of quantization bits for each SN. The resource allocation is performed at the fusion center (FC), and this is referred to as the centralized approach. We also propose a novel fully distributed algorithm to address this resource allocation problem. What makes this scheme attractive is that the SNs share with their neighbors only their individual transmit powers at the current state. Finally, the optimal soft fusion rule at the FC is derived. But as this rule requires a-priori knowledge that is difficult to attain in practice, suboptimal fusion rules that are realizable in practice are proposed.
The second part considers a fully distributed detection framework, for which we propose a two-step distributed quantized fusion rule algorithm: in the first step, the SNs collaborate with their neighbors through error-free, orthogonal channels; in the second step, the local 1-bit decisions generated in the first step are shared among neighbors to reach a consensus. A binary hypothesis test is performed at any arbitrary SN to optimally declare the global decision. Simulations show that our proposed quantized two-step distributed detection algorithm approaches the performance of the unquantized centralized (with an FC) detector, while consuming 50% less power than the existing (unquantized) conventional algorithm.
Finally, we analyze the detection performance of under-attack WSNs and derive attack and defense strategies from both the attacker's and the FC's perspective. We re-cast the problem as a minimax game between the FC and the attacker and show that a Nash equilibrium (NE) exists. We also propose a new, low-complexity, and efficient reputation-based scheme to identify the compromised SNs. Based on this reputation metric, we propose a novel FC weight-computation strategy ensuring that the weights of the identified compromised SNs are likely to be decreased. In this way, the FC decides how much an SN should contribute to its final decision. We show that this strategy outperforms the existing schemes.