Papers by Prashant K Shah
Signal, Image and Video Processing
The Covid-19 pandemic is one of the most significant global health concerns to have emerged in this decade. Intelligent healthcare technology and techniques based on speech signals and artificial intelligence make faster, more efficient, and timely detection of Covid-19 feasible. The main objective of our study is to design a speech-signal-based, noninvasive, low-cost, remote diagnosis of Covid-19. In this study, we have developed a system to detect Covid-19 from speech signals using Mel frequency magnitude coefficients (MFMC) and machine learning techniques. In order to capture higher-order spectral features, the spectrum is divided into a larger number of subbands with narrower bandwidths in MFMC, which leads to better frequency resolution and less overall noise. As a consequence of the improved frequency resolution and the reduced noise introduced during MFMC extraction, the higher-order MFMCs are able to identify Covid-19 from speech signals with increased accuracy. Machine learning procedures are often less complicated than deep learning ones and can commonly be carried out on regular computers, whereas deep learning systems need extensive computing power and data storage. Twelve, twenty-four, thirty, and forty spectral coefficients are obtained using MFMC in our study, and from these coefficients, performance is assessed using machine learning classifiers such as random forests and K-nearest neighbors (KNN); KNN performed better than the other model, achieving an AUC score of 0.80.
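A minimal sketch of this kind of pipeline (not the authors' code): librosa's mel filterbank magnitudes stand in for the paper's MFMC extraction, and a KNN classifier is scored by AUC. The corpus loader, file list, labels, and n_bands=40 are illustrative assumptions.

```python
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

def mfmc_like_features(path, n_bands=40):
    y, sr = librosa.load(path, sr=16000)
    # More, narrower subbands -> finer frequency resolution, as with MFMC
    spec = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_bands)
    return np.log(spec + 1e-10).mean(axis=1)   # one feature vector per recording

# Hypothetical loader: file paths and 0/1 Covid labels for a labeled speech corpus
paths, labels = load_corpus()
X = np.stack([mfmc_like_features(p) for p in paths])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, knn.predict_proba(X_te)[:, 1]))
```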
Lecture Notes in Electrical Engineering, 2021
To forecast any process accurately, we must first understand the process and develop the model accordingly; this helps in deciding the best possible forecast values. For the best forecast, we can use past values as well as present values. Available data divided into data points indexed by time is called time series data. Various models are available for predicting time series data, e.g., stochastic, neural network, and SVM-based models, each with its own merits and demerits. In this paper, we discuss stochastic (ARIMA) modeling combined with wavelet transform decomposition; the designed hybrid model is applied to a finance application. We have taken the previous six months of daily data for the S&P BSE Information Technology index, predicted values 10 days ahead based on this data, and performed a comparative analysis between the ordinary model and the proposed models.
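A minimal sketch of the hybrid idea (assumptions: PyWavelets >= 1.2 for pywt.mra, statsmodels installed, and a daily price series; the wavelet, level, and ARIMA order are illustrative, not the paper's): decompose the series into additive multiresolution components, fit an ARIMA model per component, forecast each, and sum the forecasts.

```python
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA

def wavelet_arima_forecast(series, steps=10, wavelet="db4", level=2):
    x = np.asarray(series, dtype=float)
    x = x[len(x) % 2**level:]          # SWT needs length divisible by 2**level
    # Additive multiresolution analysis: the components sum back to x
    components = pywt.mra(x, wavelet, level=level, transform="swt")
    total = np.zeros(steps)
    for comp in components:
        fit = ARIMA(comp, order=(1, 1, 1)).fit()   # illustrative order choice
        total += fit.forecast(steps)               # subband forecasts add up
    return total

# usage, with `prices` holding ~6 months of daily closes:
# ahead = wavelet_arima_forecast(prices, steps=10)
```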
Multidimensional systems have been a very interesting area for the past several decades due to increasing demand in various areas such as digital signal processing, image processing, gas absorption, and temperature measurement. Because these systems are multidimensional in nature, stability degrades with more than one dimension. Such systems are difficult to represent, but they can be described by discrete state-space models. Several state-space models are available; among them, the FM1, FM2, GR, and R models are very well known, even though they were developed four to five decades ago. In this paper, our focus is to derive stability conditions for two-dimensional (2-D) periodically shift varying (PSV) filters using GR models. PSV filters are applicable in various places such as image scrambling, video scrambling, and the design of multiplierless filters. To check whether a PSV system is stable or not, we have derived two sufficient stability conditions for PSV filters which help to tel...
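For reference, the Givone-Roesser (GR) local state-space model is commonly written as below (a standard constant-coefficient form with the state split into horizontal and vertical parts; the notation is a common convention, not necessarily the paper's):

```latex
\begin{aligned}
\begin{bmatrix} x^{h}(i+1,j) \\ x^{v}(i,j+1) \end{bmatrix}
&=
\begin{bmatrix} A_{1} & A_{2} \\ A_{3} & A_{4} \end{bmatrix}
\begin{bmatrix} x^{h}(i,j) \\ x^{v}(i,j) \end{bmatrix}
+
\begin{bmatrix} B_{1} \\ B_{2} \end{bmatrix} u(i,j), \\
y(i,j) &=
\begin{bmatrix} C_{1} & C_{2} \end{bmatrix}
\begin{bmatrix} x^{h}(i,j) \\ x^{v}(i,j) \end{bmatrix} + D\,u(i,j),
\end{aligned}
```

where x^h and x^v are the horizontal and vertical state vectors; a PSV filter makes these matrices periodic in the indices.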
This paper presents an optimal design of a digital low-pass finite impulse response (FIR) filter using particle swarm optimization (PSO). The design target of an FIR filter is to approximate the ideal filter subject to given design specifications. Traditional optimization techniques are not efficient for digital filter design. For the filter specification to be realized, the PSO algorithm generates the best coefficients and tries to meet the ideal frequency response. PSO uses new equations for updating the velocity vector and the particle vectors, and hence the solution quality is improved. The PSO technique enhances its search capability, which leads to a higher probability of obtaining the optimal solution. In this paper, the FIR filter for the given problem has been realized, and the simulation results have been obtained using the PSO method.
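A minimal sketch of the approach (not the paper's algorithm or parameters): plain PSO searching FIR taps that minimize the squared error against an ideal low-pass response on a frequency grid. The swarm size, inertia/acceleration constants, and the 0.25*pi cutoff are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_taps, n_particles, iters = 21, 40, 300
w_grid = np.linspace(0, np.pi, 256)
ideal = (w_grid <= 0.25 * np.pi).astype(float)   # ideal low-pass target

def error(h):
    # magnitude response of candidate taps h on the frequency grid
    H = np.abs(np.exp(-1j * np.outer(w_grid, np.arange(n_taps))) @ h)
    return np.mean((H - ideal) ** 2)

pos = rng.uniform(-0.5, 0.5, (n_particles, n_taps))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_err = np.array([error(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # velocity update: inertia + cognitive + social terms
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    errs = np.array([error(p) for p in pos])
    improved = errs < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], errs[improved]
    gbest = pbest[pbest_err.argmin()].copy()

print("best MSE:", pbest_err.min())   # gbest holds the optimized tap vector
```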
n-D systems have been a very interesting area for the past several decades due to their various applications. These n-D systems are not easy to represent compared to 1-D systems. Several methods have been used to overcome this problem, but among them, state-space-model-based methods give more accurate solutions than others. In the past few years, two-dimensional (2-D) discrete systems have attracted attention due to their effectiveness in describing systems in several modern engineering fields such as image data processing and transformation, thermal processes, and water stream heating. This paper reviews and compares the available 2-D state-space models. From this comparative analysis, we show which model is best suited to which application. We have also done stability analysis using our derived criteria and try to conclude which model is best in terms of stability. Indexed Terms — Models; Two-dimensional; Control; State-Space; Givone-Roesser (GR); Fornasini-Ma...
2020 IEEE International Conference for Innovation in Technology (INOCON)
Functional coverage helps to examine test scenarios by providing metrics that give information about the states reached by the test case or design-under-test (DUT). The SystemVerilog language provides dedicated syntax to make this possible, such as covergroup and coverpoint objects. Under a covergroup we have cover points and cross points, and under those we have cover bins and cross bins, respectively. Using cover points we can directly ensure certain values or sets of values of signals, and using cross points we can cover a crossing between two or more individual bins, where we can directly ensure only a piece of the scenario. But using these cover points and cross points, we cannot directly and clearly ensure whether an end-to-end scenario (qualified data traversing from initial state to final state within stipulated clock cycles) has been exercised by the stimulus. To achieve that, we propose "scenario cover points", which can directly, in a single shot, ensure that an end-to-end scenario is exercised in the simulation. Constrained random simulation (CRS) is widely used during pre-silicon verification to cover the width and depth of the design. But improving functional coverage efficiently in a CRS-based verification environment can be time consuming, since we need to run verification tests multiple times with increasing seed numbers until we cover everything (all corner states of the design). On the other hand, manually crafting direct test patterns completely is not practical, since we have an exhaustive state space to verify. So we propose the method of a "blend of focused and constrained random stimulus with scenario bins on top of the usual bins (bins under cover points and cross points)" to exercise the primary end-to-end scenarios quickly and to cover the width and depth of the design. To the best of our knowledge, in the domain of pre-silicon functional verification, this is the first work which proposes a blend of focused and constrained random stimulus with scenario bins on top of usual bins.
2020 12th International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP), 2020
FSO communication systems have received huge attention in recent years as a low-cost, huge-bandwidth, and secure alternative to current technologies. However, their performance is critically impaired by atmospheric turbulence. This paper presents wavelength diversity as a mitigation technique, based on the fact that the refractive index variation differs at different wavelengths. Strong atmospheric conditions are modeled with the negative exponential distribution. We derive closed-form expressions for the outage probability and average BER. Numerical results for the outage probability and average BER at different diversity orders are presented to demonstrate our suggestions for improving system performance.
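As background (standard single-branch results, not necessarily the paper's exact expressions): under negative exponential turbulence the received irradiance I is exponentially distributed, and the outage probability against a detection threshold follows directly:

```latex
f_I(I) = \frac{1}{\bar{I}}\exp\!\left(-\frac{I}{\bar{I}}\right), \quad I \ge 0,
\qquad
P_{\mathrm{out}} = \Pr\left(I < I_{\mathrm{th}}\right)
 = 1 - \exp\!\left(-\frac{I_{\mathrm{th}}}{\bar{I}}\right),
```

where \bar{I} is the mean irradiance and I_th the threshold; wavelength diversity combines several such branches, each seeing a different \bar{I}.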
2020 IEEE International Conference for Innovation in Technology (INOCON), 2020
Formal verification (FV) has been widely accepted as a verification approach for catching corner-case logic design issues; it also speeds up the verification process of any subsystem. Using formal verification for RTL verification is an easy task compared to the traditional simulation method. In this paper, we discuss approaches to verifying a DUT by the formal verification method, and how it reduces the time of the overall verification cycle. In addition, we discuss the verification flow for testing any DUT under the formal verification method. In this test case, we used an assertion-based verification methodology to test the DUT and compared it with the traditional simulation-based verification methodology.
2020 IEEE International Conference for Innovation in Technology (INOCON), 2020
Functional coverage helps to examine test scenarios by providing metrics that give information about the states reached by the test case or design-under-test (DUT). The SystemVerilog language provides dedicated syntax to make this possible, such as covergroup and coverpoint objects. Under a covergroup we have cover points and cross points, and under those we have cover bins and cross bins, respectively. Using cover points we can directly ensure certain values or sets of values of signals, and using cross points we can cover a crossing between two or more individual bins, where we can directly ensure only a piece of the scenario. But using these cover points and cross points, we cannot directly and clearly ensure whether an end-to-end scenario (qualified data traversing from initial state to final state within stipulated clock cycles) has been exercised by the stimulus. It is a difficult task to ensure end-to-end scenarios using the usual bins (bins under cover points and cross points), because we need to write many bins and also need to ensure that all the bins are covered in the subsequent clock cycles. To achieve that, we propose "scenario cover points", which can directly, in a single shot, ensure whether an end-to-end scenario is exercised or not in the simulation. To the best of our knowledge, this is the first paper which proposes scenario cover points in the domain of pre-silicon functional verification.
Today's DSP systems are well suited to VLSI implementation, and if implemented using VLSI technologies they are often economically viable and technically feasible. The focus of this paper is the design of an FIR filter with an efficient VLSI architecture that reduces power consumption and hardware complexity, optimizing the filter's area, delay, and power. Basic optimization of the filter's area, delay, and power is done by using the add-and-shift method for multiplication, but this increases the power dissipation of the filter. The complexity of the filter can be reduced by representing the coefficients in canonical signed digit (CSD) representation, as it is more efficient than the traditional binary representation. In this paper, we examine different multiplication techniques for filter design, such as the add-and-shift method, the Vedic multiplier, the Booth multiplier, and the Wallace tree (WT) multiplier, for the multiplication of filter coefficients with the filter input. MATLAB is used for designing of finit...
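A minimal sketch of CSD recoding (via the non-adjacent form, which coincides with CSD for integers; not the paper's implementation, and fixed-point scaling of fractional coefficients is assumed to be handled beforehand). Fewer nonzero digits mean fewer adders/subtractors in a multiplierless filter.

```python
def csd(n):
    """Canonical signed digit form of an integer n.
    Returns digits least-significant first, each in {-1, 0, +1},
    with no two adjacent nonzero digits."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)   # +1 if n mod 4 == 1, -1 if n mod 4 == 3
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2               # shift to the next digit position
    return digits

# usage: csd(7) -> [-1, 0, 0, 1], i.e. 7 = 8 - 1,
# so x*7 becomes (x << 3) - x: one subtractor instead of two adders
print(csd(7))
```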
Lecture Notes in Electrical Engineering, 2021
OFDM is an extensively used next-generation wireless transmission technique because of its easy equalization. OFDM can send hundreds of parallel data streams via multiple carriers, which makes large data transfers possible. OFDM has two major drawbacks, namely the inter-symbol interference (ISI) problem and the peak-to-average power ratio (PAPR). Coherent addition of the independently modulated carriers of an OFDM signal can give a large peak-to-average power ratio. Among the PAPR reduction methods, clipping is the simplest to use. However, clipping causes both in-band distortion and out-of-band distortion, which are undesirable, but these can be reduced by filtering after clipping. Another possibility for PAPR reduction is reserving several tones to generate a peak-cancelling signal (PCS); however, this method suffers from high computational complexity. To address this problem, this paper proposes a simple algorithm using residual noise reduction wh...
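A minimal sketch of classic clipping-and-filtering for PAPR reduction (illustrative parameters, not the paper's proposed algorithm): 64 QPSK subcarriers with 4x oversampling; the clipping ratio of 1.4 is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, cr = 64, 4, 1.4                        # subcarriers, oversampling, clip ratio

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

# random QPSK symbols; oversample by zero padding the middle of the spectrum
sym = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
spec = np.concatenate([sym[:N // 2], np.zeros((L - 1) * N), sym[N // 2:]])
x = np.fft.ifft(spec) * L

# clip the envelope at cr * rms while keeping the phase
rms = np.sqrt(np.mean(np.abs(x) ** 2))
mag = np.maximum(np.abs(x), 1e-12)
clipped = np.where(mag > cr * rms, cr * rms * x / mag, x)

# filter: zero the out-of-band bins regenerated by clipping
spec_c = np.fft.fft(clipped)
mask = np.concatenate([np.ones(N // 2), np.zeros((L - 1) * N), np.ones(N // 2)])
filtered = np.fft.ifft(spec_c * mask)

print(f"PAPR: {papr_db(x):.2f} dB -> {papr_db(filtered):.2f} dB after clip+filter")
```

Note the trade-off the abstract mentions: the filtering step restores the spectrum but partially regrows the peaks, which is why iterated clipping-and-filtering is common.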
2021 2nd International Conference for Emerging Technology (INCET), 2021
Data security is a concern that cannot be avoided in today's world. Protecting sensitive data and personal information from unauthorised access and preventing it from being misused is important and can be done using various password-protected software or systems, but it is also possible to defeat these protective software and systems through hacking and various other attacks. It is possible to store sensitive data in an encrypted as well as a secretly hidden way, which doubles the security level: first of all, the attacker would have no idea where the secret data is hidden, and even if the attacker were somehow to find the secret data, it would be in an encrypted format of no use to them. Therefore, it is necessary to introduce more than one level of security for data. Hence, in this paper one such approach is proposed that uses two techniques for data security simultaneously, namely cryptography and steganography, both very popular techniques. The former is used to encrypt the data and the latter is used to hide it; together they help increase security in a better way. This paper intends to improve the technique of data security with a combination of the above two techniques. For a more secure approach, the data is first encrypted using a key; this key is generated using the Diffie-Hellman key exchange algorithm and is modified by key expansion, reduction, and rotation techniques to increase the randomness for each round of encryption. The encrypted data is then hidden in another image file using the least-significant-bit steganography method, by embedding the encrypted data into the lower-order bits of the carrier image. In this paper, the data hiding capacity is improved by using more than one least significant bit of the image, which also reduces the number of clock cycles used for the embedding and retrieval processes. The quality of the obtained output image is benchmarked by calculating its PSNR (peak signal-to-noise ratio) and MSE (mean square error). The tools used for implementation are Xilinx Vivado, using Verilog code for the proposed architecture, and MATLAB for image characterization and key generation.
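A minimal software sketch of the steganography stage (not the paper's Verilog architecture): multi-bit LSB embedding of an already-encrypted byte stream into a grayscale carrier, with PSNR/MSE of the result. Using k=2 LSBs per pixel and a random carrier are illustrative assumptions.

```python
import numpy as np

def embed_lsb(carrier, payload_bytes, k=2):
    bits = np.unpackbits(np.frombuffer(payload_bytes, dtype=np.uint8))
    bits = np.concatenate([bits, np.zeros((-len(bits)) % k, dtype=np.uint8)])
    chunks = bits.reshape(-1, k)                      # k payload bits per pixel
    vals = chunks @ (1 << np.arange(k - 1, -1, -1))   # bit-chunks -> small ints
    flat = carrier.flatten().astype(np.uint8)
    assert len(vals) <= len(flat), "payload too large for carrier"
    # clear the k low bits of each used pixel, then write the payload bits
    flat[:len(vals)] = (flat[:len(vals)] & ~np.uint8(2**k - 1)) | vals.astype(np.uint8)
    return flat.reshape(carrier.shape)

def psnr_mse(orig, stego):
    mse = np.mean((orig.astype(float) - stego.astype(float)) ** 2)
    return 10 * np.log10(255**2 / mse), mse

carrier = np.random.default_rng(0).integers(0, 256, (128, 128), dtype=np.uint8)
stego = embed_lsb(carrier, b"ciphertext from the DH-derived key stage")
print("PSNR(dB), MSE:", psnr_mse(carrier, stego))
```

Larger k raises capacity and halves the embedding cycles per doubling, at the cost of PSNR, which is the trade-off the abstract benchmarks.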
International Journal of VLSI & Signal Processing, 2017
In this paper, an analog-to-digital converter architecture is proposed. The proposed design is based on a mixed approach, combining a flash-type ADC with a successive approximation register (SAR) type ADC. This new design needs fewer comparators than the conventional flash ADC architecture, and therefore gives less power consumption with much lower circuit complexity. A 4-bit design takes 7 comparators, and each additional bit requires two more comparators plus some extra digital logic, and so on. Based on this design, a 4-bit ADC was implemented and simulated in the Cadence Virtuoso tool using 14nm CMOS technology with a power supply voltage of 1.0V. The proposed ADC consumes 242uW of power, and the measured INL and DNL are 0.35 LSB and 0.38 LSB, respectively.
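For scale, using the well-known flash-ADC comparator count of 2^N - 1 and the abstract's "+2 comparators per extra bit" claim:

```latex
\text{flash: } 2^{N}-1 \;\Rightarrow\; 2^{4}-1 = 15 \text{ comparators (4-bit)},
\qquad
\text{proposed: } 7 \text{ (4-bit)},\; 7+2 = 9 \text{ (5-bit) vs. } 2^{5}-1 = 31,
```

so the hybrid's comparator count grows linearly with resolution rather than exponentially.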
International Journal of VLSI & Signal Processing, 2017
Optimization of power can be done at different levels of abstraction, e.g., system level, RTL level, circuit level, and layout level. With continuous scaling of the technology node, power optimization and overall power management on an SoC are key challenges, in addition to meeting the performance requirements. This paper gives an idea of various circuit-level techniques to reduce power consumption without affecting the performance of the chip.
2014 IEEE Geoscience and Remote Sensing Symposium, 2014
In this paper, we propose a new pan-sharpening method with better noise rejection capabilities. It is based on a consistent combination of large- and small-scale features obtained from the decomposition of high-spectral-resolution multispectral (MS) and high-spatial-resolution panchromatic (PAN) images. In the decomposition process, features are extracted from the MS and PAN images with the help of joint and dual bilateral filters, respectively. These filters take care of the relationship between the MS and PAN images and decompose them into a base layer (large scale) and a detail layer (small scale). Since the joint bilateral filter (JBF) preserves the edges of the auxiliary image, it is used for decomposition of the MS images, where the different layers are computed using the PAN image as the auxiliary image. In a similar manner, the different layers of the PAN image are obtained using the dual bilateral filter (DBF), which preserves the edges of both (MS and PAN) input images. This process is extended to multistage decomposition to obtain a bilateral image pyramid. The base and detail layers of the MS and PAN images obtained at the various stages are each combined using a weighted sum. Finally, the computed weighted sum of the detail layers (small scale) of the PAN image is fused with the weighted base layers (large scale) of the MS images. The proposed method is tested on images obtained from the QuickBird and WorldView-2 satellite sensors. The experimental results are compared with widely popular methods and recently proposed bilateral-filter-based methods, and they demonstrate that the proposed method performs better for all satellite sensors.
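A minimal sketch of the base/detail split with a joint bilateral filter, using OpenCV's ximgproc module (opencv-contrib-python). A plain bilateral filter stands in for the paper's dual bilateral filter, the multistage pyramid and weighting scheme are not reproduced, and the file paths and filter parameters are assumptions.

```python
import cv2
import numpy as np

# one MS band and the PAN image (placeholder paths)
ms = cv2.imread("ms_band.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)
pan = cv2.imread("pan.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)
ms = cv2.resize(ms, pan.shape[::-1])      # upsample MS to the PAN grid

# JBF: range kernel taken from the PAN guide, so PAN edges are preserved
base_ms = cv2.ximgproc.jointBilateralFilter(pan, ms, d=9,
                                            sigmaColor=25, sigmaSpace=7)
detail_ms = ms - base_ms                  # small-scale layer of the MS band

# stand-in for the DBF: edge-preserving smoothing of the PAN image itself
base_pan = cv2.bilateralFilter(pan, d=9, sigmaColor=25, sigmaSpace=7)
detail_pan = pan - base_pan

# single-stage fusion: inject PAN detail into the MS base (unit weights assumed)
fused = base_ms + detail_pan
cv2.imwrite("fused_band.tif", np.clip(fused, 0, 255).astype(np.uint8))
```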
2011 International Conference on Electrical and Control Engineering, 2011
The stability of two-dimensional (2-D) periodically shift varying (PSV) filters is considered. The system is represented in state space by the first Fornasini-Marchesini (FM) model with periodic coefficients, and the stability of this model is then studied. Two sufficient conditions are established for asymptotic stability. The conditions are easy to use and numerically tractable with the help of software such as the MATLAB LMI Toolbox. The proposed criteria are compared with those by Tamal Bose [18] published in IEEE Transactions, and it is proved with a numerical example that they are more relaxed, minimizing the gap between the necessary and sufficient conditions.
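For reference, the first Fornasini-Marchesini model is commonly written as follows (a standard constant-coefficient form under one common indexing convention; the paper's PSV version makes the matrices periodic in the indices):

```latex
x(i+1,\,j+1) = A_{0}\,x(i,j) + A_{1}\,x(i+1,\,j) + A_{2}\,x(i,\,j+1) + B\,u(i,j),
```

where x is the local state and u the input.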
International Journal of Advance Research and Innovative Ideas in Education, 2018
Stability analysis is the key issue of practical importance for two-dimensional (2-D) discrete-time systems. Linear state-space models describing 2-D discrete systems have been proposed by several researchers; a popular one, called the Fornasini-Marchesini (FM) second model, was proposed by Fornasini and Marchesini in 1978. This paper presents an analysis of the existing literature on its stability. We also propose two results giving sufficient stability conditions for 2-D PSV filters described by the FM-2 model. Periodically shift varying (PSV) filters have numerous applications, and hence their stability analysis has gained increased importance. LMI (linear matrix inequality) based criteria have the dual advantage of a relaxed result as well as ease of implementation due to software tools like the MATLAB LMI Toolbox. For that, we have taken an example of and analyzed the stability criteria for the FM-2 model ...
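For reference, the FM second model is commonly written as follows (standard constant-coefficient form; the PSV case makes the matrices periodic in the indices):

```latex
x(i+1,\,j+1) = A_{1}\,x(i,\,j+1) + A_{2}\,x(i+1,\,j)
             + B_{1}\,u(i,\,j+1) + B_{2}\,u(i+1,\,j),
```

with local state x and input u; LMI-based criteria then search for positive definite matrices certifying asymptotic stability of this recursion.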