Security and efficiency are the main objectives of grid authentication protocols. A practical, efficient authentication protocol based on certificate-less public key cryptography is widely acknowledged as a challenging issue in grid computing environments. Unfortunately, certificate-less authenticated key agreement protocols rely on bilinear pairing, which is extremely computationally expensive. A recently published protocol claims to resist several attacks; however, it has some shortcomings. In this paper, that competing protocol is mathematically criticized. Then, a secure and efficient pairing-free certificate-less two-party authenticated key agreement protocol is proposed for grid computing. Finally, the proposed protocol is mathematically proven secure and efficient in the Lippold model, a very strong security model, under the gap Diffie-Hellman assumption.
International Journal of Advances in Intelligent Informatics
The image forgery process can be simply defined as inserting objects of different sizes to conceal certain structures or scenes. Satellite images can be forged in many ways, such as copy-paste, copy-move, and splicing processes. Recent approaches present a generative adversarial network (GAN) as an effective method for identifying the presence of spliced forgeries and locating them, with higher detection accuracy for large- and medium-sized forgeries. However, such recent approaches clearly show limited detection accuracy for small-sized forgeries. Accordingly, the localization step for such small-sized forgeries is negatively impacted. In this paper, two different approaches for detecting and localizing small-sized forgeries in satellite images are proposed. The first approach is inspired by a recently presented GAN-based approach and is modified to an enhanced version. The experimental results manifest that the detection accuracy of the first proposed approach noticea...
Blind image steganalysis (BIS) is the process of detecting whether an input image has hidden data or not, without any prior information (i.e., blind) on the applied steganography technique. Recent BIS approaches typically suffer from limited detection accuracy and higher computational cost due to, e.g., pre-processing. In this paper, the proposed BIS approach discards the pre-processing step, so that the computational cost is reduced. As well, significant modifications to a recent convolutional neural network (CNN) model are considered in order to enhance the detection accuracy. First, an efficient parameter initialization is considered. Second, a cyclic learning rate and the LReLU activation function are used during the learning phase for faster convergence with noticeably higher detection accuracy. Finally, a hybrid of model and data parallelism techniques is applied in the convolution and fully connected layers, respectively, thus significantly reducing the c...
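The cyclic learning rate and LReLU activation mentioned in this abstract can be sketched as follows. The schedule below is a standard triangular cycle; the bounds (`base_lr`, `max_lr`, `step_size`) are illustrative values, not the ones used in the paper:

```python
import math

def cyclic_lr(it, base_lr=0.001, max_lr=0.006, step_size=2000):
    """Triangular cyclic learning rate: sweeps base_lr -> max_lr -> base_lr
    once every 2*step_size iterations."""
    cycle = math.floor(1 + it / (2 * step_size))
    x = abs(it / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

def leaky_relu(x, alpha=0.01):
    """LReLU: identity for positive inputs, small slope alpha for negatives."""
    return x if x > 0 else alpha * x
```

The oscillation between the bounds is what allows faster convergence than a fixed small rate, which matches the abstract's claim.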
The importance of cryptography in ensuring the security and integrity of electronic data transactions has grown over the past few years. Multiple security protocols currently use various block ciphers. One of the most widely used block ciphers is the Advanced Encryption Standard (AES), which was chosen as a standard for its higher efficiency and stronger security than its competitors. Unfortunately, the encryption and decryption processes of AES take a considerable amount of time for large data sizes. The GPU is an attractive platform for accelerating block ciphers and other cryptography algorithms due to its massively parallel processing power. In this work, an implementation of AES-128 ECB encryption on three different GPU architectures (Kepler, Maxwell, and Pascal) is presented. The results show that encryption speeds of 207 Gbps on the NVIDIA GTX TITAN X (Maxwell) and 280 Gbps on the NVIDIA GTX 1080 (Pascal) have been achieved by applying new optimization techniques with a granularity of 32 bytes/thread.
2018 20th International Conference on Transparent Optical Networks (ICTON), Jul 1, 2018
Multiprotocol label switched (MPLS) networks were introduced to enhance the network's service provisioning and optimize its performance using multiple protocols along with a label-switching-based networking technique. With the addition of a traffic engineering entity in the MPLS domain, there is a massive increase in the network's resource management capability, with better quality of service (QoS) provisioning for end users. Routing protocols, which use exact and approximate algorithms, play an important role in MPLS networks for network traffic management. There are a number of artificial-intelligence-based optimization algorithms that can be used for the optimization of traffic engineering in MPLS networks. This paper presents an optimization model for MPLS networks and proposes the dolphin echolocation algorithm (DEA) for optimal path computation. For networks with different numbers of nodes, the performance of both algorithms has been investigated to study their convergence towards optimal solutions. Furthermore, the DEA is compared with the bat algorithm to examine their performance in MPLS network optimization, using various parameters such as the mean, minimum/optimal fitness function values, and standard deviation.
IOP Conference Series: Materials Science and Engineering, 2019
Blind image steganalysis is the binomial classification problem of determining whether an image contains hidden data or not. Classification problems have two main steps: i) a feature extraction step and ii) a classification step. Traditional blind image steganalysis approaches use handcrafted filters for the first step and classifiers such as the support vector machine (SVM) for the second step. The rapid development of steganographic techniques makes it harder to design new effective handcrafted filters, which negatively affects the feature extraction step. Recently, convolutional neural networks (CNNs) have been introduced as an auspicious solution to this problem. CNN-based steganalysis can automatically extract features from the input images without using handcrafted filters. Although considerable success has been achieved with CNNs, CNN-based applications are considered time-consuming. Accordingly, it is important to quicken the training of CNN-based steganalysis approaches in or...
Advances in Electrical and Electronic Engineering, 2016
In this paper, we present a modified inter-view prediction Multiview Video Coding (MVC) scheme from the perspective of viewer interactivity. When a viewer requests some view(s), our scheme leads to a lower transmission bit-rate. We develop an interactive multiview video streaming system exploiting that modified MVC scheme. Conventional interactive multiview video systems require high bandwidth due to redundant data being transferred. With real test sequences, clear improvements are shown using the proposed interactive multiview video system compared to competing ones in terms of the average transmission bit-rate and storage size of the decoded (i.e., transferred) data, with comparable rate-distortion performance.
The wide adoption of Wireless Sensor Network (WSN) applications around the world has increased the amount of sensor data, which contributes to the complexity of Big Data. This has raised the need for in-network data processing techniques, which are crucial for the success of the big data framework. This article gives an overview and discussion of the state-of-the-art data mining and data fusion techniques designed for WSNs. It discusses how these techniques can prepare the sensor data inside the network (in-network) before any further processing as big data. This is very important for both WSNs and the big data framework. For WSNs, the in-network pre-processing techniques could lead to savings in their limited resources. For the big data side, receiving clean, non-redundant, and relevant data would reduce the excessive data volume; thus, an overload reduction will be obtained at the big data processing platforms, and the discovery of value from these data will be accelerated.
Innovations in Bio-inspired Computing and Applications, 2014
This article presents a content-based image classification system to monitor the ripeness process of tomatoes via investigating and classifying the different maturity/ripeness stages. The proposed approach consists of three phases, namely the pre-processing, feature extraction, and classification phases. Since tomato surface color is the most important characteristic for observing ripeness, this system uses color histograms for classifying the ripeness stage. It implements Principal Components Analysis (PCA) along with the Support Vector Machine (SVM) algorithm for feature extraction and classification of ripeness stages, respectively. The datasets used for the experiments were constructed from real sample images of tomatoes at different stages, which were collected from a farm at Minia city. Datasets of 175 images and 55 images were used as training and testing datasets, respectively. The training dataset is divided into 5 classes representing the different stages of tomato ripeness. Experimental results showed that the proposed classification approach obtained a ripeness classification accuracy of 92.72%, using the SVM linear kernel function with 35 images per class for training.
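As a rough illustration of the color-histogram feature extraction described in this abstract, a quantized joint RGB histogram can be computed as below. The bin count is an arbitrary choice for the sketch, not the paper's setting:

```python
def color_histogram(pixels, bins=4):
    """Return a normalized joint RGB histogram (length bins**3) from a list
    of 8-bit (r, g, b) tuples."""
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        # quantize each 8-bit channel into `bins` levels
        qr, qg, qb = r * bins // 256, g * bins // 256, b * bins // 256
        hist[(qr * bins + qg) * bins + qb] += 1
    total = len(pixels) or 1
    return [h / total for h in hist]
```

Such a fixed-length vector is what would then be fed to PCA for dimensionality reduction and to the SVM for classification.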
Advances in Electrical and Electronic Engineering, 2017
The Advanced Encryption Standard (AES) is one of the most popular symmetric block ciphers because of its efficiency and security. AES is a computation-intensive algorithm, especially for massive transactions. The Graphics Processing Unit (GPU) is an attractive platform for accelerating AES thanks to its parallel processing power. Traditional approaches for implementing AES on a GPU use 16 bytes per thread as the default granularity. In this paper, the AES-128 algorithm (ECB mode) is implemented on three different GPU architectures with different granularities (32, 64, and 128 bytes/thread). Our results show that the throughput reaches 277 Gbps, 201 Gbps, and 78 Gbps on the NVIDIA GTX 1080 (Pascal), the NVIDIA GTX TITAN X (Maxwell), and the GTX 780 (Kepler) GPU architectures, respectively.
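The granularity idea above (bytes processed per GPU thread) amounts to a simple work-partitioning rule: since AES always operates on 16-byte blocks, a granularity of G bytes/thread means each thread encrypts G/16 blocks, which in turn fixes the launch size. The function names below are illustrative, not from the paper:

```python
AES_BLOCK = 16  # AES block size in bytes, fixed by the standard

def threads_needed(data_len, bytes_per_thread=32):
    """Return (thread_count, blocks_per_thread) for a given granularity."""
    blocks_per_thread = bytes_per_thread // AES_BLOCK
    total_blocks = (data_len + AES_BLOCK - 1) // AES_BLOCK  # ceil division
    threads = (total_blocks + blocks_per_thread - 1) // blocks_per_thread
    return threads, blocks_per_thread
```

Coarser granularities launch fewer threads that each do more work, trading occupancy for per-thread register reuse, which is the trade-off the paper's 32/64/128 bytes/thread experiments explore.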
International Journal of Computer Vision and Image Processing, 2011
A novel Adaptive Lossy Image Compression (ALIC) technique is proposed to achieve a high compression ratio by reducing the number of source symbols through the application of an efficient technique. The proposed algorithm is based on processing the discrete cosine transform (DCT) of the image to extract the highest-energy coefficients, in addition to applying one of the novel quantization schemes proposed in the present work. This method is straightforward and simple. It does not need complicated calculations; therefore, the hardware implementation is easy. Experimental comparisons are carried out to compare the performance of the proposed technique with those of standard techniques such as JPEG. The experimental results show that the proposed compression technique achieves a high compression ratio with a higher peak signal-to-noise ratio than that of JPEG at low bit rates, without the visual degradation that appears in the case of JPEG.
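The core step described above, keeping only the highest-energy DCT coefficients, can be sketched in a few lines. The 1-D orthonormal DCT-II and the selection rule below are a minimal illustration of the idea, not the paper's exact quantization scheme:

```python
import math

def dct_1d(x):
    """Orthonormal DCT-II of a real sequence."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def keep_top_k(coeffs, k):
    """Zero all but the k highest-magnitude (highest-energy) coefficients."""
    keep = set(sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))[:k])
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]
```

Discarding the low-energy coefficients is what reduces the number of source symbols before entropy coding, which is where the compression gain comes from.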
Multimedia is among the most popular data shared on the Web, and its protection via encryption techniques is of vast interest. In this paper, a secure and computationally feasible algorithm called the Optimized Multiple Huffman Tables (OMHT) technique is proposed. OMHT uses a statistical-model-based compression method to generate different tables from the same data type (images or videos) to be encrypted, increasing both the compression efficiency and the security of the tables used. A systematic study on how to strategically integrate different atomic operations to build a multimedia encryption system is presented. The resulting system can provide superior performance over other techniques through both its generic encryption and its simple adaptation to multimedia, in terms of a joint consideration of security and bitrate overhead. The effectiveness and robustness of this scheme are verified by measuring its security strength and comparing its computational cost against other techniques. The proposed technique guarantees security and speed without a noticeable increase in encoded image size.
Proceedings of the 2017 Federated Conference on Computer Science and Information Systems, Sep 24, 2017
In this paper, an image steganography approach is presented that divides the cover image into 2×2 non-overlapping pixel blocks. The upper-left pixel of each block embeds a certain number of bits of the secret bit stream, whereas the remaining pixels of the same block embed the secret data using a modified version of the pixel-value-differencing (PVD) method that considers embedding secret data into both horizontal and vertical edges, unlike traditional image steganography approaches. The experimental results show that the proposed approach perceptually outperforms competing approaches in terms of the standard PSNR and the complex wavelet SSIM index. In turn, the imperceptibility of the stego-image is improved with a comparable bit-embedding capacity.
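In classic PVD (Wu and Tsai's scheme, on which modified variants like the one above build), the number of secret bits a pixel pair can carry grows with the difference between the pixels: edges tolerate larger changes than smooth regions. A minimal sketch of that capacity rule, using the classic range table rather than the paper's exact one:

```python
# Classic Wu-Tsai quantization ranges for the pixel difference |p1 - p2|.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def pvd_capacity(p1, p2):
    """Number of secret bits a pixel pair can hold: log2 of the width of
    the range its difference falls into."""
    d = abs(p1 - p2)
    for lo, hi in RANGES:
        if lo <= d <= hi:
            return (hi - lo + 1).bit_length() - 1
    return 0
```

A smooth pair thus carries 3 bits, while a strong edge pair carries up to 7, which is why edge-aware embedding preserves imperceptibility.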
In geometric image registration, illumination variations that exist between image pairs tend to degrade the precision of the registration, which can negatively affect subsequent processing. In this paper, we present a model to improve the sub-pixel geometric registration precision of image pairs when there exist locally variant illuminations of arbitrary shape. This model extends our previous work to include multiple local shading levels of arbitrary shape, where the ill-posed problem is conditioned by constraining the solution to an estimated number of shading levels. The proposed model is solved using least-squares estimation and is cast in an iterative coarse-to-fine framework, which allows a convergence rate that is similar to competing intensity-based image registration approaches. The primary advantage of the proposed approach is the nearly tenfold improvement in sub-pixel precision for the registration when convergence is obtained in this class of technique.
Keywords: Sub-pixel geometric registration · Global and local illumination variations
In this paper, we propose a lossless (LS) image compression technique combining a prediction step with the integer wavelet transform. The prediction step proposed in this technique is a simplified version of the median edge detector algorithm used with JPEG-LS. First, the image is transformed using the prediction step and a difference image is obtained. The difference image then goes through an integer wavelet transform, and the transform coefficients are used in the lossless codeword assignment. The algorithm is simple, and test results show that it yields higher compression ratios than competing techniques. As well, computational cost is kept comparable to competing techniques.
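The median edge detector (MED) predictor from JPEG-LS, which the prediction step above simplifies, chooses among the left, upper, and planar predictions for each pixel based on its causal neighbors. A minimal sketch of the standard MED rule (the paper's simplified variant may differ):

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detector.
    a = left neighbor, b = above neighbor, c = upper-left neighbor."""
    if c >= max(a, b):
        return min(a, b)   # likely a vertical/horizontal edge
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # smooth region: planar prediction
```

Subtracting this prediction from each pixel yields the low-entropy difference image that the integer wavelet transform then decorrelates further.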
2015 IEEE International Conference on Image Processing (ICIP), 2015
In this paper, a new parallel scheme for the deblocking filter (DF) in High Efficiency Video Coding (HEVC) is proposed. This scheme is based on a parallel-straight processing order that improves the performance of the HEVC DF. One of the challenges in HEVC is coding time due to computational complexity; deblocking in HEVC is responsible for nearly 15% of the time consumed while performing video compression. As such, a parallel-straight processing order is introduced that allows improved concurrency for deblocking multiple horizontal and vertical edges. For our examined 4-core case, the approach achieves full utilization of all cores with a number of DF steps (i.e., two edges or more) that is 27% smaller compared to recent techniques. A four-core parallel architecture is also proposed. This new parallel scheme is implemented on a graphics processing unit (GPU) rather than a CPU to further speed up coding time. Experimental results demonstrate the ability to achieve decoded-frame processing times as low as 5.0 ms for All-Intra filtering and 3.0 ms for Low-Delay filtering, corresponding to speedup factors as high as 12 and 7, respectively, compared to the HEVC reference.
In this paper, we modify the band offset (BO) design of the sample adaptive offset (SAO) filter of HEVC to improve video quality performance. In the BO design of the SAO filter, the sample values are classified into bands, each of which has a unique offset to be sent to the decoder. As the number of bands within a group of bands (GoB) increases, the corresponding number of offsets increases, thus requiring a larger buffer memory size at the decoder. Conventional designs use variant GoB designs to reduce the number of offsets; however, both the rate-distortion and the encoding/decoding time of those approaches were comparable to those of the HEVC standard. The proposed BO design outperforms competing designs by reducing both the memory buffer size and the corresponding encoding/decoding time by an average of 75% and 12%, respectively, and improving the rate-distortion performance without changing the compression ratio, using real video sequences.
Security and efficiency are the main grid authentication protocols objectives. Practical efficien... more Security and efficiency are the main grid authentication protocols objectives. Practical efficient certificate-less public key cryptography-based authentication protocol is widely acknowledged as a challenging issue in grid computing environment. Unfortunately, certificate-less authenticated key agreement protocols rely on bilinear pairing that is extremely computational expensive. A recently published protocol claims meeting some attacks, however, there are some shortcomings in such a protocol. In this paper, such a competing protocol is mathematically criticized. Then, a secure and efficient grid pairing-free certificate-less two-party authenticated key agreement protocol is proposed for grid computing. Finally, the proposed protocol is mathematically proved as secure and efficient in the Lippold model, a very strong security model, under the gap Diffie-Hellman assumption.
International Journal of Advances in Intelligent Informatics
The image forgery process can be simply defined as inserting some objects of different sizes to v... more The image forgery process can be simply defined as inserting some objects of different sizes to vanish some structures or scenes. Satellite images can be forged in many ways, such as copy-paste, copy-move, and splicing processes. Recent approaches present a generative adversarial network (GAN) as an effective method for identifying the presence of spliced forgeries and identifying their locations with a higher detection accuracy of large- and medium-sized forgeries. However, such recent approaches clearly show limited detection accuracy of small-sized forgeries. Accordingly, the localization step of such small-sized forgeries is negatively impacted. In this paper, two different approaches for detecting and localizing small-sized forgeries in satellite images are proposed. The first approach is inspired by a recently presented GAN-based approach and is modified to an enhanced version. The experimental results manifest that the detection accuracy of the first proposed approach noticea...
Blind image steganalysis (BIS) is the process of detecting whether an input image has hidden data... more Blind image steganalysis (BIS) is the process of detecting whether an input image has hidden data or not, without any prior known information ( i.e., blind) on the applied steganography technique. Recent BIS approaches typically suffer from limited detection accuracy and higher computational cost due to, e.g., pre-processing. In this paper, the proposed BIS approach discards the pre-processing step, so that the computational cost is reduced. As well, significant modifications on a recent convolution neural network (CNN)-model are considered in order to enhance the detection accuracy. First, an efficient parameters initialization is considered. Second, a cyclic learning rate and the LReLU activation function are used, during the learning phase, for faster convergence with noticeably higher detection accuracy. Finally, a hybrid technique of model and data parallelism techniques is performed in both convolution and fully connected layers, respectively, thus significantly reducing the c...
The importance of cryptography on ensuring security or integrity of the electronic data transacti... more The importance of cryptography on ensuring security or integrity of the electronic data transaction had become higher during the past few years. Multiple security protocols are currently using various block ciphers. One of the most widely used block ciphers is the Advanced Encryption Standard (AES) which is chosen as a standard for its higher efficiency and stronger security than its competitors. Unfortunately, the encryption and decryption processes of AES takes a considerable amount of time for large data size. The GPU is an attractive platform for accelerating block ciphers and other cryptography algorithms due to its massively parallel processing power. In this work, an implementation of the AES-128 ECB Encryption on three different GPU architectures (Kepler, Maxwell and Pascal) has been presented. The results show that encryption speeds with 207 Gbps on the NVIDIA GTX TITAN X (Maxwell) and 280 Gbps on the NVIDIA GTX 1080 (Pascal) have been achieved by performing new optimization techniques using 32bytes/thread granularity.
2018 20th International Conference on Transparent Optical Networks (ICTON), Jul 1, 2018
Multiprotocol label switched (MPLS) networks were introduced to enhance the network`s service pro... more Multiprotocol label switched (MPLS) networks were introduced to enhance the network`s service provisioning and optimize its performance using multiple protocols along with label switched based networking technique. With the addition of traffic engineering entity in MPLS domain, there is a massive increase in the networks resource management capability with better quality of services (QoS) provisioning for end users. Routing protocols play an important role in MPLS networks for network traffic management, which uses exact and approximate algorithms. There are number of artificial intelligence-based optimization algorithms which can be used for the optimization of traffic engineering in MPLS networks. The paper presents an optimization model for MPLS networks and proposed dolphin-echolocation algorithm (DEA) for optimal path computation. For Network with different nodes, both algorithms performance has been investigated to study their convergence towards the production of optimal solutions. Furthermore, the DEA algorithm will be compared with the bat algorithm to examine their performance in MPLS network optimization. Various parameters such as mean, minimum /optimal fitness function values and standard deviation.
IOP Conference Series: Materials Science and Engineering, 2019
Blind image Steganalysis is the binomial classification problem of determining if an image contai... more Blind image Steganalysis is the binomial classification problem of determining if an image contains hidden data or not. Classification problems have two main steps: i) feature extraction step and ii) classification step. Traditional blind image steganalysis approaches use handcrafted filters for the first step and use classifiers such as support vector machine (SVM) for the second step. The rapid development of steganographic techniques makes it harder to design new effective handcrafted filters, which negatively affect the feature extraction step. Recently, Convolutional Neural networks (CNNs) are introduced as an auspicious solution for this problem. CNN-based steganalysis can automatically extract features from the input images without using handcrafted filters. Although considerable success has been achieved with CNNs, CNN-based applications are considered as time consuming applications. Accordingly, it is important to quicken the CNN-based steganalysis approaches training in or...
Advances in Electrical and Electronic Engineering, 2016
In this paper, we present a modified interview prediction Multiview Video Coding (MVC) scheme fro... more In this paper, we present a modified interview prediction Multiview Video Coding (MVC) scheme from the perspective of viewer's interactivity. When a viewer requests some view(s), our scheme leads to lower transmission bit-rate. We develop an interactive multiview video streaming system exploiting that modified MVC scheme. Conventional interactive multiview video systems require high bandwidth due to redundant data being transferred. With real data test sequences, clear improvements are shown using the proposed interactive multiview video system compared to competing ones in terms of the average transmission bit-rate and storage size of the decoded (i.e., transferred) data with comparable rate-distortion.
The wide adoption of the Wireless Senor Networks (WSNs) applications around the world has increas... more The wide adoption of the Wireless Senor Networks (WSNs) applications around the world has increased the amount of the sensor data which contribute to the complexity of Big Data. This has emerged the need to the use of in-network data processing techniques which are very crucial for the success of the big data framework. This article gives overview and discussion about the state-of-theart of the data mining and data fusion techniques designed for the WSNs. It discusses how these techniques can prepare the sensor data inside the network (in-network) before any further processing as big data. This is very important for both of the WSNs and the big data framework. For the WSNs, the in-network pre-processing techniques could lead to saving in their limited resources. For the big data side, receiving a clean, non-redundant and relevant data would reduce the excessive data volume, thus an overload reduction will be obtained at the big data processing platforms and the discovery of values from these data will be accelerated.
Innovations in Bio-inspired Computing and Applications, 2014
This article presents a content-based image classification system to monitor the ripeness process... more This article presents a content-based image classification system to monitor the ripeness process of tomato via investigating and classifying the different maturity/ripeness stages. The proposed approach consists of three phases; namely pre-processing, feature extraction, and classification phases. Since tomato surface color is the most important characteristic to observe ripeness, this system uses colored histogram for classifying ripeness stage. It implements Principal Components Analysis (PCA) along with Support Vector Machine (SVM) algorithms for feature extraction and classification of ripeness stages, respectively. The datasets used for experiments were constructed based on real sample images for tomato at different stages, which were collected from a farm at Minia city. Datasets of 175 images and 55 images were used as training and testing datasets, respectively. Training dataset is divided into 5 classes representing the different stages of tomato ripeness. Experimental results showed that the proposed classification approach has obtained ripeness classification accuracy of 92.72%, using SVM linear kernel function with 35 images per class for training.
Advances in Electrical and Electronic Engineering, 2017
The Advanced Encryption Standard (AES) is One of the most popular symmetric block cipher because ... more The Advanced Encryption Standard (AES) is One of the most popular symmetric block cipher because it has better efficiency and security. The AES is computation intensive algorithm especially for massive transactions. The Graphics Processing Unit (GPU) is an amazing platform for accelerating AES. it has good parallel processing power. Traditional approaches for implementing AES using GPU use 16 byte per thread as a default granularity. In this paper, the AES-128 algorithm (ECB mode) is implemented on three different GPU architectures with different values of granularities (32,64 and 128 bytes/thread). Our results show that the throughput factor reaches 277 Gbps, 201 Gbps and 78 Gbps using the NVIDIA GTX 1080 (Pascal), the NVIDIA GTX TITAN X (Maxwell) and the GTX 780 (Kepler) GPU architectures.
International Journal of Computer Vision and Image Processing, 2011
A novel Adaptive Lossy Image Compression (ALIC) technique is proposed to achieve high compression... more A novel Adaptive Lossy Image Compression (ALIC) technique is proposed to achieve high compression ratio by reducing the number of source symbols through the application of an efficient technique. The proposed algorithm is based on processing the discrete cosine transform (DCT) of the image to extract the highest energy coefficients in addition to applying one of the novel quantization schemes proposed in the present work. This method is straightforward and simple. It does not need complicated calculation; therefore the hardware implementation is easy to attach. Experimental comparisons are carried out to compare the performance of the proposed technique with those of other standard techniques such as the JPEG. The experimental results show that the proposed compression technique achieves high compression ratio with higher peak signal to noise ratio than that of JPEG at low bit rate without the visual degradation that appears in case of JPEG.
Multimedia is one of the most popular data shared in the Web, and the protection of it via encryp... more Multimedia is one of the most popular data shared in the Web, and the protection of it via encryption techniques is of vast interest. In this paper, a secure and computationally feasible Algorithm called Optimized Multiple Huffman Tables (OMHT) technique is proposed. OMHT depends on using statistical-model-based compression method to generate different tables from the same data type of images or videos to be encrypted leading to increase compression efficiency and security of the used tables. A systematic study on how to strategically integrate different atomic operations to build a multimedia encryption system is presented. The resulting system can provide superior performance over other techniques by both its generic encryption and its simple adaptation to multimedia in terms of a joint consideration of security, and bitrate overhead. The effectiveness and robustness of this scheme is verified by measuring its security strength and comparing its computational cost against other techniques. The proposed technique guarantees security, and fastness without noticeable increase in encoded image size.
Proceedings of the 2017 Federated Conference on Computer Science and Information Systems, Sep 24, 2017
In this paper, an image steganography approach is presented that divides the cover image into 2×2 non-overlapping pixel blocks. The upper-left pixel of each block embeds a certain number of bits of the secret bit stream, while the remaining pixels of the same block embed secret data using a modified version of the pixel-value-differencing (PVD) method that, unlike traditional image steganography approaches, considers embedding into both horizontal and vertical edges. The experimental results show that the proposed approach perceptually outperforms competing approaches in terms of the standard PSNR and the complex-wavelet SSIM index. In turn, the imperceptibility of the stego-image is improved with a comparable bit-embedding capacity.
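The first stage, embedding bits into the upper-left pixel of each 2×2 block, can be sketched with a simple k-LSB substitution. This is a toy version: the paper's actual bit-allocation rule and the modified-PVD stage for the remaining three pixels are omitted, and `k=2` is an assumption.

```python
import numpy as np

def embed_upper_left(cover, bits, k=2):
    """Embed `bits` (a '0'/'1' string) into the k LSBs of the upper-left
    pixel of each 2x2 block of an 8-bit grayscale cover image."""
    stego = cover.copy()
    pos = 0
    for r in range(0, cover.shape[0] - 1, 2):
        for c in range(0, cover.shape[1] - 1, 2):
            if pos >= len(bits):
                return stego
            chunk = bits[pos:pos + k].ljust(k, "0")   # zero-pad the last chunk
            stego[r, c] = (int(cover[r, c]) & ~((1 << k) - 1)) | int(chunk, 2)
            pos += k
    return stego

def extract_upper_left(stego, nbits, k=2):
    """Read back the k LSBs of every upper-left pixel, then truncate."""
    out = []
    for r in range(0, stego.shape[0] - 1, 2):
        for c in range(0, stego.shape[1] - 1, 2):
            out.append(format(int(stego[r, c]) & ((1 << k) - 1), f"0{k}b"))
    return "".join(out)[:nbits]

cover = np.arange(64, dtype=np.uint8).reshape(8, 8)
bits = "1011010011"
stego = embed_upper_left(cover, bits)
```

With `k=2`, each modified pixel changes by at most 3 levels, which is why the upper-left channel contributes little perceptual distortion.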
In geometric image registration, illumination variations between image pairs tend to degrade registration precision, which can negatively affect subsequent processing. In this paper, we present a model to improve the sub-pixel geometric registration precision of image pairs under locally variant illumination of arbitrary shape. This model extends our previous work to multiple local shading levels of arbitrary shape, where the ill-posed problem is conditioned by constraining the solution to an estimated number of shading levels. The proposed model is solved using least-squares estimation and is cast in an iterative coarse-to-fine framework, which allows a convergence rate similar to competing intensity-based image registration approaches. The primary advantage of the proposed approach is the nearly tenfold improvement in sub-pixel registration precision when convergence is obtained in this class of technique. Keywords: Sub-pixel geometric registration; Global and local illumination variations
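The least-squares flavour of the illumination handling can be illustrated with a toy gain/bias fit between an image pair. This is only a stand-in for the paper's multi-level shading model: a single global gain `a` and bias `b` (rather than per-region shading levels) are simplifying assumptions.

```python
import numpy as np

def fit_illumination(ref, moving):
    """Least-squares fit of the photometric model moving = a*ref + b,
    solved with an ordinary linear system over all pixels."""
    A = np.column_stack([ref.ravel(), np.ones(ref.size)])
    (a, b), *_ = np.linalg.lstsq(A, moving.ravel(), rcond=None)
    return a, b

ref = np.linspace(0.0, 1.0, 256).reshape(16, 16)
moving = 0.5 * ref + 0.2          # synthetic pair: known gain 0.5, bias 0.2
a, b = fit_illumination(ref, moving)
```

In the paper's setting, a fit like this would be estimated per shading region and interleaved with the geometric update inside the coarse-to-fine iterations.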
In this paper, we propose a lossless (LS) image compression technique combining a prediction step with the integer wavelet transform. The prediction step proposed in this technique is a simplified version of the median edge detector algorithm used in JPEG-LS. First, the image is transformed using the prediction step to obtain a difference image. The difference image then goes through an integer wavelet transform, and the transform coefficients are used in the lossless codeword assignment. The algorithm is simple, and test results show that it yields higher compression ratios than competing techniques while keeping the computational cost comparable.
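The median edge detector (MED) predictor from JPEG-LS that the prediction step builds on can be sketched directly. This is the standard predictor; the paper's simplified version and the integer wavelet stage are not shown.

```python
import numpy as np

def med_predict(img):
    """JPEG-LS median edge detector: predict each pixel from its left (a),
    above (b), and upper-left (c) neighbours, and return the residual image."""
    h, w = img.shape
    x = img.astype(int)
    pred = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            a = x[i, j - 1] if j > 0 else 0
            b = x[i - 1, j] if i > 0 else 0
            c = x[i - 1, j - 1] if i > 0 and j > 0 else 0
            if c >= max(a, b):        # horizontal or vertical edge above/left
                pred[i, j] = min(a, b)
            elif c <= min(a, b):
                pred[i, j] = max(a, b)
            else:                     # smooth region: planar prediction
                pred[i, j] = a + b - c
    return x - pred

res = med_predict(np.full((4, 4), 7))
```

On a flat region the residual collapses to zero everywhere except the first pixel, which is what makes the difference image cheap to code losslessly.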
2015 IEEE International Conference on Image Processing (ICIP), 2015
In this paper, a new parallel scheme for the deblocking filter (DF) in high efficiency video coding (HEVC) is proposed. This scheme is based on a parallel-straight processing order that improves the performance of the HEVC DF. One of the challenges in HEVC is coding time due to computational complexity; deblocking is responsible for nearly 15% of the time consumed while performing video compression. As such, a parallel-straight processing order is introduced that allows improved concurrency when deblocking multiple horizontal and vertical edges. For our examined 4-core case, the approach achieves full utilization of all cores with 27% fewer DF steps (i.e., two edges or more per step) compared to recent techniques. A four-core parallel architecture is also proposed. This new parallel scheme is implemented on a graphics processing unit (GPU) rather than a CPU to further speed up coding time. Experimental results demonstrate decoded-frame processing times as low as 5.0 ms for All-Intra filtering and 3.0 ms for Low-Delay filtering, corresponding to speedup factors as high as 12 and 7, respectively, compared to the HEVC reference.
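The data independence that makes parallel deblocking possible can be illustrated with a toy filter applied to all vertical block edges concurrently. This is a heavily simplified sketch: the two-tap smoothing stands in for the real HEVC filter taps, and Python threads illustrate the ordering idea rather than the paper's GPU implementation or its parallel-straight schedule.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def filter_vertical_edge(frame, col):
    """Toy deblocking of one vertical block edge: blend the two sample
    columns straddling the boundary (a stand-in for the HEVC filter taps)."""
    left, right = frame[:, col - 1].copy(), frame[:, col].copy()
    frame[:, col - 1] = (3 * left + right) // 4
    frame[:, col] = (left + 3 * right) // 4

def deblock_parallel(frame, block=8, workers=4):
    """Filter all vertical block edges concurrently; the edges are mutually
    independent because each one touches only its own pair of columns."""
    edges = range(block, frame.shape[1], block)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda c: filter_vertical_edge(frame, c), edges))
    return frame

frame = np.zeros((4, 32), dtype=int)
frame[:, 8:] = 8                      # a hard step at the first block edge
out = deblock_parallel(frame)
```

The real scheme additionally interleaves horizontal-edge passes and chooses an order that keeps all four cores busy; the sketch only shows why edges in one pass can be dispatched simultaneously.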
In this paper, we modify the band offset (BO) design of the sample adaptive offset (SAO) filter of HEVC to improve video quality performance. In the BO design of the SAO filter, sample values are classified into bands, each with a unique offset to be sent to the decoder. As the number of bands within a group of bands (GoB) increases, the corresponding number of offsets increases, requiring a larger buffer memory at the decoder. Conventional designs use variant GoB layouts to reduce the number of offsets; however, both the rate-distortion and the encoding/decoding time of those approaches were comparable to the HEVC standard. The proposed BO design outperforms competing designs by reducing both the memory buffer size and the corresponding encoding/decoding time by an average of 75% and 12%, respectively, and improving the rate-distortion performance without changing the compression ratio on real video sequences.
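The band-classification step of BO can be sketched as below, assuming 8-bit samples, 32 bands, and a signalled group of four consecutive bands as in the HEVC BO design; the rate-distortion search that selects `start_band` and the `offsets` at the encoder is omitted.

```python
import numpy as np

def band_offset(samples, offsets, start_band, num_bands=4, bit_depth=8):
    """Toy SAO band offset: split the sample range into 32 equal bands;
    samples falling in the signalled group [start_band, start_band+num_bands)
    get their band's offset added, then results are clipped to range."""
    shift = bit_depth - 5                 # 32 bands -> band index = sample >> shift
    band = samples >> shift
    out = samples.astype(int)
    for i in range(num_bands):
        out[band == start_band + i] += offsets[i]
    return np.clip(out, 0, (1 << bit_depth) - 1)

# Samples in bands 0, 1, 2, 31; only bands 1..4 receive offsets here.
out = band_offset(np.array([0, 8, 16, 250], dtype=np.uint8),
                  offsets=[2, -1, 0, 0], start_band=1)
```

The buffer-memory argument in the abstract follows directly from this structure: the decoder must store one offset per band in the group, so shrinking or restructuring the GoB shrinks that table.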
Papers by Mohamed Fouad