Papers by DrEng. Islam Amro
ABSTRACT In this paper, we exploit the Hamming Correction Code Compressor (HCDC) to compress speech signals of different sampling rates and bit resolutions. The compression is applied directly to the samples in their Pulse Code Modulation (PCM) form, with no prior processing. The resulting compression ratio averaged around 2.3 for the tested datasets. The results were compared against the Free Lossless Audio Codec (FLAC) package, which uses entropy coding methods (Run-Length/Golomb-Rice). HCDC achieved a slightly better compression ratio than FLAC on 8-bit resolution samples only, while on 16-bit samples its compression ratio fell below FLAC's, averaging around 1.4. The main advantage of using HCDC in speech compression is its relative simplicity and its lower time and space consumption compared to other lossless speech compression techniques.
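As a reading aid, the sketch below illustrates the block-flag scheme that HCDC-style compressors use, under our reading of the abstract: the bit stream is split into (2^p − 1)-bit blocks, and a block that happens to be a valid Hamming codeword is stored as its data bits behind a one-bit flag, while any other block is stored raw behind the opposite flag. This is a minimal illustration, not the authors' reference implementation; all function names are ours.

```python
# Minimal sketch of an HCDC-style pass (our illustration, not the
# authors' code). A block of n = 2**p - 1 bits that is a valid Hamming
# codeword is stored as flag 1 plus its n - p data bits; any other
# block is stored as flag 0 plus the raw n bits.

def hamming_syndrome(block):
    """XOR of the 1-based positions of set bits; zero means the block
    is a valid Hamming codeword."""
    syndrome = 0
    for i, bit in enumerate(block, start=1):
        if bit:
            syndrome ^= i
    return syndrome

def hcdc_compress(bits, p=3):
    """Compress a list of 0/1 ints; returns the output bit list."""
    n = (1 << p) - 1              # codeword length, e.g. 7 for p = 3
    full = (len(bits) // n) * n   # only whole blocks are processed
    out = []
    for start in range(0, full, n):
        block = bits[start:start + n]
        if hamming_syndrome(block) == 0:
            # valid codeword: keep only the data (non-parity) positions
            data = [b for i, b in enumerate(block, start=1)
                    if i & (i - 1) != 0]   # skip power-of-two positions
            out.append(1)                  # flag: compressed block
            out.extend(data)               # n - p data bits
        else:
            out.append(0)                  # flag: raw block
            out.extend(block)              # all n bits unchanged
    out.extend(bits[full:])   # tail bits shorter than one block pass through
    return out
```

For parity p = 3 a valid block shrinks from 7 bits to 5 (flag + 4 data bits) while an invalid block grows to 8, so the achievable ratio depends entirely on the fraction of naturally valid blocks in the input.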
ABSTRACT In this paper, we worked on modeling packet loss using very short time segments. The model suggested in this paper is based on binary time series; it investigates the probability of loss occurrence and loss dependency using Markov models. A well-known problem in time series modeling is achieving segment stationarity; this obstacle dictates using long time segments in order to keep average variations small. We suggested a short-segment cumulative modeling algorithm using segments of 15 minutes instead of 2 hours, applying a finer segmentation and estimating the model values for a given segment dynamically and cumulatively; this was achieved with an error of less than 0.001 per segment. These results were compared with models created using very long segments of 2 hours; the overall error between the two models (short segments and long segments) was less than 0.001. The dataset used was real data obtained from the EUMEDConnect network (a Mediterranean research network connecting 6 Arab countries) on the Palestinian side. The research exploited a dataset of 72 hours. Each country was represented by a randomly selected 12-hour dataset, each dataset was divided into two-hour segments, and each segment was modeled as a binary time series. Of the 36 segments, 26 were found stationary. For stationary segments, the research investigated the segment correlation and used it as a modeling reference: 11 segments were modeled using a Bernoulli model, 12 segments were modeled using a 2-state Markov chain, and 5 segments showed k-th order Markov chain tendencies with orders 2, 3, 8, 27, and 38. The models were built under a 0.05 threshold on the average-filter condition for stationarity and with 95% confidence for lag dependency selection. The comparison between long-segment and short-segment modeling showed errors of around 0.001 on average between the two approaches. The importance of this research is the ability to predict packet losses further ahead at early stages of loss.
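To make the segment modeling above concrete, here is a minimal sketch, in our own notation, of fitting the two simplest models the paper mentions to a binary loss trace: a Bernoulli model, which assumes independent losses, and a 2-state Markov chain, whose transition probabilities capture dependency on the previous packet. The trace and all names are illustrative assumptions; the paper's stationarity filtering and lag-selection steps are not reproduced here.

```python
# A minimal sketch (not the paper's code) of fitting the two simplest
# loss models named above to a binary loss trace, where 1 = packet
# lost and 0 = packet received.

def fit_bernoulli(trace):
    """Estimate P(loss) assuming packets are lost independently."""
    return sum(trace) / len(trace)

def fit_two_state_markov(trace):
    """Estimate the transition probabilities of a 2-state Markov chain.

    Returns (p01, p10): p01 = P(loss | previous packet received),
    p10 = P(received | previous packet lost).
    """
    counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for prev, cur in zip(trace, trace[1:]):
        counts[(prev, cur)] += 1
    from0 = counts[(0, 0)] + counts[(0, 1)]
    from1 = counts[(1, 0)] + counts[(1, 1)]
    p01 = counts[(0, 1)] / from0 if from0 else 0.0
    p10 = counts[(1, 0)] / from1 if from1 else 0.0
    return p01, p10

trace = [0, 0, 1, 1, 1, 0, 0, 0, 1, 0]   # illustrative trace only
print(fit_bernoulli(trace))               # 0.4
print(fit_two_state_markov(trace))        # (0.4, 0.5)
```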
In this paper, we used the lossless compression algorithm of the Hamming Correction Code Compressor (HCDC) to compress the parameters of the ITU-T G.729 codec, the Conjugate Structure Algebraic Code Excited Linear Prediction standard. This standard has two rates: the standard rate at 8 kbps and the Annex D rate at 6.4 kbps. The codec parameters were generated using the standard ITU source code. These parameters include the linear prediction coefficients represented as LSPs, gain, and excitation bits, forming 80 bits per frame for the standard rate and 64 bits per frame for the Annex D rate; these bits are computed over frames of 10 ms each. In this paper, the HCDC compression algorithm was tested against both rates: the original signal's frames were grouped into blocks of four frames (320 bits for the 80-bit frames and 256 bits for the Annex D frames), and the compression performance was then evaluated. The test dataset consisted of the voices of four adult English-speaking males and females. HCDC was able to reduce the transmission rate by 50% on average for both codec rates. The results were compared to the entropy compression methods (Golomb-Rice) implemented in the FLAC package, which reduced the data rate by 22% on average. The overall compression performance was evaluated by averaging the compression ratios over all frames of each sample in the dataset. HCDC was able to compress only at parity = 3; for other parity values a few blocks were valid, and the overall compression ratio for frames was below 1.
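To make the frame grouping and rate arithmetic above concrete, here is a short sketch, our own illustration rather than the paper's code, computing the grouped block sizes and the effective bitrates before and after the reported 50% average reduction.

```python
# Our own illustration of the rate arithmetic described above: G.729
# emits one parameter frame every 10 ms, so bits_per_frame / 10 ms is
# the bitrate, and four frames are grouped into one HCDC input block.

FRAME_MS = 10  # G.729 frame duration in milliseconds

for rate_name, bits_per_frame in [("standard", 80), ("Annex D", 64)]:
    block_bits = 4 * bits_per_frame           # four frames per HCDC block
    bitrate_kbps = bits_per_frame / FRAME_MS  # bits per millisecond == kbps
    after_hcdc = bitrate_kbps * 0.5           # the reported ~50% average cut
    print(f"{rate_name}: block = {block_bits} bits, "
          f"{bitrate_kbps:.1f} kbps -> ~{after_hcdc:.1f} kbps")
# standard: block = 320 bits, 8.0 kbps -> ~4.0 kbps
# Annex D:  block = 256 bits, 6.4 kbps -> ~3.2 kbps
```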
ABSTRACT In this paper, we exploit the Hamming Correction Code Compressor (HCDC) to compress Code Excited Linear Prediction frame parameters. These parameters include the linear prediction coefficients, gain, and excitation bits; the excitation is the DCT residual of the signal frame, consisting of 40 coefficients, each quantized using 4 bits. For the signals used in the experiments, the total per frame was 261 bits, for a transmission rate of 5.22 kbps. For each sample of the male and female dataset, we segmented the signal into short frames of 20 ms and performed compression over these frames. We were able to reduce the transmission rate by 50% on average using HCDC. The results were compared to the entropy compression methods (Golomb-Rice) implemented in the FLAC package, which reduced the data rate by 22% on average. The overall compression performance was calculated by averaging the compression ratios over all frames of each sample. HCDC was able to compress only at parity = 3; for other parity values a few blocks were valid, but the overall compression ratio for the frames was below 1.
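Since the abstract specifies the excitation as a 40-coefficient DCT residual held in 4 bits per coefficient, the toy sketch below shows one way such a parameterization could look. The uniform quantizer, the random stand-in frame, and all names are our assumptions, not the paper's actual quantizer design.

```python
# A toy sketch, entirely our own assumption rather than the paper's
# quantizer design, of the parameterization described above: a frame's
# DCT residual of 40 coefficients, each held in 4 bits, giving
# 40 * 4 = 160 excitation bits per frame.
import numpy as np
from scipy.fft import dct, idct

def quantize(coeffs, bits=4):
    """Uniform scalar quantization of each coefficient to 2**bits levels."""
    levels = 2 ** bits
    lo, hi = float(coeffs.min()), float(coeffs.max())
    step = (hi - lo) / (levels - 1) or 1.0   # guard against a flat frame
    indices = np.round((coeffs - lo) / step).astype(int)  # values in 0..15
    return indices, lo, step

def dequantize(indices, lo, step):
    return lo + indices * step

rng = np.random.default_rng(0)
frame = rng.standard_normal(40)        # stand-in for one residual frame
coeffs = dct(frame, norm="ortho")      # 40 DCT coefficients
idx, lo, step = quantize(coeffs)
recon = idct(dequantize(idx, lo, step), norm="ortho")  # reconstruction
print("excitation bits per frame:", idx.size * 4)      # 160
```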
المجلة الفلسطينية للتربية المفتوحة عن بعد (Palestinian Journal for Open Learning & e-Learning), 2015
This paper addresses risk assessment in organizations lacking benchmarking and risk assessment references. We started with a strategic conceptualization of the information technology services an organization depends on. These services were viewed as network services decomposed into basic service elements, expressed in terms of the hosts running them and their interconnections. Eventually, we were able to express strategic services' vulnerabilities in terms of host vulnerabilities. Closing this gap led us to construct a risk reference for the organizational strategic services. Using relevant information about these vulnerabilities, we introduced a risk probability model, a risk impact model, and a risk weighting approach using the Borda count. We followed a step-by-step approach to build the risk model with a holistic view. We implemented the suggested model on Al-Quds Open University's (QOU) IT infrastructure as a case study and were able to derive the strategic services' risks and the overall organizational IT risk.
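For readers unfamiliar with Borda-count weighting, the sketch below shows the mechanism in miniature: each criterion ranks the risks, each risk earns points inversely proportional to its rank, and the sums give the overall ordering. The two criteria and the service names are hypothetical; the paper's actual probability and impact models are not reproduced here.

```python
# A minimal sketch (our illustration; the paper's exact criteria and
# weights are not given in the abstract) of Borda-count risk weighting:
# each criterion ranks the risks, a risk at rank r out of n earns
# n - 1 - r points, and the points are summed into an overall ordering.

def borda_count(rankings):
    """rankings: list of lists, each an ordering of the same risk names
    from most to least severe under one criterion."""
    n = len(rankings[0])
    scores = {risk: 0 for risk in rankings[0]}
    for ranking in rankings:
        for rank, risk in enumerate(ranking):
            scores[risk] += n - 1 - rank   # top rank earns n - 1 points
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical criteria: a probability ranking and an impact ranking.
by_probability = ["web portal", "mail server", "LMS"]
by_impact      = ["LMS", "web portal", "mail server"]
print(borda_count([by_probability, by_impact]))
# [('web portal', 3), ('LMS', 2), ('mail server', 1)]
```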
Palestinian Journal for Open Learning & e-Learning, 2015
Al-Quds University, May 10, 2005
Introduction to Networking and Signal Processing Approaches 1.1 Introduction Voice over Internet Protocol (VoIP) is a technology that aims to transmit voice as packets over the Internet, so that wired and wireless systems carry the same type of data and serve the same purpose, rather than maintaining two separate systems, one for voice and another for data. This is part of a bigger problem: converting all types of communication into data communication and converging to the point where the use of different media serves the same purpose and follows the same constraints, while respecting the particular constraints of each type of media, e.g., video constraints, audio constraints, etc.
This research exploits a lossless compression method based on Hamming codes to compress speech signals with different bit resolutions and sampling rates. The compression is performed directly on the signal in its pulse-code-modulated (PCM) form. For the tested data, the average compression ratio reached up to 2.3. The resulting compression ratios were compared to the classic entropy compression methods (Run-Length/Golomb-Rice) implemented in the FLAC suite. The classic entropy methods performed close to the Hamming code compressor at 16-bit resolution, where the compression ratio averaged around 1.4. The Hamming code compressor performed best at 8-bit resolution, achieving a ratio of 2.3.
المجلة الفلسطينية للتكنولوجيا والعلوم التطبيقية (Palestinian Journal of Technology and Applied Sciences), 2022
This research aims at exploiting the lossless Hamming correction code compression algorithm (HCDC) to reduce the transmission data rate of the GSM 6.10 standard, which shares several similarities with the modern adaptive multi-rate codec in its coefficient calculations and excitation principles. The compression algorithm relies on a property of Hamming codes: the parity bits are computable from the data bits, so a block that forms a valid codeword can be stored without its parity bits. In this research, we chose parity = 3 and data bits = 4. Several iterations were conducted over the compressed frame information to achieve even higher compression ratios. The compression was applied to the GSM 6.10 standard, a variation of Code Excited Linear Prediction (CELP) coding. As test data, voice file samples of two males and two females, sampled at 8 kHz and quantized at 8-bit resolution, were selected; the duration of the files varies from 4 to 6 seconds. Each sample was divided into 20 ms frames, and each frame was expressed using GSM 6.10 with 260 bits of data, including the linear prediction coefficients, pitch period, gain, peak magnitude value, grid position, and sample amplitude. These 260 bits every 20 ms form a data rate of 13 kbps. The 260 bits were subjected to HCDC, and the data rate was reduced by 60%, reaching down to 5 kbps on average. The results were compared to the well-known FLAC lossless audio compression, which achieved only 15% compression. The research did not consider any quality testing since the compression is lossless. The research used the standard ITU libraries for the GSM 6.10 data acquisition and open-source platforms for FLAC.
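The "several iterations" step can be pictured with the short sketch below, which reapplies an HCDC pass (such as the hcdc_compress sketch given earlier on this page) to its own output until the bit count stops shrinking. This is our illustration only: a real implementation would also need per-round headers so the decompressor knows how many rounds to undo.

```python
# Our illustration of iterating an HCDC pass: recompress the output
# until no round yields a smaller bit count. Assumes hcdc_compress
# from the earlier sketch; a practical codec must also record the
# number of rounds so decompression can be reversed round by round.

def hcdc_iterate(bits, p=3, max_rounds=10):
    """Apply hcdc_compress repeatedly; return (final_bits, rounds_used)."""
    rounds = 0
    while rounds < max_rounds:
        out = hcdc_compress(bits, p)
        if len(out) >= len(bits):   # this round gained nothing: stop
            break
        bits, rounds = out, rounds + 1
    return bits, rounds
```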
Medical Teacher, 2017
Competency-based medical education (CBME) is an approach to the design of educational systems or curricula that focuses on graduate abilities or competencies. It has been adopted in many jurisdictions, and in recent years an explosion of publications has examined its implementation and provided a critique of the approach. Assessment in a CBME context is often based on observations or judgments about an individual's level of expertise; it emphasizes frequent, direct observation of performance along with constructive and timely feedback to ensure that learners, including clinicians, have the expertise they need to perform entrusted tasks. This paper explores recent developments since the publication in 2010 of Holmboe and colleagues' description of CBME assessment. Seven themes regarding assessment that arose at the second invitational summit on CBME, held in 2013, are described: competency frameworks, the reconceptualization of validity, qualitative methods, milestones, feedback...
International Journal of Speech Technology, 2011
2013 8th EUROSIM Congress on Modelling and Simulation, 2013
ABSTRACT In this paper, we exploit the lossless Hamming Correction Code Compressor (HCDC) to compress the parameters of the ITU-T G.729 Conjugate Structure Algebraic Code Excited Linear Prediction codec, Annex D, at 6.4 kbps. The codec parameters were generated using the standard ITU source code. These parameters include the linear prediction coefficients represented as LSPs, gain, and excitation bits, forming 64 bits per 10 ms frame. These frames are grouped into blocks of four frames (256 bits) and subjected to the HCDC algorithm. The test dataset consisted of adult English-speaking males and females. HCDC was able to reduce the transmission rate by 50% on average. The results were compared to the entropy compression methods (Golomb-Rice) implemented in the FLAC package, which reduced the data rate by 22% on average. The overall compression performance was calculated by averaging the compression ratios over all frames of each sample. HCDC was able to compress only at parity = 3; for other parity values a few blocks were valid, but the overall compression ratio for the frames was below 1.
2013 8th EUROSIM Congress on Modelling and Simulation, 2013
ABSTRACT In this paper, we worked on modeling packet loss in the EUMEDConnect network (a network connecting 6 Arab countries) on the Palestinian side. This research exploited a dataset of 72 hours. Each country was represented by a randomly selected 12-hour dataset, each dataset was divided into two-hour segments, and each segment was modeled as a binary time series. Of the 36 segments, 26 were found stationary. For stationary segments, the research investigated the segment correlation and used it as a modeling reference: 11 segments were modeled using a Bernoulli model, 12 segments were modeled using a 2-state Markov chain, and 5 segments showed k-th order Markov chain tendencies with orders 2, 3, 8, 27, and 38. The models were built under a 0.05 threshold on the average-filter condition for stationarity and with 95% confidence for lag dependency selection. After modeling each segment independently, an average loss model for each country was calculated from its modeled segments; the confidence of these models was 95%. Afterward, we suggested a cumulative modeling algorithm that applies a finer segmentation over intervals shorter than 2 hours and estimates the model values for a given segment dynamically and cumulatively; this was achieved with an error of less than 0.001 per segment.