Lossless Embedded Compression Algorithm With Context-Based Error Compensation For Video Application
TABLE I
COMPARISON OF THE PROPOSED AVERAGE PREDICTION WITH MED [3] FOR HD VIDEO SEQUENCES

Prediction method    Entropy (32×4 block)    Arithmetic operations/pixel
MED [3]              2.60                    3 comparisons, 1 add, 2 subs
Average              2.76                    2 adds, 1 shift

B. Context-based Prediction Error Compensation

Fig. 2(a) presents the prediction error distribution. In the worst case of compression performance, the prediction error distribution is wider than in the best case. As shown in Fig. 2(b), the prediction error distribution becomes more concentrated around zero when context-based error compensation is applied.

For low complexity and storage efficiency, we quantize the context conditions into 9 steps using the threshold levels T1, T2, and T3 (T1 = 3, T2 = 7, T3 = 21). Thus, the quantization regions are represented as {0}, {1, 2}, {3, 4, 5, 6}, {7, 8, ..., 20}, {21, ..., 255} and are indexed over [-4, 4]. This gives a total of (2T + 1)^3 = 729 contexts (T = 4). By merging contexts of opposite signs, the total number of contexts becomes ((2T + 1)^3 + 1)/2 = 365 context conditions [3]. We call this condition CTX999.

Meanwhile, in typical error compensation using the proposed context-based model, coding errors are accumulated according to the contexts that correspond to the gradient values between neighboring pixels within a frame. In EC algorithms, the use of such a context-based model is largely constrained because each small coding unit is dealt with independently, which causes a lack of statistical data for context accumulation. To circumvent this obstacle, we take advantage of the context-based model calculated from the pixels of the previously processed frame that is closest to the current one, as depicted in Fig. 3(b).
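To make the quantization and merging described above concrete, the following C sketch maps a gradient value onto the 9-step index [-4, 4] with the thresholds T1, T2, and T3, and folds three quantized gradients into one of the 365 merged context conditions. The function names, the sign-merging rule, and the table indexing are our own illustration under these assumptions; the paper does not specify this exact implementation.

```c
#include <stdio.h>
#include <stdlib.h>

#define T1 3
#define T2 7
#define T3 21

/* Quantize a pixel-gradient value into 9 steps indexed -4..4, using the
 * threshold levels T1, T2 and T3 given in the text. */
int quantize_gradient(int d)
{
    int mag = abs(d);
    int q;

    if      (mag == 0) q = 0;   /* {0}            */
    else if (mag < T1) q = 1;   /* {1, 2}         */
    else if (mag < T2) q = 2;   /* {3, ..., 6}    */
    else if (mag < T3) q = 3;   /* {7, ..., 20}   */
    else               q = 4;   /* {21, ..., 255} */

    return (d < 0) ? -q : q;
}

/* Combine three quantized gradients into one context. Without merging there
 * are (2*4+1)^3 = 729 combinations; merging each context with its
 * sign-flipped counterpart leaves (729 + 1)/2 = 365 conditions (CTX999). */
int context_index(int g1, int g2, int g3)
{
    int q1 = quantize_gradient(g1);
    int q2 = quantize_gradient(g2);
    int q3 = quantize_gradient(g3);

    /* Pick the representative whose first nonzero component is positive,
     * so (q1,q2,q3) and (-q1,-q2,-q3) map to the same context. */
    if (q1 < 0 || (q1 == 0 && (q2 < 0 || (q2 == 0 && q3 < 0)))) {
        q1 = -q1;
        q2 = -q2;
        q3 = -q3;
    }

    /* Index into a 729-entry table; only 365 entries are reachable. */
    return (q1 + 4) * 81 + (q2 + 4) * 9 + (q3 + 4);
}

int main(void)
{
    /* Example: gradients (2, -8, 30) quantize to the bins (1, -3, 4). */
    printf("context = %d\n", context_index(2, -8, 30));
    return 0;
}
```

In a full implementation the same sign flip would also be applied to the prediction error before it is accumulated, as is done in LOCO-I [3].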
[Fig. 2. Prediction error distribution (x-axis: prediction error value): (a) best and worst compression performance cases; (b) with context-based prediction error compensation.]

[Fig. 3. The context model built from the previously coded frame is applied to the next coded frame.]
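The temporal accumulation scheme described above can be sketched as follows: prediction errors are summed per context while a frame is coded, and the per-context bias learned from the previously coded frame is added to predictions in the next frame. The data layout, the rounding of the bias, and the frame-boundary handling are our own assumptions for illustration; the paper only states that the context statistics are taken from the closest previously processed frame.

```c
#include <string.h>

/* Table sized for the unmerged 9-step contexts; only the 365 merged
 * conditions (CTX999) are actually used. */
#define CTX_TABLE_SIZE 729

/* Per-context prediction-error statistics gathered over one frame. */
typedef struct {
    long error_sum[CTX_TABLE_SIZE]; /* accumulated prediction errors     */
    long count[CTX_TABLE_SIZE];     /* number of pixels seen per context */
} context_stats;

/* Accumulate the prediction error of one coded pixel of the current frame. */
void accumulate_error(context_stats *cur, int ctx, int pred_error)
{
    cur->error_sum[ctx] += pred_error;
    cur->count[ctx] += 1;
}

/* Compensate a prediction in the current frame using the average error
 * observed for the same context in the previously coded frame. */
int compensate_prediction(const context_stats *prev, int ctx, int prediction)
{
    long bias;

    if (prev->count[ctx] == 0)
        return prediction;          /* no statistics for this context yet */

    /* Rounded average error of this context in the previous frame. */
    if (prev->error_sum[ctx] >= 0)
        bias = (prev->error_sum[ctx] + prev->count[ctx] / 2) / prev->count[ctx];
    else
        bias = -((-prev->error_sum[ctx] + prev->count[ctx] / 2) / prev->count[ctx]);

    return prediction + (int)bias;
}

/* At a frame boundary, the statistics of the just-coded frame become the
 * "previous frame" model for the next one, and the current table is cleared. */
void finish_frame(context_stats *prev, context_stats *cur)
{
    *prev = *cur;
    memset(cur, 0, sizeof(*cur));
}

int main(void)
{
    static context_stats prev, cur;      /* zero-initialized */
    accumulate_error(&cur, 422, 3);      /* one pixel in context 422, error 3 */
    finish_frame(&prev, &cur);
    /* bias 3 learned from the previous frame shifts the prediction 128 -> 131 */
    return !(compensate_prediction(&prev, 422, 128) == 131);
}
```

Because the statistics come from an entire previously coded frame rather than from the small coding unit itself, each context sees enough samples for the bias estimate to be meaningful, which is the point of the temporal model.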
We also quantize the context conditions into 7 steps using the T1 and T2 threshold levels; the total number of context conditions is then 172, and we call this condition CTX777. We then quantize the context conditions into 5 steps using only the T1 threshold level; the total number of context conditions becomes 63, and we call this condition CTX555. The compression performance of CTX999, CTX777, and CTX555 is presented in Table II; a short numeric check of the context counts follows Table III below.

[...] because of dramatically reduced context conditions.

TABLE III
COMPRESSION PERFORMANCE OF CTX990, CTX770, AND CTX550

Video sequence    CTX990    CTX770    CTX550
Aspen             2.50:1    2.46:1    2.29:1
ControlledBurn    1.93:1    1.91:1    1.87:1
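The context counts quoted for CTX999, CTX777, and CTX555 all follow from the same merging formula; the short check below (a helper of our own, not part of the coder) reproduces 365, 172, and 63 from 9, 7, and 5 quantization steps per gradient.

```c
#include <stdio.h>

/* Number of merged context conditions for a given number of
 * quantization steps per gradient (steps = 2*T + 1). */
int merged_contexts(int steps)
{
    int total = steps * steps * steps;  /* (2T+1)^3 contexts            */
    return (total + 1) / 2;             /* merge opposite-sign contexts */
}

int main(void)
{
    printf("CTX999: %d\n", merged_contexts(9));  /* 365 */
    printf("CTX777: %d\n", merged_contexts(7));  /* 172 */
    printf("CTX555: %d\n", merged_contexts(5));  /* 63  */
    return 0;
}
```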
TABLE IV
COMPRESSION RATIO OF A HIGH-TEXTURE SEQUENCE AND A LOW-TEXTURE SEQUENCE, CROWDRUN AND SPEEDBAG, RESPECTIVELY

                                                      CrowdRun    SpeedBag
When context-based error compensation is used        1.6:1       2.6:1
When context-based error compensation is not used    1.5:1       2.6:1

Fig. 4. An example of the reduced context conditions in the image samples.
Fig. 5. Experimental results of the proposed EC algorithm compared with Kim's algorithm [2]: (a) QP22, (b) QP27, (c) QP32, and (d) QP37.
Second, we conduct experiments over various QP values and compare the proposed algorithm with Kim's algorithm [2] on 14 video sequences. The experimental results are illustrated in Fig. 5.

According to the experimental results in Fig. 5, the proposed EC algorithm achieves at least a 50% data reduction ratio on average, which is higher than that of Kim's algorithm. In particular, for video sequences such as Johnny, the compression ratio of the proposed EC algorithm is up to 5% higher than that of Kim's algorithm. This gain in compression ratio comes from the proposed temporal context-based error compensation.

IV. CONCLUSION

In this paper, we proposed a lossless embedded compression algorithm for video applications. The proposed algorithm consists of three steps: average prediction, context-based error compensation, and SBT coding. The average prediction has lower complexity than other prediction methods. Through the context-based error compensation, more than 5% of additional data is compressed with no quality degradation and no bit-rate increase. In addition, the increase in memory size required to store the context conditions can be kept small by using temporal contexts or by largely reducing the quantized regions of the context conditions. The compression performance gain of the proposed algorithm can enhance video coding efficiency by enlarging the search range of motion estimation [4] or by reducing the additional memory bandwidth for various video applications.

REFERENCES

[1] J.-C. Tuan, T.-S. Chang, and C.-W. Jen, "On the data reuse and memory bandwidth analysis for full-search block-matching VLSI architecture," IEEE Trans. Circuits Syst. Video Technol., vol. 12, no. 1, pp. 61-72, Jan. 2002.
[2] J. Kim and C.-M. Kyung, "A lossless embedded compression using significant bit truncation for HD video coding," IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 6, pp. 848-860, June 2010.
[3] M. J. Weinberger, G. Seroussi, and G. Sapiro, "The LOCO-I lossless image compression algorithm: Principles and standardization into JPEG-LS," IEEE Trans. Image Processing, vol. 9, pp. 1309-1324, Aug. 2000.
[4] J. Jung, J. Kim, and C.-M. Kyung, "A dynamic search range algorithm for stabilized reduction of memory traffic in video encoder," IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 7, pp. 1041-1046, July 2010.
[5] HEVC Reference Software HM6.0, March 2012. [Online]. Available: http://hevc.kw.bbc.co.uk/trac/browser/tags/HM-6.0