A new hypergraph coloring problem is introduced by defining N(H, e) as the maximal number of colors in a vertex coloring of a hypergraph H = (V, E) that has at most e different colors in every edge. Our main results concern the asymptotic behaviour of this quantity for the uniform hypergraph H(n, ℓ, k) = (V(n, ℓ), E(n, ℓ, k)) with vertex set V(n, ℓ) = (Ω_n choose ℓ) for Ω_n = {1, 2, …, n} and edge set E(n, ℓ, k) = { E = (A choose ℓ) : A ∈ (Ω_n choose k) }, where (S choose ℓ) denotes the family of ℓ-element subsets of S. In the case ℓ = 2 there are connections to Turán's graph. 1. Introduction. As a natural generalization of the concept of the chromatic number of a graph (which also covers several of its generalizations suggested by others; see Ch. 19 of [6]), Erdős and Hajnal [1] introduced the chromatic number χ(H) of a hypergraph H = (V, E) as the minimal number of colors needed to color the vertices such that no edge E ∈ E with |E| > 1 has all its vertices of the same color. A stron…
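As a concrete illustration (my own sketch, not taken from the paper; the function name is hypothetical), the quantity N(H, e) can be computed by brute force for a tiny instance of H(n, ℓ, k):

```python
from itertools import combinations, product

def max_colors(vertices, edges, e):
    """Brute force: the largest number of colors in a vertex coloring
    such that every edge contains at most e distinct colors."""
    best = 0
    for coloring in product(range(len(vertices)), repeat=len(vertices)):
        col = dict(zip(vertices, coloring))
        if all(len({col[v] for v in E}) <= e for E in edges):
            best = max(best, len(set(coloring)))
    return best

# H(n, l, k): vertices are the l-subsets of {0, ..., n-1}; every
# k-subset A contributes the edge consisting of all l-subsets of A.
n, l, k = 4, 2, 3
vertices = list(combinations(range(n), l))
edges = [set(combinations(A, l)) for A in combinations(range(n), k)]
result = max_colors(vertices, edges, e=2)
print(result)  # 3
```

For ℓ = 2 this instance is exactly the edge set of K4 with its four triangles as hyperedges, which is the kind of connection to Turán-type graph problems the abstract alludes to.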
2007 IEEE International Symposium on Information Theory, 2007
In this paper, we study the error correction capability of random linear network error correction codes [7]. We derive bounds on the probability mass function of the minimum distance of a random network error correction code, and on the field size required for the existence of a network error correction code with a given degradation, defined as the difference between the highest possible minimum distance in the Singleton bound and the minimum distance of the code. The main tool we use to study these problems is an improved bound on the failure probability of random linear network codes, i.e., the probability that at one or more sinks the source messages are not decodable. This problem was originally studied in [6].
2006 IEEE International Symposium on Information Theory, 2006
In multi-source multi-sink network coding, messages across different sources are coded to increase the overall throughput. The various types of coded information in the network significantly complicate the determination of its capacity region. In this work, we derive explicit inner and outer bounds for acyclic multi-source multi-sink networks based on a cut-based network decomposition technique and a role-based information characterization technique. In particular, we derive a linear programming inner bound for regular K-pairs acyclic three-layer networks and a network sharing outer bound for arbitrary acyclic multi-source multi-sink networks. The techniques used in this paper reveal some of the basic mechanisms of multi-source multi-sink network coding.
2006 IEEE International Symposium on Information Theory, 2006
Ignited by the pioneering work of Ahlswede et al. on the characterization of the capacity region for single-source multicast networks, a number of works have been devoted to determining the capacity regions for more general networks. However, except for a few interesting outer bounds derived so far, the exact characterization is still restricted to rather simple networks. In this paper, we determine the capacity region for a more general class of networks called degree-2 K-pairs three-layer networks. The result suggests a characterization technique for more general multi-source multi-sink networks.
Abstract—A new on-line universal lossy data compression algorithm is presented. For finite memoryless sources with unknown statistics, its performance asymptotically approaches the fundamental rate distortion limit. The codebook is generated on the fly, and continuously …
Abstract—We investigate the maximization of the differential entropy h(X + Y) of arbitrarily dependent random variables X and Y under the constraint of fixed, equal marginal densities for X and Y. We show that max h(X + Y) = h(2X), under the constraint that X and Y have the same fixed marginal density f, if and only if f is log-concave. The maximum is achieved when X = Y. If f is not log-concave, the maximum is strictly greater than h(2X). As an example, identically distributed Gaussian random variables have log-concave densities and satisfy max h(X + Y) = h(2X) with X = Y. More general inequalities in this direction should lead to capacity bounds for additive noise channels with feedback. Index Terms—Differential entropy, maximum entropy, entropy power inequality.
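The Gaussian example can be checked in a few lines (my own sketch; h here is differential entropy in nats, and the identity h(2X) = h(X) + log 2 is the standard scaling property):

```python
import math

def h_gauss(var):
    """Differential entropy (in nats) of a Gaussian with variance var."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

sigma2 = 1.5
# With Y = X ~ N(0, sigma2), the sum X + Y = 2X has variance 4 * sigma2,
# and h(2X) = h(X) + log 2 by the scaling property of differential entropy.
lhs = h_gauss(4 * sigma2)
rhs = h_gauss(sigma2) + math.log(2)
print(abs(lhs - rhs) < 1e-12)  # True
```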
This work examines the nearest neighbor encoding problem with an unstructured codebook of arbitrary size and vector dimension. We propose a new tree-structured nearest neighbor encoding method that significantly reduces the complexity of the full-search method without any performance degradation in terms of distortion. Our method consists of efficient algorithms for constructing a binary tree for the codebook and nearest neighbor encoding by using this tree. Numerical experiments are given to demonstrate the performance of the proposed method.
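The paper's own tree construction is not reproduced here; the sketch below only illustrates the generic idea such methods rely on: exact nearest-neighbor encoding over a binary tree with branch-and-bound pruning (a ball-tree-style illustration of my own, with hypothetical names), which returns the same codeword as full search:

```python
import numpy as np

def build_tree(codebook, idx=None):
    """Recursively split the codebook into a binary ball tree; every node
    stores a center and a covering radius used for pruning."""
    if idx is None:
        idx = np.arange(len(codebook))
    pts = codebook[idx]
    center = pts.mean(axis=0)
    radius = float(np.max(np.linalg.norm(pts - center, axis=1)))
    if len(idx) <= 2:
        return {"center": center, "radius": radius, "leaf": idx}
    d = int(np.argmax(pts.var(axis=0)))  # split on the widest coordinate
    order = np.argsort(pts[:, d])
    half = len(idx) // 2
    return {"center": center, "radius": radius,
            "left": build_tree(codebook, idx[order[:half]]),
            "right": build_tree(codebook, idx[order[half:]])}

def nn_encode(node, codebook, x, best=(np.inf, -1)):
    """Exact nearest-neighbor search with branch-and-bound: a subtree is
    skipped only when its ball provably cannot contain a closer codeword."""
    if np.linalg.norm(x - node["center"]) - node["radius"] >= best[0]:
        return best
    if "leaf" in node:
        for i in node["leaf"]:
            d = np.linalg.norm(x - codebook[i])
            if d < best[0]:
                best = (d, int(i))
        return best
    # Visit the closer child first so the bound tightens early.
    for key in sorted(("left", "right"),
                      key=lambda k: np.linalg.norm(x - node[k]["center"])):
        best = nn_encode(node[key], codebook, x, best)
    return best

rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 8))
x = rng.standard_normal(8)
_, i = nn_encode(build_tree(codebook), codebook, x)
full = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))
print(i == full)  # True: same codeword as exhaustive full search
```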
Motivated by applications in universal data compression algorithms, we study the problem of bounds on the sizes of constant weight covering codes. We are concerned with the minimal sizes of codes of length n and constant weight u such that every word of length n and weight v is within Hamming distance d from a codeword. In addition to a brief summary of part of the relevant literature, we also give new results on upper and lower bounds to these sizes. We pay particular attention to the asymptotic covering density of these codes. We include tables of the bounds on the sizes of these codes, both for small values of n and for the asymptotic behavior. A comparison with techniques for attaining bounds for packing codes is also given. Some new combinatorial questions also arise from the techniques.
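The simplest lower bound of this kind can be sketched by a sphere-covering count (a standard argument, not a result specific to this paper; function names are illustrative): a weight-u codeword covers a weight-v word at distance u + v − 2i when the two words share i ones.

```python
from math import ceil, comb

def ball(n, u, v, d):
    """Number of weight-v words of length n within Hamming distance d of a
    fixed weight-u word: sharing i ones gives distance (u - i) + (v - i)."""
    total = 0
    for i in range(max(0, u + v - n), min(u, v) + 1):
        if u + v - 2 * i <= d:
            total += comb(u, i) * comb(n - u, v - i)
    return total

def covering_lower_bound(n, u, v, d):
    """Sphere-covering bound: each weight-u codeword covers at most
    ball(n, u, v, d) of the comb(n, v) weight-v words."""
    return ceil(comb(n, v) / ball(n, u, v, d))

print(covering_lower_bound(10, 4, 5, 1))  # 42
```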
The theory of group codes has been shown to be a useful starting point for the construction of good geometrically uniform codes. In this paper we study the problem of building multilevel group codes, i.e., codes obtained by combining separate coding at different levels in such a way that the resulting code is a group code. A construction leading to multilevel group codes for semi-direct and direct products is illustrated. The codes that can be obtained in this way are identified. New geometrically uniform Euclidean-space codes obtained from multilevel codes over abelian and nonabelian groups are presented. Index Terms—Group codes, multilevel codes, geometrically uniform codes. I. INTRODUCTION. An important problem in communication theory is the construction of good Euclidean-space codes for transmission over the additive Gaussian channel. For a given number of bits per dimension to be transmitted (the spectral efficiency), one looks for a constellation S ⊂ R^n and a "good" Euclidean-space code C over it (a set of sequences C ⊆ S^I, where I ⊆ Z) yielding this spectral efficiency, where "good" should in principle refer to the minimization of the bit error probability. A fair comparison between different codes must take into account the complexity of the corresponding maximum-likelihood decoders. A good deal of effort has been devoted to finding good Euclidean-space codes using both block-coded modulation (finite-length sequences with I = [1, M]) and trellis-coded modulation (infinite-length sequences with I = Z) since Ungerboeck's classical paper [1]. For this purpose, a problem is the lack, at least for trellis codes, of algebraic methods to synthesize good codes: the only practical way seems to be the generation of a large number of codes followed by their performance evaluation.
This fact has some consequences: since the bit-error probability is difficult to compute, the "goodness" of a code is usually identified by its squared free distance d²_free (the minimum squared Euclidean distance between code sequences), its multiplicity N_free, and, perhaps, the first terms of the code distance profile. Moreover, since it is computationally impossible to analyze the entire class of codes, it is necessary to restrict the search to smaller classes where one expects to find the best codes.
1993 IEEE International Symposium on Circuits and Systems
The Papoulis-Gerchberg (PG) algorithm is used for band-limited signal extrapolation. The PG algorithm is generalized to signals in wavelet subspaces. The generalized PG algorithm is considered for both continuous- and discrete-time signals. Its convergence is investigated. Numerical examples demonstrate its performance.
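A minimal discrete sketch of the classical PG iteration may help fix ideas (the band-limited Fourier case only, not the wavelet-subspace generalization studied in the paper; signal and parameter choices are my own):

```python
import numpy as np

def pg_extrapolate(observed, mask, band, iters=2000):
    """Papoulis-Gerchberg iteration (discrete sketch): alternately project
    onto the band-limited subspace (zero the out-of-band DFT bins) and
    restore the known samples."""
    x = observed * mask
    for _ in range(iters):
        X = np.fft.fft(x)
        X[~band] = 0.0            # enforce the band limit
        x = np.fft.ifft(X).real
        x[mask] = observed[mask]  # re-impose the observed samples
    return x

n = 128
t = np.arange(n)
# A band-limited test signal built from a few low DFT frequencies.
sig = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 5 * t / n)
mask = np.zeros(n, dtype=bool)
mask[32:96] = True                # only the middle segment is observed
band = np.zeros(n, dtype=bool)
band[:8] = True
band[-7:] = True                  # known band: DFT frequencies |k| <= 7
rec = pg_extrapolate(sig, mask, band)
err0 = np.linalg.norm(sig * mask - sig)  # error of plain zero-filling
err = np.linalg.norm(rec - sig)
print(err < err0)  # True: the iteration improves on zero-filling
```

Both steps are projections (onto the band-limited subspace and onto the set of signals agreeing with the observed samples), so the error to the true band-limited signal is non-increasing, which is the core of the convergence analysis.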
The minimum mean-squared error (MMSE) estimator has been used to reconstruct a band-limited signal from its finite samples in a bounded interval, and has been shown to have many nice properties. In this research, we consider a special class of band-limited 1-D and 2-D signals which have a multiband structure in the frequency domain, and propose a new reconstruction algorithm to exploit the multiband feature of the underlying signals. The concept of the critical value and region is introduced to measure the performance of a reconstruction algorithm. We show analytically that the new algorithm performs better than the MMSE estimator for band-limited/multiband signals in terms of the critical value and region measure. Finally, numerical examples of 1-D and 2-D signal reconstruction are given for performance comparison of various methods.
His research interests include digital signal and image processing, communication theory, data compression, information and coding theory, wavelet transforms, numerical analysis, applied probability theory and statistics, and inverse problems.
Symmetrical multilevel diversity coding with independent data streams has been studied by Roche et al. (1992), and the admissible coding rate region was determined for the case of three levels. In particular, it was shown that coding by superposition is optimal, which means that optimality can be achieved by very simple coding. However, it is very difficult to generalize their …
We introduce and analyze a new coding problem for a correlated source (Xⁿ, Yⁿ), n = 1, 2, …. The observer of Xⁿ can transmit data depending on Xⁿ at a prescribed rate R. Based on these data, the observer of Yⁿ tries to identify whether, for some distortion measure ρ (like the Hamming distance), (1/n) ρ(Xⁿ, Yⁿ) ≤ d, a prescribed fidelity criterion. We investigate, as functions of R and d, the exponents of two error probabilities: the probability of misacceptance and the probability of misrejection. Our analysis has led to a new method for proving converses. Its basis is "The Inherently Typical Subset Lemma". It goes considerably beyond the "Entropy Characterisation" of [2], the "Image Size Characterisation" of [3], and its extensions in [5]. It is conceivable that it has a strong impact on Multiuser Information Theory.
Let X and Y be two jointly distributed random variables. Suppose person P_X, the informant, knows X, and person P_Y, the recipient, knows Y, and both know the joint probability distribution of the pair (X, Y). Using a predetermined protocol, they communicate over a binary error-free channel in order for P_Y to learn X, whereas P_X may or may not learn Y. C_m(X|Y) is the minimum number of bits required to be transmitted (by both persons) in the worst case when only m message exchanges are allowed, and C_∞(X|Y) is the number of bits required when P_X and P_Y can communicate back and forth an arbitrary number of times. Orlitsky proved that for all (X, Y) pairs, C_2(X|Y) ≤ 4 C_∞(X|Y) + 3, and that for every positive c and ε with ε < 1, there exist (X, Y) pairs with …. These results show that two messages are almost optimal, but not optimal. A natural question, then, is whether three messages are asymptotically optimal. In this work, we prove that for any c and ε with 0 < ε < 1 and c > 0, there exist some (X, Y) pairs for which C_3(X|Y) ≥ (2 − ε) C_∞(X|Y) ≥ c. That is, three messages are not optimal either.
The redundancy problem of lossy source coding with abstract source and reproduction alphabets is considered. For coding at a fixed rate level, it is shown that for any fixed rate R > 0 and any memoryless abstract alphabet source P satisfying some mild conditions, there exists a sequence {C_n}, n = 1, 2, …, of block codes at the rate R such that the distortion redundancy of C_n (defined as the difference between the performance of C_n and the distortion rate function d(P, R) of P) is upper-bounded by |∂d(P, R)/∂R| · ln n / (2n) + o(ln n / n). For coding at a fixed distortion level, it is demonstrated that for any d > 0 and any memoryless abstract alphabet source P satisfying some mild conditions, there exists a sequence {C_n}, n = 1, 2, …, of block codes at the fixed distortion d such that the rate redundancy of C_n (defined as the difference between the performance of C_n and the rate distortion function R(P, d) of P) is upper-bounded by (7 ln n)/(6n) + o(ln n / n). These results strengthen the traditional Berger's (1968, 1971) abstract alphabet source coding theorem, and extend the positive redundancy results of Zhang, Yang, and Wei (see ibid., vol. 43, no. 1, pp. 71–91, 1997, and ibid., vol. 42, pp. 803–821, 1996) on lossy source coding with finite alphabets and the redundancy result of Wyner (see ibid., vol. 43, pp. 1452–1464, 1997) on block coding of memoryless Gaussian sources.
Inspired by mobile satellite communications systems, we consider a source coding system which consists of multiple sources, multiple encoders, and multiple decoders. Each encoder has access to a certain subset of the sources, each decoder has access to a certain subset of the encoders, and each decoder reconstructs a certain subset of the sources almost perfectly. The connectivity between the sources and the encoders, the connectivity between the encoders and the decoders, and the reconstruction requirements for the decoders are all arbitrary. Our goal is to characterize the admissible coding rate region. Despite the generality of the problem, we have developed an approach which enables us to study all cases on the same footing. We obtain inner and outer bounds of the admissible coding rate region in terms of Γ*_N and Γ̄*_N (the closure of Γ*_N), respectively, which are fundamental regions in the entropy space recently defined by Yeung. So far, there has not been a full characterization of Γ*_N, so these bounds cannot be evaluated explicitly except in some special cases. Nevertheless, we obtain an alternative outer bound which can be evaluated explicitly. We show that this bound is tight for all the special cases for which the admissible coding rate region is known. The model we study in this paper is more general than all previously reported models on multilevel diversity coding, and the tools we use are new in multiuser information theory.
Three new constructions for families of cyclic constant weight codes are presented. All are asymptotically optimum in the sense that, in each case, as the length of the sequences within the family approaches infinity, the ratio of family size to the maximum possible under the Johnson upper bound approaches unity.
The fixed-slope lossy algorithm derived from the kth-order adaptive arithmetic codeword length function is extended to the case of finite-state decoders or trellis-structured decoders. It is shown that when this algorithm is used to encode a stationary, ergodic source with a continuous alphabet, the Lagrangian performance (i.e., the resulting compression rate plus λ times the resulting distortion) converges with probability one to a quantity computable as the infimum of an information-theoretic functional over a set of auxiliary random variables and reproduction levels, where λ > 0 and −λ is designated to be the slope of the rate-distortion function R(D) of the source at some D; the quantity is close to R(D) + λD when the order k used in the arithmetic coding or the number of states in the decoders is large enough. An alternating minimization algorithm for computing the quantity is presented; this algorithm is based on a training sequence and in turn gives rise to a design algorithm for variable-rate trellis source codes. The resulting variable-rate trellis source codes are very efficient in low-rate regions (below 0.8 bits/sample). With k = 8, the mean-squared error encoding performance at the rate 1/2 bits/sample for memoryless Gaussian sources is comparable to that afforded by trellis-coded quantizers; with k = 8 and 32 decoder states, the mean-squared error encoding performance at the rate 1/2 bits/sample for memoryless Laplacian sources is about 1 dB better than that afforded by trellis-coded quantizers with 256 states. With k = 8 and 256 decoder states, the mean-squared error encoding performance at rates of a fraction of 1 bit/sample for highly dependent Gauss-Markov sources with correlation coefficient 0.9 is within about 0.6 dB of the distortion-rate function.
Note that at such low rates, predictive coders usually perform poorly.
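The fixed-slope principle can be illustrated with a deliberately simplified sketch: an order-0 (memoryless) adaptive model and a plain scalar reproduction alphabet, instead of the paper's kth-order model and trellis-structured decoder; all names and parameters below are illustrative only.

```python
import numpy as np

def fixed_slope_encode(samples, levels, lam):
    """Greedy fixed-slope quantization: each sample is mapped to the
    reproduction level minimizing (adaptive codeword length) + lam *
    (squared error), with an order-0 adaptive model (add-one counts)."""
    counts = np.ones(len(levels))
    indices, total_bits, total_sq = [], 0.0, 0.0
    for x in samples:
        lengths = -np.log2(counts / counts.sum())  # current code lengths
        j = int(np.argmin(lengths + lam * (levels - x) ** 2))
        indices.append(j)
        total_bits += lengths[j]
        total_sq += (levels[j] - x) ** 2
        counts[j] += 1                             # update the model
    n = len(samples)
    return indices, total_bits / n, total_sq / n

rng = np.random.default_rng(1)
x = rng.normal(size=4000)
levels = np.linspace(-3.0, 3.0, 13)
_, rate1, mse1 = fixed_slope_encode(x, levels, lam=0.1)
_, rate2, mse2 = fixed_slope_encode(x, levels, lam=8.0)
print(mse2 < mse1)  # True: a larger slope lam trades rate for distortion
```

Sweeping lam traces out an operational rate-distortion curve, which is exactly the sense in which the slope parameter selects the operating point D on R(D).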
A new hypergraph coloring problem is introduced by defining N(H; e) as the maximal number of colo... more A new hypergraph coloring problem is introduced by defining N(H; e) as the maximal number of colors in a vertex coloring of a hypergraph H = (V; E) , which has not more than e different colors in every edge. Our main results concern the asymptotic behaviour of this quantity for the uniform hypergraph H(n; `; k) = (V(n; `); E(n; `; k)) with vertex set V(n; `) = GammaOmega n ` Delta forOmega n = f1; 2; : : : ; ng and edge set E(n; `; k) = n E = Gamma A ` Delta : A 2 GammaOmega n k Delta o . In case ` = 2 there are connections to Turan's graph. 2 1. Introduction As a natural generalization of the concept of a chromatic number of a graph (which includes also several of its generalizations suggested by others (see ch. 19 of [6])), Erd os and Hajnal [1] introduced the chromatic number /(H) of a hypergraph H = (V; E) as the minimal number of colors needed to color the vertices such that no edge E 2 E with jE j ? 1 has all its vertices with the same color. A stron...
2007 IEEE International Symposium on Information Theory, 2007
In this paper, we study the error correction capability of random linear network error correction... more In this paper, we study the error correction capability of random linear network error correction codes [7]. We derive bounds on the probability mass function of the minimum distance of a random network error correction code and the field size required for the existence of a network error correction code with a given degradation, which is the difference between the highest possible minimum distance in the Singleton bound and the minimum distance of the code. The main tool that we use to study these problems is an improved bound on the failure probability of random linear network codes that at one or more sinks, the source messages are not decodable. This problem was originally studied in [6].
2006 IEEE International Symposium on Information Theory, 2006
In multi-source multi-sink network coding, messages across different sources are coded to increas... more In multi-source multi-sink network coding, messages across different sources are coded to increase the overall throughput. The various types of coded information in the network significantly complicate the determination of its capacity region. In this work, we derive explicit inner and outer bounds for acyclic multi-source multi-sink networks based on a cut-based network decomposition technique and a role-based information characterization technique. In particular, we derive a linear programming inner bound for regular K-pairs acyclic threelayer networks and a network sharing outer bound for arbitrary acyclic multi-source multi-sink networks. The techniques used in this paper reveal some of the basic mechanisms of multi-source multi-sink network coding.
2006 IEEE International Symposium on Information Theory, 2006
Ignited by the pioneering work of Ahlswede et al on the characterization of the capacity region f... more Ignited by the pioneering work of Ahlswede et al on the characterization of the capacity region for single-source multicast networks, a number of works have been devoted to determining the capacity regions for more general networks. However, except a few interesting outer bounds that have been derived so far, the exact characterization is still restricted to rather simple networks. In this paper, we determine the capacity region for a more general class of networks called degree-2 Kpairs three-layer networks. The result suggests a characterization technique for more general multi-source multi-sink networks.
Abstract A new on-line universal lossy data compression algorithm is presented. For finite memory... more Abstract A new on-line universal lossy data compression algorithm is presented. For finite memoryless sources with unknown statistics, its performance asymptotically approaches the fundamental rate distortion limit. The codebook is generated on the fly, and continuously ...
Abract-We investigate the maximization of the differential entropy h (X + Y) of arbitrary depende... more Abract-We investigate the maximization of the differential entropy h (X + Y) of arbitrary dependent random variables X and Y under the constraints of fixed equal m a r g l~l densities for X and Y. We show that maxh(X + Y) = h (2 X) , under the constraints that X and Y have the same k e d marginal density f, if and only if f is log-eoncave. The maximum is achieved rrhen X Y. Iff is not logconcaw, the maximum is strictly greater than h(2X). As an example, identically distributed Gaussian rpadem variables have log-concnve densuies and satisfy max L (X + Y) = h(2X) with X = Y. More general inequalities in this direction should lead to capacity bounds for additive noise channels with feedback, Index Tenns-Differential entropy, maximum entropy, entropy power inequality.
This work examines the nearest neighbor encoding problem with an unstructured codebook of arbitra... more This work examines the nearest neighbor encoding problem with an unstructured codebook of arbitrary size and vector dimension. We propose a new tree-structured nearest neighbor encoding method that significantly reduces the complexity of the full-search method without any performance degradation in terms of distortion. Our method consists of efficient algorithms for constructing a binary tree for the codebook and nearest neighbor encoding by using this tree. Numerical experiments are given to demonstrate the performance of the proposed method.
Motivated by applications in universal data compression algorithms we study the problem of bounds... more Motivated by applications in universal data compression algorithms we study the problem of bounds on the sizes of constant weight covering codes. We are concerned with the minimal sizes of codes of length n and constant weight u such that every word of length n and weight v is within Hamming distance d from a codeword. In addition to a brief summary of part of the relevant literature, we also give new results on upper and lower bounds to these sizes. We pay particular attention to the asymptotic covering density of these codes. We include tables of the bounds on the sizes of these codes both for small values ofn and for the asymptotic behavior. A comparison with techniques for attaining bounds for packing codes is also given. Some new combinatorial questions are also arising from the techniques.
The theory of group codes has been shown to be a useful starting point for the construction of go... more The theory of group codes has been shown to be a useful starting point for the construction of good geometrically uniform codes. In this paper we study the problem of building multilevel group codes, i.e., codes obtained combining separate coding at different levels in such a way that the resulting code is a group code. A construction leading to multilevel group codes for semi-direct and direct products is illustrated. The codes that can be obtained in this way are identified. New geometrically uniform Euclidean-space codes obtained from multilevel codes over abelian and nonabelian groups are presented. Index Terms-Group codes, multilevel codes, geometrically uniform codes. I. INTROUUCIION N IMPORTANT problem in communication theory is A the construction of good Euclidean-space codes for the transmission over the additive Gaussian channel. For a given number of bits per dimension to be transmitted (the spectral efficiency), one looks for a constellation S R" and a "good" Euclidean-space code C over it (a set of sequences C C SI, where I C 2) yielding this spectral efficiency, where "good" should in principle refer to the minimization of the bit error probability. A fair comparison between different codes must take into account the complexity of the corresponding maximum-likelihood decoders. A good deal of effort has been devoted to finding good Euclidean-space codes using both block-coded modulation (finite-length sequences with I = [l, MI) and trcllis-coded modulation (infinite-length sequences with I = 2) since Ungerboeck's classical paper [ 11. For this purpose, a problem is the lack, at least for trellis codes, of algebraic methods to synthesize good codes: the only practical way seems to be the generation of a large number of codes followed by their performance evaluation. 
This fact has some consequences: since the bit-error probability is difficult to compute, the "goodness" of a code is usually identified by its squared free distance d:ree (the minimum squared Euclidean distance between code sequences), its multiplicity Nfree, and, perhaps, the first terms of the code distance profile. Moreover, since it is computationally impossible to analyze the entire class of codes, it is necessary to restrict the search to smaller classes where one expects to find the best codes.
1993 IEEE International Symposium on Circuits and Systems
The Papoulis-Gerchberg (PG) algorithm is used for band-limited signal extrapolation. The PG algor... more The Papoulis-Gerchberg (PG) algorithm is used for band-limited signal extrapolation. The PG algorithm is generalized to signals in wavelet subspaces. The generalized PG algorithm is considered for both continuous and discrete time signals. Its convergence is investigated. Numerical examples show its performance
The minimum mean-squared error (MMSE) estimator has been used to reconstruct a band-limited signa... more The minimum mean-squared error (MMSE) estimator has been used to reconstruct a band-limited signal from its finite samples in a bounded interval and shown to have many nice properties. In this research, we consider a special class of bandlimited 1-D and 2-D signals which have a multiband structure in the frequency domain, and propose a new reconstruction algorithm to exploit the multiband feature of the underlying signals. The concept of the critical value and region is introduced to measure the performance of a reconstruction algorithm. We show analytically that the new algorithm performs #better than the MMSE estimator for band-limited/multiband signals in terms of the critical value and region measure. Finally, numerical examples of 1-D and 2-D signal reconstruction are given for performance comparison of various methods. Zusammenfassung Der Minimum-Mean-Squared-Error-(MMSE) SchPtzer wurde zur Rekonstruktion eines bandbegrenzten $ignals aus endlich vielen Abtastwerten aus einem begrenzten Interval1 benutzt und es zeigt sich, dal3 er viele giinstige Eigenschaften aufweist. In dieser Arbeit betrachten wir eine spezielle Klasse bandbegrenzter lD-und 2D-Signale mit Multiband-Struktur im Frequenzbereich und schlagen einen neuen Rekonstruktionsalgorithmus vor, der die Multiband-Eigenschaft des zugrundeliegenden Signals ausnutzt. Zur Bewertung des Rekonstruktionsalgorithmus wird das Konzept des kritisthen Wertes und Gebietes eingefiihrt. Wir zeigen analytisch, dalj der neue Algorithmus besser arbeitet als der MMSE-Schltzer fiir bandbegrenzte/Multiband-Signale in Abhlngigkeit vom kritischen Wert und Gebiets-Ma& SchlieBlich werden numerische Beispiele zur Rekonstruktion von lD-und 2D-Signalen gegeben, wobei die Eigenschaften verschiedener Methoden verglichen werden. Rbumi! 
L'estimateur de l'erreur minimale au sens des moindres carrts (EMMC) a Cti: utilist: pour reconstruire un signal a bande 1imitCe & partir de ses &chantillons finis sur un intervalle fermt, et a montrt: qu'il avait des proprittts interessantes.
His research interests include digital signal and image processing, communication theory, data co... more His research interests include digital signal and image processing, communication theory, data compression, information and coding theory, wavelet transforms, numerical analysis, applied probability theory and statistics, and inverse problems.
Symmetrical multilevel diversity coding with independent data streams has been studied by Roche e... more Symmetrical multilevel diversity coding with independent data streams has been studied by Roche et al. (1992), and the admissible coding rate region was determined for the case of three levels. In particular, it was shown that coding by superposition is optimal, which means that optimality can be achieved by very simple coding. However, it is very difficult to generalize their
We introduce and analyze a new coding problem for a correlated source (X_n, Y_n)_{n=1}^∞. The observer of X^n can transmit data depending on X^n at a prescribed rate R. Based on these data, the observer of Y^n tries to identify whether (1/n)ρ(X^n, Y^n) ≤ d for some distortion measure ρ (like the Hamming distance), a prescribed fidelity criterion. We investigate, as functions of R and d, the exponents of two error probabilities: the probability of misacceptance and the probability of misrejection. Our analysis has led to a new method for proving converses. Its basis is "The Inherently Typical Subset Lemma". It goes considerably beyond the "Entropy Characterisation" of [2], the "Image Size Characterisation" of [3], and its extensions in [5]. It is conceivable that it has a strong impact on Multiuser Information Theory.
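To make the problem setup concrete, here is a toy scheme (an assumption for illustration only, not the coding scheme analyzed in the paper): the encoder reveals a random fraction R of the coordinates of X^n, and the observer of Y^n accepts iff the empirical Hamming distortion on the revealed coordinates is at most d. Misacceptance and misrejection then arise from sampling error.

```python
import random

def encode(x, rate, seed=0):
    """Toy encoder: reveal a random subset of about rate * n coordinates of X^n."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(x)), max(1, int(rate * len(x))))
    return [(i, x[i]) for i in idx]

def identify(msg, y, d):
    """Observer of Y^n accepts iff the empirical per-letter Hamming
    distortion on the revealed coordinates is at most d."""
    mismatches = sum(1 for i, xi in msg if y[i] != xi)
    return mismatches / len(msg) <= d

rng = random.Random(1)
n = 2000
x = [rng.randrange(2) for _ in range(n)]
y_close = x[:]                    # identical pair: should be accepted
y_far = [1 - xi for xi in x]      # maximally distant pair: should be rejected
msg = encode(x, rate=0.1)
print(identify(msg, y_close, d=0.05), identify(msg, y_far, d=0.05))  # True False
```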
Let X and Y be two jointly distributed random variables. Suppose person P_X, the informant, knows X, and person P_Y, the recipient, knows Y, and both know the joint probability distribution of the pair (X, Y). Using a predetermined protocol, they communicate over a binary error-free channel in order for P_Y to learn X, whereas P_X may or may not learn Y. C_m(X|Y) is the minimum number of bits required to be transmitted (by both persons) in the worst case when only m message exchanges are allowed. C_∞(X|Y) is the number of bits required when P_X and P_Y can communicate back and forth an arbitrary number of times. Orlitsky proved that for all (X, Y) pairs, C_2(X|Y) ≤ 4 C_∞(X|Y) + 3, and that for every positive c and ε with ε < 1, there exist (X, Y) pairs with C_2(X|Y) ≥ (2 − ε) C_∞(X|Y) ≥ c. These results show that two messages are almost optimal, but not optimal. A natural question, then, is whether three messages are asymptotically optimal. In this work, we prove that for any c and ε with 0 < ε < 1 and c > 0, there exist some (X, Y) pairs for which C_3(X|Y) ≥ (2 − ε) C_∞(X|Y) ≥ c. That is, three messages are not optimal either.
The redundancy problem of lossy source coding with abstract source and reproduction alphabets is considered. For coding at a fixed rate level, it is shown that for any fixed rate R > 0 and any memoryless abstract alphabet source P satisfying some mild conditions, there exists a sequence {C_n}_{n=1}^∞ of block codes at the rate R such that the distortion redundancy of C_n (defined as the difference between the performance of C_n and the distortion-rate function d(P, R) of P) is upper-bounded by |∂d(P,R)/∂R| ln n/(2n) + o(ln n/n). For coding at a fixed distortion level, it is demonstrated that for any d > 0 and any memoryless abstract alphabet source P satisfying some mild conditions, there exists a sequence {C_n}_{n=1}^∞ of block codes at the fixed distortion d such that the rate redundancy of C_n (defined as the difference between the performance of C_n and the rate-distortion function R(P,d) of P) is upper-bounded by 7 ln n/(6n) + o(ln n/n). These results strengthen Berger's (1968, 1971) traditional abstract alphabet source coding theorem, and extend the positive redundancy results of Zhang, Yang, and Wei (see ibid., vol.43, no.1, p.71-91, 1997, and ibid., vol.42, p.803-21, 1996) on lossy source coding with finite alphabets and the redundancy result of Wyner (see ibid., vol.43, p.1452-64, 1997) on block coding of memoryless Gaussian sources.
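The leading terms of the two redundancy bounds stated above both decay like ln n/n; a quick numerical evaluation makes the scale concrete. The slope value passed in below is an assumed placeholder for the source-dependent derivative |∂d(P,R)/∂R|.

```python
import math

def rate_redundancy_bound(n):
    """Leading term (7 ln n)/(6 n) of the fixed-distortion rate-redundancy bound."""
    return 7 * math.log(n) / (6 * n)

def distortion_redundancy_bound(n, slope):
    """Leading term |slope| (ln n)/(2 n) of the fixed-rate distortion-redundancy
    bound, where slope stands in for the source-dependent value dd(P,R)/dR."""
    return abs(slope) * math.log(n) / (2 * n)

for n in (10**3, 10**4, 10**5):
    print(n, rate_redundancy_bound(n), distortion_redundancy_bound(n, slope=-1.0))
```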
Inspired by mobile satellite communications systems, we consider a source coding system which consists of multiple sources, multiple encoders, and multiple decoders. Each encoder has access to a certain subset of the sources, each decoder has access to a certain subset of the encoders, and each decoder reconstructs a certain subset of the sources almost perfectly. The connectivity between the sources and the encoders, the connectivity between the encoders and the decoders, and the reconstruction requirements for the decoders are all arbitrary. Our goal is to characterize the admissible coding rate region. Despite the generality of the problem, we have developed an approach which enables us to study all cases on the same footing. We obtain inner and outer bounds of the admissible coding rate region in terms of Γ*_N and its closure Γ̄*_N, respectively, which are fundamental regions in the entropy space recently defined by Yeung. So far, there has not been a full characterization of Γ*_N, so these bounds cannot be evaluated explicitly except for some special cases. Nevertheless, we obtain an alternative outer bound which can be evaluated explicitly. We show that this bound is tight for all the special cases for which the admissible coding rate region is known. The model we study in this paper is more general than all previously reported models on multilevel diversity coding, and the tools we use are new in multiuser information theory.
Three new constructions for families of cyclic constant weight codes are presented. All are asymptotically optimum in the sense that in each case, as the length of the sequences within the family approaches infinity, the ratio of family size to the maximum possible under the Johnson upper bound approaches unity.
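The benchmark mentioned above is the classical Johnson upper bound on A(n, d, w), the maximum size of a binary constant-weight code. A minimal sketch of its standard nested-floor recursion (the recursion is textbook material, not a construction from this paper):

```python
def johnson_bound(n, d, w):
    """Johnson upper bound on A(n, d, w) for even minimum distance d = 2*delta,
    via the recursion A(n, d, w) <= floor((n / w) * A(n - 1, d, w - 1))."""
    delta = d // 2
    if w < delta:
        # Two distinct weight-w words are at distance <= 2w < d: only one codeword.
        return 1
    if w == delta:
        # Distance d forces pairwise disjoint supports: at most floor(n / delta).
        return n // delta
    return (n * johnson_bound(n - 1, d, w - 1)) // w

print(johnson_bound(8, 4, 3))   # -> 8, which matches the known value A(8, 4, 3) = 8
```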
The fixed-slope lossy algorithm derived from the kth-order adaptive arithmetic codeword length function is extended to the case of finite-state decoders or trellis-structured decoders. It is shown that when this algorithm is used to encode a stationary, ergodic source with a continuous alphabet, the Lagrangian performance (i.e., the resulting compression rate plus λ times the resulting distortion) converges with probability one to a quantity computable as the infimum of an information-theoretic functional over a set of auxiliary random variables and reproduction levels, where λ > 0 and −λ is designated to be the slope of the rate-distortion function R(D) of the source at some D; the quantity is close to R(D) + λD when the order k used in the arithmetic coding or the number of states in the decoders is large enough. An alternating minimization algorithm for computing the quantity is presented; this algorithm is based on a training sequence and in turn gives rise to a design algorithm for variable-rate trellis source codes. The resulting variable-rate trellis source codes are very efficient in low-rate regions (below 0.8 bits/sample). With k = 8, the mean-squared error encoding performance at the rate 1/2 bits/sample for memoryless Gaussian sources is comparable to that afforded by trellis-coded quantizers; with k = 8 and the number of states in the decoder = 32, the mean-squared error encoding performance at the rate 1/2 bits/sample for memoryless Laplacian sources is about 1 dB better than that afforded by the trellis-coded quantizers with 256 states. With k = 8 and the number of states in the decoder = 256, the mean-squared error encoding performance at rates of a fraction of 1 bit/sample for highly dependent Gauss-Markov sources with correlation coefficient 0.9 is within about 0.6 dB of the distortion-rate function.
Note that at such low rates, predictive coders usually perform poorly.
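The fixed-slope idea can be sketched with a memoryless toy encoder: each sample is mapped to the reproduction level minimizing (ideal codeword length) + λ · (squared error), so the encoder directly minimizes the per-sample Lagrangian cost rate + λ · distortion. The codebook and its probability model below are assumptions for illustration; the paper's algorithm instead uses a kth-order adaptive arithmetic code and a trellis-structured decoder.

```python
import math

def fixed_slope_encode(x, codebook, probs, lam):
    """Toy fixed-slope lossy encoder: per-sample minimization of
    -log2 p(c) + lam * (x_i - c)^2 over reproduction levels c.
    Returns the chosen indices and the empirical Lagrangian cost R + lam * D."""
    indices, bits, sq_err = [], 0.0, 0.0
    for xi in x:
        cost, j = min(
            (-math.log2(p) + lam * (xi - c) ** 2, k)
            for k, (c, p) in enumerate(zip(codebook, probs))
        )
        indices.append(j)
        bits += -math.log2(probs[j])            # ideal codeword length
        sq_err += (xi - codebook[j]) ** 2       # squared-error distortion
    n = len(x)
    return indices, bits / n + lam * sq_err / n

codebook = [-1.0, 0.0, 1.0]
probs = [0.25, 0.5, 0.25]
indices, cost = fixed_slope_encode([0.9, -0.1], codebook, probs, lam=4.0)
print(indices)   # -> [2, 1]
```

Larger λ tilts the trade-off toward lower distortion (and higher rate), which is why −λ plays the role of the slope of R(D).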
Papers by Zhen Zhang