
On the Estimation of Quantizer Reconstruction Levels

2006, IEEE Transactions on Instrumentation and Measurement


© 2006 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

Royal Institute of Technology (KTH), Department of Signals, Sensors & Systems, Signal Processing, S-100 44 Stockholm, Sweden

On the Estimation of Quantizer Reconstruction Levels

Henrik Lundin, Member, IEEE, Mikael Skoglund, Senior Member, IEEE, and Peter Händel, Senior Member, IEEE

Abstract
We consider the problem of estimating the optimal reconstruction levels of a fixed quantizer, such as the amplitude quantization part of an analog-to-digital converter. A probabilistic transfer function model is applied for the quantizer. Two different assumptions are made for the transfer function, and an estimator based on order statistics is applied. The estimator turns out to give better results in terms of mean square error than the commonly applied sample mean. The Cramér–Rao lower bound for the estimation problem is also considered, and simulation results indicate that the derived estimator is asymptotically efficient.

Index Terms: Quantizer, analog-to-digital converter, Cramér–Rao lower bound, characterization, estimation, order statistics.

I. INTRODUCTION

Post-correction of analog-to-digital converters (ADCs) has recently gained increasing interest among researchers. There are several reasons for this; to mention one, post-correction is considered a key technology for enabling so-called software defined radio receivers, or all-digital receivers [1]. Many different post-correction approaches and techniques have been proposed in the literature [2], but the vast majority of them boil down to replacing the output sample from the ADC with a value that is considered to be better, in some sense. Within the art of post-correction lies an estimation problem, namely how to find the best correction values for a specific ADC. There exist solid theories for what ideal values should be used to correct the ADC for a given input, and different estimators for these values have been proposed, but the theories are incomplete in the sense that they do not provide variance expressions for the estimates. Some work in this direction has been presented in [3], but only for the case of sine-wave histogram tests. Also, in [4] and [5] similar results were derived for the Gaussian and truncated Gaussian histogram test, respectively.

Fig. 1. A model of an ADC. The sample-and-hold is assumed to be ideal while the quantizer Q can introduce errors.

In the present paper, we analyze the problem of estimating the transfer function of an ADC. The estimation is assisted by some reference signal. A probabilistic ADC model is used. The model was introduced in the pioneering work of Giaquinto et al. [6], in which it was used to find the optimal correction terms in relation to the model. Two different scenarios are investigated in the present paper, and estimators are derived for them both. Also, a fundamental lower limit on the variance, the Cramér–Rao bound (cf. [7]), is calculated and compared with the performance of the derived estimators.
In particular, we show that in scenarios with a staircase transfer function and with an accurate reference signal available, a simple estimator taking the mean of the smallest and the largest sample significantly outperforms the traditionally used sample mean.

II. PROBABILISTIC QUANTIZER MODEL

In the sequel we model the ADC as an ideal sample-and-hold circuit followed by a nonideal but static quantizer, see Fig. 1. We disregard the sample-and-hold in the forthcoming analysis, and consider only the quantizer with discrete-time input and output signals. The quantizer is assumed to be b-bit, hence having M = 2^b quantization levels and M possible output values {x_k}_{k=0}^{M-1}.

There exist several different models for the quantizer transfer function of an ADC. In the present paper we apply the probabilistic model presented in [6]. The model, here denoted M, has a probabilistic transfer function where the input and output of the quantizer are modeled as random variables S and X, respectively. The two variables are linked through the probabilistic quantizer, resulting in a joint probability distribution f_{S,X}(s, x). The transfer function is defined by the conditional probability mass function (PMF) f_{X|S}(x_k|s) = Pr{X = x_k | S = s}.

The output value x_k, often referred to as the reconstruction level, is in the ADC case merely a number associated with the k-th quantization level.¹ It may be more or less representative of the input value that produced the output. This is the background for the concept of post-correction: is there a better value γ_k with which we want to represent the input values that give the output x_k?

¹In fact, the output of the ADC is a binary code. This code can be interpreted as the quantization level number k, while the midpoint voltage of the nominal quantization region would be used as reconstruction level x_k.

Lloyd [8] provided the conditions for optimal reconstruction levels to minimize the mean square error (MSE) between the input and the output of the quantizer, in the case of a deterministic staircase-type transfer function (partitioning the range of input values into M disjoint quantization regions {S_j}_{j=0}^{M-1}). The input value s = s(n) is regarded as drawn from a stochastic variable (s.v.) S with a probability density function (PDF) f_S(s). If the quantization regions {S_j} are assumed fixed, then the optimal reconstruction values {γ_j}, in the mean-squared sense, are given by

    \gamma_k = \arg\min_{\gamma} E[(\gamma - S)^2 \mid S \in \mathcal{S}_k] = \frac{\int_{s \in \mathcal{S}_k} s\, f_S(s)\, ds}{\int_{s \in \mathcal{S}_k} f_S(s)\, ds},    (1)

i.e., the optimal reconstruction value for each region is the "center of mass" of the region. The extension of this result to the non-deterministic transfer function of model M is that the optimal estimator is given by the conditional expectation of S given x_k, that is [6]

    \gamma_k = \int_{-\infty}^{+\infty} s\, f_{S|X}(s|x_k)\, ds = \int_{-\infty}^{+\infty} s\, \frac{f_{X|S}(x_k|s)\, f_S(s)}{p_X(x_k)}\, ds,    (2)

where p_X(x_k) is the probability mass function for the output random variable X and the last equality is obtained using Bayes' rule. The problem of finding the optimal reconstruction levels for the probabilistic ADC model used here has a strong connection to the problem of quantizer design for noisy communication channels, e.g. [9]. We see that the optimal reconstruction levels are functions of both the quantizer (ADC) under test and the "test signal" used. Thus, we take into account what we know about the signal.
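To make (1) and (2) concrete, the sketch below evaluates both expressions numerically on a grid. The 3-bit uniform quantizer, the Gaussian input PDF and the Gaussian code-transition noise used to construct f_{X|S} are illustrative assumptions for this example only; they are not taken from the paper.

```python
# Numerical sketch of (1) and (2). All model choices below (3-bit uniform
# quantizer, Gaussian input, Gaussian code-transition noise) are assumptions
# made for illustration.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

b = 3
M = 2 ** b
edges = np.linspace(-1.0, 1.0, M + 1)      # nominal transition levels (assumed)
f_S = stats.norm(loc=0.0, scale=0.5)       # input PDF f_S(s) (assumed)

s = np.linspace(-2.0, 2.0, 20001)          # integration grid
pdf = f_S.pdf(s)

# Eq. (1): centroid of each quantization region, deterministic staircase.
gamma_det = []
for k in range(M):
    lo, hi = edges[k], edges[k + 1]
    if k == 0:                             # lowest region extends to -infinity
        mask = s < hi
    elif k == M - 1:                       # highest region extends to +infinity
        mask = s >= lo
    else:
        mask = (s >= lo) & (s < hi)
    w = pdf * mask
    gamma_det.append(trapezoid(s * w, s) / trapezoid(w, s))

# Eq. (2): conditional mean E[S | X = x_k] for a probabilistic transfer
# function f_{X|S}; here the staircase is smeared by Gaussian transition
# noise with standard deviation sigma_t (assumed).
sigma_t = 0.02
upper = np.vstack([stats.norm.cdf((edges[k + 1] - s) / sigma_t) for k in range(M)])
lower = np.vstack([stats.norm.cdf((edges[k] - s) / sigma_t) for k in range(M)])
f_X_given_S = upper - lower                # row k holds Pr{X = x_k | S = s}
f_X_given_S[0] = upper[0]                  # no lower transition for the first code
f_X_given_S[-1] = 1.0 - lower[-1]          # no upper transition for the last code

p_X = trapezoid(f_X_given_S * pdf, s, axis=1)
gamma_prob = trapezoid(s * f_X_given_S * pdf, s, axis=1) / p_X

print(np.round(gamma_det, 3))
print(np.round(gamma_prob, 3))
```

As the assumed transition noise σ_t tends to zero, the transfer function approaches the indicator of each region and the levels from (2) approach the centroids from (1).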
From an ADC characterization point of view one might want to consider the reconstruction levels to be an inherent parameter of the ADC under test, and not of the input signal. This way of looking at the matter is most often found in conjunction with the static staircase quantizer transfer function model determined by the transition levels. The optimal reconstruction levels are then considered to be the midpoints between adjacent transition levels. This approach is in fact consistent with Lloyd's approach (1) above if the variable S is assumed to be symmetric within each quantization region.

In this section we have formulated the exact expressions (1) and (2) for the optimal reconstruction levels, under certain model assumptions. However, the first expression requires that we know the PDF f_S(s) of the input signal and the limits of the quantization region S_k, and the second expression depends on exact knowledge of the transfer statistics f_{X|S}(x_k|s) together with f_S(s). Neither of these cases is likely when characterizing an ADC. In fact, finding the unknown transfer function is the goal of the characterization procedure, and we must therefore find a way to estimate the reconstruction levels from a finite number of measurements. This is the objective of the next section.

III. ESTIMATING THE RECONSTRUCTION LEVELS

In this section the problem of estimating the optimal reconstruction levels of a quantizer from measurements is considered. Fig. 2(a) depicts the setup that is considered in the sequel. Assume that a signal s(n) is connected to the input of the quantizer under test and that N_tot output samples x(n), n = 1, 2, ..., N_tot, are recorded. The input samples are for now assumed to be independent and identically distributed realizations of an s.v. with PDF f_S(s). It is further assumed that a positive number N_k ≤ N_tot of samples result in the specific ADC output x(n) = x_k (the k-th output value), i.e., \sum_{k=1}^{M} N_k = N_tot (typically N_k ≪ N_tot).

An estimate of the input signal s(n) is also obtained. The estimate is denoted z(n) and is a perturbed version of the true input signal. The reference signal z(n) can for example be an estimate based on the measured samples x(1), ..., x(N_tot), or a measurement from a reference device. The frequently used sine-wave fit method [10], [11] is an example of a reference signal estimated from the output samples. The reference signal is modeled as the true input s(n) with an additive perturbation u(n), z(n) = s(n) + u(n). The perturbation can then be used to account for measurement errors, (reference device) quantization errors, reconstruction errors or estimation errors, as appropriate depending on how the reference signal is obtained.

We record N_tot samples of z(n). The subset of N_k samples for which the output x(n) = x_k is of special interest, and these samples are collected in a column vector of length N_k,

    z^{\{k\}} = [z_1^{\{k\}}\ z_2^{\{k\}}\ \cdots\ z_{N_k}^{\{k\}}]^T,

where the notation {k} denotes that the sample corresponds to an instant when the ADC produced x_k as output. In the same manner we collect the corresponding input samples in the column vector

    s^{\{k\}} = [s_1^{\{k\}}\ s_2^{\{k\}}\ \cdots\ s_{N_k}^{\{k\}}]^T.

That is, when the sample s_n^{\{k\}} was input, the reference signal was estimated to be z_n^{\{k\}}. (Note that s^{\{k\}} is unknown in the estimation problem.)
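A minimal sketch of this data-collection step is given below, with the reference taken directly as z(n) = s(n) + u(n). The 3-bit uniform quantizer, the Gaussian input and the Gaussian reference perturbation are assumptions made for the example; in practice z(n) would instead come from, e.g., a sine-wave fit or a reference instrument.

```python
# Sketch of the setup in Fig. 2(a): record N_tot quantizer outputs together
# with a perturbed reference, and collect the reference samples per code.
# The quantizer and the signal statistics are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

b, N_tot = 3, 100_000
M = 2 ** b
edges = np.linspace(-1.0, 1.0, M + 1)[1:-1]   # M - 1 transition levels (assumed)

s = rng.normal(0.0, 0.5, N_tot)               # input samples s(n), i.i.d. from f_S
x_idx = np.searchsorted(edges, s)             # output code k = 0, ..., M - 1
u = rng.normal(0.0, 0.01, N_tot)              # reference perturbation u(n)
z = s + u                                     # reference signal z(n) = s(n) + u(n)

# The vectors z^{k}: the N_k reference samples observed while the output was x_k.
z_k = {k: z[x_idx == k] for k in range(M)}
print([len(z_k[k]) for k in range(M)])        # the counts N_k, summing to N_tot
```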
The input to the quantizer is considered to be drawn from an s.v. S. If we further assume that the perturbation samples are independent realizations of an s.v. U with PDF f_U(u), then we can model the reference estimate as an s.v. Z = S + U. Fig. 2(a) illustrates the signal relationship. The resulting PDF of a sum of two independent variables is the convolution of the PDFs of the two terms. Thus, the PDF of Z is

    f_Z(z) = (f_S * f_U)(z) = \int_{-\infty}^{+\infty} f_S(z - \zeta)\, f_U(\zeta)\, d\zeta    (3)

if we assume independence between S and U. In the sequel the problem of estimating the optimal reconstruction level γ_k given the observation z^{\{k\}} is considered.

Fig. 2. The probabilistic ADC setup. Figure (a) shows the original ADC characterization problem with an input stimulus S, the resulting output X and a reference signal Z. Figure (b) shows the equivalent estimation problem setup of the type "DC level in noise."

A. Reformulating the estimation problem

Since we only consider the sample instances where the quantizer output was a specific x_k, the PDF for our observation of Z can be formulated as a conditional PDF f_{Z|X}(z|x_k). The variable Z is still the sum of S and U, but now we can describe S using the conditional statistics f_{S|X}(s|x_k). Thus, we have

    f_{Z|X}(z|x_k) = (f_{S|X} * f_U)(z) = \int_{-\infty}^{+\infty} f_{S|X}(z - \zeta|x_k)\, f_U(\zeta)\, d\zeta.    (4)

Assume that the conditional PDF f_{Z|X}(z|x_k) is symmetric around a location parameter g_k, meaning that f_{Z|X}(z|x_k) depends on g_k only as a shift by g_k along the z-axis. If U is assumed to be zero-mean, then symmetry around g_k is implied for f_{S|X}(s|x_k) as well. Define the stochastic variable

    W_k = S|X - g_k,    (5)

which then of course has the conditional PDF

    f_{W_k|X}(w|x_k) = f_{S|X}(w + g_k|x_k),    (6)

i.e., the location parameter of f_{S|X} is removed, repositioning the PDF at the origin. The s.v. W_k is thus independent of g_k. With these premises it is easy to see from (2) that γ_k = g_k. Thus, estimating the optimal reconstruction levels is in this case equivalent to estimating the location parameter of f_{Z|X}(z|x_k). We can now reformulate our observation Z as a constant (DC) level g_k in two additive noises W_k and U:

    Z = g_k + W_k + U,  when x(n) = x_k.    (7)

Fig. 2(b) depicts the modified estimation problem setup.

Two different assumptions for the distribution of W_k will be considered. In the first case, W_k is assumed to be zero-mean Gaussian with variance σ_k^2, and in the second case W_k is assumed to be zero-mean uniform with width ∆_k (variance ∆_k^2/12). In both cases U is assumed to be zero-mean Gaussian with variance σ_U^2. Since we are now only considering one specific quantization level, namely the k-th level, both the index k and the explicit conditioning of the PDFs will be omitted for brevity. Thus, g, σ^2, ∆, z and z_i will in the sequel represent g_k, σ_k^2, ∆_k, z^{\{k\}} and z_i^{\{k\}}, respectively.

B. Gaussian W and U

The first scenario, motivated by its simplicity, is where W ∈ N(0, σ^2) and U ∈ N(0, σ_U^2). Hence, the observed variable Z = g + W + U is also Gaussian with PDF

    f_Z(z) = \frac{1}{\sqrt{2\pi(\sigma^2 + \sigma_U^2)}} \exp\left(-\frac{(z - g)^2}{2(\sigma^2 + \sigma_U^2)}\right).    (8)

We know from the literature (e.g. [7]) that the sample mean is the best unbiased estimator of g in the mean-square sense in this case. That is,

    \hat{g}_{sm}(\mathbf{z}) = \frac{1}{N} \sum_{i=1}^{N} z_i    (9)

is the estimator with the lowest variance and zero bias (the index 'sm' denotes sample mean). The variance is also straightforward to calculate:

    var(\hat{g}_{sm}(\mathbf{z})) = \frac{\sigma^2 + \sigma_U^2}{N}.    (10)

In fact, this is the variance of the sample mean whenever the two additive noises are independent and zero-mean with variances σ^2 and σ_U^2, regardless of their distributions.
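As a quick sanity check of (9) and (10), and of the remark that (10) holds regardless of the noise distributions, the sketch below estimates the variance of the sample mean by simulation with a uniform W and a Gaussian U; all numerical values are assumptions chosen for the example.

```python
# Monte-Carlo check of (10): variance of the sample mean of Z = g + W + U,
# here with W uniform (variance delta^2/12) and U Gaussian. All values assumed.
import numpy as np

rng = np.random.default_rng(1)
g, delta, sigma_u, N, trials = 0.3, 0.1, 0.02, 10, 200_000

w = rng.uniform(-delta / 2, delta / 2, (trials, N))     # W, variance delta^2/12
u = rng.normal(0.0, sigma_u, (trials, N))               # U ~ N(0, sigma_u^2)
g_sm = (g + w + u).mean(axis=1)                         # sample mean, Eq. (9)

print(g_sm.var())                                       # empirical variance
print((delta ** 2 / 12 + sigma_u ** 2) / N)             # Eq. (10) with sigma^2 = delta^2/12
```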
In this case, the total variance σ̄^2 = σ^2 + σ_U^2 can be estimated from the recorded data, but not the individual variances. Further, one may note that superefficient estimators do exist for estimating σ̄^2 [12]. (A superefficient estimator possesses a mean-square error lower than the Cramér–Rao bound (cf. Sect. IV) at the expense of non-zero bias.)

C. Uniform W and Gaussian U

The second case results from modifying the model M to coincide with the classic staircase transfer function and using an input signal whose PDF f_S(s) is uniform within each step of the transfer function² (e.g., a ramp signal or uniform noise, but also when the step size is small compared with the variability of the PDF f_S(s)). In this case, the distribution of the input S given the output X (f_{S|X}) is uniform with some width ∆ (possibly dependent on k, again omitted for brevity). Therefore, the PDF for W in the equivalent model shown in Fig. 2(b) is in this case

    f_W(w) = \begin{cases} \frac{1}{\Delta}, & |w| \leq \frac{\Delta}{2}, \\ 0, & \text{otherwise.} \end{cases}    (11)

²This case can also be obtained by using the traditional deterministic staircase transfer function instead of M.

The PDF for U is still assumed to be zero-mean Gaussian with variance σ_U^2, so the resulting PDF for W + U becomes

    f_{W+U}(v) = \int_{-\Delta/2}^{\Delta/2} \frac{1}{\Delta\sqrt{2\pi\sigma_U^2}} \exp\left(-\frac{(v - \tau)^2}{2\sigma_U^2}\right) d\tau.    (12)

Define the stochastic variable V ≜ (W + U)/∆. Straightforward derivations from (12) give that the PDF for V becomes

    f_V(v) = \int_{-1/2}^{1/2} \frac{1}{\sqrt{2\pi}\,\sigma_U/\Delta} \exp\left(-\frac{(v - \tau)^2}{2(\sigma_U/\Delta)^2}\right) d\tau = \frac{1}{2}\,\mathrm{erf}\left(\frac{v + \frac{1}{2}}{\sqrt{2}\,\rho}\right) - \frac{1}{2}\,\mathrm{erf}\left(\frac{v - \frac{1}{2}}{\sqrt{2}\,\rho}\right),    (13)

where ρ ≜ σ_U/∆. We see from (13) that V depends on ∆ and σ_U only through ρ, which can be interpreted as a shape parameter. When ρ → 0 the distribution of V approaches a uniform distribution with unit width, while for large ρ, V approaches a Gaussian distribution with variance ρ^2. In fact, in the context of modeling quantized data using noise, it has been claimed [13] that when ρ is approximately larger than 0.35 the distribution of V can be considered Gaussian, a claim that is further investigated at the end of this section.

So far, we have reformulated the estimation problem as: estimate g from N independent observations of Z = ∆V + g, where the PDF of V depends on the parameter ρ. This setting was considered in [14], where an estimator for g was derived using order statistics, i.e., an estimator working on a batch of samples arranged by their magnitude. Consider again our batch z of observation samples. Let z_(i), i = 1, ..., N, be the ordered samples with z_(1) the smallest and z_(N) the largest, that is z_(1) ≤ ··· ≤ z_(i) ≤ ··· ≤ z_(N). The estimator should be linear in the ordered samples³, i.e.,

    \hat{g}_{os}(\mathbf{z}) = \sum_{i=1}^{N} \alpha_i z_{(i)},    (14)

where α ≜ [α_1 ... α_N]^T is the vector of filter coefficients to be determined. In [14] it was shown that the best (in the mean-square sense) unbiased estimator, linear in the ordered samples, is given by the coefficient vector

    \alpha = \frac{(R^{(V)})^{-1}\,\mathbf{1}}{\mathbf{1}^T (R^{(V)})^{-1}\,\mathbf{1}},    (15)

with a resulting variance

    var(\hat{g}_{os}) = \frac{\Delta^2}{\mathbf{1}^T (R^{(V)})^{-1}\,\mathbf{1}}.    (16)

In the above equations 1 is a column vector of all ones, and R^{(V)} denotes the correlation matrix for an ordered batch of N independent samples drawn from V, i.e., the (k, ℓ) element of R^{(V)} is E(V_(k) V_(ℓ)).

³Note, however, that this estimator is nonlinear in the observed samples z, since determining the sample order is a nonlinear operation.

The correlation matrix R^{(V)} is difficult to compute analytically for the PDF (13). Therefore, it has been calculated numerically for some exemplary values of ρ and N. The integrals (22) and (23) in the Appendix, describing the correlations, have been solved using a Monte-Carlo technique to obtain numerical values for R^{(V)}. From these, the optimal filter coefficients were calculated using (15).
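A sketch of this numerical procedure follows: R^{(V)} is estimated by sorting many simulated batches of V (a uniform variable on [−1/2, 1/2] plus a zero-mean Gaussian with standard deviation ρ), after which the coefficients follow from (15) and the normalized variance from (16). The batch count and ρ = 0.01 are assumptions chosen for the example.

```python
# Monte-Carlo estimate of the order-statistic correlation matrix R^(V) and of
# the coefficients alpha in (15); the batch count and rho are assumed values.
import numpy as np

def order_statistic_coefficients(rho, N, n_batches=200_000, seed=0):
    rng = np.random.default_rng(seed)
    # Draw n_batches independent batches of N samples of V and sort each batch.
    v = rng.uniform(-0.5, 0.5, (n_batches, N)) + rng.normal(0.0, rho, (n_batches, N))
    v.sort(axis=1)
    R = v.T @ v / n_batches                 # R[k, l] approximates E[V_(k) V_(l)]
    w = np.linalg.solve(R, np.ones(N))      # R^{-1} 1
    alpha = w / w.sum()                     # Eq. (15)
    var_norm = 1.0 / w.sum()                # var(g_os) / Delta^2, Eq. (16)
    return alpha, var_norm

alpha, var_norm = order_statistic_coefficients(rho=0.01, N=10)
print(np.round(alpha, 3))                   # small rho: close to [0.5, 0, ..., 0, 0.5]
print(var_norm)
```

For small ρ the coefficients approach (1/2, 0, ..., 0, 1/2), and for large ρ they approach 1/N, in line with the discussion of Fig. 3 below.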
Fig. 3. The filter coefficients α_i as a function of ρ for N = 10.

Fig. 3 shows exemplary solutions for N = 10 and different values of ρ. We see that for high values of ρ all α_i go to 1/N (0.1 in this case), i.e., the ordinary sample mean. For low ρ, on the other hand, we see that the solution approaches α_1 = α_N = 1/2 with the remaining coefficients zero. That is, in the limiting case when ρ → 0 the best estimate of g based on ordered samples is simply the mean of the smallest and the largest sample. These two asymptotic results align perfectly with the results in [14], where the limiting cases (purely uniform and purely Gaussian distribution, respectively) were investigated. Finally, the white line visible in the plot, where the two "fins" disappear into the surface, marks where ρ = 0.35 (or 20 log10 ρ ≈ −9). We see that for ρ larger than this, the surface is almost flat at 1/N, which supports the result from [13] that the distribution of V is approximately Gaussian in that region, implying that the sample mean is optimal.
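As a self-contained check of this approximation claim, the sketch below compares f_V from (13), written equivalently as a difference of Gaussian CDFs, with a Gaussian density of the same variance 1/12 + ρ^2. The comparison metric (maximum pointwise PDF difference) and the grid are choices made for this example.

```python
# Compare f_V from (13) with a Gaussian of equal variance for a few values
# of rho; the metric and the grid are illustrative choices.
import numpy as np
from scipy import stats

def max_pdf_gap(rho):
    v = np.linspace(-4.0, 4.0, 40001)
    # Eq. (13) rewritten with the Gaussian CDF: Phi((v+1/2)/rho) - Phi((v-1/2)/rho).
    f_v = stats.norm.cdf((v + 0.5) / rho) - stats.norm.cdf((v - 0.5) / rho)
    gauss = stats.norm.pdf(v, scale=np.sqrt(1.0 / 12.0 + rho ** 2))
    return np.max(np.abs(f_v - gauss))

for rho in (0.05, 0.2, 0.35, 1.0):
    print(rho, max_pdf_gap(rho))
```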
IV. MIDPOINT ESTIMATION CRAMÉR–RAO BOUND

The Cramér–Rao Bound (CRB) is a theoretical lower bound on the variance of any unbiased estimator of the parameter of interest, given the considered problem formulation and model. In this section we will pursue the CRB for the estimation problem described in the previous section.

Assume that the ADC under test follows the model M introduced earlier. The observation at hand is the N_k samples of the reference signal z(n), collected in z^{\{k\}} = z = [z_1 z_2 ... z_{N_k}]^T as described above. We seek the probability of this observation z given that we are considering samples when x(n) = x_k only, i.e., f(z | x = x_k · 1). Under the model assumption M the input S and the output X of the ADC are related through their joint PMF, from which we can derive the conditional PDF f_{S|X}(s|x_k). Under the assumption that the disturbance U is independent of both S and X, the PDF f(z_i | x_i = x_k) for the single sample z_i was given in (4). The total PDF for the observation z, when the samples are assumed to be independent over time, is found by taking the product over i = 1, 2, ..., N_k, i.e.,

    f(\mathbf{z} \mid \mathbf{x} = x_k \cdot \mathbf{1}) = \prod_{i=1}^{N_k} f(z_i \mid x_i = x_k).    (17)

The CRB for an unbiased estimator ĝ_k(z) can now be found from (17) as [7]

    var(\hat{g}_k(\mathbf{z})) \geq \left(-E\left[\frac{\partial^2 \ln f(\mathbf{z} \mid \mathbf{x} = x_k \cdot \mathbf{1})}{\partial g_k^2}\right]\right)^{-1},    (18)

provided that the regularity condition

    E\left[\frac{\partial \ln f(\mathbf{z} \mid \mathbf{x} = x_k \cdot \mathbf{1})}{\partial g_k}\right] = 0    (19)

holds for all values of g_k. In the following two subsections the CRB for the two different cases introduced in Sect. III will be investigated. In the all-Gaussian case we will obtain a closed-form analytic solution, while in the case of uniform and Gaussian distributions we will again resort to numerical methods.

A. Gaussian W and U

We make the same assumptions as in Sect. III-B, i.e., W ∈ N(0, σ^2) and U ∈ N(0, σ_U^2). The PDF for a single observation of Z is then (8). As we have already pointed out, this is a problem of estimating a constant in additive zero-mean Gaussian noise with variance σ^2 + σ_U^2. Again, this is a standard problem found in the literature (e.g. [7]), and the CRB for this problem is easily calculated using (18) as

    var(\hat{g}_k(\mathbf{z})) \geq \left(-E\left[\frac{\partial^2 \ln f(\mathbf{z} \mid \mathbf{x} = x_k \cdot \mathbf{1})}{\partial g_k^2}\right]\right)^{-1} = \frac{\sigma^2 + \sigma_U^2}{N}.    (20)

We see that the estimator ĝ_sm(z) in Sect. III-B has a variance that attains the CRB, and the sample mean is thus statistically efficient in this case.

B. Uniform W and Gaussian U

The assumptions in Sect. III-C are made once more. However, because of the rather complicated PDF for W + U in (12), we again have to resort to numerical solutions. The normalized variable V with PDF (13) is considered. If the CRB for a parameter g (the index k is as before omitted for brevity) observed in the presence of V is calculated, we can obtain the CRB when V is replaced with ∆V simply by scaling the bound by ∆^2 (in the same way that the variance changes with a scaling factor).

First, the regularity condition (19) is verified for this problem. The PDF for a single observation z of Z = V + g is in this case f_V(z − g), and inserting into (19) yields

    E\left[\frac{\partial \ln f_V(z - g)}{\partial g}\right] = \int_{-\infty}^{\infty} \frac{\partial \ln f_V(z - g)}{\partial g}\, f_V(z - g)\, dz = \int_{-\infty}^{\infty} \frac{\partial f_V(z - g)}{\partial g}\, dz = \frac{\partial}{\partial g} \int_{-\infty}^{\infty} f_V(z - g)\, dz = \frac{\partial}{\partial g}\, 1 = 0,    (21)

since f_V(z − g) > 0 for all z and ρ > 0 (otherwise the limits of integration would depend on g and the order of integration and differentiation could not be interchanged). After verifying that the regularity condition is met, the CRB should be calculated from (18). As mentioned above, the CRB has only been calculated numerically for the PDF f_V(z − g), due to the heavy integration required to solve it analytically. The resulting CRB is presented in the next section along with the simulation results in Fig. 4.

V. SIMULATIONS

In order to verify the results obtained above, some simulations have been carried out. The purpose of the experiment was to evaluate the estimators obtained in Sect. III and compare them with the Cramér–Rao bounds obtained in Sect. IV. However, the results from the case where both W and U are Gaussian are omitted, since these only verify the standard results reported numerous times in the literature. The emphasis is instead on the scenario with uniform W and Gaussian U. More specifically, we are interested in assessing the difference between the sample mean (ĝ_sm), the order statistics estimator (ĝ_os) and a third estimator (ĝ_mm) which is the mean of the smallest and the largest sample (equivalent to ĝ_os when ρ → 0).

The experiment was set up according to Fig. 2(b), i.e., an unknown constant g is corrupted by two independent additive white noises W and U, with the former being uniform in [−∆/2, ∆/2] and the latter zero-mean Gaussian with variance σ_U^2. In each experiment N samples were taken and the following estimators were calculated:

1) Order statistics estimator ĝ_os in (14) with α from (15).
2) Sample mean ĝ_sm in (9).
3) Min-max estimator ĝ_mm = (z_(1) + z_(N))/2.
4) Maximum likelihood estimator (MLE), calculated using numerical maximization of the likelihood function f_{Z|X}(z | x = x_k · 1; g) with respect to g, under the Gaussian-uniform assumption of Sect. III-C.

The parameter ρ was varied (as before ρ ≜ σ_U/∆), and for each value of ρ the experiment was repeated 10000 times. The mean square error (MSE) between the estimated values and the true value was calculated. The results are plotted in Fig. 4.
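A minimal, self-contained sketch of this experiment (with ∆ = 1 and the MLE omitted for brevity) could look as follows; the number of repetitions, the batch size used to estimate R^{(V)} and the chosen values of ρ are assumptions made for the example.

```python
# Sketch of the simulation: empirical MSE of g_os, g_sm and g_mm for a few
# values of rho, with Delta = 1. Counts and rho values are assumed; the MLE
# from the list above is omitted.
import numpy as np

rng = np.random.default_rng(2)
N, trials, g, delta = 10, 10_000, 0.0, 1.0

def os_coefficients(rho, N, n_batches=100_000):
    # Monte-Carlo estimate of R^(V) and of the coefficients alpha in (15).
    v = rng.uniform(-0.5, 0.5, (n_batches, N)) + rng.normal(0.0, rho, (n_batches, N))
    v.sort(axis=1)
    w = np.linalg.solve(v.T @ v / n_batches, np.ones(N))
    return w / w.sum()

for rho in (0.001, 0.1, 1.0):
    alpha = os_coefficients(rho, N)
    z = g + delta * rng.uniform(-0.5, 0.5, (trials, N)) \
          + rng.normal(0.0, rho * delta, (trials, N))
    z_sorted = np.sort(z, axis=1)
    g_os = z_sorted @ alpha                          # order statistics, Eq. (14)
    g_sm = z.mean(axis=1)                            # sample mean, Eq. (9)
    g_mm = 0.5 * (z_sorted[:, 0] + z_sorted[:, -1])  # min-max estimator
    print(rho,
          np.mean((g_os - g) ** 2),                  # MSE (equals MSE/Delta^2 here)
          np.mean((g_sm - g) ** 2),
          np.mean((g_mm - g) ** 2))
```

For small ρ the min-max estimator should essentially match the order statistics estimator, while for large ρ the sample mean should, mirroring the behaviour reported in Fig. 4.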
The first observation is that the order statistics estimator ĝ_os always has the lowest MSE, coinciding with the theoretical value (16) (solid line). The maximum likelihood estimator ('∗') of g also results in the same MSE as ĝ_os for all ρ. We see that for 20 log10 ρ > −10 the MSE of the sample mean estimator ĝ_sm closely follows that of the order statistics estimator. Again, this verifies the result from [13] that for ρ large enough the distribution of V is approximately Gaussian, implying that the sample mean is optimal in the mean-square sense for those values of ρ. For low ρ, on the other hand, the estimator based on the largest and the smallest samples (ĝ_mm, marked with '♦') actually approaches the performance of the order statistics estimator. This is not surprising, since we know from Sect. III that the coefficients of the order statistics estimator approach α_1 = α_N = 1/2 (with the remaining coefficients zero), which is equivalent to ĝ_mm.

Another observation is that the order statistics estimator ĝ_os attains the CRB (dashed line) above a certain ρ, and that this breakpoint ρ decreases when N is increased from 10 to 100 (cf. Figs. 4(a) and 4(b)). Also, we know from estimation theory (e.g. [7]) that the MLE, if it exists, attains the CRB when N → ∞. Since the numerical results indicate that the order statistics estimator performs equally well as the MLE, we conjecture that the order statistics estimator ĝ_os is asymptotically efficient (in the number of samples N). That is, it seems reasonable to believe that the order statistics estimator attains the CRB for all ρ > 0 when N → ∞.

Fig. 4. Mean squared error results from simulations, for (a) N = 10 and (b) N = 100. In both plots the performance in terms of MSE is plotted for four estimators of g: optimized order statistics ĝ_os, sample mean ĝ_sm, the mean of the minimum and maximum samples ĝ_mm ('♦'), and the MLE ('∗'). Also, the theoretical variance for ĝ_os and ĝ_sm, and the CRB, are plotted using solid, dash-dotted and dashed lines, respectively.

VI. CONCLUSIONS

We have considered the problem of estimating the optimal reconstruction levels {γ_k} of a quantizer. It was initially shown that under a probabilistic quantizer model this problem is equivalent to estimating the location parameter of a probability distribution. Two different cases were examined, and it was illustrated that for a frequently used model assumption a simple estimator, based on only the smallest and the largest samples, actually outperforms the sample mean when the uncertainty of the reference signal is small (ρ is small).

APPENDIX

The correlation E(V_(k) V_(ℓ)) is given by (see e.g. [15])

    E(V_{(k)} V_{(\ell)}) = \int_{-\infty}^{\infty} \int_{-\infty}^{v_{(\ell)}} v_{(k)}\, v_{(\ell)}\, \frac{N!}{(k-1)!\,(\ell-k-1)!\,(N-\ell)!}\, F_V(v_{(k)})^{k-1} \left[F_V(v_{(\ell)}) - F_V(v_{(k)})\right]^{\ell-k-1} \left[1 - F_V(v_{(\ell)})\right]^{N-\ell} f_V(v_{(k)})\, f_V(v_{(\ell)})\, dv_{(k)}\, dv_{(\ell)}    (22)

for k ≠ ℓ, and

    E(V_{(k)}^2) = \int_{-\infty}^{\infty} v_{(k)}^2\, N \binom{N-1}{k-1} F_V(v_{(k)})^{k-1} \left[1 - F_V(v_{(k)})\right]^{N-k} f_V(v_{(k)})\, dv_{(k)}    (23)

for k = ℓ. Here, F_V(v) is the cumulative distribution function of the variable V, and subscripts within parentheses denote ordered realizations.

REFERENCES

[1] D. Hummels, "Performance improvement of all-digital wide-bandwidth receivers by linearization of ADCs and DACs," Measurement, vol. 31, no. 1, pp. 35–45, Jan. 2002.
[2] E. Balestrieri, P. Daponte, and S. Rapuano, "A state of the art on ADC error compensation methods," in Proceedings IEEE Instrumentation and Measurement Technology Conference, vol. 1, Como, Italy, May 2004, pp. 711–716.
[3] P. Carbone, E. Nunzi, and D. Petri, "Statistical efficiency of the ADC sinewave histogram test," IEEE Transactions on Instrumentation and Measurement, vol. 51, no. 4, pp. 849–852, 2002.
[4] A. Moschitta, P. Carbone, and D. Petri, "Statistical performance of Gaussian ADC histogram test," in Proceedings of the 8th International Workshop on ADC Modelling and Testing, Perugia, Italy: IMEKO, Sept. 2003, pp. 213–217.
[5] N. Björsell and P. Händel, "Benefits with truncated Gaussian noise in ADC histogram tests," in Proceedings of the 9th Workshop on ADC Modelling and Testing, vol. 2, Athens, Greece: IMEKO, Sept. 2004, pp. 787–792.
[6] N. Giaquinto, M. Savino, and A. Trotta, "Testing and optimizing ADC performance: A probabilistic approach," IEEE Transactions on Instrumentation and Measurement, vol. 45, no. 2, pp. 621–626, Apr. 1996.
[7] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Englewood Cliffs, NJ: Prentice-Hall, 1993.
[8] S. P. Lloyd, "Least squares quantization in PCM," IEEE Transactions on Information Theory, vol. IT-28, no. 2, pp. 129–137, Mar. 1982.
[9] A. J. Kurtenbach and P. A. Wintz, "Quantizing for noisy channels," IEEE Transactions on Communication Technology, vol. COM-17, no. 2, pp. 291–302, Apr. 1969.
[10] IEEE Standard for Terminology and Test Methods for Analog-to-Digital Converters, IEEE Std. 1241, 2000.
[11] P. Händel, M. Skoglund, and M. Pettersson, "A calibration scheme for imperfect quantizers," IEEE Transactions on Instrumentation and Measurement, vol. 49, pp. 1063–1068, Oct. 2000.
[12] P. Stoica and R. L. Moses, "On biased estimators and the unbiased Cramér–Rao lower bound," Signal Processing, vol. 21, no. 4, pp. 349–350, Dec. 1990.
[13] M. Bertocco, C. Narduzzi, P. Paglierani, and D. Petri, "A noise model for digitized data," IEEE Transactions on Instrumentation and Measurement, vol. 49, no. 1, pp. 83–86, Feb. 2000.
[14] E. H. Lloyd, "Least-squares estimation of location and scale parameters using order statistics," Biometrika, vol. 39, pp. 88–95, 1952.
[15] J. Astola and P. Kuosmanen, Fundamentals of Nonlinear Digital Filtering. Boca Raton, FL: CRC Press LLC, 1997.

Henrik Lundin (S'01–M'06) was born in Stockholm, Sweden, in 1976. He received the M.Sc. degree in Electrical Engineering from the Royal Institute of Technology (KTH), Stockholm, Sweden, in 2000. In 2001 he joined KTH Signals, Sensors and Systems, from where he received his Signal Processing Lic.Eng. and Ph.D. degrees in 2003 and 2005, respectively. His research interests include signal processing and analog-to-digital conversion. Dr. Lundin is now with Global IP Sound AB, Stockholm, Sweden.

Mikael Skoglund (S'93–M'97–SM'04) received the Ph.D. degree in 1997 from Chalmers University of Technology, Göteborg, Sweden. In 1997 he joined the Royal Institute of Technology (KTH), Stockholm, Sweden, where he held various positions until he was appointed Professor of Communication Theory in October 2003. Dr. Skoglund's research interests are in information theory, communications, and detection and estimation. He has worked on problems in vector quantization, combined source–channel coding, coding for wireless communications, space–time coding and statistical signal processing. Dr. Skoglund has authored some 90 scientific papers.
Several of these have received best paper awards, and one recent journal paper ranks as 'highly cited' according to the ISI Essential Science Indicators. Dr. Skoglund frequently serves as area expert and reviewer for research grants and publications, and he is an Associate Editor of the IEEE Transactions on Communications. Dr. Skoglund has also consulted for industry, and he holds 6 patents. He is a Senior Member of the IEEE.

Peter Händel (S'88–M'94–SM'98) received the M.Sc. degree in Engineering Physics and the Ph.D. degree in Automatic Control, both from the Department of Technology, Uppsala University, Uppsala, Sweden, in 1987 and 1993, respectively. During 1987–88, he was at The Svedberg Laboratory, Uppsala University. Between 1988 and 1993, he was at the Systems and Control Group, Uppsala University. In 1996, he was appointed Docent at Uppsala University. During 1993–97, he was with the Research and Development Division, Ericsson AB, Kista, Sweden. During the academic year 1996/97, Dr. Händel was a Visiting Scholar at the Signal Processing Laboratory, Tampere University of Technology, Tampere, Finland. In 1998, he was appointed Docent at the same university. Since August 1997, he has been with the Signal Processing Laboratory, Royal Institute of Technology, Stockholm, Sweden, where he is currently Professor in Signal Processing. Dr. Händel is a former President of the IEEE Finland joint Signal Processing and Circuits and Systems Chapter. Currently he is President of the IEEE Sweden Signal Processing Chapter. He is a registered engineer (EUR ING). He has published some 35 journal articles and 65 conference papers, and holds 10 patents. He has conducted research in a wide area including design and analysis of digital filters and adaptive filters, measurement and estimation theory (especially temporal frequency estimation), system identification, speech processing, and measurement methods for characterization of analog-to-digital converters and power amplifiers. He is a member of the editorial board of the EURASIP Journal on Applied Signal Processing. He is Associate Editor of the IEEE Transactions on Signal Processing.