Speech Enhancement With Natural Sounding Residual Noise Based On Connected Time-Frequency Speech Presence Regions


EURASIP Journal on Applied Signal Processing 2005:18, 2954–2964

© 2005 K. V. Sørensen and S. V. Andersen




Speech Enhancement with Natural Sounding Residual Noise Based on Connected Time-Frequency Speech Presence Regions

Karsten Vandborg Sørensen
Department of Communication Technology, Aalborg University, DK-9220 Aalborg East, Denmark
Email: [email protected]

Søren Vang Andersen
Department of Communication Technology, Aalborg University, DK-9220 Aalborg East, Denmark
Email: [email protected]
Received 13 May 2004; Revised 3 March 2005
We propose time-frequency domain methods for noise estimation and speech enhancement. A speech presence detection method
is used to find connected time-frequency regions of speech presence. These regions are used by a noise estimation method and both
the speech presence decisions and the noise estimate are used in the speech enhancement method. Different attenuation rules are
applied to regions with and without speech presence to achieve enhanced speech with natural sounding attenuated background
noise. The proposed speech enhancement method has a computational complexity, which makes it feasible for application in
hearing aids. An informal listening test shows that the proposed speech enhancement method has significantly higher mean
opinion scores than minimum mean-square error log-spectral amplitude (MMSE-LSA) and decision-directed MMSE-LSA.
Keywords and phrases: speech enhancement, noise estimation, minimum statistics, speech presence detection.

1. INTRODUCTION

The performance of many speech enhancement methods relies mainly on the quality of a noise power spectral density
(PSD) estimate. When the noise estimate differs from the
true noise, it will lead to artifacts in the enhanced speech.
The approach taken in this paper is based on connected region speech presence detection. Our aim is to exploit spectral and temporal masking mechanisms in the human auditory system [1] to reduce the perception of these artifacts in
speech presence regions and eliminate the artifacts in speech
absence regions. We achieve this by leaving downscaled natural sounding background noise in the enhanced speech in
connected time-frequency regions with speech absence. The
downscaled natural sounding background noise will spectrally and temporally mask artifacts in the speech estimate
while preserving the naturalness of the background noise.
(This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.)

In the definition of speech presence regions, we are inspired by the work of Yang [2]. Yang demonstrates high perceptual quality of a speech enhancement method where constant gain is applied in frames with no detected speech presence. Yang lets a single decision cover a full frame. Thus, musical noise is present in the full spectrum of the enhanced
speech in frames with speech activity. We therefore extend
the notion of speech presence to individual time-frequency
locations. This, in our experience, significantly improves the
naturalness of the residual noise. The speech enhancement
method, proposed in this paper, thereby eliminates audible
musical noise in the enhanced speech. However, fluctuating
speech presence decisions will reduce the naturalness of the
enhanced speech and the background noise. Thus, reasonably connected regions of the same speech presence decision
must be established.
To achieve this, we use spectral-temporal periodogram smoothing. To this end, we make use of the spectral-temporal smoothing method by Martin and Lotter [3], which extends the original groundbreaking work of Martin [4, 5]. Martin and Lotter derive optimum smoothing coefficients for (generalized) χ²-distributed spectrally smoothed spectrograms, which are particularly well suited for noise types with a smooth power spectrum. The underlying assumption in this approach is that the real and imaginary parts of the associated STFT coefficients for the averaged periodograms have the same means and variances. For the application of


spectral-temporal smoothing to obtain connected regions


of speech presence decisions, we augment Martin and Lotter's smoothing method with the spectral smoothing method used by Cohen and Berdugo [6].
For minimum statistics noise estimation, Martin [5] has
suggested a theoretically founded bias compensation factor,
which is a function of the minimum search window length,
the smoothed noisy speech, and the noise PSD estimate variances. This enables a low-biased noise estimate that does not
rely on a speech presence detector. However, as our proposed
speech enhancement method has connected speech presence
regions as an integrated component, this enables us to make
use of a new, simple, yet efficient bias compensation. To verify the performance of the new bias compensation, we objectively evaluate the noise estimation method that uses this
bias compensation of minimum tracks from our spectrally
temporally smoothed periodograms, prior to integrating this
noise estimate in the final speech enhancement method.
As a result, our proposed speech enhancement algorithm
has a low computational complexity, which makes it particularly relevant for application in digital signal processors with
limited computational power, such as those found in digital
hearing aids. In particular, the obtained algorithm provides
a significantly higher perceived quality than our implementation of the decision-directed minimum mean-square error log-spectral amplitude (MMSE-LSA-DD) estimator [7]
when evaluated in listening tests. Furthermore, the noise
PSD estimate that we use to obtain a noise magnitude spectrum estimate for the attenuation rule in connected regions
of speech presence is shown to be superior to estimates from
minimum statistics (MS) noise estimation [5] and our implementation of χ²-based noise estimation [3] for spectrally smooth noise types.
The rest of this paper is organized as follows. In Section 2,
we describe the signal model and give an overview of the proposed algorithm. In Section 3, we list the necessary equations
to perform the spectral-temporal periodogram smoothing.
Section 4 contains a description of our detector for connected speech presence regions, and in Section 5, we describe
how the spectrally temporally smoothed periodograms and
the speech presence regions can be used to obtain both a
noise PSD estimate and a noise periodogram estimate, which
both rely on the new bias compensation. In the latter noise
estimation method, we estimate the squared magnitudes of
the noise short-time Fourier transform (STFT) coefficients.
In Section 6, the connected region speech presence detector
is introduced in a speech enhancement method with the purpose of reducing noise and augmenting listening comfort.
Section 7 contains the experimental setup and all necessary
initializations. Finally, Section 8 describes the experimental
results and Section 9 concludes the paper with a discussion
of the proposed methods and obtained results.

2. STRUCTURE OF THE ALGORITHM

After an introduction to the signal model, we give a structural description of the algorithm to provide an algorithmic overview before the individual methods, which constitute the algorithm, are described in detail.

2.1. Signal model

We assume that noisy speech y(i) at sampling time index i consists of speech s(i) and additive noise n(i). For joint time-frequency analysis of y(i), we apply the K-point STFT, that is,

Y(λ, k) = Σ_{μ=0}^{L−1} y(λR + μ) h(μ) exp(−j2πkμ/K),   (1)

where λ ∈ Z is the (subsampled) time index, k ∈ {0, 1, ..., K − 1} is the frequency index, and L is the window length. In this paper, L equals K. The quantity R is the number of samples that successive frames are shifted, and h(μ) is a unit-energy window function, that is, Σ_{μ=0}^{L−1} h²(μ) = 1. From the linearity of (1), we have that

Y(λ, k) = S(λ, k) + N(λ, k),   (2)

where S(λ, k) and N(λ, k) are the STFT coefficients of speech s(i) and additive noise n(i), respectively. We further assume that s(i) and n(i) are zero mean and statistically independent, which leads to a power relation where the noise is additive [8], that is,

E[|Y(λ, k)|²] = E[|S(λ, k)|²] + E[|N(λ, k)|²].   (3)

2.2. Structural algorithm description

The structure of the proposed algorithm and the names of variables with a central role are shown in Figure 1. After applying an analysis window to the noisy speech, we take the STFT, from which we calculate periodograms P_Y(λ, k) ≜ |Y(λ, k)|². These periodograms are spectrally smoothed, yielding P̄_Y(λ, k), and then temporally smoothed to produce P̄(λ, k). These smoothed periodograms are temporally minimum tracked, and by comparing ratios and differences of the minimum tracked values to P̄(λ, k), they are used for speech presence detection. As a distinct feature of the proposed method, we use speech presence detection to achieve low-biased noise PSD estimates P̂_N(λ, k), but also noise periodogram estimates P̃_N(λ, k), which equal P_Y(λ, k) when D(λ, k) = 0, that is, no detected speech presence. When D(λ, k) = 1, that is, detected speech presence, the noise periodogram estimate equals the noise PSD estimate, that is, a recursively smoothed bias compensation factor applied to the minimum tracked values. The bias compensation factor is a recursively smoothed power ratio between the noise periodogram estimates and the minimum tracks. This factor is only updated while no speech is present in the frame and kept fixed while speech is present. A noise magnitude spectrum estimate |N̂(λ, k)| obtained from the noise PSD

estimate and the speech presence decisions are used in a speech enhancement method that applies different attenuation rules for speech presence and speech absence. For speech synthesis, we take the inverse STFT of the estimated speech magnitude spectrum with the phase from the STFT of the noisy speech. The synthesized frame is used in a weighted overlap-add (WOLA) method, where we apply a synthesis window before overlap and add.

Figure 1: A block diagram of the proposed speech enhancement algorithm: windowing, STFT, spectral smoothing, temporal smoothing, minimum tracking, and speech presence detection, followed by noise estimation, speech enhancement, the inverse STFT, and WOLA synthesis. Only the most essential variables are introduced in the figure.

3. SPECTRAL-TEMPORAL PERIODOGRAM SMOOTHING

In this section, we briefly describe the spectral-temporal periodogram smoothing method.

3.1. Spectral smoothing

First, the noisy speech periodograms P_Y(λ, k) are spectrally smoothed by letting a spectrally smoothed periodogram bin P̄_Y(λ, k) consist of a weighted sum of 2D + 1 periodogram bins, spectrally centered at k [6], that is,

P̄_Y(λ, k) = Σ_{μ=−D}^{D} b(μ) P_Y(λ, ((k − μ))_K),   (4)

where ((m))_K denotes m modulo K, and K is the length of the full (mirrored) spectrum. The window function b(μ) used for spectral weighting is chosen such that it sums to 1, that is, Σ_{μ=−D}^{D} b(μ) = 1, and therefore preserves the total power of the spectrum.

3.2. Temporal smoothing

The spectrally smoothed periodograms P̄_Y(λ, k), see Figure 1, are now temporally smoothed recursively with time and frequency varying smoothing parameters ᾱ(λ, k) to produce a spectrally temporally smoothed noisy speech periodogram P̄(λ, k), that is,

P̄(λ, k) = ᾱ(λ, k) P̄(λ − 1, k) + (1 − ᾱ(λ, k)) P̄_Y(λ, k).   (5)

We use the optimum smoothing parameters proposed by Martin and Lotter [3]. Their method consists of optimum smoothing parameters for χ²-distributed data with some modifications that make it suited for practical implementation. The optimum smoothing parameters are given by

ᾱ_opt(λ, k) = 2 / (2 + K̄ (P̄(λ − 1, k)/E[|N(λ, k)|²] − 1)²),   (6)

with

K̄ = (4D + 2) (Σ_{μ=0}^{L−1} b²(μ))² / (L Σ_{μ=0}^{L−1} b⁴(μ))   (7)

equivalent degrees of freedom of a χ²-distribution [3]. For practical implementation, the noise PSD, which is used in the calculation of the optimum smoothing parameters, is estimated as the previous noise PSD estimate, that is,

E[|N(λ, k)|²] = P̂_N(λ − 1, k).   (8)

3.3. Complete periodogram smoothing algorithm

Pseudocode for the complete spectral-temporal periodogram smoothing method is provided in Algorithm 1. A smoothing parameter correction factor ᾱ_c(λ), proposed by Martin [5], is multiplied on ᾱ_opt(λ, k). Additionally, in this paper, we lower-limit the resulting smoothing parameters to ensure a minimum degree of smoothing, that is,

ᾱ(λ, k) = max(ᾱ_c(λ) ᾱ_opt(λ, k), 0.4).   (9)

(1) {Initialize as listed in Tables 3 and 1}
(2) for λ = 0 to M − 1 do
(3)   for k = 0 to K − 1 do
(4)     P_Y(λ, k) ← |Σ_{μ=0}^{L−1} y(λR + μ) h(μ) exp(−j2πkμ/K)|²
(5)     P̄_Y(λ, k) ← Σ_{μ=−D}^{D} b(μ) P_Y(λ, mod(k − μ, K))
(6)     ᾱ_opt(λ, k) ← 2 / (2 + K̄ (P̄(λ − 1, k)/P̂_N(λ − 1, k) − 1)²)
(7)   end for
(8)   R̄ ← (Σ_{k=0}^{K−1} P̄(λ − 1, k)) / (Σ_{k=0}^{K−1} P_Y(λ, k))
(9)   c̃ ← 1 / (1 + (R̄ − 1)²)
(10)  ᾱ_c(λ) ← 0.7 ᾱ_c(λ − 1) + 0.3 max(c̃, 0.7)
(11)  for k = 0 to K − 1 do
(12)    ᾱ(λ, k) ← max(ᾱ_c(λ) ᾱ_opt(λ, k), 0.4)
(13)    P̄(λ, k) ← ᾱ(λ, k) P̄(λ − 1, k) + (1 − ᾱ(λ, k)) P̄_Y(λ, k)
(14)    {Obtain a noise PSD estimate P̂_N(λ, k), e.g., as proposed in Section 5.}
(15)  end for
(16) end for

Algorithm 1: Periodogram smoothing.
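The temporal smoothing recursion (5) with the parameters of (6) and (9) can be sketched as below. This is a simplified illustration with our own variable names, not the authors' code: the spectral smoothing of (4) is assumed already applied, and the noise PSD estimate is held fixed instead of being updated as in line (14) of Algorithm 1.

```python
import numpy as np

def smooth_periodograms(PY, PN_init, Kbar=20.08):
    """Temporal smoothing sketch following Algorithm 1.

    PY: (M, K) array of spectrally smoothed noisy-speech periodograms.
    PN_init: length-K initial noise PSD estimate P_N(-1, k).
    Returns the spectrally temporally smoothed periodograms P(lam, k).
    """
    M, K = PY.shape
    P = np.empty_like(PY)
    P_prev = PY[0].copy()      # P(-1, k) initialization (assumed)
    PN_prev = PN_init
    alpha_c_prev = 1.0         # initial correction variable, Table 1
    for lam in range(M):
        # Optimum smoothing parameter, (6).
        alpha_opt = 2.0 / (2.0 + Kbar * (P_prev / PN_prev - 1.0) ** 2)
        # Correction factor of Martin [5], Algorithm 1 lines (8)-(10).
        R = np.sum(P_prev) / np.sum(PY[lam])
        c = 1.0 / (1.0 + (R - 1.0) ** 2)
        alpha_c = 0.7 * alpha_c_prev + 0.3 * max(c, 0.7)
        # Lower-limited smoothing parameter, (9), and recursion, (5).
        alpha = np.maximum(alpha_c * alpha_opt, 0.4)
        P[lam] = alpha * P_prev + (1.0 - alpha) * PY[lam]
        P_prev = P[lam]
        alpha_c_prev = alpha_c
    return P
```

For a stationary input whose periodograms already equal the noise PSD, the recursion leaves the spectrum unchanged, which is a quick way to check the update.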

In the next section, we use temporal minimum tracking


on the spectrally temporally smoothed noisy speech periodograms in a method for detection of connected speech
presence regions, which later will be used for noise estimation and speech enhancement.
4. CONNECTED SPEECH PRESENCE REGIONS

We now base a speech presence detection method on comparisons, at each frequency, between the smoothed noisy
speech periodograms and temporal minimum tracks of the
smoothed noisy speech periodograms.
4.1. Temporal minimum tracking

From the spectrally temporally smoothed noisy speech periodograms P̄(λ, k), we track temporal minimum values P_min(λ, k) within a minimum search window of length D_min, that is,

P_min(λ, k) = min{ P̄(λ′, k) | λ − D_min < λ′ ≤ λ },   (10)

with λ′ ∈ Z. D_min is chosen as a tradeoff between the ability to bridge over periods of speech presence [5], which is crucial for the minimum track to be robust to speech presence, and the ability to follow nonstationary noise. Typically, a window length corresponding to 0.5–1.5 seconds yields an acceptable tradeoff between these two properties [5, 6]. We now have that P_min(λ, k) is approximately unaffected by periods of speech presence but, on average, biased towards lower values when no spectral smoothing is applied [5]. Memory requirements of the tracking method can be reduced at the cost of lost temporal resolution, see, for example, [5]. In the following, the temporal minimum tracks P_min(λ, k) are used in a speech presence decision rule.
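A direct sketch of the minimum tracking in (10) follows; it stores the full history rather than using the memory-reduced variants mentioned above, and the names are ours.

```python
import numpy as np

def minimum_track(P, D_min):
    """Temporal minimum track P_min(lam, k) of (10): for each frequency
    bin k, the minimum of the smoothed periodogram P over the most
    recent D_min frames (fewer at the start of the signal)."""
    M, K = P.shape
    P_min = np.empty_like(P)
    for lam in range(M):
        start = max(0, lam - D_min + 1)
        P_min[lam] = P[start : lam + 1].min(axis=0)
    return P_min
```

With a window of D_min = 2, for instance, each output value is the minimum of the current and previous frame for that bin.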
For the spectral-temporal periodogram smoothing, we use the settings and algorithm initializations given in Table 1. The decision rules that are used for speech presence detection have the threshold values listed in Table 2. For noise estimation, we use the two parameters from Table 4. The speech enhancement method uses the parameter settings that are listed in Table 5.

4.2. Binary speech presence decision rule

We have shown in previous work [9] that temporally smoothed periodograms and their temporal minimum tracks can be used for speech presence detection. Also shown in [9] is that including terms to compensate for bias on the minimum tracks improves the speech presence detection performance (measured as the decrease in a cost function) by less than one percent. In this paper, we therefore do not consider a bias compensation factor in the speech presence decision rule. Rather, as we show later in this paper, the speech presence decisions can be used in the estimation of a simple and very well-performing bias compensation factor for noise estimation. Similar to our previous approach for temporally smoothed periodograms [9], we now exploit the properties of spectrally temporally smoothed periodograms P̄(λ, k) in a binary decision rule for the detection of speech presence. The presence of speech will cause an increase of power in P̄(λ, k) at a particular time-frequency location, due to (3). Thus, the ratio between P̄(λ, k) and a noise PSD estimate, given by a minimum track P_min(λ, k) with a bias reduction, yields a robust (due to the smoothing) estimate of the signal-plus-noise-to-noise ratio at the particular time-frequency location. Our connected region speech presence detection method is based on the smooth nature of P̄(λ, k) and P_min(λ, k). The smoothness will ensure that spurious fluctuations in the noisy speech power will not cause spurious fluctuations in our speech presence decisions. Thus, we will be able to obtain connected regions of speech presence and of speech absence. This property is fundamental for the proposed noise estimation and speech enhancement methods. As a rule to decide between the two speech presence hypotheses, namely,

H0(λ, k): speech absence,
H1(λ, k): speech presence,   (11)

which can be written in terms of the STFT coefficients, that is,

H0(λ, k): Y(λ, k) = N(λ, k),
H1(λ, k): Y(λ, k) = N(λ, k) + S(λ, k),   (12)

we use a combination of two binary initial decision rules. First, let D(λ, k) = i be the decision to believe in hypothesis H_i(λ, k) for i ∈ {0, 1}. We define two initial decision rules, which will give two initial decisions D_γ(λ, k) and D_δ(λ, k). The first initial decision rule compares the smoothed noisy speech periodograms P̄(λ, k) with the temporal minimum tracks P_min(λ, k), weighted with a constant γ, that is,

D_γ(λ, k):  P̄(λ, k) ≷ γ P_min(λ, k)   (decide D_γ(λ, k) = 1 if greater, D_γ(λ, k) = 0 otherwise),   (13)

and the second compares, at time λ, the difference P̄(λ, k) − P_min(λ, k) to the average of the minimum tracks scaled by δ, that is,

D_δ(λ, k):  P̄(λ, k) ≷ P_min(λ, k) + δ (1/K) Σ_{k′=0}^{K−1} P_min(λ, k′)   (same decision convention).   (14)

For the initial decision rules, we have adopted the notation used by Shanmugan and Breipohl [8]. Because the minimum tracks are representatives of the noise PSDs [5], the first initial decision rule classifies time-frequency bins based on the estimated signal-plus-noise-to-noise power ratio. Note that this can be seen as a special case of the indicator function proposed by Cohen [10] (with γ₀ = γ/B_min and δ₀ = δ). The second initial decision rule D_δ(λ, k) classifies bins from the estimated power difference between the noisy speech and the noise, using a threshold that adapts to the minimum track power level in each frame. Multiplication of the two binary initial decisions corresponds to the logical AND operation when we define true as deciding on H1(λ, k) and false as deciding on H0(λ, k). We therefore propose a decision that combines the two initial decisions from the initial decision rules above, that is,

D(λ, k) = D_γ(λ, k) D_δ(λ, k).   (15)

Table 1: Smoothing setup and initializations for Algorithm 1.

Variable     | Value                     | Description
D            | 7                         | Spectral window length: 2D + 1
b(μ)         | G_b · triang(2D + 1) (i)  | Spectral smoothing window
M            | 154–220 (ii)              | Number of frames
K̄            | 20.08 (iii)               | Equivalent degrees of freedom of the χ²-distribution
P̂_N(−1, k)   | P_Y(0, k)                 | Initial noise periodogram estimate
ᾱ_c(−1)      | 1                         | Initial correction variable

(i) G_b = (sum(triang(2D + 1)))⁻¹ scales the window to unit sum.
(ii) Calculated at run time as M = round(length(y(i))/R − 1/2) − 1.
(iii) Calculated at run time as K̄ = (4D + 2)(Σ_{μ=0}^{L−1} b²(μ))² / (L Σ_{μ=0}^{L−1} b⁴(μ)) [3, 14].

Table 2: Speech presence detection setup.

Variable | Value | Description
γ        | 6     | Constant for the ratio-based decision rule
δ        | 0.5   | Constant for the difference-based decision rule

In effect, the combined decision allows detection of speech in low signal-to-noise ratios (SNRs) without letting low-power regions with high SNRs contaminate the decisions. Thereby, we obtain connected time-frequency regions of speech presence. The constant γ is not sensitive to the type and intensity of environmental noise [11], and it can be adjusted empirically. This is also the case for δ. For applications where a reasonable objective performance measure can be defined, the constants γ and δ can be obtained by interpreting the decision rule as an artificial neural network and then conducting a supervised training of this network [9].

Speech at frequencies below 100 Hz is considered perceptually unimportant, and bins below this frequency are therefore always classified with speech absence. Real-life noise sources often have a large part of their power at the low frequencies, so this rule ensures that this power does not cause the speech presence detection method to falsely classify these low-frequency bins as if speech is present. If less than 5% of the K periodogram bins are classified with speech presence, we expect that these decisions have been falsely caused by the noise characteristics, and all decisions in the current frame are reclassified to speech absence. When the speech presence decisions are used in a speech enhancement method, as we propose in Section 6, this reclassification will ensure the naturalness of the background noise in periods of speaker silence.
5. NOISE ESTIMATION

The spectral-temporal smoothing method [3], which we use in this paper, reduces the bias between the noise PSD and the minimum track P_min(λ, k) if the noise is assumed to be ergodic in its PSD. That is, it reduces the bias compared to minimum tracked values from periodograms smoothed temporally using Martin's first method [5]. Martin gives a parametric description of a bias compensation factor, which depends on the minimum search window length, the smoothed noisy speech, and the noise PSD estimate variances. The spectral smoothing lowers the smoothed noisy speech periodogram variance, and as a consequence, a longer minimum search window can be applied when the noise spectrum is not changing rapidly. This gives the ability to bridge over longer speech periods.

We propose to use the speech presence detection method from Section 4 to obtain two different noise estimates, that is, a noise PSD estimate and a noise periodogram estimate. The PSD estimate will be used in the speech enhancement method, and the noise periodogram estimate will illustrate


Table 3: General setup.

Variable | Value                  | Description
F_s      | 8 kHz                  | Sample frequency
K        | 256                    | FFT size
L        | 256                    | Frame size
R        | 128                    | Frame skip
h(μ)     | G_h⁻¹ √Hanning(K) (i)  | Analysis window
h_s(μ)   | G_h √Hanning(K) (ii)   | Synthesis window

(i) G_h is the square root of the energy of the unscaled window, which scales the analysis window to unit energy. This is to avoid scaling factors throughout the paper.
(ii) G_h scales the synthesis window h_s(μ) such that the analysis window h(μ), multiplied with h_s(μ), yields a Hanning(K) window.

some of the properties of the residual noise from the speech enhancement method we propose in Section 6.

5.1. Noise periodogram estimation

The noise periodogram estimate is equal to a time-varying power scaling of the minimum tracks P_min(λ, k) for D(λ, k) = 1. For D(λ, k) = 0, it is equal to the noisy speech periodogram P_Y(λ, k), that is,

P̃_N(λ, k) = { R_min(λ) P_min(λ, k)   if D(λ, k) = 1,
             { P_Y(λ, k)             if D(λ, k) = 0.   (16)

In the above equation, a bias compensation factor R_min(λ) scales the minimum tracks. The scaling factor is updated in frames where no speech presence is detected and kept fixed while speech presence is detected in the frames. We let R̃_min(λ) be given by the ratio between the sums of the previous noise periodogram estimates P̃_N(λ − 1, k) and the minimum tracks P_min(λ, k), that is,

R̃_min(λ) = (Σ_{k=0}^{K−1} P̃_N(λ − 1, k)) / (Σ_{k=0}^{K−1} P_min(λ, k)),   (17)

which is recursively smoothed when speech is absent in the frame and kept fixed when speech is present in the frame, that is,

R_min(λ) = { R_min(λ − 1)                                   if Σ_{k=0}^{K−1} D(λ, k) > 0,
           { α_min R_min(λ − 1) + (1 − α_min) R̃_min(λ)      if Σ_{k=0}^{K−1} D(λ, k) = 0,   (18)

where 0 ≤ α_min ≤ 1 is a constant recursive smoothing parameter. The magnitude spectrum at time index λ is obtained by taking the square root of the noise periodogram estimate, that is,

|Ñ(λ, k)| = √(P̃_N(λ, k)).   (19)

This noise periodogram estimate equals the true noise periodogram |N(λ, k)|² when the speech presence detection correctly detects no speech presence. When entering a region with speech presence, the noise periodogram estimate will take on the smooth shape of the minimum track, scaled with the bias compensation factor in (18) such that the power develops smoothly into the speech presence region.

5.2. Noise PSD estimation

The noise PSD estimate P̂_N(λ, k) is obtained exactly as the noise periodogram estimate, but with (16) modified such that the noise PSD estimate is obtained directly as the power-scaled minimum tracks, that is,

P̂_N(λ, k) = R_min(λ) P_min(λ, k).   (20)

A smooth estimate of the noise magnitude spectrum can be obtained by taking the square root of the noise PSD estimates, that is,

|N̂(λ, k)| = √(P̂_N(λ, k)).   (21)
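A one-frame update of the bias-compensated noise estimates in (16)-(20) can be sketched as below. This is an illustrative sketch, not the authors' code; the function and variable names are ours.

```python
import numpy as np

def update_noise_estimates(PY, P_min, D, R_min_prev, PN_tilde_prev,
                           alpha_min=0.7):
    """One-frame update of the noise estimates, following (16)-(20).

    PY, P_min, D, PN_tilde_prev: length-K arrays for frame lam
    (noisy periodogram, minimum track, speech presence decisions,
    previous noise periodogram estimate).
    Returns (PN_tilde, PN_hat, R_min): the noise periodogram estimate,
    the noise PSD estimate, and the updated bias compensation factor.
    """
    # Instantaneous bias compensation, (17): power ratio between the
    # previous noise periodogram estimate and the current minimum track.
    R_inst = np.sum(PN_tilde_prev) / np.sum(P_min)
    # Recursive smoothing, frozen while speech is present, (18).
    if np.sum(D) > 0:
        R_min = R_min_prev
    else:
        R_min = alpha_min * R_min_prev + (1.0 - alpha_min) * R_inst
    # Noise periodogram estimate, (16).
    PN_tilde = np.where(D == 1, R_min * P_min, PY)
    # Noise PSD estimate, (20).
    PN_hat = R_min * P_min
    return PN_tilde, PN_hat, R_min
```

In a speech-absent frame the factor moves toward the instantaneous ratio and the periodogram estimate copies the noisy periodogram; in a speech-present frame the factor is frozen and the scaled minimum track is used instead.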

6. SPEECH ENHANCEMENT

We now describe the speech enhancement method for which the speech presence detection method has been developed. It is well known that methods that subtract a noise PSD estimate from a noisy speech periodogram, for example, using an attenuation rule, will introduce musical noise. This happens whenever the noisy speech periodogram exceeds the noise PSD estimate. If, on the other hand, the noise PSD estimate is too high, the attenuation will reduce more noise but will also cause the speech estimate to be distorted. To mitigate these effects, we propose to distinguish between connected regions with speech presence and speech absence. In speech presence, we will use a traditional estimation technique, by means of generalized spectral subtraction, with the noise magnitude spectrum estimate obtained using (21) from the noise PSD estimate. In speech absence, we will use a simple noise-scaling attenuation rule to preserve the naturalness of the residual noise. Note that this approach, but



Table 4: Noise estimation setup.

Variable
Dmin
min
i Corresponds

Value
150i
0.7

Description
Minimum tracking window length
Scaling factor smoothing parameter

to a time duration of Dmin R/Fs = 2.4 seconds.

with D(, k) = 0. After the scaling, these noisy speech STFT


magnitudes lead to the noise component that will be left, after STFT synthesis, in the speech estimate as artifact masking
[1] and natural sounding attenuated background noise.
For synthesis, we let the STFT spectrum of the estimated
speech be given by the magnitude, obtained from (22), and
the noisy phase Y (, k), that is,

Table 5: Speech enhancement setup.


Variable Value
0
0.1
1
1.4
a1
0.8

Description
Noise scaling factor for no-speech presence
Noise overestimation factor for speech presence
Attenuation rule order for speech presence

with only a single speech presence decision covering all frequencies in each frame, has previously been proposed by
Yang [2]. Moreover, Cohen and Berdugo [11] propose a binary detection of speech presence/absence (called the indicator function in their paper), which is similar to the one
we propose in this paper. However, their decision includes
noisy speech periodogram bins without smoothing, hence
some decisions will not be regionally connected. In our experience, this leads to artifacts if the decisions are used directly
in a speech enhancement scheme with two dierent attenuation rules for speech absence and speech presence. Cohen
and Berdugo smooth their binary decisions to obtain estimated speech presence probabilities, which are used for a soft
decision between two separate attenuation functions. Our
approach, as opposed to this, is to obtain adequately timefrequency smoothed spectra from which connected speech
presence regions can be obtained directly in a robust manner. As a consequence, we avoid distortion in speech absence
regions, and thereby obtain a natural sounding background
noise.
Let the generalized spectral subtraction variant be given
similar to the one proposed by Berouti et al. [12], but with
the decision of which attenuation rule to use given explicitly
by the proposed speech presence decisions, instead of comparisons between the estimated speech power and an estimated noise floor. The immediate advantage of our approach
is a higher degree of control with the properties of the enhancement algorithm. Our proposed method is given by


S(,
k)



a 1/a1


Y (, k)a1 1 N(,

k) 1
=



0 Y (, k)

if D(, k) = 1,
if D(, k) = 0,
(22)

where a1 determines the power in which the subtraction is


performed, 1 is a noise overestimation factor that scales the

estimated magnitude of the noise STFT coecient |N(,
k)|,
obtained from the noise PSD estimate by (21) in Section 5,
raised to the a1 th power. The factor 0 scales the noisy speech
STFT coecient magnitude, which before this scaling equals
the square root of the noise periodogram estimate for bins

k) = S(,
k)e jY (,k) .
S(,

(23)

By applying the inverse STFT, we synthesize a time-domain


frame, which we use in a WOLA scheme, as illustrated in
Figure 1, to form the synthesized signal. Depending on the
analysis window, a corresponding synthesis window hs () is
applied before overlap add is performed.
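The dual attenuation rule of (22) and the phase recombination of (23) can be sketched for one frame as follows. This is an illustrative sketch with our own names (the parameter symbols follow our reconstruction of Table 5); as an added safeguard not spelled out in (22), the magnitude difference is floored at zero before the 1/a1 power is taken.

```python
import numpy as np

def enhance_frame(Y, N_mag, D, rho0=0.1, rho1=1.4, a1=0.8):
    """Dual attenuation rule sketch, (22)-(23).

    Y: complex STFT coefficients of the noisy speech for one frame.
    N_mag: noise magnitude spectrum estimate |N(lam, k)| from (21).
    D: binary speech presence decisions D(lam, k).
    """
    Y_mag = np.abs(Y)
    # Generalized spectral subtraction in speech presence, floored at 0.
    diff = np.maximum(Y_mag**a1 - rho1 * N_mag**a1, 0.0)
    # Noise scaling by rho0 in speech absence.
    S_mag = np.where(D == 1, diff ** (1.0 / a1), rho0 * Y_mag)
    # Recombine the estimated magnitude with the noisy phase, (23).
    return S_mag * np.exp(1j * np.angle(Y))
```

With a1 = 1 and ρ1 = 1 the speech-presence branch reduces to plain magnitude subtraction, which makes the behavior easy to verify by hand.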
7. EXPERIMENTAL SETUP

In the experiments, we use 6 speech recordings from the TIMIT database [13]. The speech is spoken by 3 different male and 3 different female speakers, all uttering different sentences of 2-3 seconds duration. These sentences are added with zero-mean highway noise and car interior noise at 0, 5, and 10 dB overall signal-to-noise ratios to form a test set of 36 noisy speech sequences. Spectrograms of time-domain signals are shown with time-frequency axes and always with the time-domain signals. When we plot intermediate coefficients, the figures are shown with axes of subsampled time index λ and frequency index k. For all illustrations in this paper, we use the noisy speech from one of the male speakers with additive highway noise at a 5 dB overall SNR. The spectrograms and time-domain signals of this particular case of noisy speech and the corresponding noise are shown in Figures 2a and 2b, respectively. The general setup in the experiments is listed in Table 3. The analysis window h(μ) is the square root of a Hanning window, scaled to unit energy. As the synthesis window h_s(μ), we also use the square root of a Hanning window, but scaled such that an unmodified frame would be windowed by a Hanning window after both the analysis and synthesis window have been applied. It will therefore be ready for overlap-add with 50% overlapping frames.
8. EXPERIMENTAL RESULTS

In this section, we evaluate the performance of the proposed algorithm. We measure the performance of the algorithm by means of visual inspection of spectrograms, spectral distortion measures, and informal listening tests. To illustrate the properties of the proposed spectral-temporal smoothing method, we show the spectrogram of the smoothed noisy speech in Figure 3. By removing the power in speech absence regions and speech presence regions from the noisy speech periodogram, we see in Figures 4a and 4b, respectively, that most of the speech that is detectable by visual inspection has been detected by the proposed algorithm. Spectrograms

10
0
10
20
30
40

3000
2000
1000
0

0.5

1.5
Time

2.5

Frequency

4000

10
0
10
20
30
40

3000
2000
1000
0

0.5

0
10
20
30
40

1.5
Time

120
100
80
60
40
20

2.5

10
0
10
20
30
40
20

of the noise periodogram estimate and the noise PSD estimate, obtained using the methods we propose in Section 5,
are shown in Figures 5a and 5b, respectively.
We evaluate the performance of the noise estimation
methods by means of their spectral distortion, which we
measure as segmental noise-to-error ratios (SegNERs). We
calculate the SegNERs in the time-frequency domain, as the
ratio (in dB) between the noise energy and the noise estimation error energy. These values are upper and lower limited
by 35 and 0 dB [15], respectively, that is,

SegNER(λ) = min{max[NER(λ), 0], 35},   (24)

where

NER(λ) = 10 log10( Σ_{k=0}^{K-1} N(λ, k)^2 / Σ_{k=0}^{K-1} [N(λ, k) - N̂(λ, k)]^2 ),   (25)

and averaged over all M frames, that is,

SegNER = (1/M) Σ_{λ=0}^{M-1} SegNER(λ).   (26)
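The computation in (24)-(26) can be sketched directly; the toy data below are hypothetical:

```python
import numpy as np

def segmental_ner(noise, noise_est):
    """Segmental noise-to-error ratio, (24)-(26): per-frame NER in dB,
    clipped to [0, 35] dB, then averaged over all M frames.  Inputs are
    arrays of shape (M frames, K frequency bins)."""
    noise = np.asarray(noise, dtype=float)
    err = noise - np.asarray(noise_est, dtype=float)
    # Per-frame ratio of noise energy to estimation-error energy, in dB.
    ner = 10.0 * np.log10(np.sum(noise ** 2, axis=1) / np.sum(err ** 2, axis=1))
    seg = np.clip(ner, 0.0, 35.0)   # limit to [0, 35] dB, as in (24)
    return float(np.mean(seg))      # average over frames, as in (26)

# Hypothetical example: a uniform 10% underestimate gives 20 dB per frame.
rng = np.random.default_rng(0)
n = np.abs(rng.standard_normal((10, 8)))
print(round(segmental_ner(n, 0.9 * n), 2))
```

Note that a near-perfect estimate saturates at the 35 dB upper limit, so the measure cannot be dominated by a few trivially easy frames.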

In Table 6, we list the average SegNERs over the same six speakers that are used in the informal listening test of the
Figure 2: Spectrograms and time-domain signals of the illustrating speech recording with highway traffic noise (noisy speech) at 5 dB SNR (a) and the noise (b). The speech recording is of a male speaker uttering "These were heroes, nine feet tall to him."


Figure 3: The noisy speech periodogram from Figure 2a after smoothing with the smoothing method from Section 3 (spectrally-temporally smoothed noisy speech).


Figure 4: Noisy speech with speech absence regions removed, that is, with D(λ, k) = 0 bins removed (a); and with speech presence regions removed, that is, with D(λ, k) = 1 bins removed (b).

speech enhancement method. We list the average SegNERs for the noise periodogram estimation method, the noise PSD estimation method, our implementation of χ²-based noise estimation [3], and minimum statistics (MS) noise estimation [5]. Our implementation of the χ²-based noise estimation uses the MS noise estimate [5] in the calculation of the optimum smoothing parameters, as suggested by Martin and Lotter [3]. The spectral averaging in our implementation of the χ²-based noise estimation is performed in sliding spectral windows of the same size as used by the two proposed noise estimation methods. We see that the noise PSD estimate has less spectral distortion than both our implementation of the χ²-based noise estimation [3] and MS noise estimation [5]. This can be explained by a more accurate bias compensation factor, which uses speech presence information. Note that in many scenarios, the proposed smooth and low-biased noise PSD estimate is preferable over the noise periodogram estimate.
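The minimum-tracking idea behind the proposed noise PSD estimate can be illustrated with a simplified sketch. This is not the paper's exact algorithm; the window length, the mask-based bias factor, and the function name are assumptions for illustration:

```python
import numpy as np

def noise_psd_from_minima(smoothed_psd, speech_absent, win=32):
    """Illustrative sketch: track the minimum of the smoothed noisy-speech
    periodogram over a sliding window of `win` frames, and compensate its
    negative bias with a factor estimated in frames marked speech-absent,
    where the smoothed periodogram is a direct noise observation.
    `smoothed_psd`: (M, K) array; `speech_absent`: (M,) boolean mask."""
    M, K = smoothed_psd.shape
    minima = np.empty_like(smoothed_psd)
    for m in range(M):
        lo = max(0, m - win + 1)
        minima[m] = smoothed_psd[lo:m + 1].min(axis=0)
    # Bias factor: how much the minimum underestimates the noise level,
    # measured where speech is known to be absent.
    bias = (smoothed_psd[speech_absent].mean()
            / max(minima[speech_absent].mean(), 1e-12))
    return bias * minima

# Toy check on hypothetical stationary noise with no speech: the
# bias-compensated minimum track recovers the noise level.
flat = np.ones((50, 4))
est = noise_psd_from_minima(flat, np.ones(50, dtype=bool))
print(np.allclose(est, 1.0))
```

The key point mirrors the text above: because the bias factor is refreshed from speech-absence regions, it adapts to the actual noise statistics rather than relying on a fixed analytical correction.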



Table 6: Segmental noise-to-error ratios in dB.

Noise type                        Highway traffic         Car interior
Noisy speech SNR (dB)             0      5      10        0      5      10
Noise periodogram estimation      19.3   17.0   14.7      18.3   16.6   15.0
Noise PSD estimation               4.6    4.6    4.4       3.0    3.1    3.2
χ²-based noise estimation [3]      3.6    3.1    2.6       2.7    2.3    2.0
MS noise estimation [5]            1.0    1.8    2.4       1.9    2.1    2.6

Table 7: Opinion score scale.

Score    Description
5        Excellent
4        Good
3        Fair
2        Poor
1        Bad

Figure 5: Spectrograms of the noise periodogram estimate (a) and the noise PSD estimate (b). In regions with speech presence, the noise periodogram estimate equals the noise PSD estimate.

Figure 6: Spectrogram and time-domain plot of the enhanced speech from the enhancement method proposed in this paper. The noisy speech is shown in Figure 2a. The naturalness is preserved by the enhancement method and, in particular, the enhanced speech does not contain any audible musical noise.

As an objective measure of time-domain waveform similarity, we list the signal-to-noise ratios, and as a subjective measure of speech quality, we conduct an informal listening test. In this test, test subjects give scores from the scale in Table 7, ranging from 1 to 5 in steps of 0.1, to three different speech enhancement methods, with the noisy speech as a reference signal. A higher score is given to the preferred speech enhancement method. The test subjects are asked to take parameters such as the naturalness of the enhanced speech, the quality of the speech, and the degree of noise reduction into

account when assigning a score to an estimate. The presentation order of estimates from the individual methods is blinded, randomized, and varies in each test set and for each test subject. A total of 8 listeners, all working within the field of speech signal processing, participated in the test. The proposed speech enhancement method was compared with our implementation of two reference methods.
(i) MMSE-LSA. Minimum mean-square error log-spectral amplitude estimation, as proposed by Ephraim and Malah [7].
(ii) MMSE-LSA-DD. Decision-directed MMSE-LSA, which is the MMSE-LSA estimation in combination with a smoothing mechanism [7]. Constants are as proposed by Ephraim and Malah.
All three methods in the test use the proposed noise PSD estimate, as shown in Figure 5b. Also, they all use the analysis/synthesis setup described in Section 7. The enhanced speech obtained from the noisy speech signal in Figure 2a is shown in Figure 6.
SNRs and mean opinion scores (MOSs) from the informal subjective listening test are listed in Tables 8 and 9. All results are averaged over both speakers and listeners. To identify whether the proposed method is significantly better, that is, has a higher MOS, than MMSE-LSA-DD, we use the matched sample design [16], where the absolute values of the opinion scores are eliminated as a source of variation. Let d be the mean of the opinion score difference between the proposed method and MMSE-LSA-DD. Using this formulation, we write the null and alternative hypotheses as

    H0: d ≤ 0,    HA: d > 0,   (27)

respectively. The null hypothesis H0 in this context should not be mistaken for the hypothesis H0 in the speech presence detection method. With 48 experiments at each combination of SNR and noise type, we are in the large sample case, and we therefore assume that the differences are normally distributed. The rejection rule, at a 1% level of significance, is

    Reject H0 if z > z.01,   (28)

with z.01 = 2.33. Tables 10 and 11 list the test statistic z and the corresponding test result. Also listed is the two-tailed 99% confidence interval [16] of the difference between the MOS of the proposed method and that of MMSE-LSA-DD, for highway traffic and car interior noise, respectively. From our results we can therefore state with a confidence level of 99% that the proposed method has a higher perceptual quality than MMSE-LSA-DD. Furthermore, the difference generally corresponds to more than 0.5 MOS, which generally changes the ratings from somewhere between "Poor" and "Fair" to somewhere between "Fair" and "Good" on the MOS scale.

Table 8: Highway traffic noise speech enhancement results.

                    SNR (dB)  MOS     SNR (dB)  MOS     SNR (dB)  MOS     SNR (dB)  MOS
Proposed method      7.7      3.50    10.3      3.56    13.0      3.74    16.5      3.95
MMSE-LSA-DD          7.4      2.75    11.1      2.85    15.0      3.07    15.4      3.29
MMSE-LSA             4.6      1.63     9.3      1.92    14.0      2.04    12.6      2.37
Noisy speech         0.0              5.0              10.0              10.0

Table 9: Car interior noise speech enhancement results.

                    SNR (dB)  MOS     SNR (dB)  MOS
Proposed method     10.5      3.53    13.4      3.82
MMSE-LSA-DD          7.3      2.54    10.9      2.99
MMSE-LSA             3.1      1.89     7.7      2.07
Noisy speech         0.0              5.0

Table 10: Highway traffic noise statistics at a 99% level of confidence.

SNR (dB)    Test statistic    Test result    Interval estimate
0           z = 10.3          Reject H0      0.75 ± 0.19
5           z = 10.2          Reject H0      0.72 ± 0.18
10          z = 10.1          Reject H0      0.67 ± 0.17

Table 11: Car interior noise statistics at a 99% level of confidence.

SNR (dB)    Test statistic    Test result    Interval estimate
0           z = 11.4          Reject H0      1.00 ± 0.23
5           z = 9.4           Reject H0      0.83 ± 0.23
10          z = 6.7           Reject H0      0.66 ± 0.25

9. DISCUSSION

We have in this paper presented new noise estimation and speech enhancement methods that utilize a proposed

connected region speech presence detection method. Despite their simplicity, the proposed methods are shown to have superior performance when compared to our implementations of state-of-the-art reference methods, for both noise estimation and speech enhancement.
In the first proposed noise estimation method, the connected speech presence regions are used to achieve noise periodogram estimates in the regions where speech is absent. In the remaining regions, where speech is present, minimum tracks of the smoothed noisy speech periodograms are bias compensated with a factor that is updated in regions with speech absence. A second proposed noise estimation method provides a noise PSD estimate by means of the same power-scaled minimum tracks that are used by the noise periodogram estimation method when speech is present. It is shown that the noise PSD estimate has less spectral distortion than both our implementation of χ²-based noise estimation [3] and MS noise estimation [5]. This can be explained by a more accurate bias compensation factor, which uses speech presence information. The noise periodogram estimate is by far the least spectrally distorted of the tested noise estimates. This verifies the connected region speech presence principle, which is fundamental to the proposed speech enhancement method.
Our proposed enhancement method uses different attenuation rules for each of the two types of speech presence regions. When no speech is present, the noisy speech is downscaled and left in the speech estimate as natural sounding masking noise, and when speech is present, a noise PSD estimate is used in a traditional generalized spectral subtraction. In addition to enhancing the speech, the most distinct feature of the proposed speech enhancement method is that it leaves natural sounding background noise matching the actual surroundings of the person wearing the hearing aid. The proposed method performs well at SNRs equal to or higher than 0 dB for noise types with slowly changing and spectrally smooth periodograms. Rapid, speech-like changes in the noise will be treated as speech and will therefore be enhanced, causing a decrease in the naturalness of the background noise. At very low SNRs, the detection of speech presence will begin to fail. In this case, we suggest the implementation of the proposed method in a scheme where low SNR is detected and causes a change to an approach with only a single and very conservative attenuation rule. Strong tonal interferences will affect the speech presence decisions as well as the noise estimation and enhancement methods and should be detected and removed by preprocessing of the noisy signal immediately after the STFT analysis. Otherwise, a sufficiently strong tonal interference with duration longer than the minimum search window will cause the signal to be treated as if speech were absent, and the speech enhancement algorithm will downscale the entire noisy speech by multiplication with the speech absence attenuation factor.
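The two-rule scheme described above can be sketched for a single frame as follows. The gain g0, the oversubtraction factor alpha, and the spectral floor beta are hypothetical parameter names and values, not the paper's; the speech presence mask would come from the connected region detector:

```python
import numpy as np

def enhance_frame(noisy_mag, noise_psd, speech_present,
                  g0=0.3, alpha=2.0, beta=0.01):
    """Illustrative sketch of the binary two-rule attenuation: in speech
    absence bins the noisy spectrum is simply downscaled by g0, keeping
    natural sounding residual noise; in speech presence bins a generalized
    (power) spectral subtraction is applied."""
    noisy_pow = noisy_mag ** 2
    # Generalized spectral subtraction with oversubtraction factor alpha
    # and spectral floor beta, guaranteeing a nonnegative result.
    sub = np.maximum(noisy_pow - alpha * noise_psd, beta * noisy_pow)
    # Speech absence bins keep the shape of the original noise spectrum.
    return np.where(speech_present, np.sqrt(sub), g0 * noisy_mag)

mag = np.ones(4)
# All bins marked speech-absent: the frame is simply downscaled by g0.
print(enhance_frame(mag, np.ones(4), np.zeros(4, dtype=bool)))
```

Because the absence rule is a plain downscaling, the residual noise keeps the spectral shape of the surroundings instead of the whitened residue typical of subtraction-only rules.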
Our approach generalizes to other noise reduction
schemes. As an example, the proposed binary scheme can
also be used with MMSE-LSA-DD for the speech presence
regions. For such a combination, we expect performance
similar to, or better than, what we have shown in this paper
for the generalized spectral subtraction. This is supported by
the findings of Cohen and Berdugo [11], who have shown that a soft-decision approach improves MMSE-LSA-DD.
The informal listening test confirms that listeners prefer the downscaled background noise with fully preserved
naturalness over the less realistic whitened residual noise
from, for example, MMSE-LSA-DD. From our experiments,
we can conclude, with a confidence level of 99%, that the
proposed speech enhancement method receives significantly
higher MOS than MMSE-LSA-DD at all tested combinations
of SNR and noise type.
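The matched-sample significance test used above can be sketched as follows; the scores are hypothetical stand-ins for the 48 paired opinion scores per condition:

```python
import numpy as np

def matched_sample_test(scores_proposed, scores_reference, z_crit=2.33):
    """One-sided large-sample matched (paired) design: H0: d <= 0 versus
    HA: d > 0, where d is the mean opinion score difference.  Returns the
    z statistic, whether H0 is rejected at the 1% level, and the mean
    difference with its two-tailed 99% interval half-width."""
    d = np.asarray(scores_proposed, float) - np.asarray(scores_reference, float)
    se = d.std(ddof=1) / np.sqrt(d.size)   # standard error of the mean
    z = d.mean() / se
    half_width = 2.576 * se                # 2.576 std errors: 99% two-tailed
    return z, z > z_crit, (d.mean(), half_width)

# Hypothetical paired scores for 48 listener-speaker combinations with a
# clear 0.75 MOS advantage for the proposed method.
rng = np.random.default_rng(1)
reference = rng.normal(3.0, 0.1, 48)
proposed = reference + 0.75 + rng.normal(0.0, 0.05, 48)
z, rejected, (d_mean, half_width) = matched_sample_test(proposed, reference)
print(rejected)
```

Pairing the scores removes the per-listener offset as a source of variation, which is exactly why the matched design is more sensitive than comparing the two groups of absolute scores.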
ACKNOWLEDGMENTS

The authors would like to thank the anonymous reviewers for many constructive comments and suggestions on previous versions of the manuscript, which have greatly improved the presentation of this work. This work was supported by The Danish National Centre for IT Research, Grant no. 329, and Microsound A/S.
REFERENCES
[1] T. Painter and A. Spanias, "Perceptual coding of digital audio," Proc. IEEE, vol. 88, no. 4, pp. 451-515, 2000.
[2] J. Yang, "Frequency domain noise suppression approaches in mobile telephone systems," in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing (ICASSP '93), vol. 2, pp. 363-366, Minneapolis, Minn, USA, April 1993.
[3] R. Martin and T. Lotter, "Optimal recursive smoothing of non-stationary periodograms," in Proc. International Workshop on Acoustic Echo Control and Noise Reduction (IWAENC '01), pp. 43-46, Darmstadt, Germany, September 2001.
[4] R. Martin, "Spectral subtraction based on minimum statistics," in Proc. 7th European Signal Processing Conference (EUSIPCO '94), pp. 1182-1185, Edinburgh, Scotland, September 1994.
[5] R. Martin, "Noise power spectral density estimation based on optimal smoothing and minimum statistics," IEEE Trans. Speech Audio Processing, vol. 9, no. 5, pp. 504-512, 2001.
[6] I. Cohen and B. Berdugo, "Noise estimation by minima controlled recursive averaging for robust speech enhancement," IEEE Signal Processing Lett., vol. 9, no. 1, pp. 12-15, 2002.
[7] Y. Ephraim and D. Malah, "Speech enhancement using a minimum mean-square error log-spectral amplitude estimator," IEEE Trans. Acoust., Speech, Signal Processing, vol. 33, no. 2, pp. 443-445, 1985.
[8] K. S. Shanmugan and A. M. Breipohl, Random Signals: Detection, Estimation, and Data Analysis, John Wiley & Sons, New York, NY, USA, 1988.
[9] K. V. Sørensen and S. V. Andersen, "Speech presence detection in the time-frequency domain using minimum statistics," in Proc. 6th Nordic Signal Processing Symposium (NORSIG '04), pp. 340-343, Espoo, Finland, June 2004.
[10] I. Cohen, "Noise spectrum estimation in adverse environments: improved minima controlled recursive averaging," IEEE Trans. Speech Audio Processing, vol. 11, no. 5, pp. 466-475, 2003.
[11] I. Cohen and B. Berdugo, "Speech enhancement for nonstationary noise environments," Signal Processing, vol. 81, no. 11, pp. 2403-2418, 2001.
[12] M. Berouti, R. Schwartz, and J. Makhoul, "Enhancement of speech corrupted by acoustic noise," in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing (ICASSP '79), vol. 4, pp. 208-211, Washington, DC, USA, April 1979.
[13] DARPA TIMIT Acoustic-Phonetic Speech Database, National Institute of Standards and Technology (NIST), Gaithersburg, Md, USA, CD-ROM.
[14] D. Brillinger, Time Series: Data Analysis and Theory, Holden-Day, San Francisco, Calif, USA, 1981.
[15] J. R. Deller Jr., J. H. L. Hansen, and J. G. Proakis, Discrete-Time Processing of Speech Signals, Wiley-Interscience, Hoboken, NJ, USA, 2000.
[16] D. R. Anderson, D. J. Sweeney, and T. A. Williams, Statistics for Business and Economics, South-Western, Mason, Ohio, USA, 1990.

Karsten Vandborg Sørensen received his M.S. degree in electrical engineering from Aalborg University, Aalborg, Denmark, in 2002. Since 2003, he has been a Ph.D. student with the Digital Communications (DICOM) Group at Aalborg University. His research areas are within noise reduction in speech signals: noise estimation, speech presence detection, and enhancement.

Søren Vang Andersen received his M.S. and Ph.D. degrees in electrical engineering from Aalborg University, Aalborg, Denmark, in 1995 and 1999, respectively. Between 1999 and 2002, he was with the Department of Speech, Music and Hearing at the Royal Institute of Technology, Stockholm, Sweden, and Global IP Sound AB, Stockholm, Sweden. Since 2002, he has been an Associate Professor with the Digital Communications (DICOM) Group at Aalborg University. His research interests are within multimedia signal processing: coding, transmission, and enhancement.
