CMOS_Noise_Sources
Jess Johnson
Senior Instrumentation Scientist
Steward Observatory, University of Arizona
11 April 2024
Release Version One
Abstract
In imaging sensors, there are two distinct classes of noise: signal-related noise, which is a function of the
impinging photons and independent of the sensor, and sensor-related noise. Sensor noise can be further classified
into fixed pattern noise, dark current shot noise, and read noise.
Some of these forms of noise are temporal noise, varying from moment to moment, and others are spatial
noise, persistent in time but varying from pixel to pixel. Whereas spatial noise can be effectively mitigated with
traditional data reduction techniques, temporal noise, such as electronic noise, is difficult, if not impossible,
to effectively reduce. In addition, CMOS sensors are prone to a type of destructive temporal noise known as
Random Telegraph Signal Noise, also known as Salt & Pepper noise, which is extremely difficult to mitigate
and increases dramatically over time with exposure to proton radiation. Other forms of noise which are
typically of small contribution to the sensor’s noise profile at start can also be expected to increase with
exposure.
This memo begins with a brief discussion of CMOS structure and architecture, in which the features and
structures of active pixel CMOS sensors that are responsible for generating noise are presented. The next
section presents a brief overview of the mathematical representation of noise. The following section then lists
the classifications of CMOS noise, and discusses the various types of noise and the mechanisms that create
them. The next section discusses the combined effects of the different noise sources. The following section
briefly touches on the effects of radiation on noise, and the final section deals with noise reduction techniques.
The conclusion summarizes the major points of interest to the instrumentation teams.
Author’s notes
This paper was inspired not only by a request from an instrumentation researcher, but also by my need to
understand the fundamentals of noise to better inform the design of the sensor testing program, and by my
own curiosity and concern.
The use of a large number of CMOS sensors in space-based missions is unprecedented, and many aspects
of their use in the space environment are not well studied. We do know, from the JUICE mission, that CMOS
sensors will degrade in response to radiation, increasing the sensor’s noise level. Therefore, understanding
noise and its mitigation are extremely important when imaging is used not only for scientific data acquisition,
but also for vital functionality such as guidance and wavefront control.
This paper is essentially a review of the literature I have digested over the last 18 months of my study
of CMOS imaging and the characterization of sensors as it relates to the vital issues of imaging noise and
its reduction. No single reference provides a comprehensive overview of all noise sources in a CMOS imager.
My role here was to take material from a large number of sources and attempt to organize and present it
in a coherent and understandable way. Most of this paper was taken from my notes on those sources, and
the largest issue I faced was attempting to filter and compress a voluminous amount of information to the
essentials. I made no assumption at the outset about a reader’s knowledge level; I begin the discussion from
fundamentals.
I used three different texts as primary references for this memo. The first is James Janesick’s excellent
Photon Transfer: DN → λ [1]. Anyone who wants to more fully understand both CMOS functionality
and a powerful method of characterization should read this book. The next is CMOS Image Sensors, by
Konstantin Stefanov [2]. Although quite technical, it is one of the best ’deep understanding’ books on the
topic that I’ve found. The final reference is Ultra Low Noise CMOS Image Sensors by Assim Boukhayma [3].
All three of these references complement each other quite well, and all three are highly recommended.
There are several important topics briefly discussed here that I continue to pursue, and will report on as
necessary. First and most important on that list are white noise and RTS denoising algorithms, which I believe
to be essential to allow continued effective operation of CMOS cameras in the space environment. Another
important topic is understanding the radiation environment that we will be operating in so that exposure levels
to different forms of radiation can be determined over the lifetime of the mission. This will begin to allow us
to predict which performance parameters are likely to be degraded, and by how much. This is dependent on
the final orbit selection, however, and so sits on the back burner for the time being.
Contents
1 Introduction 6
1.1 Classification of Noise Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 Signal Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Sensor Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
4.5.2.1 Thermal Noise (σTH ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.5.2.2 Sense Node Reset Noise (σRESET ) . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.5.2.3 1/f Noise (σf ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.5.2.4 RTS Noise (σRTS ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.5.2.5 Leakage Current Shot Noise (σLC ) . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.5.2.6 Pixel Source Follower Noise (σSF ) . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.5.3 Mitigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.6 Fixed Pattern Noise (σFPN ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.6.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.6.2 Mechanisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.6.2.1 Photo Response Non-Uniformity (σPRNU ) . . . . . . . . . . . . . . . . . . . . . 20
4.6.2.2 Column Fixed Pattern Noise (σCFNU ) . . . . . . . . . . . . . . . . . . . . . . . 20
4.6.2.3 Offset Spatial Variation Noise (σOFPN ) . . . . . . . . . . . . . . . . . . . . . . 20
4.6.2.4 Dark Signal Non-Uniformity (σDSNU ) . . . . . . . . . . . . . . . . . . . . . . . 20
4.6.2.5 Dark Current Fixed Pattern Noise (σDFPN ) . . . . . . . . . . . . . . . . . . . . 20
4.6.2.6 Total Fixed Pattern Noise (σFPN ) . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.6.3 Mitigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.7 Other Noise Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.7.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.7.2 Mechanisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.7.2.1 ADC Quantizing Noise (σADC ) . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.7.2.2 System Noise (σSYS ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.7.3 Mitigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
7 Noise Reduction 29
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
7.1.1 Spatial Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
7.1.2 Temporal Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
7.2 Reducing Spatial Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
7.2.1 Types of Calibration Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
7.2.2 Flat Fielding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
7.2.3 Dark Frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
7.2.4 Bias Frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
7.2.5 A Note on Space Based Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
7.3 Reducing Temporal Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
7.3.1 General Statement of Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
7.3.2 Spatial Domain Denoising Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
7.3.2.1 Spatial Domain Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
7.3.2.2 Variational Denoising - RTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
7.3.3 Transform Domain Denoising Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . 34
7.3.4 Other Noise Reduction Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
8 Conclusions 34
1 Introduction
Noise is defined as ’the uncertainty which accompanies [an] acquired signal’[4]. It is an inherent reality in any
form of imaging sensor, from photographic to electronic; it is always present as the result of acquiring an image.
Some of this noise is a property of light itself, but a significant amount of it originates in the devices that we use
to capture the light.
Noise can never be completely eliminated, but by understanding its nature and its origins it can be substantially
mitigated. The process of characterization, which is the determination of an imager’s performance and noise
properties, is essential to this endeavor, as it provides critical information as to the extent of noise from various
sources that occur in any individual camera.
The general noise profiles of CCD sensors and CMOS sensors share similarities, and so significant portions
of one’s knowledge of the noise properties of CCD imagers transfer to understanding noise in CMOS imagers.
CMOS does, however, have unique forms of noise that present substantial complications to the typical suite of
noise reduction tools, most of which have been developed for working with CCD imagers. In particular, Random
Telegraph Signal noise (RTS) is a major issue with CMOS sensors that impacts their use as quantitative imaging
devices (see Section 4.5.2.4), but plays a far lesser role in CCD noise performance.
I should note that there are two types of noise related phenomena that are not covered in this paper. The
first, Fano noise, is a form of shot noise that occurs in detectors in response to photons with energy levels in
the far UV and which becomes of concern in the soft x-ray regime. The second concerns the noise generated by
excessive quantum yield (QY), specifically the production of multiple photoelectrons per single impinging photon.
For silicon, photons of wavelengths between 400 nm and 1200 nm will generate single electrons. As we will be
using camera filters passing photons in the 400 nm to 800 nm range, Fano noise is not relevant, and in that range,
QY = 1.
An outline of noise sources in a CMOS imager looks like this. Following each noise class are the individual
types of noise that this paper discusses.
• Signal Noise
– Photon Shot Noise
• Temporal Noise
– Dark Current Noise
∗ Dark Current Shot Noise
∗ Diffusion Dark Current
∗ Depletion Region Generation Noise
– Transfer Noise
∗ Non-ideal Charge Transfer Noise
– Electronic Noise
∗ kT/C Noise
∗ MOS Transistor Thermal Noise
∗ 1/f Noise
∗ RTS Noise
∗ Leakage Current Shot Noise
– Other Noise
∗ ADC Quantizing Noise
∗ System Noise
• Spatial Noise
– Fixed Pattern Noise
∗ Photo Response Non-Uniformity
∗ Offset Spatial Variation
∗ Column Level Gain Variation
∗ Dark Signal Non-Uniformity
∗ Dark Current Fixed Pattern Noise
This particular classification scheme is based on the source of the noise, that is, where in the imager it
originates. The definitions of each of these noise types, along with the physical processes that create them, are
the subject of Section 4.
Since we are grouping noise types by source, it is helpful to understand the physical structures and processing
architecture in a CMOS sensor that give rise to these forms of noise.
Figure 1: Structure of a CMOS active pixel [5].
Figure 2: Structure of a Pinned Photodiode [6].
displacement damage. Also, the depletion region which underlies it is susceptible both to manufacturing defects
and radiation induced electron trap creation, a process which drastically increases the thermal generation of dark
current.
A schematic of a PPD is shown in Figure 2. The diagram shows the essential components of the PPD.
• nPD: The active photodiode region of the PPD, where the photoelectric effect occurs;
• p Well: The potential well, storage location during exposure for photoelectrons generated by the photodi-
ode;
• GR: The recombination-generation center, the location that supplies mobile charge carriers (electrons and
electron holes) to the photodiode;
• p+ Pinning Layer: The pinning layer, which both protects the photodiode from thermally generated
electrons from the GR region and creates a gradient allowing fast drift motion of signal electrons to the
potential well.
• TG: The transfer gate, the structure that regulates the flow of photoelectrons from the potential well to
the floating diffusion region.
• nFD: The floating diffusion region, where electrons accumulate at readout, changing the voltage of the
region, which is then read out by the sense node as the pixel’s final voltage.
This structure facilitates several useful mechanisms: it enables relatively low dark current and rapid discharge of
signal electrons from the photodiode to the diffusion region, which reduces image lag, which in turn allows for
the rapid electronic shuttering mentioned above.
Figure 3: Functional Schematic of a 3T sensor’s active pixel transistor design.[8].
form of low frequency noise, from which it gets its other name flicker noise, so called because its frequency is low
enough to be seen as flickering by the eye.
A direct result of the multi-transistor 3T/4T design is fixed pattern noise (FPN). FPN is a result of
manufacturing process inconsistencies leading to variations in the source follower transistor’s gain in pixels across the
sensor array. The result is a spatial noise pattern that is not mitigated by the CDS process.
The area to the left of the transfer gate represents the pinned photo diode. The area under the photodiode
represents the potential well. In abbreviated form, the pixel readout process proceeds as follows:
4. The source follower transistor monitors the floating diffusion region’s potential.
5. The floating diffusion region’s final potential is transferred to the column bus line by the row select transistor.
6. The floating diffusion region is reset by the reset transistor.
This is probably enough information about active pixel structure and functionality to understand the sources
of noise related to CMOS pixels. Next up is sensor architecture and signal processing.
Although some of these have been discussed previously, a few definitions are in order.
2.2.2.1 Quantum Yield Gain: Essentially the number of electrons produced per photon. Whereas quantum
efficiency indicates the likelihood of a sensor producing electrons from incident photons, quantum yield gain
indicates the number of electrons produced per incident photon. For the purposes of this paper, ηI = 1.
2.2.2.2 Sense Node Gain: The sense node region, which can be seen in Figure 4, is the structure where
signal charge (electrons) are converted to working voltage. It is represented physically by the floating diffusion
portion of the PPD (see Figure 2).
Figure 5: Block diagram for a typical CMOS imaging sensor, showing internal gain functions (values inside of
blocks), signal parameters (values above blocks) and noise parameters (values below blocks) [1].
2.2.2.3 Source Follower Gain: Physically, the source follower is one of the transistors in a 3T/4T CMOS.
Its job is to buffer (amplify) the voltage output of the sense node.
2.2.2.4 Analog to Digital Converter Gain: The job of the analog-to-digital converter is to take the voltage
determined by the floating diffusion region, transferred through the column bus and sent to it across the row bus,
and convert that voltage to digital numbers. More about the ADC in Section 4.7.2.1.
S(DN)/P = QEI · ηI · ASN · ASF · ACDS · AADC
• P : Incident Photons;
• PI : Interacting Photons;
• S: Sense Node Electrons;
• S(VSN ): Sense Node Voltage;
• S(VSF ): Source Follower Voltage;
• S(VCDS ): Correlated Double Sampling Voltage;
• S(DN ): Analog to Digital Converter Signal.
With the discussion of structure and architecture out of the way, there remains one thing to discuss before
proceeding to sources of CMOS imager noise, and that is how it is quantified.
3 The Mathematical Analysis of Noise
Before continuing on to discuss different noise sources and mechanisms, it is helpful to understand the math and
notation used when discussing noise. This is going to be a very brief presentation, giving enough information to
continue into the following sections. There are only a handful of essential mathematical tools to understand that
are used in the analysis of imager noise. These are presented below.
Mathematically, noise phenomena are typically modeled as continuous random variables, and noise waveforms
are modeled as random processes. This is done because, in many cases, we do not specifically know what
magnitude of noise is affecting the signal at any particular moment.
for fX(x) ≥ 0
The mean square represents the average power of some signal X. The square root of this is the RMS value
of the signal, which corresponds to a constant-power signal whose power equals the average power of X.
The variance of X, then, is:
σX² = E(X²) − (X̄)²
The variance of the signal X is used in noise analysis as an estimate of noise power. The square root of the
variance is σ, the standard deviation, which is used to express the quantity of noise resulting from a specific
source. The sigma value, while representing the standard deviation of the noise component from the signal, can
be read as the number of electrons of noise that are associated with a signal composed of some quantity of
photoelectrons. As an example, for photon shot noise,
σPS = √S
which can be taken to indicate that, say, for a real signal of 100 photoelectrons produced by a light source, 10
photoelectrons of noise will be produced via the mechanism of photon shot noise.
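To make this concrete, the following minimal Python sketch (using NumPy; the 100 e- mean signal and the sample count are arbitrary, illustrative values) draws Poisson-distributed pixel values and compares the measured scatter to √S:

import numpy as np

rng = np.random.default_rng(42)

mean_signal = 100        # assumed mean signal, photoelectrons per pixel
n_pixels = 1_000_000     # number of simulated pixel samples

# Photon arrival is Poisson distributed: the variance equals the mean.
samples = rng.poisson(mean_signal, size=n_pixels)

measured_sigma = samples.std()            # empirical shot noise, e- rms
predicted_sigma = np.sqrt(mean_signal)    # sigma_PS = sqrt(S)

print(f"measured shot noise: {measured_sigma:.2f} e- rms")
print(f"predicted sqrt(S)  : {predicted_sigma:.2f} e- rms")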
3.2.1 Poisson Distribution
The Poisson distribution is a discrete probability distribution. It is given by the equation:
P(X = x) = f(x) = (λ^x / x!) · e^(−λ)
for:
x≥0
which can be read as the probability that x events occur with λ being the average rate of the occurrence of the
events.
The Poisson distribution has the following properties:
Mean = µ = λ
Variance = σ 2 = λ
Standard Deviation = σ = √λ
Note that an important feature of the Poisson distribution is that its variance is equal to its mean. Use of the
Poisson distribution to describe photon shot noise is discussed in Section 4.2.
The variables here follow the discussion of Section 3.1. The Gaussian distribution has the following properties:
Mean = µ
Variance = σ 2
Standard Deviation = σ
3.3 Quadrature
It should be noted that noise sources add their contributions in quadrature. If we have a classification of noise
that contains noise from several sources, the total contribution from those sources would be:
σCLASS = (σS1² + σS2² + σS3² + . . .)^(1/2)
Signal-to-noise ratios are then written as (S/N)xx = S/σSOURCE, where ’xx’ describes the situation under which the ratio is being
calculated, and σSOURCE is the total of all noise sources relevant to that situation, added in quadrature.
For instance, if we are calculating the signal-to-noise ratio of an imager under flat-field illumination, we would
write:
(S/N)FF = S/σTOTAL
where σTOTAL is the total of all noise sources affecting the sensor added in quadrature.
Signal-to-noise can also be calculated for individual noise sources, or classifications of noise. If we wanted to
know the contribution of read noise to the sensor’s overall flat field noise profile, for instance, we would write:
(S/N)FF,READ = S/σREAD
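As a small numerical illustration of the quadrature rule and the resulting ratios, the sketch below combines a few hypothetical noise terms; the signal level, read noise, and FPN quality factor are assumed values chosen only for illustration:

import numpy as np

def add_in_quadrature(*sigmas):
    # Independent noise sources combine as the square root of the sum of squares.
    return np.sqrt(sum(s ** 2 for s in sigmas))

signal = 10_000.0              # flat-field signal, e- per pixel (assumed)
sigma_shot = np.sqrt(signal)   # photon shot noise
sigma_read = 5.0               # read noise, e- rms (assumed)
sigma_fpn = 0.01 * signal      # fixed pattern noise with quality factor 0.01 (assumed)

sigma_total = add_in_quadrature(sigma_shot, sigma_read, sigma_fpn)

print(f"total noise   : {sigma_total:.1f} e- rms")
print(f"(S/N)_FF      : {signal / sigma_total:.1f}")
print(f"(S/N)_FF,READ : {signal / sigma_read:.1f}")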
4.2.2 Mechanism
To understand the phenomena, consider a particle beam with a constant flux rate ϕp . In time t, the average
number of particles incident on a surface is ϕp · t. If each particle has the same probability of incidence, P(i), the
number of incident particles obeys a binomial distribution. As the probability P(i) becomes small (with the number of
particles correspondingly large), the binomial distribution approaches a Poisson distribution.
Because it is a Poisson distribution, it has the property that its variance is equal to its expectation value, such
that:
E(n) = σ² = ϕp · t
Therefore,
σSHOT = √(ϕp · t)
All shot noise phenomena originate in the fundamentally unpredictable nature of quantized processes, such as,
in the case of photon shot noise, the nature of fluorescence. The randomness in direction and timing of photon
emission from a fluorescent source leads to a beam of photons whose cross sectional density displays Poisson
statistics. The way these photons spatially arrive at the detector then gives rise to Poisson variance, which is the
noise associated purely with the interaction of photons with the sensor. In the block diagram, this is:
σP = √P
Note, however, that while photon shot noise is signal noise, there are many sources of shot noise that fall
into the sensor noise classification. All are related, in one way or another, to the flow of electrons. Shot noise is
essentially a particle phenomenon, and the flow of electrons obeys Poisson statistics in the same way that photons
do. For more on other sources of shot noise, see Section 2.2.5.
4.2.3 Mitigation
Because photon shot noise is inherent to the nature of light, and is temporal in nature, it cannot be eliminated.
Its relative contribution can be reduced, however, by increasing exposure time or the intensity of the incident beam, because as P
increases, √P increases much more slowly.
4.3.2 Mechanisms
Dark current charge generation occurs in the PPD through a variety of mechanisms, and is then moved by a
potential gradient through the transfer gate into the potential well. The mechanisms underlying dark current
charge generation are:
4.3.2.1 Depletion Region Generation One of the primary sources of dark current generation occurs in the
Depletion Region of the PPD (see Figure 1). The depletion region is an area underlying the PPD structure that is
devoid of electrons and holes, but contains traps. These traps participate in a process called ’trap assisted carrier
generation’, also called ’hopping generation’, the efficiency of which increases with temperature increase [1].
4.3.2.2 Diffusion Dark Current Diffusion Dark Current starts in the Field Free Region, a region of the area
underlying the pixel structure that has a negligible electric field, is thermally stable, but is populated with charge
carriers. The active pixel structure itself, however, does have a field, and electrons from this region diffuse into
the potential well, a process whose efficiency again is increased with increasing temperature.
4.3.2.3 Dark Current Shot Noise (σD ) The combination of depleted area generation and diffusion dark
current produces a steady flow of diffuse current which flows into the potential well. Above and beyond the
undesired electrons it deposits in the potential well, which is a form of Gaussian noise, the current density is
Poisson distributed and therefore has a form of shot noise associated with it, called Dark Current Shot Noise,
that occurs as it passes the potential well boundary. It is given by:
σD = √(ND)
where ND is the average number of dark current electrons generated in a given integration time.
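As a rough numerical sketch of this relationship (the dark current rate and integration time below are hypothetical values, not measurements from any sensor discussed here):

import math

dark_rate_e_per_s = 0.5      # assumed dark current, e- per pixel per second
integration_time_s = 120.0   # assumed exposure length, seconds

n_dark = dark_rate_e_per_s * integration_time_s   # mean dark electrons, N_D
sigma_dark = math.sqrt(n_dark)                    # dark current shot noise, sigma_D

print(f"mean dark signal: {n_dark:.0f} e-")
print(f"dark shot noise : {sigma_dark:.1f} e- rms")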
4.3.3 Mitigation
The only way to reduce the effect of dark current is to cool the image sensor, which reduces the flow of thermal
electrons to the potential well and thereby decreases noise from this source substantially.
There is, however, persistent patterning to the production of the electrons generated by these mechanisms,
which varies from pixel to pixel but remains consistent over time. This fixed patterning is the spatial component
of dark current, known as dark current fixed pattern noise (DFPN). This patterning is a component, along with
pixel fixed pattern noise (PFPN), column fixed pattern noise (CFPN), and dark signal non-uniformity (DSNU),
of fixed pattern noise, discussed below in Section 4.6. Further reduction of the effect of dark current, then,
involves characterizing the sensor’s fixed pattern noise and subtracting it off in post-acquisition processing.
4.4.2 Mechanisms
4.4.2.1 Potential Well Non-Ideality The first mechanism underlying transfer noise has to do with the fringing
fields that are used to structure the shape of the potential well. It is frequently the case that due to variation in
pixel structure resulting from the manufacturing process, the field is not applied consistently, and the resulting
change of potential in the well causes electrons to cluster at the transfer gate. This leads to slower transfer (lag),
called Charge Transfer Inefficiency (CTI). Further, some of these electrons diffuse back into the diffusion region
(spill back lag). This underlies the phenomenon of image lag. Sensors that exhibit substantial amounts of image
lag most likely had issues during manufacture that may affect other performance parameters.
4.4.2.2 Transfer Gate Non-Ideality The second mechanism concerns the transfer gate itself. Again, because
of the manufacturing process, the Si-SiO2 area underlying the transfer gate frequently contains traps. Electrons
flowing from the potential well to the floating diffusion region literally become trapped at the gate. Because this
represents a current flow incident on the boundary of a region, it is also a form of spill back lag.
Spill back, because it is a form of current flow, is a source of temporal shot noise, and is given by:
σCTI = √(CTI · N)
where CTI is a measured quantity determined by taking multiple sets of two consecutive readouts of large
integration time, averaging each of the first exposures and then the second exposures, and determining the ratio
between the two averages. This should be done during characterization to detect subtle manufacturing defects in
any particular imager.
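A minimal sketch of this measurement is shown below; it assumes that CTI can be taken as the fractional signal deficit between the averaged first and averaged second readouts, which is one plausible reading of the procedure above rather than a prescribed recipe:

import numpy as np

def estimate_cti(first_reads, second_reads):
    # Average the first readouts and the second readouts separately,
    # then report the fractional difference between the two averages.
    mean_first = np.mean([f.mean() for f in first_reads])
    mean_second = np.mean([s.mean() for s in second_reads])
    return (mean_first - mean_second) / mean_first

def cti_shot_noise(cti, signal_e):
    # sigma_CTI = sqrt(CTI * N)
    return np.sqrt(cti * signal_e)

# Hypothetical example with synthetic frames (a 0.01% transfer deficit is injected):
rng = np.random.default_rng(0)
firsts = rng.poisson(50_000, size=(10, 64, 64)).astype(float)
seconds = firsts * (1 - 1e-4)
cti = estimate_cti(firsts, seconds)
print(f"estimated CTI ~ {cti:.1e}; sigma_CTI at 50 ke- ~ {cti_shot_noise(cti, 5e4):.2f} e-")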
4.4.3 Mitigation
Transfer noise is temporal noise, and therefore falls into the classification of noise that cannot be addressed
directly but can potentially be reduced during post-acquisition processing with white-noise denoising routines (Section 7).
4.5.2 Mechanisms
4.5.2.1 Thermal Noise (σTH ) Thermal electronic noise is a separate phenomenon from dark signal. Thermal
noise in this sense refers to noise generated by electrical components, resulting from fluctuation in the velocity
of charge carriers in conductive material due to thermal excitation [1]. It is generated in every resistive element
and connection in a circuit. It is modeled similarly to the noise generated by current moving through a resistor,
or current moving through a resistor/capacitor combination.
Thermal noise takes two forms. The first, MOS transistor thermal noise, is the noise generated by the flow of
current through resistive channels in large area MOS transistors. Under normal operating conditions, MOS noise
is negligible.
The second, kTC noise, is noise caused by a voltage fluctuation across a capacitive device resulting from the
thermal noise generated by a resistive element connected to it. Thermally generated kTC noise is also, under
normal operating conditions, negligible.
4.5.2.2 Sense Node Reset Noise (σRESET ) As discussed in Section 2.1.2, the reset transistor in a 3T type
sensor is a strong source of kTC noise, leading to the creation of the correlated double sampling system in the 4T
design. Most of this noise is effectively cancelled by that system; the remaining portion becomes a component of
source follower noise.
4.5.2.3 1/f Noise (σf ) The origin of 1/f noise, also called flicker or amplifier noise, is not well understood,
but it is pervasive in electronic devices. It is most likely the result of silicon oxide contacting the substrate in
the region of the diffusion area, creating electron traps that then capture and release electrons randomly. This
manifests itself as low frequency noise, low enough in frequency to be seen as flickering by the eye. As CMOS
technology has scaled, flicker noise has increased as a component of a sensor’s noise profile. Flicker noise is
manufacturing quality dependent, and can be a considerable component of the noise profile.
4.5.2.4 RTS Noise (σRTS ) Perhaps one of the more obvious noise effects in CMOS imagers is Random
Telegraph Signal Noise, or Salt and Pepper Noise (SAP) [4], also called ’Impulse Noise’. It is quite evident in
dark frames, as shown in Figure 6. Similar to 1/f noise, RTS has increased with the upscaling of CMOS imagers.
SAP is a highly destructive form of noise, as it corrupts pixels by essentially overwriting their DN value with a
zero or max DN, which makes it a difficult form of noise to correct. Also, SAP increases dramatically as a result
of exposure to proton irradiation, the result of fundamental changes to the active pixel structure.
4.5.2.5 Leakage Current Shot Noise (σLC ) Leakage current is undesired charge transport that occurs in
either depleted regions or through insulators in all electronic devices; in CMOS imagers, these are effects that occur
primarily in the sense node and transfer gate regions. They are the result of atomic-level stochastic errors inherent
to the CMOS manufacturing processes. Fortunately, they are a minor contributor to the overall noise profile of any
particular sensor; analysis shows that typical levels are around 0.001 e- and occur between reset and sampling.
4.5.2.6 Pixel Source Follower Noise (σSF ) Collectively, these different forms of electronic noise are referred
to as source follower noise. Whereas source follower noise in CCD imagers is dominated by flicker noise, RTS
noise is the dominating factor in CMOS sensors. The overall composition of pixel source follower noise is then:
σSF = (σTH² + σf² + σRTS²)^(1/2)
For bookkeeping purposes, leakage current shot noise and reset noise are added to the overall noise profile
separately. See Section 5.
4.5.3 Mitigation
Like all forms of temporal noise, electronic noise is difficult, if not impossible to mitigate. RTS noise, in particular,
is destructive, increases over time in exposure to radiation, and has been extremely difficult to address. Both
temporal noise and RTS noise mitigation are the subject of Section 7.
I should mention here that systematic defects in a sensor... such as dust motes, vignetting, shading, interference
fringing, etc., all fall under the heading of FPN sources.
4.6.2 Mechanisms
4.6.2.1 Photo Response Non-Uniformity (σPRNU ) PRNU originates primarily in two different regions of the
pinned photo diode. One component is dominant at high signal levels, and is a result of pixel-to-pixel variations
in quantum efficiency, pin voltage, and full-well capacity. The other component, which dominates at low signal
levels, is a form of conversion gain mismatch, resulting from differences in the structure of the sense node junction,
the transfer gate and the source follower. It is represented as follows:
σPRNU = PN · S
where σPRNU is the photo response non-uniformity in rms e- and PN is called the fixed pattern noise quality factor,
which is approximately 0.01 for both CCD and CMOS type sensors [1].
4.6.2.2 Column Fixed Pattern Noise (σCFNU ) Sometimes called Vertical Gain Mismatch, this form of spatial
noise results from inconsistency in the gain of CMOS column level amplifiers, and appears as easily perceived
vertical lines in the image. Column level gain in CMOS is implemented with switched capacitor amplifiers, and
hence gain differences are the result of capacitor mismatch, with typical mismatch values having a standard
deviation on the order of 0.01% [3].
4.6.2.3 Offset Spatial Variation Noise (σOFPN ) This form of noise primarily results from non-idealities in
the structure of the active pixel’s transfer gate and is a response to the switching of the transfer gate’s voltage.
Although most of this effect is cancelled by the CMOS CDS circuitry, these structural issues can result in pixel
gain variation even with CDS.
4.6.2.4 Dark Signal Non-Uniformity (σDSNU ) Dark signal non-uniformity is the result of applying bias to
pixels to account for low signal levels whose noise may drive the pixel’s reported voltage to negative values,
resulting in negative DN numbers after ADC conversion. This patterning is what the creation of bias frames in
traditional noise reduction techniques is meant to address. It is important to note that this value is not constant
across modes (gain settings, binning, etc.) and shows a weak temperature dependence, making the creation of
bias frames for different mode combinations mandatory; including different exposure lengths as a factor should
also be considered.
4.6.2.5 Dark Current Fixed Pattern Noise (σDFPN ) As discussed in Section 4.3.3, dark current fixed
pattern noise is the persistent patterning resulting from an active pixel’s generation of dark current. One measure
of DFPN is given by:
σDFPN = D · DN
where D is the average dark current in electrons, and DN is the dark current FPN quality factor, which varies
between 10% and 40% for CMOS imagers.
4.6.2.6 Total Fixed Pattern Noise (σFPN ) For all sources of fixed pattern noise, we have:
σFPN = (σPRNU² + σCFNU² + σOFPN² + σDSNU² + σDFPN²)^(1/2)
4.6.3 Mitigation
Spatial noise is effectively mitigated, at least in ground-based applications, with traditional methods of astronom-
ical image noise reduction that are utilized in post-acquisition processing, such as the application of dark frames,
bias frames and flat fields. This is the topic of Section 7.2.
4.7 Other Noise Sources
4.7.1 Definition
This section contains two items: ADC quantizing noise and system noise. The second, system noise, is potentially
composed of dozens of different sources, all dependent on the specific CMOS architecture and the manufacturing
processes by which they were made. Whereas the first is well quantified and predictable, the second is stochastic
and non-synchronous from frame-to-frame. Non-synchronous in this usage means that noise levels alter signifi-
cantly from one frame to the next under conditions of similar signal levels.
4.7.2 Mechanisms
4.7.2.1 ADC Quantizing Noise (σADC ) Analog to digital quantizing (or quantization) noise is basically a
rounding error between the input signal voltage and the digital number output. All quantization processes require
rounding, and the difference between the desired voltage representation and the final converted DN is called the
quantization error. Quantizing a sequence of numbers produces a series of quantization errors which manifest as
quantization noise.
As the effect of the conversion error is to create a variance in DN readings, this is almost a virtual form of
noise (but quite real in effect), as it is not the result of errant electrons.
To a certain extent, ADC noise is predictable. For an ideal ADC,
σADC(DN) = (1/12)^(1/2) = 0.2887 DN rms
In terms of virtual electrons, this amounts to:
σADC = 0.2887 · KADC(e-/DN)
Quantizing noise is dependent on the quantity KADC(e-/DN), called the ADC sensitivity. ADC sensitivity is
a quantity that can either be determined from the manufacturer’s specification sheet information, or can be
characterized with a powerful characterization tool known as a photon transfer curve (PTC) (see Figure 8 for an
example of a photon transfer curve.) Sensors with higher ADC sensitivity values can produce noise ranging from
being minimally apparent at the low end to overcoming the other components of read noise at the higher end.
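As a quick numeric check of the expressions above, the sketch below compares quantization noise, expressed in electrons, against an assumed read noise for a few illustrative ADC sensitivity values (none of these numbers describe a specific device):

import math

read_noise_e = 5.0               # assumed read noise, e- rms
for k_adc in (2.0, 10.0, 50.0):  # assumed ADC sensitivities, e- per DN
    sigma_adc_dn = 1.0 / math.sqrt(12.0)   # quantization noise, ~0.2887 DN rms
    sigma_adc_e = sigma_adc_dn * k_adc     # quantization noise in e- rms
    dominant = "quantization" if sigma_adc_e > read_noise_e else "read noise"
    print(f"K_ADC = {k_adc:5.1f} e-/DN -> sigma_ADC = {sigma_adc_e:5.2f} e- rms ({dominant} dominates)")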
4.7.2.2 System Noise (σSYS ) The sources of system noise are almost too many to enumerate here, but a
partial list would include [1]:
• Preamp noise;
• Transient noise;
• Settling and Ringing noise;
• Ground Bounce noise;
• Clock Phase Jitter noise;
• Circuit Crosstalk noise;
• Power Supply noise, resulting from unstable power supply voltage;
• Oscillation noise.
System noise is temporal noise and considered to be dynamic, with levels changing from frame to frame
(non-synchronous noise), and increasing significantly with each doubling of the frame rate. For long exposures,
system noise is generally negligible. All forms of system noise are typically represented collectively as σSYS .
4.7.3 Mitigation
As a form of temporal noise, system noise is difficult to address. At longer exposure times, it is typically negligible,
but white-noise denoising algorithms that address temporal noise can be applied. See Section 7.3.
Figure 7: CMOS Imaging sensor block diagram, with all discussed noise sources included. At the top of the
diagram is the physical structure that each transfer function is associated with. Under that are the average signal
values at individual stages of signal processing, with ’P’ representing photons, ’S’ representing electrons, ’S(V)’
representing various voltages, and ’S(DN)’ representing digital numbers. Under this are the transfer, or gain
functions as defined in Section 2.2.1. Under this are the noise sources discussed in this paper, grouped into three
classes (Shot, Read, and FPN), with their horizontal positioning indicating at what stage in the signal process
chain that they combine with the signal. [1].
Next, we’ll write out the components of the noise and the equations of their contributions to the sensor’s SNR
for each of these three classes.
Also, because one of the most important measures of image quality is the sensor’s Signal-to-Noise ratio (SNR),
it’s helpful to write out the SNR contributions from each class as well. The SNR equations here apply to uniform
flat-field illumination.
5.1.1.1 Shot Noise SNR Contribution Shot noise increases as illumination level increases, but signal also
increases. Because the signal increases much faster than the shot noise, which is modeled as σSHOT = √S, shot
noise SNR generally improves as illumination levels increase. The SNR within the shot noise class
is given by:
SNR(SHOT) = (S/N)FF = S/σSHOT = S/√S = √S
This indicates that the SNR from shot noise sources increases by the square root of the signal, which is a line
with a slope of 1/2 on a log-log plot.
5.1.2.1 Read Noise SNR Read noise from all sources is independent of the input signal. The read noise
SNR is given by:
SNR(READ) = (S/N)FF = S/σREAD
This indicates that the SNR from read noise sources is proportional to the signal, which is a line with a slope
of 1 on a log-log plot.
5.1.3.1 Fixed Pattern Noise SNR FPN noise increases proportionally to signal. The fixed pattern SNR is
given by:
SNR(FPN) = (S/N)FF = S/σFPN = S/(PN · S) = 1/PN
This indicates that the SNR from fixed pattern sources is independent of signal, which is a line with a slope
of 0.
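The three classes can be combined into a simple total-noise model. The sketch below evaluates that model over a range of signal levels using assumed values for the read noise and FPN quality factor, and reports which class dominates at each level; it is an illustration of the regime behavior, not a model of any particular sensor:

import numpy as np

read_noise = 5.0   # e- rms (assumed)
p_n = 0.01         # FPN quality factor (assumed)

signal = np.logspace(0, 5, 6)      # 1 e- to 100,000 e-
sigma_shot = np.sqrt(signal)       # slope 1/2 on a noise-versus-signal log-log plot
sigma_fpn = p_n * signal           # slope 1
sigma_total = np.sqrt(read_noise**2 + sigma_shot**2 + sigma_fpn**2)

for s, tot, sh, fp in zip(signal, sigma_total, sigma_shot, sigma_fpn):
    regime = max((read_noise, "read noise"), (sh, "shot noise"), (fp, "FPN"))[1]
    print(f"S = {s:9.0f} e- : total noise {tot:8.1f} e- rms, {regime} regime, SNR = {s / tot:7.1f}")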
Figure 8: Ideal signal versus noise photon transfer curve, showing the four signal regimes. [1].
5.2.1 Description
The plot shows RMS noise versus average input signal (essentially a representation of exposure time). At the
left end of the x-axis, zero represents no input signal, such as in the situation of taking a dark frame. Signal in
this regime is fully dominated by sources of read noise, represented by the blue horizontal line. The red angled
line beneath it represents photon shot noise. As illumination level increases, the point where photon shot noise
exceeds read noise marks the start of the shot noise regime, where the noise profile is dominated by shot noise,
of which photon noise is proportionally the largest contributor. The characteristic 1/2 slope of shot noise is easily
seen. Underneath the red line is a green line, representing fixed pattern noise. The point at which FPN exceeds
photon shot noise marks the start of the FPN regime, where fixed pattern noise dominates the noise profile.
The fourth region begins as pixels begin to hit their full well capacity. There is a rapid dropoff in noise as
saturation is reached, although some CMOS types will show a continuing increase in FPN. This dropoff point is
the way in which one characterization value, the full well value, can be determined from the PTC curve.
An important note here concerns ADC quantizing noise. As discussed in Section 4.7.2.1, the degree of ADC
quantizing noise is dependent on the particular ADC’s sensitivity value, expressed in electrons per DN. Lower is
better. ADC sensitivity values can range between 2 and 100 electrons per DN. Whereas manufacturers rarely list
the sensitivity of the ADC, they sometimes will list the ADC bit value. In general, the higher the bit level, the
lower the quantization noise. For higher sensitivities and/or lower ADC bit values, ADC noise can dominate the
noise floor, producing greater amounts of noise than either reset or RTS sources. This makes it imperative to
determine the ADC sensitivity during characterization.
Leakage current shot noise is an interesting phenomenon in that it scales inversely with the technology node
size of the sense node. As APS technology evolves to produce smaller and smaller pixel sizes, the sense node
technology node size decreases, and the leakage current value increases. The cutoff on node size is roughly 300 nm
(note that this does not refer to the actual size of a physical structure, but indicates the manufacturing process
that creates the physical structure). At or above this size, leakage current shot noise is negligible, at about 0.001 e-
RMS. As the size decreases below this, the noise increases rapidly by orders of magnitude, and leakage
current shot noise can become the dominant form of shot noise.
Of the remaining forms of shot noise, the only other source that can potentially be of consequence is charge
transfer non-ideality shot noise. This is largely dependent on the manufacturing process behind any particular
CMOS imager. Typically it is negligible, but CTI should be determined in characterization to ensure that it is.
With these points in mind, we can rank shot noise sources as follows. The assumption here is that the camera
is cooled, the sensor in question has a sense node created with 300 nm or greater manufacturing technology, and
that CTI characterization does not indicate a high CTI factor.
σP² ≫ σD² ≫ σLC² > σCTI²
σP² ≫ σD² > (σSH(VSN)² + σSH(VSF)² + σSH(VCDS)² + σSH(VADC)²)
The defects component of fixed pattern noise is not included here, because defects result from unexpected
mistakes in the manufacturing process and can vary greatly in effect. In certain cases, this type of defect can
have huge implications for FPN. For instance, a slight misalignment in the parallel-plane geometry between the
sensor’s surface and the optical window of the camera can result in interference fringing, which can drive the
noise percentage from FPN sources to as high as 10% of the signal.
5.3 Signal-To-Noise
5.3.1 Sensor Signal-to-Noise
In the same way that plotting noise versus signal gives us insight into which noise sources dominate in different
signal regimes, we can determine when a sensor has its best signal-to-noise behavior by plotting signal-to-noise
ratio versus signal. Such a plot is shown in Figure 9.
Figure 9: A photon transfer curve (left) and its corresponding SNR plot (right) for an imaging sensor under
flat-field illumination. Several quantities are indicated on the plots. On the PTC plot, PN = 0.02 is the sensor’s
fixed pattern noise quality factor. SFW = 350,000 e- is the sensor’s full well capacity. The sigma value for read
noise is given, and the lines are identified by the sigma values they represent. On the SNR plot, KADC is the ADC
sensitivity factor in electrons per DN, and the read noise is given as 5 electrons RMS. Each curve represents the
sensor’s SNR response to different levels of fixed pattern noise, with 0.02 matching the PTC plot on the left. [1].
The figure shows two plots. The one on the left is a typical photon transfer curve for an imaging sensor
illuminated with flat-field light. The one on the right is a corresponding signal-to-noise ratio versus signal plot.
The plots have the same x-axis values, so that they can be easily matched. I have highlighted the regimes in
Janesick’s plot so that their boundaries are easier to see. I have also added a horizontal dotted blue line at an SNR
value of ten, generally considered to be the cutoff under which an image is deemed unacceptably noisy.
One of the things that can be gleaned from this plot is that, for a fixed pattern noise quality factor of
PN = 0.02, the highest signal to noise ratio possible is 50:1, and this occurs in the signal range between 20,000
and 300,000 electrons per pixel. We can also determine the signal level cutoff for acceptable noise, which occurs
at 100 electrons per pixel.
The usefulness of the SNR plot, beyond quick eyeballing of appropriate signal ranges, is that it allows
us to determine the signal-to-noise ratio for any signal level, for any sensor that has been characterized using
photon transfer curve methodologies, from which appropriate exposure times can be calculated to achieve desired
imager performance.
(S/N)I = (MI · SI)/σI,TOTAL
where MI is the image modulation constant, which varies with the image contrast and is obtained from the
modulation photon transfer curve. SI is the signal level of the image, and σI,TOTAL is the total image noise, also
obtained from the modulation PTC.
The important thing about this is that there is an important relationship between the image signal-to-noise
and the sensor’s flat-field signal-to-noise:
(S/N)I = MI · (S/N)FF
What we get from this is that, if we optimize the sensor’s flat field signal-to-noise performance, we produce
the highest signal-to-noise ratio for the images it creates. It also tells us, however, that the image SNR will always
be smaller than the sensor’s flat-field SNR by a factor of MI .
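A short sketch of that relationship, using an assumed read noise, the PN = 0.02 quality factor from the figure, and a purely illustrative value for the image modulation constant MI:

import numpy as np

read_noise = 5.0   # e- rms (assumed)
p_n = 0.02         # FPN quality factor, matching the figure
m_i = 0.5          # image modulation constant (illustrative assumption)

def snr_flat_field(signal_e):
    # (S/N)_FF: signal over the total flat-field noise added in quadrature.
    sigma_total = np.sqrt(read_noise**2 + signal_e + (p_n * signal_e)**2)
    return signal_e / sigma_total

signal = np.logspace(1, 5, 400)
snr_ff = snr_flat_field(signal)
snr_image = m_i * snr_ff           # (S/N)_I = M_I * (S/N)_FF

above = signal[snr_image >= 10]    # where the image clears the SNR = 10 threshold
print(f"peak flat-field SNR: {snr_ff.max():.0f}")
print(f"image SNR reaches 10 near S ~ {above[0]:.0f} e-" if above.size else "image SNR never reaches 10")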
From these results, we can also extrapolate that other noise mechanisms related to the same active pixel
structures may well increase. Leakage current shot noise, for instance, is a noise mechanism tightly tied to the
transfer gate. Charge Transfer Inefficiency generated noise, another transfer gate related noise mechanism linked
to image lag, may also be affected.
And, perhaps, more troublesome is the increase in hot pixels following proton irradiation. This will affect both
dark signal non-uniformity and fixed pattern noise in general, slowly degrading the performance of the library of
calibration frames created during characterization (see Section 7.2).
7 Noise Reduction
7.1 Introduction
There are fundamentally only two ways to reduce a CMOS imaging sensor’s noise: by modifying the sensor’s
electronics, and by post acquisition processing, sometimes referred to as denoising. Reduction of the spatial
component of noise is fairly straightforward and effective; reduction of temporal noise generated by the CMOS
structure and circuitry is far more difficult and an active subject of investigation. Each generation of CMOS
technologies pushes the noise floor lower, but nature assures, primarily through photon and current signal shot
noise, that noise will never entirely be defeated.
Since we have no control over the design of the sensors we use, modifying the sensor’s electronics is not an
option. That leaves us with post-acquisition processing.
known quantity, affects the level of the signal, and this is what we remove in the process of calibration. Temporal
noise is never fully removable, but can be somewhat addressed by the techniques in the next section (Section 7.3).
Calibration, then, is a pixel-by-pixel process in which each pixel’s deviation from the normal pixel response to
even illumination, caused by the level of fixed pattern noise, bias, etc., is addressed by either subtracting some
value from the pixel’s DN count, or by using some value to scale the pixel’s DN count. The values used to do
this are provided by calibration images.
• Flat Fields: These are exposures taken under various exposure times and other settings that match the
settings of the image to be reduced. Their purpose is to remove pixel-to-pixel variation in photoresponse.
• Dark Frames: These are long exposures taken under conditions of no illumination. Their primary purpose
is to remove dark current fixed pattern noise.
• Bias Frames: These are zero second ’exposures’, sometimes referred to as offset frames. In order to keep a
pixel’s output from going negative, an arbitrary DN value is added to every pixel to ensure that every pixel
reports a minimum of 0 DN. This is called the bias signal, or offset. Bias frames remove this.
Each of these is discussed in more detail below. An important note to remember about calibration frames
is that they themselves can ALSO introduce noise. It is therefore important not only that these frames be
acquired with good methodology and careful attention, but also that noise reduction be carried
out on these frames through combination and averaging.
For our purposes, calibration images need to be acquired during the characterization process, when the camera
is already in a controlled environment and under conditions of appropriate illumination and exposure control. This
necessitates that decisions be made about camera settings, exposure times, and operational conditions before
characterization can commence.
Dark frames automatically include the camera’s bias, discussed more fully in the bias frame section below.
When using unscaled dark frames, bias is accounted for in calibration along with dark current fixed pattern noise
through the application of the unscaled dark frame, and bias frames are not required. The use of unscaled dark
frames assumes that a dark frame can be created for each particular exposure duration that will be used in
imaging. This is typically how calibration is done. An image is taken at a certain exposure duration, and then a
dark frame is taken at the same exposure duration and is then used in calibration.
For our purposes, however, the use of scaled dark frames may be more practical, for the reasons discussed
in Section 7.2.5. A scaled dark frame uses the fact that thermal dark current scales linearly with exposure
time. Bias, however, does not scale linearly with exposure time (but it can change with camera temperature;
see Section 7.2.4), as it is a set value. Therefore, bias frames are taken, which allows the separation of the time
varying dark current fixed pattern noise from the constant bias signal. The bias is subtracted from the dark frame,
creating a master dark frame, which is then scaled by exposure time, and then the bias frame taken under the
particular camera settings in use is applied in calibration, along with the scaled dark frame. (It is important to
remember here that scaled dark frames can only be scaled downwards in exposure length from a master dark
frame, never upwards. Master dark frames for this purpose must therefore be selected carefully to ensure
that all required image exposure durations are below the length of the available master dark frames.)
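A minimal sketch of the scaled-dark reduction just described, assuming master calibration frames have already been built by combining and averaging many individual exposures (the function and variable names are illustrative, not a prescribed pipeline):

import numpy as np

def calibrate_scaled_dark(light, master_bias, master_dark, t_light, t_dark, master_flat):
    # light       : raw science frame (DN), taken with exposure time t_light
    # master_bias : combined zero-second frames (DN)
    # master_dark : combined long dark frames with the bias already subtracted (DN),
    #               taken at exposure time t_dark, where t_dark >= t_light
    # master_flat : combined, bias- and dark-corrected flat field
    dark_scaled = master_dark * (t_light / t_dark)    # dark current scales ~linearly with time
    corrected = light - master_bias - dark_scaled     # remove offset and dark-current pattern
    flat_norm = master_flat / np.median(master_flat)  # normalize the flat to unity
    return corrected / flat_norm                      # remove pixel-to-pixel response variation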
The effectiveness of both methods is a matter of contention amongst technical photometrists with strong
personal opinions, although in actuality it is mostly a matter of camera performance. Linearity in pixel response
is the determining factor here. If the signal range under which an exposure is acquired falls within the range in
which a camera has strictly linear response, then both methods achieve very similar and equally acceptable results.
Besides, for those situations in which taking unscaled dark frames for every exposure length is impractical, the
use of scaled dark frames is really the only other option.
calibration frames quite easily after images have been taken, and then use those frames to photometrically adjust
the images. This on-the-fly form of calibration is de rigueur for most photometry, with the exception of pipeline
photometry, as it removes the necessity of having a large and comprehensive database of calibration frames
and expending the associated time that goes into creating it. It allows the precise application of appropriate
photometric adjustment, tailored to each individual exposure.
When this functionality is not available, however, the only solution is to fully understand not only the instrument
settings under which images will be produced, but to also understand every different exposure time that could be
required. This information is then used to create libraries of either unscaled dark frames and flat fields, or master
dark frames, flat fields and bias frames, to be used in post acquisition data reduction. Another caveat, however...
these frames will only work at maximum effectiveness while the sensor is in the same condition under which the
frames were created.
This is an exceptionally important point. Lessons learned from the JUICE mission’s CMOS-based JANUS
camera, as discussed in Section 6, indicate that exposure to space radiation, especially high energy protons and
gamma radiation, will begin to change the response characteristics of active CMOS pixels over time, degrading
signal-to-noise and drastically increasing the prevalence of random telegraph signal noise, which is unfortunately
the most destructive of noise types. It is a given, then, that these calibration frames will become less effective
against fixed pattern noise as time progresses.
The standard model of a noisy image is I = S + N, where I is the image, S is the signal, and N is the noise,
assumed to be of Gaussian distribution with standard deviation σn.
Mathematically, the problem is that this is an inverse equation; we are asked to find S = I − N, and the
solution is not unique. Hence, the problem is considered to be ill-posed.
That doesn’t stop clever people from trying, however. Any solution that attempts to recover S must meet several
criteria. The solution must:
The improvement in an image is measured using various image quality metrics. The most common are:
There are two main classifications of denoising solutions: the ’classical method’, called spatial domain
denoising, and the relatively new transform domain denoising.
7.3.2.1 Spatial Domain Filtering For CMOS imaging purposes, we need only consider linear filtering, as
non-linear filtering addresses multiplicative noise sources, such as the speckle noise found in radar and synthetic
aperture imaging [19].
Conventional imaging produces additive noise. Various filters are commonly used, but they all operate on one
principle: noise occurs at higher spatial frequencies, so apply various formulations of low-pass filters to remove it.
This sledgehammer approach leads to loss of detail and blurring. Filters that are commonly used are linear filters,
mean filters, Wiener filters, and bilateral filters (technically, a bilateral filter is a multiplicative filter type, but has
been shown to be quite effective on additive noise as well).
All filters result in some degree of blurring and loss of high spatial frequency detail. For consumer imaging
purposes, these have been refined to the point where this loss of detail is almost imperceptible to the eye. For
scientific applications, where pixel DN count is critical, these should be considered inappropriate. Of all
the filter types, the bilateral filter [20] is the least destructive and preserves the most detail, but is extremely
computationally expensive. Another filter type, the Wiener filter, evolved from median type filters, and while
not useful in itself for our purposes, forms an essential component of a transform domain denoising technique
discussed in the next section.
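As an illustration of these spatial-domain filters, the sketch below applies a mean filter, a median filter, and a Wiener filter from SciPy to a synthetic noisy frame; the kernel sizes and noise level are arbitrary choices for demonstration, not recommendations:

import numpy as np
from scipy.ndimage import uniform_filter, median_filter
from scipy.signal import wiener

rng = np.random.default_rng(1)
clean = np.outer(np.hanning(128), np.hanning(128)) * 1000.0   # smooth synthetic 'image'
noisy = clean + rng.normal(0.0, 25.0, clean.shape)            # additive Gaussian noise

filtered = {
    "mean (3x3)":   uniform_filter(noisy, size=3),
    "median (3x3)": median_filter(noisy, size=3),
    "wiener (5x5)": wiener(noisy, mysize=5),
}

noisy_rms = np.sqrt(np.mean((noisy - clean) ** 2))
for name, img in filtered.items():
    rms_err = np.sqrt(np.mean((img - clean) ** 2))
    print(f"{name:12s}: residual RMS error {rms_err:6.2f} (unfiltered: {noisy_rms:6.2f})")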
7.3.2.2 Variational Denoising - RTS In variational denoising, the image gradient is used to detect areas of
excessive noise. The image gradient is the directional change in an image’s intensity. As with any use of the
gradient operator in two-dimensional space, it produces a 2D vector at every pixel in the image, pointing in
the direction of maximum intensity increase, with its length giving the rate of change in that direction. The
assumption here is that noise creates excessive intensity changes relative to surrounding pixels. When the image
gradient is integrated over the image, these excessive changes produce a high value for the image’s total variation,
from which the technique gets its name. Denoising, then, involves decreasing the total variation of the image,
thereby reducing the noise while maintaining detail.
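To make this concrete, the short Python sketch below computes the total variation of a simulated frame and then applies Chambolle’s total variation regularization as implemented in scikit-image; the image values, noise level, and weight parameter are arbitrary placeholders and would need tuning against real data.

import numpy as np
from skimage.restoration import denoise_tv_chambolle

def total_variation(img):
    # Sum of gradient magnitudes over the whole image.
    gy, gx = np.gradient(img.astype(float))
    return float(np.sum(np.hypot(gx, gy)))

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0.2, 0.8, 256), (256, 1))  # smooth synthetic scene
noisy = clean + rng.normal(0.0, 0.05, size=clean.shape)

# Noise inflates the total variation; the regularizer drives it back down
# while trying to keep large-scale structure (edges, gradients) intact.
print("TV before:", total_variation(noisy))
denoised = denoise_tv_chambolle(noisy, weight=0.1)
print("TV after: ", total_variation(denoised))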
There are several methods of implementing this procedure: total variation regularization, non-local regularization,
sparse representation, and low-rank minimization. Although the details of these methods are beyond the scope of
this review, a specific implementation of low-rank minimization, called weighted nuclear norm minimization
(WNNM), is very promising for addressing both white noise and RTS noise (and is one of the few methods shown
to substantially mitigate RTS noise without making affected pixel values unreasonable). It achieves extremely
high scores on noise reduction metrics, is robust to high levels of noise, and outperforms most other
methodologies [21, 18]. Its downside is its relatively high computational cost.
For a discussion of the method, description of the algorithm, and access to sample code, see Kanavalau’s
Implementation of the Weighted Nuclear Norm Minimization for Image Denoising [22].
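The heart of WNNM is a weighted shrinkage of the singular values of a matrix of grouped similar patches. The Python sketch below shows only that step, assuming the patch matrix has already been assembled by block matching; the weighting follows the scheme used in the original reference implementation, with the constant c and the noise level supplied by the caller, and iteration and patch aggregation omitted.

import numpy as np

def wnnm_shrink(patch_matrix, sigma_noise, c=2.0 * np.sqrt(2.0)):
    """One weighted nuclear norm minimization step for a group of similar patches.

    patch_matrix: columns are vectorized patches selected by block matching.
    sigma_noise:  estimated standard deviation of the additive noise.
    """
    u, s, vt = np.linalg.svd(patch_matrix, full_matrices=False)
    n_patches = patch_matrix.shape[1]

    # Estimate the singular values of the underlying clean patch matrix.
    s_clean = np.sqrt(np.maximum(s**2 - n_patches * sigma_noise**2, 0.0))

    # Large singular values (mostly signal) receive small weights and are
    # shrunk little; near-zero singular values (mostly noise) are suppressed.
    eps = 1e-8
    weights = c * np.sqrt(n_patches) * sigma_noise**2 / (s_clean + eps)

    s_shrunk = np.maximum(s - weights, 0.0)
    return (u * s_shrunk) @ vt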
7.3.3 Transform Domain Denoising Techniques
Transform domain techniques are the newest evolution of denoising methodologies. They all rely on one basic
observation: signal and noise have very different characteristics in transform domains. This allows a noisy image
to be transformed into some other domain, where the noise is filtered out according to those differing characteristics.
The most significant approaches here are wavelet domain denoising and block-matching and 3D filtering (BM3D),
a highly effective extension of wavelet domain methods. Recent improvements in the BM3D methodology have
made it the most effective denoising technique available. The two downsides to this methodology are that it can
produce artifacts in flat areas of the image, depending on how noise levels vary, and that it is, again,
computationally expensive. And while it is quite good at addressing most forms of temporal noise, it is not as
effective on RTS noise as the WNNM technique.
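As a simple illustration of the transform domain principle, the sketch below uses the wavelet denoising routine from scikit-image: the noise level is estimated from the finest wavelet scale and the coefficients are soft-thresholded. The BayesShrink threshold choice and the simulated noise level are convenient defaults, not recommendations.

import numpy as np
from skimage.restoration import denoise_wavelet, estimate_sigma

rng = np.random.default_rng(2)
clean = np.tile(np.linspace(0.2, 0.8, 256), (256, 1))
noisy = clean + rng.normal(0.0, 0.05, size=clean.shape)

# Signal concentrates in a few large wavelet coefficients, while white noise
# spreads evenly across many small ones, so soft thresholding removes mostly noise.
sigma_est = estimate_sigma(noisy)
denoised = denoise_wavelet(noisy, sigma=sigma_est, mode='soft',
                           method='BayesShrink', rescale_sigma=True)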
BM3D builds on several of the noise reduction techniques described above. It works like this:
• Patches of an image with similar characteristics are identified by block matching and stacked into groups;
• Each group of patches is transformed into the wavelet domain;
• Wiener filtering is applied to the transform coefficients;
• The resulting coefficients are inverse transformed;
• The patches are reassembled to form the image.
For a complete description of the method and an implementation of the algorithm, see Marc Lebrun’s An
Analysis and Implementation of the BM3D Image Denoising Method [23].
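For experimentation, a Python port of the reference BM3D implementation is distributed on PyPI as the bm3d package; the minimal sketch below assumes that package is installed, that the image has been scaled to the [0, 1] range, and that the noise standard deviation in those units is known or has been estimated.

import numpy as np
import bm3d  # third-party package: pip install bm3d

rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0.2, 0.8, 256), (256, 1))
sigma = 0.05
noisy = clean + rng.normal(0.0, sigma, size=clean.shape)

# Two-stage BM3D: grouped blocks are hard-thresholded in the transform domain,
# then Wiener-filtered using the first-stage result as a pilot estimate.
denoised = bm3d.bm3d(noisy, sigma_psd=sigma)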
8 Conclusions
This paper has reviewed a considerable amount of information to familiarize the reader with noise sources in
CMOS imagers and ways to reduce them. In this concluding section, I wish to present the points that I consider
to be most relevant to the task at hand... that is, making sure that a relatively unprecedented and untested use
of CMOS sensors has the highest chance of succeeding in its purpose for the longest period of time possible. To
that end, these are the take-away lessons from this rather lengthy exercise.
1. The use of CMOS imagers in space has been limited to date, and there are not a lot of references in
the literature concerning their effectiveness, durability, or reliability in this environment. The best set of
references available are the papers concerning the JANUS camera aboard the ESA JUICE mission.
2. The structure of, and manufacturing processes behind, mass produced CMOS imagers have inherent flaws
that both create and exacerbate types of noise that are not common to CCD imagers.
3. CMOS sensors are not known, or proven, to be consistent in performance parameters from one instance of
the same design to another. It is also largely untested whether this variation occurs from wafer to wafer or
within units manufactured from the same wafer. The assumption that one camera will behave the same as
another is therefore unresolved, as the use of a large number of CMOS imagers for scientific purposes has, as
best as I can tell, not been attempted as of yet.
4. The best preparatory action that can be taken to mitigate these unknowns is the comprehensive and thorough
characterization of each camera. It is entirely possible that, in the course of this process, some cameras will
be found to be faulty in ways that preclude their use.
5. The results of characterization, specifically the creation of photon transfer curves, should play an important
role in calculations of exposure times to ensure that signal levels fall within the noise regimes that produce
the highest signal-to-noise ratios.
6. The absence of the ability to perform traditional calibration will demand the creation of large libraries of
calibration frames to be used in post-image processing. This requires knowledge of camera settings such as
binning, gain, operating temperature, and exposure duration.
7. As calibration frames will have to be accumulated during the characterization process, these settings will
have to be determined before characterization can begin.
8. Radiation exposure will degrade CMOS imagers. This is an unfortunate fact; the only unknown is to what
extent. The CMOS structure is particularly vulnerable to high-energy proton radiation and somewhat less
vulnerable to gamma radiation. Shielding will block some of the proton radiation, but proton radiation
interacting with aluminum produces secondary gamma radiation via Bremsstrahlung, and no practical
shielding, short of lead or thick concrete, is effective against gamma radiation. The orbit should be determined
as soon as possible to help accurately simulate the radiation environment.
9. The degradation of performance from radiation exposure will erode the calibration library’s ability to reduce
even the most reducible forms of noise. Estimating the time period over which this will happen is dependent
on the simulation of the radiation environment.
10. As sensor imaging quality degrades due to an increasing noise profile, and calibration efforts lose their
effectiveness, we will have to consider white noise reduction techniques to extend the cameras’ functional
lifetimes. The unfortunate side effect is an increase in computational cost and time, and the resulting
slowing of vital image-reliant control loops.
References
Note: All illustrations in this work are borrowed from published articles, with the reference citation indicated in
the image captions. Image 1 was modified to include features not indicated in the original, and Image 7 is a
modified version of Image 5.
[21] JunFang Wu and XiDa Lee. “An improved WNNM algorithm for image denoising”. In: Journal of Physics:
Conference Series. Vol. 1237. 2. IOP Publishing. 2019, p. 022037.
[22] Andrei Kanavalau. Implementation of the Weighted Nuclear Norm Minimization for Image Denoising. 2022.
url: https://web.stanford.edu/class/ee367/Winter2022/report/kanavalau_report.pdf.
[23] Marc Lebrun. “An analysis and implementation of the BM3D image denoising method”. In: Image Processing
On Line 2 (2012), pp. 175–213.