Light Metering with Digital Cameras
Yulia Gryaditskya¹    Tania Pouli¹,²    Erik Reinhard¹,²    Hans-Peter Seidel¹
¹ Max-Planck Institute for Informatics, Saarbrücken, Germany
² Technicolor Research and Innovation, Rennes, France
[Figure 1 panels: (a) Sky Mask; (b) Sky dome (ground truth/estimated); (c) Tonemapped after scaling with our estimate; (d) Tonemapped after scaling with ground truth; (e) CIE ΔE94 between c and d, color bar from 0 to 10.]
Figure 1: Our algorithm employs sky and weather models to estimate a scaling factor that can convert images to absolute luminance values.
(a) We detect and segment sky pixels, which are (b) analyzed to determine the properties of the sky dome as well as some camera parameters.
(c) By scaling the image with our estimated scale factor, we obtain absolute radiometric values, for instance allowing images to be processed
by appearance models such as Reinhard et al. [2012]. (d) The same processing was applied to the image after scaling it with the ground truth
scale factor. (e) The CIE ∆E94 differences between (c) and (d) indicate good correspondence between our estimate and the ground truth.
Abstract
Image calibration is often seen as the process of adjusting pixel values so that image intensities bear a linear relationship to scene luminances. In this paper we go one step further and propose a technique to estimate a one-to-one relationship between pixel values
and scene luminances. Rather than rely on camera characterization,
which tends to be a lengthy and complicated process, we achieve
this by analysing individual images and their meta-data, obviating
the need for both expensive measuring devices such as photometers
and reliance on specific camera and lens combinations. The analysis correlates sky pixel values to values that would be expected
based on geographical meta-data. Combined with multi-exposure
high dynamic range imaging, which gives us the camera response
curve, and thereby linear pixel data, our algorithm allows us to find
absolute luminance values for each pixel — effectively turning digital cameras into absolute light meters. To validate our algorithm
we have collected and annotated a calibrated set of high dynamic
range images. We discuss various applications in graphics, image
processing and computer vision for which absolute luminance values would be crucial, and demonstrate the utility of our method in
the context of calibrated color appearance reproduction as well as
lighting design.
1 Introduction
Image processing and computer vision algorithms often require
their input images to be linear. This means that the relationship
between pixel values and scene luminance is linear. As a consequence, linearized images represent scene luminance up to a constant. Recently, however, several developments have led to the need
for an algorithm to create images whereby pixel values directly represent absolute luminance values. This occurs, for instance, whenever images need to be accurately reproduced while taking into account the state of adaptation of the observer. Consider, for instance,
the SIM2 HDR47E high dynamic range display (www.sim2.com/HDR/). This display has
the unique ability to take pixel values and map them to absolute
luminance values. A display such as this has the ability to exactly
reproduce scene values, affording interesting applications in areas as diverse as lighting design and visual psychophysics.
However, this can only be successfully achieved if the input imagery is not only linear, but also absolute.
Another application would be in image-based lighting [Debevec
2002]. Here, light probes captured from real scenes are used to
realistically illuminate artificial scenes. Although such light probes
are linear, they are not normally specified in absolute units. As a
result, to achieve an accurate representation of the illumination, the
user is required to adjust gain parameters that scale the strength of
the illumination. Consequently, image-based lighting is normally
used in non-critical applications, such as entertainment. With absolute light probes, however, image-based lighting could become a
powerful tool for lighting designers. As an example, the impact of
outdoor lighting environments on office spaces could be assessed.
Finally, scene luminance would be required in image reproduction,
and in particular color appearance modeling. Here, the scene represented by the image would induce a specific state of adaptation
in the observer. The display and the display environment together
would induce a different state of adaptation. Color appearance
models account for this mismatch in adaptation [Fairchild 2005]
but we can only use such algorithms effectively if we know absolute light levels in both the scene and the display environment. The
latter can often be controlled and measured to a sufficient degree.
However, images are usually not calibrated in absolute light levels
as that would currently require measurements of a standard target
in every scene captured, using specialized equipment.
To calibrate images to accurate radiometric values they need to be
linearized first, and then scaled. Linearization has received significant interest from the research community, and as a result several off-the-shelf solutions exist. In our case, we use high dynamic
range (HDR) images, which are created from multiple exposures.
As the camera response curve is estimated and accounted for during this process, we do not have to worry about linearization: such
HDR images are linear by construction.
On the other hand, algorithms for estimating absolute luminance
values in images are few and far between, in particular those that
do not use extraneous equipment. Of course, it is always possible
to insert an 18% grey card or a color checker into the scene, and use
a photometer to measure it. Then, a photograph taken with the test
target present would allow absolute values to be reconstructed as a
post-process. Nonetheless, this would require equipment that is not
universally available to most photographers.
Instead, we note that most cameras record and store in EXIF fields
significant amounts of metadata. Moreover, with the increased popularity of geotagging, modern cameras and mobile phones often include a built-in GPS system, or such units can be bought as cheap
accessories. Thus, it is straightforward to know when an image
was taken, and where. From this information, we can compute how
much light would reach earth from straight overhead (the zenith).
A similar estimation can also be made using the image pixels themselves, and in particular the sky pixels (see Figure 1). The ratio
between these two amounts gives us a scale factor that we can apply to the image to produce absolute luminance values. Of course,
this limits us to outdoor HDR images that include sky pixels. In
our opinion this is a sufficiently large class of images to be useful.
Moreover, our algorithm does not depend on any single camera and
lens combination, as would be the case with algorithms based on
camera calibration.
To validate our algorithm, we require a set of calibrated HDR images with associated EXIF and GPS metadata. We acquired a set
of ground-truth images² and show that our algorithm, detailed in
Section 4, functions with sufficient accuracy (Section 6). We then
demonstrate the algorithm’s utility in the context of color appearance modeling as well as lighting design (Section 7) and draw conclusions (Section 8). In summary, our work offers the following
contributions and benefits:
• To our knowledge, this is the first algorithm to infer absolute
luminance values from an image.
• A calibrated and annotated HDR dataset which includes GPS
and EXIF metadata.
• No specialist equipment is required, nor is there a need to
characterize the camera or lens.
• The algorithm takes as input a single HDR image (with metadata) and does not require training on external data.
• The method’s utility is demonstrated in the context of color
appearance reproduction and image-based lighting design.
2 Background
To create radiometrically calibrated imagery, the usual requirement is that pixel values correlate linearly with scene luminance
values. There are several ways to accomplish this. Perhaps the simplest approach is to take the RAW output that many digital cameras
offer. As little firmware processing has been applied to these images
and sensor response is linear with respect to the number of incident
photons, a linear relationship between pixels and scene luminance
can be assumed [Holst 1998; Janesick 2001]. Other images can be
linearized by estimating the camera’s response function [Lin et al.
2004], which effectively reverse-engineers the in-camera firmware
processing. To make linear images absolute, typically a photometer
is employed to measure a part of the scene.
In high dynamic range imaging, images are usually created by
merging a stack of differently exposed images of the same scene.
In this case, the camera response curve can be estimated from the
exposure stack [Mann and Picard 1995; Debevec and Malik 1997;
Mitsunaga and Nayar 1999; Robertson et al. 2003]. As merging
exposures into HDR images is now commonly available in many
software packages, we use HDR images as input, and thereby solve
the problem of having to linearize the data. In addition, this gives us
increased precision as images less often suffer from exposure and
quantization problems.
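For readers who want to experiment with this step, the following is a minimal sketch of exposure merging, assuming the exposures have already been linearized (i.e. the camera response has been inverted) and that their exposure times are known; the hat-shaped weighting and the function name are our own illustrative choices, not the specific implementation of any of the cited methods.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge linearized LDR exposures (values in [0, 1]) into a relative HDR image.

    A hat-shaped weight down-weights under- and over-exposed pixels; each
    exposure is divided by its exposure time before the weighted average.
    """
    images = [np.asarray(im, dtype=np.float64) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * im - 1.0)   # hat weighting: trust mid-range pixels most
        num += w * im / t                  # radiance estimate contributed by this exposure
        den += w
    return num / np.maximum(den, 1e-8)     # relative (linear) HDR values
```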
² We aim to make this dataset generally available.
Beyond camera response recovery, more elaborate HDR camera
characterization and calibration can further improve the accuracy of
the linearization process [Inanici and Galvin 2004; Krawczyk et al.
2005], including color accuracy [Kim and Kautz 2008]. In each
of these cases, however, creating absolute image data continues to
require measuring hardware. In this paper, we explore the possibility of relaxing this constraint for outdoor images, instead analysing
commonly available meta-data provided through GPS and EXIF
headers, and combining this with the analysis of pixel data in the
image itself. In particular we analyse sky pixels to infer luminance
incident from the zenith and correlate this with a similar estimate
derived from meta-data.
The analysis of natural illumination in images can be achieved in
several different ways and for different purposes. Color constancy
algorithms, for instance, aim to factor out the color of the illuminant
[Finlayson et al. 1994]. Other approaches may estimate illumination together with reflectances [Basri et al. 2007] or they may be
able to determine the dominant direction and location of lights in
the scene depicted in an image [Lopez-Moreno et al. 2010]. The
problem of illumination estimation may be simplified if extra information about the scene is available, either in the form of a 3D model
[Sun et al. 2009], as a time-lapse sequence [Sunkavalli et al. 2008;
Lalonde et al. 2008; Lalonde et al. 2010] or as photo-collections
[Laffont et al. 2012].
The estimation of the sun position is important in several areas, including lighting design [Ward Larson and Shakespeare 1998], and
in the estimation of electricity produced by photo-voltaic cells at
ground level [Mills 2004]. This has given rise to advanced atmospheric sun-light transport models [Bird and Riordan 1986; Kasten
and Young 1989], as well as algorithms to detect sun position on
the basis of shadow extraction [Kim and Hong 2005; Junejo and
Foroosh 2008].
The sky dome is the second source of illumination in outdoor
scenes. Although these can be captured directly [Stumpfel et al.
2004] or represented statistically (see Lalonde et al. [2012] for an
overview), there exist several parametric sky models that can be
used in various interesting ways. For instance, they have been used
to relight 3D architectural models [Yu and Malik 1998], render
skies [Preetham et al. 1999], and insert 3D geometry into a scene
[Lalonde et al. 2012]. Using time-lapse sequences, skies can be
used as geometric calibration targets to reverse-engineer intrinsic
camera and lens parameters, including the focal length of the lens,
the orientation of the camera as well as its geolocation [Lalonde
et al. 2010]. In a sense, our work is complementary to this analysis:
we eschew the use of image sequences, assume camera parameters
and GPS location to be available, and instead use sky models to
infer zenith illumination for the purpose of finding absolute pixel
values. Due to the relevance to our work, a more detailed discussion of parametric sky models follows below.
3 Parametric Sky Models
When sunlight enters the atmosphere, it interacts with particles
and molecules before reaching ground level. Small molecules
cause Rayleigh scattering (which leads to blue skies and a yellow
sun), whereas larger particles create Mie scattering, which is not
strongly wavelength dependent but leads to haze. The latter is often described with a single parameter, called turbidity [McCartney
1976]. The luminance distribution of the sky dome thus depends
on atmospheric conditions as well as the position of the sun. Various models exist that predict this distribution. In our work, we use
the Perez sky model as a prior, which was shown to out-perform
competing algorithms [Ineichen and Molineaux 1994]. It allows us
to infer the luminance arriving from the zenith, based on the sky
region visible in the input HDR image.

[Figure 2 diagram: the zenith, the sun, the compass directions (north, east, south, west), the camera direction and the image plane.]
Figure 2: Angles defining the position of the sun, a camera and a sky pixel p in the image.

3.1 The Relative All-Weather Perez Sky Model

The model introduced by Perez et al. [1993] consists of two factors. The first factor describes vertical gradations of luminance values and is therefore called the gradation function:

$$f_{grad}(\theta_p, a, b) = 1 + a\,\exp(b / \cos\theta_p), \qquad (1)$$

where θp is the zenith angle of the considered sky element (see Figure 2), and a and b are two parameters which describe sky properties. The first parameter a corresponds to darkened (a > 0) or lightened (a < 0) horizon regions, which is indicative of whether the sky is overcast or not. The second parameter b ∈ [−1, 0] refers to changes of the luminance gradient near the horizon. This factor describes sky luminances away from the sun.

The second factor describes the influence of the sun on the sky and is given by the indicatrix function:

$$f_{ind}(\gamma_p, c, d, e) = 1 + c\,\exp(d\gamma_p) + e\cos^2\gamma_p, \qquad (2)$$

where γp is the angle between a sky element and the position of the sun (Fig. 2) and c, d, e are parameters describing the sky appearance near the sun. Parameter c ≥ 0 is proportional to the luminance of the circumsolar region, while d is its width. Finally, e accounts for the relative intensity of backscattered light. Combining these parameters into a vector ρ = (a, b, c, d, e), Perez's full model is then the product of these two factors:

$$f(\theta_p, \gamma_p, \rho) = f_{grad}(\theta_p, a, b)\, f_{ind}(\gamma_p, c, d, e). \qquad (3)$$

The values returned by the Perez sky model are not absolute and can be normalized to any desirable range of values, using a known luminance value of any sky element. Typically, the zenith luminance lz is used:

$$l_p = l_z\, \frac{f(\theta_p, \gamma_p, \rho)}{f(0, \theta_s, \rho)}, \qquad (4)$$

where f(0, θs, ρ) is the relative luminance at the zenith, predicted by (3), as in this case the zenith angle θp of the sky element is equal to zero, and the angle of the sky element with the sun γp is equal to the sun zenith angle θs (see Figure 2). The five parameters in Perez's model directly relate to turbidity t, especially for relatively low turbidities (t ∈ [2, 6]) that describe atmospheres between clear and thin fog [Preetham et al. 1999]:

$$\rho^T = \begin{pmatrix} t & 1 \end{pmatrix} \begin{pmatrix} 0.179 & -0.355 & -0.023 & 0.121 & -0.067 \\ -1.463 & 0.428 & 5.325 & -2.577 & 0.370 \end{pmatrix} \qquad (5)$$

Given this relation, we will use f(θp, γp, ρ) and f(θp, γp, t) interchangeably, but note that turbidity only describes clear to hazy skies; it is not appropriate for overcast skies.

3.2 Recovery of Camera Parameters

The sky area in the image can be a rich source of information, especially given knowledge of the luminance distribution f(θp, γp, t) of the sky dome. This can for instance be used to recover camera parameters such as the focal length fc, zenith angle θc and azimuth angle φc [Lalonde et al. 2008; Lalonde et al. 2010] by minimizing the difference between observed sky pixel values and the parametric luminance distribution of the sky dome. To compare image intensities with the sky luminance distribution, two operations should be performed. First, the relation between a pixel at position (up, vp) in the image and the corresponding sky element p in the sky dome should be established (see Figure 2). This is obtained by expressing the sky element zenith and azimuth angles in terms of the camera parameters and pixel coordinates. The zenith angle θp of the sky element and its angle with the sun γp are [Lalonde et al. 2010]:

$$\theta_p = \cos^{-1}\!\left(\frac{v_p \sin\theta_c + f_c \cos\theta_c}{\sqrt{f_c^2 + u_p^2 + v_p^2}}\right), \qquad (6)$$

$$\gamma_p = \cos^{-1}\!\left(\cos\theta_s \cos\theta_p + \sin\theta_s \sin\theta_p \cos(\phi_p - \phi_s)\right), \qquad (7)$$

where φs is the sun azimuth angle and the sky element azimuth angle φp is found from the equation:

$$\tan(\phi_p) = \frac{f_c \sin\phi_c \sin\theta_c - u_p \cos\phi_c - v_p \sin\theta_c \cos\phi_c}{f_c \cos\phi_c \sin\theta_c + u_p \sin\phi_c - v_p \cos\theta_c \cos\phi_c}.$$

Now, Equation (3) can be expressed in terms of camera parameters and pixel coordinates:

$$f(\theta_p, \gamma_p, \rho) = g(u_p, v_p, \theta_c, \phi_c, f_c, \theta_s, \phi_s, \rho). \qquad (8)$$

Second, the luminance values produced by g should be normalized to the range of the intensities in the image. For this, we use Equation (4). As we are not interested in absolute values here, and the zenith can rarely be observed in the image, an optimization is also performed for the zenith luminance value lz, which is in this case relative to the luminance values lp in the image.
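To make the preceding derivation concrete, the sketch below evaluates the relative Perez sky luminance for a pixel from camera parameters and the sun position, following Equations (1)–(7) and the turbidity mapping (5). All names are ours, angles are in radians, and arctan2 is used to resolve the quadrant of φp; this is an illustrative sketch rather than the authors' code.

```python
import numpy as np

# Preetham's mapping (5) from turbidity t to the Perez parameters rho = (a, b, c, d, e).
M = np.array([[ 0.179, -0.355, -0.023,  0.121, -0.067],
              [-1.463,  0.428,  5.325, -2.577,  0.370]])

def rho_from_turbidity(t):
    return np.array([t, 1.0]) @ M

def perez(theta_p, gamma_p, rho):
    """Relative sky luminance f(theta_p, gamma_p, rho), Equations (1)-(3)."""
    a, b, c, d, e = rho
    f_grad = 1.0 + a * np.exp(b / np.cos(theta_p))                      # gradation function (1)
    f_ind = 1.0 + c * np.exp(d * gamma_p) + e * np.cos(gamma_p) ** 2    # indicatrix function (2)
    return f_grad * f_ind                                               # product (3)

def sky_angles(u_p, v_p, f_c, theta_c, phi_c, theta_s, phi_s):
    """Zenith angle theta_p and sun angle gamma_p of the sky element seen at
    pixel (u_p, v_p), Equations (6) and (7); phi_p from the tan(phi_p) relation."""
    theta_p = np.arccos((v_p * np.sin(theta_c) + f_c * np.cos(theta_c))
                        / np.sqrt(f_c**2 + u_p**2 + v_p**2))
    phi_p = np.arctan2(f_c * np.sin(phi_c) * np.sin(theta_c) - u_p * np.cos(phi_c)
                       - v_p * np.sin(theta_c) * np.cos(phi_c),
                       f_c * np.cos(phi_c) * np.sin(theta_c) + u_p * np.sin(phi_c)
                       - v_p * np.cos(theta_c) * np.cos(phi_c))
    gamma_p = np.arccos(np.cos(theta_s) * np.cos(theta_p)
                        + np.sin(theta_s) * np.sin(theta_p) * np.cos(phi_p - phi_s))
    return theta_p, gamma_p

def g(u_p, v_p, theta_c, phi_c, f_c, theta_s, phi_s, rho):
    """Perez luminance expressed in camera/pixel terms, Equation (8)."""
    theta_p, gamma_p = sky_angles(u_p, v_p, f_c, theta_c, phi_c, theta_s, phi_s)
    return perez(theta_p, gamma_p, rho)
```

A pixel's predicted relative value then follows from Equation (4) as lz * g(...) / perez(0, θs, ρ).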
3.3 Absolute Zenith Luminance Models
We are interested in finding absolute values for all the pixels in the
image. This requires the aid of the aforementioned sky luminance
distribution models, although these can only give us relative values.
There is, however, one position in the sky with special relevance, which is the zenith. There exist several models which correlate the luminance arriving from the zenith to the solar elevation as
well as turbidity for cloudless skies [Karayel et al. 1984; Preetham
et al. 1999; Soler and Gopinathan 2000] as well as for overcast
skies [Soler et al. 2001]. The time of year and the luminous clearness index of the sky is also indicative of zenith luminance [Soler
and Gopinathan 2001].
For clear skies, we selected zenith luminance models from Karayel et al. [1984] and Soler and Gopinathan [2000], as these were found to perform well in a comparative study [Zotti et al. 2007]. The zenith luminance Lz^Karayel can be computed from turbidity t and sun elevation angle ζ [Karayel et al. 1984]:

$$L_z^{Karayel} = (1.376\,t - 1.81)\tan(\zeta) + 0.38, \qquad (9)$$
Symbol      | Meaning
θ           | Zenith angle
φ           | Azimuth angle
Subscript c | Referring to the camera
Subscript p | Referring to a sky element in the image
Subscript s | Referring to the sun
γp          | Angle between the sun and a sky element
(up, vp)    | Coordinates of a sky element in the image
fc          | Focal length of the camera
ζ           | Sun elevation angle
t           | Sky turbidity
lz          | Relative zenith luminance
Lz          | Absolute zenith luminance
ξ           | Absolute scaling factor
Table 1: Symbols used in the description of the algorithm.
where t is turbidity and ζ is the sun elevation angle. Soler and Gopinathan [2000] propose to fit fifth order polynomials to measured data as a function of sun elevation angle ζ:

$$L_z^{Soler} = \exp\!\left(\sum_{i=0}^{5} k_i\,\zeta^i\right), \qquad (10)$$

where ki are coefficients. They derived a set of five models for various conditions, which are described along with the coefficient values in the supplemental materials.
For overcast skies, models of zenith luminance can be derived for different sky types ranging from completely overcast to near overcast [Kittler et al. 1998; Soler et al. 2001]. In particular, Kittler et al. [1998] distinguish five different types of overcast skies. The zenith luminance for a given sky type can be found using [CIE 2003]:

$$L_z^{overcast} = \frac{D_v}{E_v}\left(\frac{B(\sin\zeta)^C}{(\cos\zeta)^C} + E\sin\zeta\right), \qquad (11)$$

where Dv is the diffuse sky illuminance, Ev is the extraterrestrial horizontal illuminance, ζ, as previously, is the sun elevation angle, and B, C and E are parameters determined by the given sky type. A further discussion, as well as the parameter values for each of these five sky types, is given in the supplemental materials. Depending on the content of a given image, we use one of these models, or a combination of them, to derive the zenith luminance, as explained in the following section.
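A small sketch of how these zenith luminance models can be evaluated is given below. The Soler coefficients and the CIE overcast parameters (B, C, E, Dv, Ev) are placeholders for the values listed in the supplemental materials, the sun elevation is assumed to be given in degrees, and the overcast formula mirrors Equation (11) as written above; this is illustrative code, not the authors' implementation.

```python
import numpy as np

def zenith_karayel(t, zeta_deg):
    """Clear-sky zenith luminance, Equation (9): turbidity t, sun elevation zeta."""
    return (1.376 * t - 1.81) * np.tan(np.radians(zeta_deg)) + 0.38

def zenith_soler(k, zeta_deg):
    """Clear-sky zenith luminance, Equation (10); k holds the six polynomial
    coefficients k_0..k_5 of the chosen Soler model (see supplemental materials)."""
    return np.exp(np.polyval(k[::-1], zeta_deg))   # exp of sum_i k_i * zeta^i

def zenith_overcast(Dv, Ev, B, C, E, zeta_deg):
    """Overcast zenith luminance, Equation (11), for one CIE overcast sky type."""
    s, c = np.sin(np.radians(zeta_deg)), np.cos(np.radians(zeta_deg))
    return (Dv / Ev) * (B * s**C / c**C + E * s)
```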
4 Algorithm
The goal of our algorithm is to convert a linear HDR image of
an outdoor scene into absolute radiometric measurements of that
scene. To achieve that we compute a scale factor ξ which can be
used to convert relative luminance values to absolute luminance.
The input to our algorithm is a single linear HDR image, GPS data
indicating where the image was taken and a small set of metadata,
which can be extracted from the EXIF data of the image itself (date
and time of capture, sensor size, focal length).
To compute a scaling from relative intensities to absolute luminance
values, we rely on regions of sky in the image, which are found using a state-of-the-art semantic image segmentation algorithm. Unlike most parts of an outdoor scene, the appearance of the sky is
predictable and can be modelled with reasonable accuracy given a
small set of parameters, as discussed in the previous section. This
observation forms the main motivation for our framework; since
pixels within an HDR image are linearly related, it is sufficient to
estimate a scaling factor for sky pixels in the image.
To achieve this, our algorithm first estimates the luminance at the
zenith of the sky dome relative to the sky pixels visible in the image
through an optimization procedure. To simplify this process and reduce the number of unknowns, we take advantage of the increased
availability of GPS coordinates and the EXIF data that is typically
attached to the image. Using the output of the optimization procedure and some of the information within the metadata, we can also
obtain an absolute estimate of the luminance at the zenith of the sky
dome. Since the original zenith estimate is given relative to the values of sky pixels in the image, this second absolute estimate allows
us to compute an absolute scale factor that can finally be applied
to all pixels. Figure 3 illustrates the flow of the algorithm and the
following sections will discuss each step in detail.
4.1 Sky Segmentation and Classification
Our algorithm processes regions of visible sky in images to determine the position of the sun relative to the depicted scene. This
information is in turn used to estimate the luminance at the zenith
of the sky dome. Consequently, the first step of our algorithm is to
extract portions of the image that depict sky.
We segment the image using the segmentation method of Felzenszwalb et al. [2004] and use the surface layout recovery method of
Hoiem et al. [2007] to separate sky regions from the rest of the image (see for example the left-most image in Figure 1). We wish
to determine if this sky segment represents a clear or an overcast
sky. The segment’s pixel data is conditioned by excluding 20%
of the lightest pixels, as regions near the sun tend to be unreliable
due to exposure problems. We note that parameter a of the Perez
sky model is indicative of whether sky pixels near the horizon are
lighter than sky pixels away from the horizon (a < 0, indicating
clear sky), or vice-versa (a > 0, indicating overcast sky). Fitting the
pixels of the sky segment to this model therefore allows us to select
whether we should use a clear sky model or an overcast sky model
in the following. For our datasets, we find that this approach correctly selects the type of sky for 95% of our clear images as well as
for more than 80% of our overcast images, as shown in Figure 4. In
the case of clear sky images, the segmentation can be further fine-tuned using k-means clustering, whereby the cluster closest to blue (RGB = [0, 0, 1]) is selected.
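As an illustration of this refinement, the sketch below clusters the candidate sky pixels with k-means and keeps the cluster whose centre is closest to blue; the use of scikit-learn and the two-cluster default are our assumptions for brevity, not a description of the exact procedure used.

```python
import numpy as np
from sklearn.cluster import KMeans

def refine_clear_sky_mask(rgb, sky_mask, n_clusters=2):
    """Keep only the k-means cluster of sky pixels whose centre is closest to
    pure blue (RGB = [0, 0, 1]); rgb is an HxWx3 float image in [0, 1]."""
    idx = np.flatnonzero(sky_mask.ravel())
    pixels = rgb.reshape(-1, 3)[idx]
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixels)
    centres = np.array([pixels[labels == c].mean(axis=0) for c in range(n_clusters)])
    keep = np.argmin(np.linalg.norm(centres - np.array([0.0, 0.0, 1.0]), axis=1))
    refined = np.zeros(sky_mask.size, dtype=bool)
    refined[idx[labels == keep]] = True
    return refined.reshape(sky_mask.shape)
```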
4.2 Relative Zenith Luminance
We are interested in recovering the zenith luminance lz relative to
the luminance values in the image. This task is performed by minimizing the differences between intensity values of sky pixels in the
image and the normalized values predicted by (3). One can think
about the value of lz as the value that would be recorded if the image could be extended so that the zenith were visible.
To obtain an accurate estimate of lz , a number of parameters are
necessary. Although we rely on the optimization step presented below for the estimation of some of the parameters, information in the
image and its metadata can be used to estimate the values for some
of the unknowns. Since more often than not the sun will not be visible in the image, we use the date and time of capture and the GPS
data to recover the sun zenith θs and azimuth φs angles. To achieve this, we employ the solar model by Reda and Andreas [2003].
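The full Reda and Andreas [2003] solar position algorithm is lengthy; as a hedged stand-in, the sketch below uses standard low-order approximations of the solar declination, equation of time and hour angle to obtain θs and φs from the capture time and GPS coordinates. It is accurate only to a fraction of a degree and is meant purely to illustrate the inputs and outputs involved.

```python
import numpy as np
from datetime import datetime, timezone

def sun_angles(when_utc, lat_deg, lon_deg):
    """Approximate sun zenith and azimuth angles (radians) from a UTC datetime
    and GPS coordinates (longitude positive east); a coarse stand-in for the
    Reda & Andreas [2003] solar position algorithm."""
    n = when_utc.timetuple().tm_yday
    hours = when_utc.hour + when_utc.minute / 60.0 + when_utc.second / 3600.0
    decl = np.radians(23.45) * np.sin(np.radians(360.0 / 365.0 * (284 + n)))
    B = np.radians(360.0 / 365.0 * (n - 81))
    eot = 9.87 * np.sin(2 * B) - 7.53 * np.cos(B) - 1.5 * np.sin(B)     # minutes
    solar_time = hours + lon_deg / 15.0 + eot / 60.0
    H = np.radians(15.0 * (solar_time - 12.0))                          # hour angle
    lat = np.radians(lat_deg)
    sin_el = np.sin(lat) * np.sin(decl) + np.cos(lat) * np.cos(decl) * np.cos(H)
    el = np.arcsin(np.clip(sin_el, -1.0, 1.0))
    theta_s = np.pi / 2.0 - el                                          # sun zenith angle
    cos_az = (np.sin(decl) - sin_el * np.sin(lat)) / (np.cos(el) * np.cos(lat))
    az = np.arccos(np.clip(cos_az, -1.0, 1.0))                          # measured from north
    phi_s = 2.0 * np.pi - az if H > 0 else az                           # mirror to the west in the afternoon
    return theta_s, phi_s

# Example (hypothetical capture): sun_angles(datetime(2013, 6, 21, 12, 0, tzinfo=timezone.utc), 49.26, 7.05)
```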
Additionally, one of the unknowns in (8) is the focal length fc of the camera, which is necessary for computing the zenith and azimuth of a given sky element in the image. Here, we take advantage of some of the metadata that is automatically recorded when the image is taken (EXIF data), specifically the focal length (usually given in mm) and the size of the sensor. Using this information, the focal length can be converted to pixels, allowing us to eliminate one unknown parameter from (8).

[Figure 3 diagram: the exposure brackets are merged into a relative HDR image, the sky is segmented and a clear-sky or overcast model is selected; Perez model optimization yields the relative zenith luminance lz (for overcast images after an initial estimation of the model parameters and overcast sky type selection), the EXIF/GPS metadata yield the absolute zenith luminance Lz, and together these produce the absolute HDR image.]
Figure 3: The flow of our algorithm. Dashed lines indicate input to the algorithm.

[Figure 4 panels: (a) example mask; (b) classification based on the fitted Perez parameter a per image, with clear sky, overcast sky, mis-classified images and the separator marked.]
Figure 4: Fitting the Perez sky model to the detected sky region allows us to select overcast or clear sky models in the remainder of the algorithm.

Images with clear sky regions. The appearance of the sky in the image, and therefore the amount of light, depends on the level of turbidity in the atmosphere. To capture these variations, we allow this parameter to vary from 2.17, which corresponds to the CIE Standard Clear Sky [CIE 2003], to 6, thus gaining a more exact representation of the sky. We restrict the turbidity to this range to coincide with the range for which Preetham's mapping [1999] is defined (given in (5)). Now the function g (Equation (8)) only depends on up, vp, θc, φc and t, as the other parameters are determined by the given image and its metadata.
To obtain estimates of these parameters we solve the following optimization problem:

$$\min_{\theta_c, \phi_c, t, l_z} \; \sum_{p \in P} \left( l_p - l_z\, \frac{g(u_p, v_p, \theta_c, \phi_c, t)}{f(0, \theta_s, t)} \right)^{2}, \qquad (12)$$
where P is a set of sky pixels and lp is the observed intensity of pixel p in the image. Additionally, we impose constraints on the turbidity t as described before. Following Lalonde et al. [2010], the lower boundary for lz is set to zero, while the upper boundary for the horizon line h is set to the position of the lowest sky pixel in the image. The value of the camera zenith angle θc is then computed based on the position of the horizon line:

$$\theta_c = \frac{\pi}{2} + \mathrm{sign}(h)\, \tan^{-1}\!\left(\frac{h}{f_c}\right). \qquad (13)$$

No constraints are imposed on φc. The optimization of Equation (12) is a non-linear least-squares minimization problem with inequality constraints. As an iterative approach is required for its solution, we choose a trust-region algorithm; we use the implementation of the trust-region-reflective algorithm from the MATLAB Optimization Toolbox, which solves non-linear least-squares problems with constraints and thus fully satisfies our requirements.
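In an open-source setting, an equivalent bounded trust-region-reflective solve is available through scipy.optimize.least_squares with method 'trf'. The sketch below mirrors Equation (12), reusing the g, perez and rho_from_turbidity helpers sketched earlier (our illustrative code, not the authors'); the initial guess and the handling of the camera zenith angle are simplified relative to the horizon-based constraint of Equation (13).

```python
import numpy as np
from scipy.optimize import least_squares

def fit_clear_sky(u, v, l_obs, f_c, theta_s, phi_s):
    """Minimize Equation (12) over (theta_c, phi_c, t, l_z) for clear skies.
    u, v, l_obs are arrays of sky-pixel coordinates and observed intensities."""
    def residuals(x):
        theta_c, phi_c, t, l_z = x
        rho = rho_from_turbidity(t)                        # Preetham mapping (5)
        pred = l_z * g(u, v, theta_c, phi_c, f_c, theta_s, phi_s, rho) \
                   / perez(0.0, theta_s, rho)              # normalization as in (4)
        return l_obs - pred

    x0 = np.array([np.pi / 2, phi_s, 3.0, np.median(l_obs)])   # crude initial guess
    lower = [0.0, -np.inf, 2.17, 0.0]                          # t in [2.17, 6], l_z >= 0
    upper = [np.pi, np.inf, 6.0, np.inf]
    sol = least_squares(residuals, x0, bounds=(lower, upper), method='trf')
    return sol.x                                               # theta_c, phi_c, t, l_z
```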
Images with overcast skies. Since the appearance of overcast skies is significantly different to that of clear skies, a different approach is required in this case. The estimation of the relative zenith luminance lz for overcast images is performed in two steps.

I. First, we solve a continuous optimization problem similar to Equation (12) to obtain an initial estimate of the model parameters. In the case of overcast skies, however, we cannot use the mapping (5) and are obliged to use the representation (8) of the Perez sky model, which requires the additional parameters ρ. Consequently, the optimization problem is formulated in this case as:

$$\min_{\theta_c, \phi_c, \rho, l_z} \; \sum_{p \in P} \left( l_p - l_z\, \frac{g(u_p, v_p, \theta_c, \phi_c, \rho)}{f(0, \theta_s, \rho)} \right)^{2}, \qquad (14)$$

optimizing over θc, φc, ρ and lz. In the remainder of this paper, the solution of this step will be written as θc^est, φc^est, ρ^est and lz^est.
II. Following this initial optimization process, we estimate the type
of overcast sky iopt present in the image as one of five overcast
sky types [Kittler et al. 1998]. Examples of the different types are
given in the supplemental materials. Several suitable approaches
can be used for this step, which offer similar performance. We
describe several approaches below and compare their performance
in Section 6.
II.a Let ρ^i be the set of sky parameters of one of the five overcast sky types, i ∈ [1, 5]. Then, the sky type iopt can be taken as that with the lowest l2-norm of the difference between its vector of sky parameters ρ^iopt and the estimated optimal vector ρ^est for a given image. Afterwards, problem (14) is solved again to find a better estimate of the relative zenith luminance lz, with ρ set to the values of the sky parameters ρ^iopt of the selected sky type.

II.b Instead of comparing the optimal vector ρ^est of sky parameters with a vector of parameters describing a certain sky type, we can take the camera zenith θc^est and azimuth φc^est angles as exact. Then, for each sky type i, problem (14) can be solved with θc, φc and ρ set to θc^est, φc^est and ρ^i. The sky type iopt is then determined as the one for which the value of the objective function of problem (14) at the solution is minimal.

II.c In some cases, specifically when the camera is facing away from the sun, only the zenith angle of the camera θc plays an important role in the calculations, while errors in the azimuth angle φc do not affect the accuracy of the result. To cover this scenario, we also consider the case where θc^est is assumed to be the exact value of the zenith angle. In this case, we fit the value of the camera azimuth angle φc while solving (14) for each sky type i.

Note that although each of these models is suitable for some images, we have found model II.a to be the most robust overall. We refer the reader to the validation section of our paper for a more detailed comparison of the performance of these models.
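A sketch of variant II.a follows: the standard overcast sky type whose parameter vector lies closest to the fitted ρ^est (in the l2 sense) is selected, after which (14) is re-solved with ρ held fixed. The table of standard parameter vectors is a placeholder for the values given in the supplemental materials.

```python
import numpy as np

def select_overcast_type(rho_est, rho_types):
    """Variant II.a: index of the standard overcast sky type whose parameter
    vector (a, b, c, d, e) is closest to the estimated rho_est in l2 norm.
    rho_types is a 5x5 array, one row per overcast sky type."""
    d = np.linalg.norm(np.asarray(rho_types) - np.asarray(rho_est), axis=1)
    return int(np.argmin(d))

# After selecting i_opt, problem (14) is solved again with rho fixed to
# rho_types[i_opt], optimizing only theta_c, phi_c and l_z.
```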
4.3 Absolute Zenith Luminance
The previous section described the steps for recovering an estimate
of the zenith luminance relative to the image pixels. To be able to
scale the image to absolute radiometric quantities, we also need an
estimate of absolute luminance. For images with clear skies, the absolute zenith luminance value can be obtained from Karayel’s models (9) [Karayel et al. 1984] and one of Soler’s models (10) [Soler
and Gopinathan 2000], where the coefficients are given in the supplemental materials. We found that while both options give reasonable results, their accuracy depends on the sun elevation angle.
We propose a weighted combination of Lz^Karayel and Lz^Soler (Equations (9) and (10); the latter using one of Soler's models, as given in the supplemental materials), interpolating according to the sun elevation angle ζ ∈ [0, 75] (degrees):

$$L_z = \cos^2(3\zeta)\, L_z^{Karayel} + \sin^2(3\zeta)\, L_z^{Soler}. \qquad (15)$$
(15)
Based on the above interpolation, Soler’s models (10) are prioritized for values between 15 and 45 degrees, whereas Karayel’s
model receives a higher weight for all other angles. Note that the
interpolation weights always sum to 1.
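Continuing the earlier zenith-model sketch, Equation (15) amounts to a two-line weighting; the helpers zenith_karayel and zenith_soler are from the sketch in Section 3.3 and ζ is again assumed to be in degrees.

```python
import numpy as np

def zenith_combined(t, zeta_deg, soler_coeffs):
    """Weighted combination of the Karayel and Soler zenith luminances,
    Equation (15); zeta_deg is the sun elevation in degrees, in [0, 75]."""
    w = np.radians(3.0 * zeta_deg)           # argument of the trigonometric weights
    return (np.cos(w) ** 2 * zenith_karayel(t, zeta_deg)
            + np.sin(w) ** 2 * zenith_soler(soler_coeffs, zeta_deg))
```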
For images with overcast skies the absolute zenith luminance Lz is
computed from the CIE daylight distribution model given in Equation (11) using standard values of parameters for the appropriate
overcast sky type iopt (given in the supplemental materials).
4.4 Final Scale Factor
Once an estimate of the absolute zenith luminance is obtained, the last step of our algorithm is to estimate a factor ξ that will allow us to scale the intensity value le of each pixel (ue, ve) in the image to the absolute luminance value Le that would be measured at the corresponding real-world location e at the moment of capture:

$$\xi_e \triangleq L_e / l_e. \qquad (16)$$

As we work with HDR images, luminance values are linearly related to the illumination in the scene. Consequently, the scale factor estimated from the sky region can be used in the whole image, assuming no over- or under-exposed pixels are present. Knowing the relative zenith luminance value lz as well as the absolute zenith luminance value Lz, the scale factor ξ can be computed by substituting these values into (16):

$$\xi = L_z / l_z. \qquad (17)$$

Using the factor ξ, the input HDR image can now be scaled such that each pixel represents an absolute measurement of the luminance values in the scene, effectively turning a conventional camera into a high resolution light meter.
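The final step is then a single multiplication; the sketch below ties the pieces together under the same assumptions as the earlier snippets.

```python
def to_absolute(hdr, l_z_rel, L_z_abs):
    """Scale a relative, linear HDR image to absolute luminance values.

    l_z_rel : relative zenith luminance recovered by the optimization (Section 4.2)
    L_z_abs : absolute zenith luminance from the models of Section 4.3
    """
    xi = L_z_abs / l_z_rel        # scale factor, Equation (17)
    return hdr * xi               # every pixel now represents absolute luminance
```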
5 Ground Truth Dataset
To validate our algorithm, we require a calibrated set of HDR images that are annotated with both EXIF and GPS data. To date, only
the HDR Survey [Fairchild 2007] provides all the information that
we require. However, this dataset does not have a sufficient number
of images with sky regions to adequately test our algorithm. For
this reason, we have created an additional calibrated HDR image
dataset, annotated with EXIF as well as GPS location and orientation information. All images in this dataset contain a significant
portion of clear or overcast sky. We plan to make this dataset available upon publication of this paper. In the following sections, we
compare the performance of various absolute zenith models in the
context of our light metering algorithm.
Our HDR dataset was photographed with a Nikon D2h professional camera, which, using its autobracketing feature, acquires up to 9 differently exposed images, each spaced up to 1 EV apart. The white point was set to a fixed 6700K (the white point nearest to D65 that this camera supports) and the exposures were saved in the sRGB color space. The camera was mounted on a tripod to minimize camera motion. Images were assembled into linear HDRs using Greg Ward's Photosphere application, which we have also used to derive the response curve of the Nikon D2h. We placed
both an 18% grey card and a GretagMacbeth color checker in each
scene for calibration purposes. Measurements of Yxy components
of the grey card and several patches from the ColorChecker were
taken with a Minolta CS100 color and luminance meter. To enable
the validation of our algorithm with this dataset, we use the in situ
measurements taken off the test targets to scale the HDR images to
absolute values. We also transformed the images to the D65 white
point, as per the sRGB standard. Using this set-up we collected 24
calibrated HDR images containing clear skies with varying turbidity and 9 images with overcast skies.
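For completeness, a hedged sketch of how such in situ measurements can be converted to a ground-truth scale factor: the photometer reading of the grey card is divided by the mean relative luminance of the corresponding pixels. The Rec. 709 luminance weights and the mask-based region selection are our assumptions, not a description of the exact procedure used for the dataset.

```python
import numpy as np

def ground_truth_scale(hdr_rgb, card_mask, measured_Y):
    """Scale factor mapping relative HDR values to absolute luminance, using a
    photometer reading (measured_Y) of the 18% grey card visible in the scene."""
    lum = hdr_rgb @ np.array([0.2126, 0.7152, 0.0722])   # relative luminance (Rec. 709 weights)
    return measured_Y / lum[card_mask].mean()
```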
6 Algorithm Validation
In the following, we compare variants of our model for clear skies
on a selection of images from the HDR Photographic Survey that
contain sufficiently large areas of clear skies. We then compare
various versions of our model using our own images for both clear
skies and overcast skies.
6.1 HDR Photographic Survey
From the HDR Photographic Survey [Fairchild 2007] we selected 15 images with clear skies and 5 with overcast skies. Images with partly cloudy skies were segmented manually. The scale factor that allows the creation of absolute pixel values is estimated from Equation (17), using the various absolute zenith models from Section 3.3 to calculate Lz. We use the relative error in our comparisons:

$$\delta \triangleq (\xi - \xi_{meas}) / \xi_{meas}, \qquad (18)$$

where ξmeas is the scale factor taken from the HDR Photographic Survey. We now have the option of either fixing turbidity t to a fixed value (we choose t = 2.17 for clear skies [CIE 2003]), or including it in the optimization procedure of Equation (12). This allows us to determine whether our optimization algorithm is sufficiently stable to include turbidity in the calculations.

[Figure 5: relative error as a function of sun elevation ζ (0–70 degrees) for clear-sky images from the HDR Photographic Survey, for fixed turbidity (t = 2.17) and optimized turbidity, comparing the Karayel, Soler_t, Soler_131 and Soler_132 zenith luminance models.]
Figure 5: Relative errors for images with clear sky regions from the HDR Photographic Survey.

Zenith Lum. Model | t = 2.17, all ζ | t optimized, all ζ | t optimized, ζ < 15 | t optimized, ζ > 15
Karayel           | 0.459           | 0.469              | 0.515               | 0.450
Soler_t           | 0.586           | 0.563              | 0.863               | 0.442
Soler_131         | 0.577           | 0.554              | 0.815               | 0.449
Soler_132         | 0.795           | 0.572              | 0.875               | 0.450
Table 2: Mean µ of relative error magnitudes |δ| for images from the HDR Photographic Survey.

The results are shown in Figure 5, and aggregated in Table 2. For variants of Soler's model (see Equation (10)) the results improve if we include turbidity in the optimization. However, we found that this is not the case for Karayel's model (Equation (9)). We note that for sun elevation angles below 15 degrees Karayel's model performs best, whereas Soler's models perform better for sun elevations between 15 and 50 degrees. Our weighted combination of both models (Equation (15)) accounts for this and produces errors smaller than can be achieved with either model. This is shown in the left plot of Figure 6. Finally, Table 3 shows that interpolation between Karayel's method and any of the three Soler algorithms produces comparably good results.

[Figure 6: relative error as a function of sun elevation ζ for clear-sky images; left: HDR Survey images, right: our images.]
Figure 6: Relative errors for images with clear sky regions from the HDR Survey [Fairchild 2007] (left) and from our own dataset (right). Lz is estimated using Equation (15), while the value of lz is obtained with optimization over turbidity t.

Algorithm | HDR Survey µ(|δ|) | HDR Survey σ(|δ|) | Our data µ(|δ|) | Our data σ(|δ|)
Soler_t   | 0.440             | 0.334             | 0.309           | 0.251
Soler_131 | 0.445             | 0.345             | 0.305           | 0.255
Soler_132 | 0.445             | 0.346             | 0.307           | 0.256
Table 3: Mean µ and standard deviation σ of relative error magnitudes |δ| for images with clear sky regions, when Lz is computed with Equation (15).

6.2 Clear-Sky Dataset

The analysis presented in the preceding section was also carried out for our own clear-sky dataset, with results for different zenith luminance models given in Figure 7 and with means and variances tabulated in Table 4. As this dataset depicts scenes of different turbidity, we find that including turbidity t in the optimization provides a significant advantage over fixing turbidity to the constant value of t = 2.17, which is the CIE-defined turbidity for a clear sky [CIE 2003]. The results obtained with our own clear sky images confirm those obtained with the HDR Photographic Survey, showing that our weighted combination of zenith luminance models outperforms each individual model, as shown in the right plot of Figure 6.

Outdoor illumination conditions are highly variable, changing dramatically over the course of a day. They can change almost instantly in the presence of fast moving clouds. The accuracy of our results is significantly higher than the variability of the mean luminance values observed in our calibrated datasets, confirming that the quality of our results is sufficiently high.
6.3 Overcast Sky Dataset
Due to the limited number of overcast skies in both the HDR Photographic Survey and our own dataset, we pool the results from both
image collections in the following.
A comparison of relative errors for scale factors estimated using
approaches II.a, b or c is shown in Figure 8. Approach II.a gives
the lowest overall error. Moreover, this approach requires fewer
computations than II.b and c. Nonetheless, our results show that
solving the optimization problem for a second time with sky parameters fixed to the parameters of the estimated sky type produces
more exact results, compared with using lz as output by step I directly (Equation (12)).
We deem the results obtained with algorithms for both clear as well
as overcast skies sufficiently reliable. We note, however, that overcast skies are more difficult to predict than clear skies due to the
highly variable nature of clouds. This makes estimating the sky
type somewhat ambiguous. While many images are estimated with
errors close to zero, for some images we obtain larger errors (see
Table 3). As shown in Figure 8, our results are more reliable when
the camera is not pointed towards the sun. Given that photographers
as a rule keep the sun behind them, we deem this limitation of minor importance. Furthermore, in many cases, and especially when
capturing sky domes, it is possible to actively select pixels away
from the sun’s position, thereby improving reliability. Finally, in
the context of color appearance modeling, the supplemental materials show that the amount of error in our estimates does not have a
significant impact on the quality of the results obtained.
[Figure 7: relative error as a function of sun elevation ζ (0–35 degrees) for our clear-sky dataset, for fixed turbidity (t = 2.17) and optimized turbidity, comparing the Karayel, Soler_t, Soler_131 and Soler_132 zenith luminance models.]
Figure 7: Relative errors for our clear-sky image dataset.

Zenith Lum. Model | t = 2.17, all ζ | Opt. t, all ζ | Opt. t, ζ < 15 | Opt. t, ζ > 15
Karayel           | 0.503           | 0.354         | 0.416          | 0.319
Soler_t           | 0.759           | 0.416         | 0.769          | 0.218
Soler_131         | 0.745           | 0.396         | 0.724          | 0.211
Soler_132         | 0.773           | 0.413         | 0.768          | 0.212
Table 4: Mean values µ of relative error magnitudes |δ| in our clear-sky image collection.

[Figure 8: relative error as a function of the azimuth difference between sun and camera (0–180 degrees) for approaches I, IIa, IIb and IIc, with mean errors 0.492, 0.450, 0.572 and 0.469, respectively.]
Figure 8: Relative errors for images with overcast skies as a function of the azimuth difference between sun and camera. Here we compare various algorithms that estimate the type of overcast sky. For step I, we used lz ← lz^est in conjunction with an estimate for Lz obtained using step IIa.

7 Applications

Direct display of absolute image data on calibrated HDR displays is one obvious application of our work. HDR video encoding also requires absolute image data [Mantiuk et al. 2004]. In this section, we explore two specific applications of our work. In particular, absolute image data is crucial in two scenarios, which are color appearance modeling as well as lighting design. In the former case, images scaled to absolute values are required as this is the only way to accurately estimate what the state of adaptation of a human
images scaled to absolute values are required as this is the only
way to accurately estimate what the state of adaptation of a human
observer would be in the scene represented by the image. In the
latter case, images of sky domes can be used to illuminate architectural models, enabling predictive lighting under varying conditions.
Without absolute calibration of the sky dome, image-based lighting
would not be an appropriate technology in lighting design.
[Figure 9 panels: (a) Sky Probe; (b) Daylight Simulation.]
Figure 9: Using the estimated scaling factor from our algorithm, the sky dome in (a) was scaled to absolute values and used to simulate what this conference room would look like under daylight illumination (b). Such a simulation would be useful for lighting simulation in architectural applications.

7.1 Image-Based Lighting Design
Image-based lighting is a technique which uses captured image data
to illuminate 3D geometry [Debevec 2002; Reinhard et al. 2010].
It currently finds frequent employ in the movie industry. To enable its use for predictive purposes in lighting design, the technique
described in this paper could be applied to generate absolute sky
domes. This would allow spatial variability in the illumination of
architectural models that could not be achieved with procedural
models, for instance due to cloud cover. The supplemental materials show how to set up the calculations and present results demonstrating the accuracy of the method in this application. An example rendering obtained in this manner is shown in Figure 9.
7.2 Color Appearance and Tone Reproduction
Color appearance modeling techniques aim to accurately reproduce the appearance of a scene under different viewing conditions [Fairchild 2005]. They rely on radiometric measurements
of the illumination. Although specialized equipment can be used
to obtain such measurements at the time of capture, this is often
not practical. Additionally, existing content captured without such
measurements cannot be accurately processed by CAMs currently
as inaccuracies in the scaling of the HDR images can adversely affect the resulting appearance.
Figure 10 illustrates this issue. In the top row, a non-calibrated image was scaled to different mean luminance levels and tonemapped
with the calibrated model of Reinhard et al. [2012]. As this model
expects absolute values, if the mean luminance of the image is
over- or under-estimated, the tonemapped result will be too dark or
too light. The bottom row of the same figure shows the tonemapping results using the same images but scaled using the factor estimated from our algorithm. Figure 1 presents an additional result
obtained with our method, demonstrating that the differences with
the ground truth remain well below the visible threshold, according to the
CIE ∆E94 color difference metric. Finally, the supplemental materials present additional examples.
Some appearance models strive to simulate appearance phenomena
for a wider range of illumination. For instance, in low light conditions, the behavior of the human visual system changes and its
ability to perceive color diminishes [Fairchild 2005; Reinhard et al.
2008]. iCAM06 predicts this behavior by incorporating a model
of scotopic vision [Kuang et al. 2007]. Consequently, if the input
images are dark, the results will be desaturated, as they would appear if
viewed under scotopic conditions. This can be seen in Figure 11
where an HDR image given in relative values was processed with
no scaling (a). The result after scaling the image with our estimated
factor is given in (b) while the result after scaling with the ground
truth factor is shown in (c)³.

³ The image was taken from the HDR Photographic Survey, which provides absolute scaling factors computed from measurements in the scene [Fairchild 2007].
[Figure 10 panels: top row uncalibrated, bottom row calibrated; columns correspond to mean luminances of 0.1, 1E+3 and 1E+5.]
Figure 10: An HDR image was scaled to different mean luminance levels and tonemapped using the appearance model by Reinhard et al. [2012]. In the top row the images were not calibrated, while in the bottom row luminance values were corrected using our estimated scale factor. Despite the differences in mean luminance, the absolute scaling estimated using our algorithm remains consistent.
8 Conclusions
We present an automatic algorithm to estimate absolute pixel values from single HDR images with no more extraneous information
than could be obtained from modern digital cameras. This includes
EXIF header information as well as GPS information. As more and
more cameras include GPS units for the purpose of geotagging, we
find this does not diminish the practicality of the approach. To our
knowledge, this is the first algorithm to estimate absolute data without requiring the use of expensive measuring equipment.
The method is targeted at outdoor scenes, which are analysed to recover camera parameters as well as the turbidity of the atmosphere.
Using a model of sky luminance distribution the relative zenith luminance is estimated and subsequently correlated with the absolute
zenith luminance calculated from GPS data. We compare several
zenith luminance models and developed a trigonometric interpolation between two such models to obtain higher accuracy over a
wider range of sun elevation angles.
The algorithm was validated with the use of an existing HDR image
dataset. We also collected a further calibrated and annotated HDR
image collection to help with the algorithm’s validation. We aim to
make this dataset available in due course. The utility of the method
was then demonstrated in the context of two independent applications, namely image-based lighting design and combined color appearance modeling and tone reproduction. These are two applications that directly benefit from absolute calibrated image data.
Finally, with the advent of calibrated HDR displays which can take
absolute HDR data and map this 1:1 to emitted luminance values,
we think that there will be an increasing demand for images that are
not only calibrated (i.e. linearized) but that are scaled to absolute
values. Our algorithm helps create such data, and can therefore be
seen as a tool that turns cameras into high resolution light meters.
[Figure 11 panels: (a) unscaled input; (b) our scaling; (c) ground truth scaling; (d) CIE ΔE94 (a vs. c); (e) CIE ΔE94 (b vs. c); color bar from 0 to 10.]
Figure 11: An HDR image from the HDR Photographic Survey [Fairchild 2007] was tonemapped using the iCAM06 model [Kuang et al. 2007], which expects absolute luminance measurements. No scaling was used in (a), our estimated scaling factor was applied in (b), and the ground truth computed from scene measurements is shown in (c). Panels (d) and (e) show CIE ∆E94 color differences.

References

BASRI, R., JACOBS, D., AND KEMELMACHER, I. 2007. Photometric stereo with general, unknown lighting. International Journal of Computer Vision 72, 3, 239–257.

BIRD, R. E., AND RIORDAN, C. 1986. Simple solar spectral model for direct and diffuse irradiance on horizontal and tilted planes at the earth's surface for cloudless atmospheres. Journal of Climate and Applied Meteorology 25, 1, 87–97.

CIE. 2003. Spatial distribution of daylight — CIE standard general sky. CIE Standard S 011/E:2003, CIE Central Bureau, Vienna.

DEBEVEC, P., AND MALIK, J. 1997. Recovering high dynamic range radiance maps from photographs. In Proceedings of SIGGRAPH '97, 369–378.

DEBEVEC, P. 2002. Image-based lighting. IEEE Computer Graphics and Applications 22, 2, 26–34.

FAIRCHILD, M. D. 2005. Color Appearance Models. Wiley.

FAIRCHILD, M. 2007. The HDR photographic survey. In Proceedings of the 15th Color Imaging Conference, vol. 15, The Society for Imaging Science and Technology, 233–238.

FELZENSZWALB, P. F., AND HUTTENLOCHER, D. P. 2004. Efficient graph-based image segmentation. International Journal of Computer Vision 59, 2, 167–181.

FINLAYSON, G. D., DREW, M. S., AND FUNT, B. V. 1994. Color constancy: generalized diagonal transforms suffice. JOSA A 11, 11, 3011–3019.

HOIEM, D., EFROS, A. A., AND HEBERT, M. 2007. Recovering surface layout from an image. International Journal of Computer Vision 75, 1, 151–172.

HOLST, G. C. 1998. CCD Arrays, Cameras, and Displays, 2nd ed. JCD Publishing, Winter Park, FL.

INANICI, M., AND GALVIN, J. 2004. Evaluation of high dynamic range photography as a luminance mapping technique. Tech. Rep. LBNL-57545, LBNL.

INEICHEN, P., AND MOLINEAUX, B. 1994. Sky luminance data validation: Comparison of 7 models with 4 data banks. Solar Energy 52, 4, 337–346.

JANESICK, J. R. 2001. Scientific Charge-Coupled Devices. SPIE Publications, Bellingham.

JUNEJO, I. N., AND FOROOSH, H. 2008. Estimating geo-temporal location of stationary cameras using shadow trajectories. In Proceedings of ECCV, 318–331.

KARAYEL, M., NAVVAB, M., NEEMAN, E., AND SELKOWITZ, S. E. 1984. Zenith luminance and sky luminance distributions for daylighting calculations. Energy and Buildings 6, 3, 283–291.

KASTEN, F., AND YOUNG, A. T. 1989. Revised optical air mass tables and approximation formula. Applied Optics 28, 22, 4735–4738.

KIM, T., AND HONG, K.-S. 2005. A practical single image based approach for estimating illumination distribution from shadows. In Proceedings of ICCV, vol. 1, 266–271.

KIM, M. H., AND KAUTZ, J. 2008. Characterization for high dynamic range imaging. Computer Graphics Forum 27, 2, 691–697.

KITTLER, R., PEREZ, R., AND DARULA, S. 1998. A set of standard skies characterizing daylight conditions for computer and energy conscious design. US SK 92 052 Final Report, CA SAS Bratislava, Polygrafia Bratislava.

KRAWCZYK, G., GOESELE, M., AND SEIDEL, H.-P. 2005. Photometric calibration of high dynamic range cameras. Tech. Rep. MPI-I-2005-4-005, MPI for Informatics.

KUANG, J., JOHNSON, G. M., AND FAIRCHILD, M. D. 2007. iCAM06: A refined image appearance model for HDR image rendering. Journal of Visual Communication and Image Representation 18, 5, 406–414.

LAFFONT, P.-Y., BOUSSEAU, A., PARIS, S., DURAND, F., DRETTAKIS, G., ET AL. 2012. Coherent intrinsic images from photo collections. ACM Transactions on Graphics 31, 6.

LALONDE, J.-F., NARASIMHAN, S. G., AND EFROS, A. A. 2008. What does the sky tell us about the camera? In Proceedings of ECCV, 354–367.

LALONDE, J.-F., NARASIMHAN, S. G., AND EFROS, A. A. 2010. What do the sun and the sky tell us about the camera? International Journal of Computer Vision 88, 1, 24–51.

LALONDE, J.-F., EFROS, A. A., AND NARASIMHAN, S. G. 2012. Estimating the natural illumination conditions from a single outdoor image. International Journal of Computer Vision 98, 2, 123–145.

LIN, S., GU, J., YAMAZAKI, S., AND SHUM, H.-Y. 2004. Radiometric calibration from a single image. In Proceedings of IEEE CVPR, vol. 2, II-938–II-945.

LOPEZ-MORENO, J., HADAP, S., REINHARD, E., AND GUTIERREZ, D. 2010. Compositing images through light source detection. Computers & Graphics 34, 6, 698–707.

MANN, S., AND PICARD, R. 1995. On being 'undigital' with digital cameras: Extending dynamic range by combining differently exposed pictures. In Proceedings of the IS&T 48th Annual Conference, 422–428.

MANTIUK, R., KRAWCZYK, G., MYSZKOWSKI, K., AND SEIDEL, H.-P. 2004. Perception-motivated high dynamic range video encoding. ACM Transactions on Graphics 23, 3, 733–741.

MCCARTNEY, E. J. 1976. Optics of the Atmosphere: Scattering by Molecules and Particles. John Wiley & Sons, New York.

MILLS, D. 2004. Advances in solar thermal electricity technology. Solar Energy 76, 1, 19–31.

MITSUNAGA, T., AND NAYAR, S. 1999. Radiometric self calibration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, 374–380.

PEREZ, R., SEALS, R., AND MICHALSKY, J. 1993. All-weather model for sky luminance distribution — preliminary configuration and validation. Solar Energy 50, 3, 235–245.

PREETHAM, A. J., SHIRLEY, P., AND SMITS, B. 1999. A practical analytic model for daylight. In Proceedings of SIGGRAPH '99, ACM Press/Addison-Wesley Publishing Co., 91–100.

REDA, I., AND ANDREAS, A. 2003. Solar position algorithm for solar radiation applications. National Renewable Energy Laboratory.

REINHARD, E., KHAN, E. A., AKYUZ, A. O., AND JOHNSON, G. 2008. Color Imaging: Fundamentals and Applications. A K Peters, Wellesley, MA.

REINHARD, E., HEIDRICH, W., DEBEVEC, P., PATTANAIK, S., WARD, G., AND MYSZKOWSKI, K. 2010. High Dynamic Range Imaging: Acquisition, Display and Image-Based Lighting, 2nd ed. Morgan Kaufmann, San Francisco.

REINHARD, E., POULI, T., KUNKEL, T., LONG, B., BALLESTAD, A., AND DAMBERG, G. 2012. Calibrated image appearance reproduction. ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia) 31, 6, article 201.

ROBERTSON, M., BORMAN, S., AND STEVENSON, R. 2003. Estimation-theoretic approach to dynamic range enhancement using multiple exposures. Journal of Electronic Imaging 12, 219–228.

SOLER, A., AND GOPINATHAN, K. K. 2000. A study of zenith luminance on Madrid cloudless skies. Solar Energy 69, 5, 403–411.

SOLER, A., AND GOPINATHAN, K. K. 2001. Analysis of zenith luminance data for all sky conditions. Renewable Energy 24, 2, 185–196.

SOLER, A., GOPINATHAN, K. K., AND CLAROSA, S. 2001. A study on zenith luminance on Madrid overcast skies. Renewable Energy 23, 1, 49–55.

STUMPFEL, J., TCHOU, C., JONES, A., HAWKINS, T., WENGER, A., AND DEBEVEC, P. 2004. Direct HDR capture of the sun and sky. In Proceedings of Afrigraph, ACM, 145–149.

SUN, M., SCHINDLER, G., TURK, G., AND DELLAERT, F. 2009. Color matching and illumination estimation for urban scenes. In IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), 1566–1573.

SUNKAVALLI, K., ROMEIRO, F., MATUSIK, W., ZICKLER, T., AND PFISTER, H. 2008. What do color changes reveal about an outdoor scene? In IEEE Conference on Computer Vision and Pattern Recognition, 1–8.

WARD LARSON, G., AND SHAKESPEARE, R. A. 1998. Rendering with Radiance: The Art and Science of Lighting Visualization. Morgan Kaufmann, San Francisco, CA.

YU, Y., AND MALIK, J. 1998. Recovering photometric properties of architectural scenes from photographs. In Proceedings of SIGGRAPH, 207–217.

ZOTTI, G., WILKIE, A., AND PURGATHOFER, W. 2007. A critical review of the Preetham skylight model. In WSCG 2007 Short Communications Proceedings I, University of West Bohemia, V. Skala, Ed., 23–30.