GE10 New
Wind and solar are powering a clean energy revolution. Here’s what you need to know
about renewables and how you can help make an impact at home.
Renewable power is booming, as innovation brings down costs and starts to deliver on the
promise of a clean energy future. American solar and wind generation are breaking
records and being integrated into the national electricity grid without compromising
reliability.
This means that renewables are increasingly displacing “dirty” fossil fuels in the power
sector, offering the benefit of lower emissions of carbon and other types of pollution. But
not all sources marketed as renewable are equally beneficial: some, such as biomass and
large hydroelectric dams, involve difficult trade-offs when
considering the impact on wildlife, climate change, and other issues. Here’s what you
should know about the different types of renewable energy sources—and how you can use
them at home.
Renewable energy, often referred to as clean energy, comes from natural sources or
processes that are constantly replenished. For example, sunlight keeps shining and wind
keeps blowing, even if their availability depends on time and weather. While often billed
as new technology, harnessing nature’s
power has long been used for heating, transportation, lighting, and more. Wind has
powered boats to sail the seas and windmills to grind grain. The sun has provided warmth
during the day and helped kindle fires to last into the evening. But over the past 500 years
or so, humans increasingly turned to cheaper, dirtier energy sources such as coal and
fracked gas.
Now that we have increasingly innovative and less expensive ways to capture and retain
wind and solar energy, renewables are becoming a more important power source. The
expansion is also happening at scales large and small, from rooftop solar panels on homes that can sell
power back to the grid to giant offshore wind farms. Even some entire rural
communities now rely on renewable energy for heating and lighting.
As renewable use continues to grow, a key goal will be to modernize America’s electricity
grid, making it smarter, more secure, and better integrated across regions.
Dirty energy
Nonrenewable, or “dirty,” energy includes fossil fuels such as oil, gas, and coal. These
sources are available only in limited amounts and take a very
long time to replenish. When we pump gas at the station, we’re using a finite resource
refined from crude oil that’s been around since prehistoric times.
Nonrenewable energy sources are also typically found in specific parts of the world,
making them more plentiful in some nations than others. By contrast, every country has
access to sunshine and wind. Prioritizing renewable energy can also improve national
security by reducing a country’s reliance on imported fossil fuels.
Many nonrenewable energy sources can endanger the environment or human health. For
example, oil drilling might require strip-mining Canada’s boreal forest, the technology
associated with fracking can cause earthquakes and water pollution, and coal power
plants foul the air. To top it off, all these activities contribute to global warming.
Solar Energy
Humans have been harnessing solar energy for thousands of years—to grow crops, stay
warm, and dry foods. According to the National Renewable Energy Laboratory, “more
energy from the sun falls on the earth in one hour than is used by everyone in the world in
one year.” Today, we use the sun’s rays in many ways—to heat homes and businesses, to
warm water, and to power devices.
Solar, or photovoltaic (PV), cells are made from silicon or other materials that transform
sunlight directly into electricity. Distributed solar systems generate electricity locally for
homes and businesses, either through rooftop panels or community projects that power
entire neighborhoods. Solar farms can generate power for thousands of homes, using
mirrors to concentrate sunlight across acres of solar cells. Floating solar farms—or
“floatovoltaics”—can make effective use of wastewater facilities and bodies of water that
aren’t ecologically sensitive.
Solar supplies a little more than 1 percent of U.S. electricity generation. But nearly a third
of all new generating capacity came from solar in 2017, second only to natural gas.
Solar energy systems don’t produce air pollutants or greenhouse gases, and as long as they
are responsibly sited, most solar panels have few environmental impacts beyond the
manufacturing process.
Wind Energy
We’ve come a long way from old-fashioned windmills. Today, turbines as tall as
skyscrapers stand at attention around the world. Wind energy turns a turbine’s blades,
which feed an electric generator that produces electricity.
Wind, which accounts for a little more than 6 percent of U.S. generation, has become
the cheapest energy source in many parts of the country. Top wind power states include
California, Texas, Oklahoma, Kansas, and Iowa, though turbines can be placed anywhere
with high wind speeds—such as hilltops and open plains—or even offshore in open water.
Hydroelectric Power
Hydropower is the largest renewable energy source for electricity in the United States,
though wind energy is soon expected to take over the lead. Hydropower relies on water—
typically fast-moving water in a large river or rapidly descending water from a high point—
and converts the force of that water into electricity by spinning a generator’s turbine
blades.
Large hydroelectric plants, or mega-dams, divert and reduce natural flows,
restricting access for animal and human populations that rely on rivers. Small hydroelectric
plants (an installed capacity below about 40 megawatts), carefully managed, do not tend
to cause as much environmental damage, as they divert only a fraction of the flow.
Biomass Energy
Biomass is organic material that comes from plants and animals, and includes crops, waste
wood, and trees. When biomass is burned, the chemical energy is released as heat and can
be converted into electricity with a steam turbine.
Biomass is often mistakenly described as a clean, renewable fuel and a greener alternative
to coal and other fossil fuels for producing electricity. However, recent science shows that
many forms of biomass, especially from forests, produce higher carbon emissions than
fossil fuels. There are also negative consequences for biodiversity. Still, some forms of
biomass energy could serve as a low-carbon option under the right circumstances. For
example, sawdust and chips from sawmills that would otherwise quickly decompose and
release carbon can be a low-carbon energy source.
Geothermal Energy
If you’ve ever relaxed in a hot spring, you’ve used geothermal energy. The earth’s core is
about as hot as the sun’s surface, due to the slow decay of radioactive particles in rocks at
the center of the planet. Drilling deep wells brings very hot underground water to the
surface as a hydrothermal resource, which is then pumped through a turbine to create
electricity. Geothermal plants typically have low emissions if they pump the steam and
water they use back into the reservoir. There are ways to create geothermal plants where
there are not underground reservoirs, but there are concerns that the practice may
increase the risk of earthquakes in geological hot spots.
Ocean Energy
Tidal and wave energy is still in a developmental phase, but the ocean will always be ruled
by the moon’s gravity, which makes harnessing its power an attractive option. Some tidal
energy approaches may harm wildlife, such as tidal barrages, which work much like dams
and are located in an ocean bay or lagoon. Like tidal power, wave power relies on dam-like
structures or ocean-floor-anchored devices on or just below the water’s surface.
Solar Power
At a smaller scale, we can harness the sun’s rays to power the whole house—whether
through PV cell panels or passive solar home design. Passive solar homes are designed to
welcome in the sun through south-facing windows and then retain the warmth through
concrete, brick, or other materials that store heat.
Some solar-powered homes generate more than enough electricity, allowing the
homeowner to sell excess power back to the grid. Batteries are also an economically
attractive way to store excess solar energy so that it can be used at night. Scientists are
hard at work on new advances that blend form and function, such as solar skylights and
roof shingles.
Geothermal Heat Pumps
Geothermal technology is a new take on a recognizable process: the coils at the back of
your fridge are a mini heat pump, removing heat from the interior to keep foods fresh and
cool. In a home, geothermal (or geoexchange) pumps use the near-constant temperature of the
earth (a few feet below the surface) to cool homes in summer and warm houses in winter,
and even to heat water.
Geothermal systems can be expensive to install initially but typically pay for themselves
within 10 years. They are also quieter, have fewer maintenance issues, and last longer than
conventional heating and cooling equipment.
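As a rough illustration of that payback claim, a simple payback period is just the installed cost divided by the annual savings. The dollar figures below are assumptions for illustration, not vendor quotes:

```python
# Simple payback sketch for a geothermal heat pump. The installed cost and
# annual savings are illustrative assumptions, not figures from any vendor.

def simple_payback_years(installed_cost, annual_savings):
    """Years until cumulative savings equal the upfront cost."""
    if annual_savings <= 0:
        raise ValueError("annual savings must be positive")
    return installed_cost / annual_savings

# Hypothetical figures: $24,000 installed, $2,500/year saved versus a
# conventional furnace and air conditioner.
years = simple_payback_years(24_000, 2_500)
print(f"Payback in about {years:.1f} years")  # → Payback in about 9.6 years
```

A real estimate would also discount future savings and account for incentives, but the basic cost-versus-savings trade-off is the same.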
A backyard wind farm? Boats, ranchers, and even cell phone companies use small wind
turbines regularly, and dealers now help site, install, and maintain wind turbines for
homeowners as well. Depending on your electricity needs, wind speeds, and zoning rules
in your area, a wind turbine may reduce your reliance on the electrical grid.
Wind- and solar energy–powered homes can either stand alone or be connected to the
larger electrical grid, as supplied by their power provider. Electric utilities in most
states let homeowners pay only for the difference between the grid electricity they use
and the power they generate, a practice called net metering. If you make more
electricity than you use, your provider may pay you retail price for that power.
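Net metering as described above can be sketched as a small calculation; the kilowatt-hour figures and the $0.15/kWh retail rate below are assumptions for illustration:

```python
# Net-metering sketch: the customer is billed only for the difference
# between grid electricity imported and solar power exported. All numbers
# here are illustrative assumptions, not any utility's actual tariff.

def monthly_bill(kwh_imported, kwh_exported, retail_rate):
    """Net bill in dollars; a negative result means the utility owes a credit."""
    net_kwh = kwh_imported - kwh_exported
    return net_kwh * retail_rate

print(monthly_bill(600, 450, 0.15))  # imports exceed exports: pays for 150 kWh
print(monthly_bill(400, 520, 0.15))  # exports exceed imports: earns a credit
```

Real tariffs vary; some utilities credit exports at a lower, wholesale rate rather than the retail rate assumed here.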
Advocating for renewables, or using them in your home, can accelerate the transition
toward a clean energy future. Even if you’re not yet able to install solar panels, you may be
able to opt for electricity from a clean energy source. (Contact your power company to ask
if it offers that choice.) If renewable energy isn’t available through your utility, you can
purchase renewable energy certificates to offset your use.
Generating energy that produces no greenhouse gas emissions from fossil fuels, and that
reduces some types of air pollution, is a key advantage of renewables.
Wind, geothermal, solar, hydro, and other renewable technologies are a widely popular
source of energy throughout the world today. Countries, corporations, and individuals are
adopting renewables for a number of great benefits. In this article, we’ll dive into some of
the advantages and disadvantages of renewable energy.
Using renewable energy over fossil fuels has a number of advantages. Here are some of
the top benefits of using renewable energy:
Renewable energy technologies use resources straight from the environment to generate
power. These energy sources include sunshine, wind, tides, and biomass, to name some of
the more popular options. Renewable resources won’t run out, which cannot be said for
many types of fossil fuels – as we use fossil fuel resources, they will be increasingly difficult
to obtain, likely driving up both the cost and environmental impact of extraction.
In most cases, renewable energy technologies require less overall maintenance than
generators that use traditional fuel sources. This is because generating technology like
solar panels and wind turbines either have few or no moving parts and don’t rely on
flammable, combustible fuel sources to operate.
Using renewable energy can help you save money long term. Not only will you save on
maintenance costs, but on operating costs as well. When you’re using a technology that
generates power from the sun, wind, steam, or natural processes, you don’t have to pay to
refuel. The amount of money you will save using renewable energy can vary depending on
the technology, your location, and your electricity use.
Renewable energy generation sources emit little to no greenhouse gases or pollutants into
the air. This means a smaller carbon footprint and an overall positive impact on the natural
environment. During the combustion process, fossil fuels emit high amounts of
greenhouse gases, which have been proven to exacerbate the rise of global temperatures
and the frequency of extreme weather events.
The use of fossil fuels not only emits greenhouse gases but other harmful pollutants as
well that lead to respiratory and cardiac health issues. With renewable energy, you’re
helping to decrease the prevalence of these pollutants and contributing to an overall
healthier atmosphere.
With renewable energy technologies, you can produce energy locally. The more renewable
energy you’re using for your power needs, the less you’ll rely on imported energy, and the
more energy independent your community will be.
Renewable energy has many benefits, but it’s not always sunny when it comes to
renewable energy. Here are some disadvantages to using renewables over traditional fuel
sources.
1. Higher upfront cost
While you can save money by using renewable energy, the technologies are typically more
expensive upfront than traditional energy generators. To combat this, there are
often financial incentives, such as tax credits and rebates, available to help alleviate your
initial costs.
2. Intermittency
Though renewable energy resources are available around the world, many of these
resources aren’t available 24/7, year-round. Some days may be windier than others, the sun
doesn’t shine at night, and droughts may occur for periods of time. There can be
unpredictable weather events that disrupt these technologies. Fossil fuels are not
3. Storage capabilities
Because of the intermittency of some renewable energy sources, there’s a high need for
energy storage. While there are storage technologies available today, they can be
expensive, especially for large-scale renewable energy plants. It’s worth noting that
energy storage capacity is growing as the technology progresses, and batteries are
becoming more affordable over time.
4. Geographic limitations
The United States has a diverse geography with varying climates, topographies,
vegetation, and more. This creates a beautiful melting pot of landscapes but also means
that there are some geographies that are more suitable for renewable technologies than
others. For example, a large farm with open space may be a great place for a residential
wind turbine or a solar energy system, while a townhome in a city covered in shade from
taller buildings wouldn’t be able to reap the benefits of either technology on its
property. If your property isn’t suitable for a personal renewable energy technology, there
are other options. If you’re interested in solar but don’t have a sunny property, you can
often still benefit from renewable energy by purchasing green power or enrolling in a
community solar program.
When it comes to renewable energy, the positives outweigh the negatives. Transitioning
to renewables on a personal, corporate, or governmental level will not only help you save
money but also promote a cleaner, healthier environment for the future.
Through the EnergySage Solar Marketplace, you can compare multiple quotes from local, pre-
screened installers to see what solar would cost and save for your property. The quotes will
also include estimates of the amount of carbon dioxide emissions you will offset over 20
years, and what this equates to in both trees planted and gallons of gasoline burned.
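As a sketch of how such an equivalence estimate works, the conversion below divides an offset by per-unit emission factors. The two factors are round-number assumptions in the spirit of published greenhouse gas equivalency calculators, not the values EnergySage actually uses:

```python
# Rough sketch of converting a CO2 offset into "gallons of gasoline" and
# "tree seedlings" equivalents. Both conversion factors are assumptions
# for illustration, not authoritative figures.

KG_CO2_PER_GALLON_GASOLINE = 8.89  # assumed emissions per gallon burned
KG_CO2_PER_TREE_SEEDLING = 60.0    # assumed uptake over a decade of growth

def equivalents(kg_co2_offset):
    """Return (gallons of gasoline avoided, tree seedlings grown)."""
    gallons = kg_co2_offset / KG_CO2_PER_GALLON_GASOLINE
    trees = kg_co2_offset / KG_CO2_PER_TREE_SEEDLING
    return gallons, trees

# A hypothetical 20-year offset of 100 metric tons (100,000 kg) of CO2:
gallons, trees = equivalents(100_000)
print(f"{gallons:,.0f} gallons of gasoline, {trees:,.0f} tree seedlings")
```

The point is only that a single mass of CO2 can be re-expressed in more tangible units by dividing through by a per-unit emission or uptake factor.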
Researchers define the discipline of urban ecology as
"…the study of ecosystems that include humans living in cities and urbanizing landscapes. It is
an emerging, interdisciplinary field that aims to understand how human and ecological
processes can coexist in human-dominated systems and help societies with their efforts to
become more sustainable. … Because of its interdisciplinary nature and unique focus on
humans and natural systems, the term "urban ecology" has been used variously to describe the
study of humans in cities, of nature in cities, and of the coupled relationships between humans
and nature. Each of these research areas is contributing to our understanding of urban
ecosystems and each must be understood to fully grasp the science of Urban Ecology."
A single generation from today, by 2030, the population of the world's cities will grow by 2
billion more people. At present, about half of the humans on earth live in urban areas. In
2030, according to The CIA World Factbook, 60 per cent of people will
live in cities. In addition to space in which to live, all of these people will need breathable air,
drinkable water, and food, which will mostly be grown outside of cities and transported into
them.
In short, the entire planet is becoming more urbanized, a phenomenon which is already
having a profound effect on the natural systems that maintain breathable air, drinkable
water, and other essentials of life.
But large areas of green spaces exist within cities. Lawns, parks, golf courses, and nature
preserves created decades ago and now surrounded by development help filter pollution in
air and water, produce oxygen, mitigate heat absorption by asphalt and concrete, and
provide habitat for wildlife.
In the past quarter century, scientists have recognized that understanding the interactions
of the living and nonliving components of these urban ecosystems is vital to the future of all
city dwellers, human and nonhuman alike.
Within the science of ecology, urban ecology is defined as the study of structure,
dynamics, and processes in urban ecological systems. Urban ecology is the study of
the organisms in urban areas, the interactions of these organisms with the native and
built physical environment, and the fluxes of energy, matter, and
information within individual urban systems and between urban and nonurban
systems. Urban ecology applies the methods and concepts of the biological science
of ecology to urban areas, but requires and integrates with the concerns, concepts,
and approaches of the social sciences and of the physical sciences dealing with the
urban geophysical and demographic system.
Urban ecology is important because it brings the insights and knowledge from
contemporary biological ecology to bear on urban areas [1]. It replaces the earlier
and superseded versions of ecological science that had been used by social
scientists. Contemporary ecology emphasizes feedbacks
between natural and human system components, probabilistic system change, and
spatial heterogeneity.
Urban ecology is also important because urban habitats are increasing worldwide.
The United Nations estimates that more than 50% of the global population now
resides in urban areas, as defined by the various member nations. In addition, the
next three billion people to be added to the world population are expected to live
in urban areas. Hence, urban systems are becoming the predominant habitat of
humanity, and are an increasingly widespread land cover type worldwide. In the
USA, constructed surfaces now cover an area equivalent to that of the state of Ohio
[2].
If the disciplines and practices of urban planning and design, ecological restoration,
and environmental management are to be informed by ecological
knowledge and data, then the science of urban ecology will become an increasingly
important foundation for sustainable cities.
Brief History
Urban ecology has emerged as a subdiscipline of biological ecology only in the last
30 years [3]. It began as an ecological science in the study of the species and biotic
communities of conspicuously green patches in cities and metropolises. Parks,
vacant lots, disturbed, and derelict lands were the first focal areas of the discipline
[4]. More recently, ecologists began to examine areas actively inhabited and
used by people, such as residential and commercial districts. A second
tradition in urban ecology focuses on the coarser scale, to quantify energy and
material budgets of cities. This focus, sometimes called urban metabolism, deals
with the energy, matter, and information that flow through and are transformed by
cities. In all cases, how the biological components and fluxes affect the well-being
of urban residents is a central concern. In several respects, contemporary urban
ecology differs from the past traditions. First, all areas in the city are now subject to
ecological analysis, not just the conspicuous green areas. Second, even the
green areas are understood to be shaped by human decisions and
transformations within the larger metropolis. Finally, the fully hybrid nature of the
systems is acknowledged, so that cities are seen as neither fully human nor fully
natural entities. Rather, they are inextricably both human constructions and
biophysical features [6, 7]. Urban ecology was once a study of green spaces in the
city. Now it is the study of the ecology of the entire urban area, including its biological,
physical, social, and built components.
The term “urban ecology” has also been used by other disciplines.
Principal among these has been sociology. This use originated in the 1920s at the
University of Chicago under the leadership of Robert Park and Ernest Burgess, who
brought concepts then current in biological ecology into their new discipline of sociology.
Human ecology,
which has roots in geography, anthropology, and other social sciences, is closely
related to urban ecology when the study subject is urban populations and their
interactions. However, other disciplines tend to neglect the physical and biological
components of urban systems.
Introduction
Urban ecology has been used by several disciplines which have different foci and
histories.
History
Urban ecology has a long history. The first flowering of urban ecology was a
product of the Chicago School of sociology in
the 1920s. Although this was a sociological pursuit, it was centrally informed by
analogies from the biological science of ecology, for which the University of
Chicago was one of the founding schools. Park, Burgess, and their students
explained the unprecedented growth and social change in Chicago in terms of
competition, invasion, succession, and
isolation between different communities and functions in the city. These scholars
were disturbed by the doubling of the population of Chicago at the time, and the
role of new migrants from the American South or from eastern and southern
Europe. The racial, ethnic, and class novelty in the city begged explanation and
incited the Chicagoans to seek explanatory and predictive models to serve in the
face of such unprecedented changes. This approach to urban ecology was informed
by a deterministic ecology that took stable, mature
communities as the paragons of human societies. One of the central tenets of the
Chicago school was that cities had a life cycle, analogous to the expected, but
incorrect, prediction that ecological communities had predictable life cycles starting
from invasion, extending through competition and sorting, and ending in a mature
state. This phase of urban ecology ended when social science critics prompted a
retreat from biological analogies, a shift that was mirrored in
mainstream ecology at about the same time. Even though the academic community
moved beyond the deterministic, life-cycle approach to cities, urban policy in the
USA continued to assume life-cycle patterns through the 1960s, basing urban
policy on the expectation of inevitable neighborhood decline. Despite urban ecology’s
contribution to
the birth of sociology and its being widely applied in urban systems, most biological
ecologists heartily ignored cities and urban systems. European and Japanese
ecologists began to explore ecology in urban contexts after World War II. The
manifest destruction in the cities in which they lived invited their interest as
ecologists: What species would colonize the bombed and
derelict sites? How would the newly established biotic communities change over
time? What benefit might they provide the cities in which they occurred? These were
standard ecological questions, but asked in a novel location. This tradition became
linked with urban planning in Europe and has remained active in that form [8].
The second wave of urban ecology rose in the 1970s in the USA. Associated with the
birth of environmentalism and its concern with the Earth’s exponential human
population growth, the urban ecology of this era tended to assume that humans
stood apart from nature, with cities treated as an extreme
case of the human impact that was beginning to worry scientists and the public. A
key document from this era is the volume by Stearns and Montag [9]. In it, the
problems of urban areas are outlined, and the nature of potential ecologically
informed solutions is suggested. However, the ecology of the time was rather
poorly equipped for the task.
Furthermore, although failure of the old ecological ideas that had informed the
Chicago School was evident, no clear replacement had emerged. Urban ecology in
this era still concentrated on the green spaces within cities.
Hence, this approach can be characterized as ecology in the city [3]. Parks,
vacant lots, and remnant habitats were its principal study sites.
Another feature of this second wave of urban ecology was a budgetary, systems
approach. Epitomized by work in Hong Kong [11], this approach to urban ecology
addressed energy and material budgets of cities, and detailed the human costs of,
and inputs required for, maintaining urban systems. This budgetary approach can be
characterized as the ecology of the city. It shares with the early Chicago School an
assumption of the city as an integrated system. Industrial ecology and
urban metabolism are branches from this tradition. Both of these schools of
thought analyze the material and energetic inputs, efficiencies, and outputs of
urban systems. Industrial ecology, in particular, is allied with a practice
that aims to reduce the use of resources and the generation of wastes associated
with contemporary material use. This era of urban ecology did not persist in the
USA. The third, contemporary wave of urban ecology has several
features that differentiate it from prior instances of urban ecology, and make it
more comprehensive than earlier approaches. First, it attempts to unify social and
biophysical approaches. Second, it examines entire urban areas as systems, tracing
the flows of limiting nutrients and pollutants. Contemporary urban ecology brings the three
earlier traditions together.
Will this current interest in urban ecology wane, as did the previous ones in the
USA? One difference between the current manifestation of urban ecology and the
earlier ones is institutional support: the pioneering programs in Europe,
Japan, and the USA did not have long-lasting research support. As a result, their
pioneering efforts were sometimes short-lived. Now there are two urban Long-
Term Ecological Research (LTER) sites in the USA, and International Long-Term
Ecological Research programs and Zones Ateliers are including urban areas among
their rosters. Already the US LTER urban sites are 13 years old. Such longevity
permits the accumulation of lengthy data runs which can expose causal links and the role of
pulse events [13]. Acknowledging that urban areas both contribute to and are
vulnerable to global changes [13] will tend to keep them in focus in ecological
science.
Examples
Urban ecology is such a diverse science that examples are required to give a sense
of its breadth.
Patterns of diversity and abundance associated with urbanization are complex and
context dependent. Some studies
find that species–area relationships are preserved in urban patches [14]. However,
in some studies, patch size influenced species composition rather than species
richness, with sensitive species absent
from smaller patches [15]. Attempts to directly quantify the extinction and
colonization dynamics suggest that
species composition in a patch is the result of species colonizing the novel habitats
formed by urbanization along with those remaining after local extinctions due to
habitat conversion. A key insight about
urban biodiversity is that urban habitats are not always less diverse than rural
patches. Rather, diversity depends on the sum of extinction and colonization rates,
which differ regionally and taxonomically. At moderate levels of urbanization, the
diversity of some taxa can even peak.
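The species–area relationships cited above are commonly modeled as a power law, S = cA^z. A minimal sketch, with illustrative constants rather than values fitted in any of the cited studies:

```python
# Species-area relationship sketch: expected species richness S = c * A**z.
# The constants c and z below are illustrative assumptions; real values are
# fitted from field data and vary by taxon and region.

def expected_richness(area_ha, c=10.0, z=0.25):
    """Expected species count S for a habitat patch of the given area (ha)."""
    return c * area_ha ** z

# Richness grows with area, but much more slowly than area itself:
for area in (1, 10, 100):
    print(area, round(expected_richness(area), 1))
```

Under this model a hundredfold increase in patch area yields only about a threefold increase in expected richness (100^0.25 ≈ 3.2), which is why small urban patches can retain a surprising share of regional diversity.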
Urbanization also alters hydrology, disconnecting streams from their floodplains
and the water table [18]. This disconnection limits the capacity of urban riparian
zones to process nitrogen. Research in other landscapes has suggested that riparian
restoration, inserting woody and grass
vegetation along stream banks, can mitigate nitrate pollution. When the capacity of
urban riparian zones to
accomplish such mitigation was examined in Baltimore, MD, USA, it was discovered
that riparian zones had become disconnected from the groundwater sources that
control their ability to convert nitrate to nitrogen gas. With reduced infiltration of
stormwater into the ground due to impervious surfaces, and with high incision
of stream channels, urban water tables are often too low to
support the anaerobic conditions and high organic matter required to fuel
denitrification, so the expected mitigation
may not always occur [19]. This example demonstrates that knowledge obtained in
nonurban systems cannot simply be assumed to apply in urban ones.
Cities have been a major source of environmental
problems since the industrial revolution. Numerous studies have shown that our
sustainability depends critically on cities, and urban ecology can – and needs to –
play a key role in the transition toward sustainability. In this paper, I review
the development of urban ecology, its major research themes
and key issues, and propose a framework to help move the field forward. After
tracing its history, I discuss how the field draws on both the natural and
social sciences. The most salient thrust of current research activities in the field is
the integration of ecological and socioeconomic perspectives. Although urban
ecology is still maturing in
many ways, we do know a lot about its patterns, processes, and effects. More
specifically, we know a great deal about urban growth patterns in space and time,
and we increasingly recognize that cities are
complex adaptive systems. As such, the dynamic trajectory of cities can never be
fully predicted or controlled, but can and should be influenced or guided in more
desirable directions through planning and design activities that are based on
ecological knowledge.
Graphical abstract
The graphical abstract depicts the key components of
an urban landscape. All the components and their relationships are influenced
by urbanization. Studies of the
ecology and sustainability of urban landscapes and regions should not only
consider how urbanization affects these key components but also how their
relationships change in time. Human well-being is the primary focus for urban
sustainability, and
connections among the key components and their linkages across spatial
(landscape–region–globe) and temporal (year–decade–century) scales should be
considered.
A geographic information system (GIS) is a computer-based tool for mapping and analyzing
feature events on earth. GIS technology integrates common database operations, such as
query and statistical analysis, with maps. GIS manages location-based information and
provides tools for display and analysis of various statistics, including population
characteristics, economic development opportunities, and vegetation types. GIS allows you
to link databases and maps to create dynamic displays. Additionally, it provides tools to
visualize, query, and overlay those databases in ways not possible with traditional
spreadsheets. These abilities distinguish GIS from other information systems, and make it
valuable to a wide range of public and private enterprises for explaining events, predicting
outcomes, and planning strategies.
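The linkage between database query and location that distinguishes GIS can be sketched in a few lines: each record carries both attributes and coordinates, so an attribute filter can be combined with a spatial (bounding-box) filter. The records and field names below are invented for illustration:

```python
# Minimal sketch of the database-plus-map linkage described above. Each
# record holds attributes (name, population) and a location (x, y), so an
# attribute query and a spatial query can be answered together.

records = [
    {"name": "Site A", "pop": 12000, "x": 2.0, "y": 3.0},
    {"name": "Site B", "pop": 54000, "x": 8.5, "y": 1.0},
    {"name": "Site C", "pop": 30000, "x": 4.0, "y": 4.5},
]

def query(recs, min_pop, bbox):
    """Names of records with pop >= min_pop inside (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = bbox
    return [r["name"] for r in recs
            if r["pop"] >= min_pop
            and xmin <= r["x"] <= xmax
            and ymin <= r["y"] <= ymax]

print(query(records, 20000, (0, 0, 5, 5)))  # → ['Site C']
```

A full GIS replaces the flat list with indexed spatial databases and real geometries (points, lines, polygons), but the combined attribute-and-location query is the core operation that spreadsheets cannot perform.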
Remote sensing is the art and science of making measurements of the earth using sensors
on airplanes or satellites. These sensors collect data in the form of images and provide
specialized capabilities for manipulating, analyzing, and visualizing those images. Remotely
sensed imagery can be integrated within a GIS. For more, see the Principles in Remote
Sensing section.
6. GEOGRAPHICAL INFORMATION SYSTEMS
6.1 Introduction
Referring back to Figure 1.3 (p.6) it can be seen that we are now at the stage where the data
streams, in their various forms, converge into the “box” labelled GIS. Entering this box are “flows”
of data - both directly relevant and proxy, as well as the three map “flows” - it is these streams which
collectively combine to form the inputs to GIS. In this chapter we intend to show how GISs have
evolved, what the major processes are that make up the system, and then to examine something of
the technology necessary for making the system function. We also intend to look at benefits and
problems of GIS and then to give some guidance on system selection, including the support facilities
available. We will conclude by attempting, in this complex and fast changing field, to examine some
likely future trends in GIS. A varied selection of functional GISs, as applied to aquaculture, inland
fisheries, and related fields, is described in later chapters.
6.2 Towards a Definition of GIS
GIS is a branch of science, or a field of learning, which has evolved, and is very much still evolving, at
such a rapid pace that any definition of what it is or what it does has changed and expanded so much
that the only thing we can be certain of is that any definition we give now would not describe
what is being done in perhaps 5 or 10 years’ time! This rapid evolution, which is described in more
detail in section 6.3, has meant that there is much controversy over not only a definition of GIS, but
also where GIS lies in a hierarchy of similar fields and on what basis a typology of GIS should be
determined.
Though it would appear that the nomenclature of “Geographic(al) Information System(s)” is coming
to the fore and is becoming universally accepted as pivotal or central to those processes which we
describe in this chapter (Clarke, 1986), there is still a body of opinion which considers GIS to be a
“narrow term”, only one strand of several systems which, although similar, should retain their
separate identities (Shand and Moore, 1989). Other names synonymous with GIS include
“Geo-data systems” and “Land Information Systems” (LIS), among others.
It is likely that most of these names will give way in favour of GIS, though LIS is likely to hold its
ground for a while, along with other associated or specific application areas such as “Computer
Assisted Cartography”.
Actual definitions of GIS will be variable and range from the very simple: “A computer system
capable of holding and using data describing places on the Earth's surface”, through the rather
limited: “A GIS then is a software package,.....” (Butler, 1988, p.31) and through the novel: “GIS are
simultaneously the telescope, the microscope, the computer and the xerox machine of regional
analysis and synthesis.” (Abler, 1988, p.137), eventually to extremely “wordy” definitions. We would
suggest that an actual definition is not as important as the basic ideas which GISs convey, e.g. the
following:
i. That being “geographical” it contains data and concepts which are concerned with spatial
distributions.
ii. That “information” implies some notion of conveying data, ideas or analyses, usually as an
aid to decision-making.
iii. That being a “system” it involves the sequence of inputs, processes and outputs.
iv. That the three strands mentioned above are given functionality within a recent technological
context, i.e. modern computing.
In very practical terms GIS comprise a collection of integrated computer hardware and software
which together are used for inputting, storing, manipulating and presenting geographical data
(Figure 6.1). The data may be in any textual, map or numeric form which are capable of being
integrated within a single system. GIS exist in a variety of forms and embody the potential for an
enormous range of applications. No single typology for GISs has yet emerged and clearly a number
of categorizations are possible. For those interested we recommend Clarke (1986) and Bracken and
Webster (1989).
6.3 The Evolution of GIS
The rapid evolution of GIS, especially over the last decade, has been caused by a complex amalgam
of major factors, plus a number of minor ones. Here we identify the major factors before briefly
examining the historical sequence of GIS development. For those interested further in these areas,
details are given in Burrough (1986), Jackson and Mason (1986), Dept. of Environment (1987),
Smith et al (1987), Crosswell and Clark (1988), Goodchild (1988), Tomlinson (1989) and Star and Estes
(1990).
Over the last two decades there has been a surge in data volume, much of which has been available
in digital format, e.g. from RS sources, from censuses and from the major mapping agencies. This
surge was in response to the perceived need to have banks of information, in an easily manipulated
form, so as to maximize the use of expensively procured data. Much of this data has been accessible
using various on-line facilities associated with computer networking and communications.
It has been estimated that computer processing costs have fallen by a factor of 100 in the past
decade, and that this is likely to continue.
Figure 6.2 illustrates how processor performance has also increased in terms of speed obtained
relative to investment made. Performance increases are now tending to blur the traditional
distinctions between classes of computer, from mainframes to microcomputers.
Advancing on the tide of the explosion in computing power and capability have been a number of
parallel developments. These include: computer aided design (CAD), remote sensing (RS), spatial
analysis, digital cartography, surveying and geodesy, etc. All these fields have a spatial perspective
and can be inter-related, though other fields such as IT, image processing, computer graphics and
photogrammetry have also contributed. GIS has emerged as a core methodology allowing for
integration to occur if desirable, or allowing for each of the separate fields to greatly enhance their
own efficiency. Thus GIS is “…the result of linking parallel developments in many separate spatial
data processing disciplines.” (Burrough, 1986, p.6).
Paper maps have traditionally formed the basis of spatial enquiry and these were needed at a large
range of scales. Paper maps occupy much space, are easily damaged, they date quickly, they are
expensive to produce and data cannot be rapidly extracted from them. The inception of GIS has
changed much of this. Both private and governmental organizations have quickly realized the
tremendous social, environmental and commercial value of GIS for a range of applications - the main
fields are market location analysis, property management, social resource allocation, and resource
exploitation and development. GIS has allowed decision makers, in all organizations, to explore a range of
possibilities or scenarios before large investments are made or before plans and actions are
implemented.
Though there have been claims for very early GISs, e.g. the British Domesday Book of the late 11th
century, GIS as we recognize it had its origins in the Canadian GIS of 1964. This embodied the early
recognition of what might be possible in terms of using computers for handling numerical data and
outputting useful and timely information. GIS development was limited in the 1960s and early
1970s because of cost and technical limitations, though during this period the development of the
minicomputer was important, as was the creation of some original mapping packages, e.g. SYMAP.
During the 1970s there was a rapid rise in the related, parallel fields (section 6.3.1.3). Advantages
were seen in linking data sets, utilizing spatial data in more ways and GIS associated equipment was
beginning to be acquired by universities, research organizations and small private companies. By the
late 1970s computer mapping had made rapid advances. There were hundreds of computer systems
for many applications. Interactive capability was achieved and there were great advances in output
devices capable of generating high resolution displays and hard copy graphic products.
In the 1980s GIS had really taken off, especially during the latter part of the decade, and it is now a
growth industry of major proportions. We list some of the developments which have occurred:
Improved instructions, menus, manuals, etc. have made GIS accessible to non-GIS specialists.
c. Distributed computing via networks for the sharing of resources and data.
e. Significant microprocessor developments have allowed for cost reductions and for huge
increases in computing power and capability.
f. A trend from the use of, or digitizing of, specific maps towards having archives of digitized
data in a cartographic data bank which can be manipulated, analyzed and displayed in any
desirable form.
g. A proliferation in the support side of GIS - journals, courses, education, symposia, etc.
h. Governments, utilities and other enterprises seeking increased efficiency in data handling.
During the whole recent developmental period there has been a “leap-frogging” of developments
within specific areas of GIS in terms of them being applications-driven or technology-driven. Most of
the developments have been occurring in North America though some have come from Europe. In
most countries the government has played a large part in GIS progress since it has been the
generator of large volumes of data, since it created needs in departments such as forestry, land use
planning and natural resource development and since it is being increasingly called upon to take a
leading role with environmental concerns. The global market for GIS systems and data is currently
(1990) estimated at $4 billion, and is growing at 20% per annum (Tomlinson, 1989), and Figure 6.3
exemplifies a breakdown of the likely U.K. GIS market till 1999 (Rowley).
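Those figures imply rapid compounding. As a rough extrapolation (an illustration, not a forecast made in the text), a $4 billion market in 1990 growing at 20% per annum would develop as follows:

```python
# A rough compound-growth extrapolation of the quoted figures (an
# illustration, not a forecast from the text): a $4 billion market in
# 1990 growing at 20% per annum.
base_1990 = 4.0     # $ billion (Tomlinson, 1989)
growth = 1.20       # 20% per annum

for year in (1995, 1999):
    size = base_1990 * growth ** (year - 1990)
    print(f"{year}: ${size:.1f} billion")
```

At that rate the market would quintuple by the end of the decade, which is consistent with the steep market-growth curves of the period.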
Figure 6.1 showed the overall functioning of GISs in a simplified form. In this section we describe the
elements displayed within the GIS “box”, and show how they are integrated for the successful
functioning of the system. There are a great number of functions which a GIS might be required to
perform and the most important of these are set out in Table 6.1. The list is compiled from a variety
of sources and the interested reader should consult the following: Knapp and Rider (1979), Rhind
(1981), Dangermond (1983), Burrough (1986), Smith et al (1987) and Rhind and Green (1988).
Throughout this section we will briefly discuss those peripheral hardware items which are directly
related to GIS. Space prohibits a review of general computing hardware, e.g. processors, disk drives,
alphanumeric terminals, tape drives, VDUs and other monitors, even though they may be essential
to GIS. In section 6.7 we do look at ways of optimizing hardware systems configurations. Further
details on GIS hardware can be obtained from Letcher (1985), Megarry (1985), Walsh (1985), Croswell
and Clark (1988), Kadmon (1988) and Dangermond and Morehouse (1989).
All data being input to a GIS must be in digital format, in either numeric or alphanumeric form. Data
may be input via a variety of mediums including computer compatible tapes (CCTs), floppy disks,
Compact Disc-Read Only Memory optical discs (CD-ROMs), etc. It is obvious that if the data is
originating from multifarious sources, then each data set may differ structurally. Given that “The
creation of a clean, digital database is a most important and complex task upon which the usefulness
of the GIS depends.” (Burrough, 1986. p.57), then, from a GIS viewpoint, it would be ideal and far
better if all data were:
ii. Disaggregated, i.e. so that users could select their own units (or areas) to manipulate or
analyze.
iii. Location referenced, i.e. to a National Grid, to latitudes and longitudes or to any other kind of
locational referencing system.
iv. Accurate - and therefore able to be referenced to the smallest areal unit possible.
In the absence of this ideal, GIS systems software must either ignore these factors or compensate for them.
The input sources from where the data for GIS may originate have been discussed in Chapters 2 to 5
and are shown in Figure 6.1. In this section we need not discuss inputs from digital archives or inputs
from other GISs, since these will both be already available (captured), frequently in a format suitable
for immediate use. The choice of other capture methods will be governed largely by the available
budget and the type of data being input. The methods of capture for data sources are:
Capture may be by using manual methods, aided by a keyboard and VDU, to interactively create data
bases or files, i.e. for entering the results of field work or questionnaire surveys. Data might be filed
in either a standard spreadsheet package or in any of the many specialist data entry modules
associated with particular computer packages or programs, e.g. statistical packages such as SPSS or
Minitab, computer cartography packages such as MICROMAP or survey analysis packages such as
SNAP. These programs or packages require that data is entered in a structured format - the data can
then be edited and corrected and any numerical manipulations can be performed.
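What "structured format" entry means in practice can be sketched minimally as follows (the field names and values are invented for illustration, not taken from SPSS, Minitab, MICROMAP or SNAP): each fieldwork record becomes one typed, fixed-column row that can be validated and corrected before further manipulation.

```python
import csv
import io

# A minimal sketch of structured data entry: each fieldwork record is one
# row with fixed, typed columns (site_id, easting, northing, soil_ph are
# hypothetical names), so records can be checked and corrected on entry.
raw = """site_id,easting,northing,soil_ph
S01,451200,238900,6.8
S02,451340,239010,7.1
S03,451500,238870,5.9
"""

records = []
for row in csv.DictReader(io.StringIO(raw)):
    rec = {
        "site_id": row["site_id"],
        "easting": int(row["easting"]),     # grid reference, metres
        "northing": int(row["northing"]),
        "soil_ph": float(row["soil_ph"]),
    }
    # a simple editing/correction check before the record is accepted
    assert 0.0 < rec["soil_ph"] < 14.0, f"implausible pH at {rec['site_id']}"
    records.append(rec)

mean_ph = sum(r["soil_ph"] for r in records) / len(records)
print(f"{len(records)} records entered, mean soil pH {mean_ph:.2f}")
```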
Data loggers can be used. These are specialist devices that automate the process of collecting and
recording data in the field. They may be automatic or semiautomatic. They carry out a limited range
of functions recording data on variables such as soil moisture content, water flow, sediment particle
size, climatic variables, etc. For analogue data loggers, the data will need to be digitized. Specialist
data entry devices are available which semi-automate the field collection of live questionnaire data,
i.e. answers are fed directly into a pre-programmed memory within a battery operated, hand held
portable terminal. There are a variety of other microcomputers which are now becoming available as
a result of microprocessor advances and subsequent price reductions. Maguire (1989) provides
further details on these devices.
Maps may be captured by the use of digitizers or various types of scanners, e.g.:
i. Electromechanical digitizing involves using a tiltable table, or tablet, on which the map is
positioned (Figure 6.4), with an in-built Cartesian surface (grid), having energized
intersections typically resolving to 0.01mm. Attached to the table is a pen or tracking cross
(cursor or puck) which can be moved along lines or to points, and can detect the signal at any
intersection of the grid. Cursors can be equipped with up to 16 buttons which are used for
additional program control, e.g. to move from point to line or for adding identifier labels. The
analogue signal detected is coded by the computer into a digital x,y co-ordinate, measured
from a user-defined origin. These digitizers may work in either point or line/stream mode. In
the former points are recorded at a signal from the operator; in the latter mode the digitizer
records co-ordinates at fixed time or distance intervals. Some digitizers can operate in all
three modes. Digitizers are increasingly linked directly to VDUs for monitoring purposes,
and/or linked to the host computer for direct input of data.
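The conversion from raw digitizer grid counts to ground coordinates can be sketched as follows. The resolution matches the 0.01mm figure quoted above, but the map scale and function names are illustrative, and real digitizer software would also correct for table rotation and skew:

```python
# A simplified sketch of converting raw digitizer output to ground
# coordinates (map scale and names are illustrative assumptions).
RESOLUTION_MM = 0.01      # grid spacing of the table, as quoted in the text
MAP_SCALE = 10_000        # assume a 1:10 000 map sheet on the table

def to_ground(counts_x, counts_y, origin_counts=(0, 0)):
    """Convert raw digitizer grid counts to ground coordinates in metres."""
    # offset from the user-defined origin, in metres on the map sheet
    map_x = (counts_x - origin_counts[0]) * RESOLUTION_MM / 1000.0
    map_y = (counts_y - origin_counts[1]) * RESOLUTION_MM / 1000.0
    # scale map-sheet distances up to ground distances
    return map_x * MAP_SCALE, map_y * MAP_SCALE

# point mode: one coordinate pair is recorded per operator signal
gx, gy = to_ground(25_000, 10_000)
print(f"ground position: ({gx:.1f} m, {gy:.1f} m)")
```

In stream mode the same conversion would simply be applied to every coordinate pair the table emits at its fixed time or distance interval.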
Space precludes much discussion of the procedures necessary for the integration of RS imagery into
GIS - Jensen (1986) and Goodenough (1988) provide more detail on this. Here we will mostly
outline the main steps and problems.
We have shown, in section 4.6, that RS imagery is preprocessed to a variety of user-defined levels,
and then filed on CCTs. This data can then be integrated into the GIS, perhaps via external packages
such as ERDAS or GEMS, which have been used to perform additional processing. Some GIS software
contains its own RS processing programs. The most important task in integration is ensuring that RS-
derived data is referenced to exact ground co-ordinates so that registration with other GIS data is
possible.
Before integrating the RS data (being held on a CCT) it is important to know the levels of pre-
processing which may have been performed. If only crude radiometric and geometric corrections
have been done then any of the further pre-processing levels described in section 4.6 might be
necessary. Actual integration may give rise to a number of problems. The RS data may only be
classified into 256 class levels, whereas GIS are capable of handling far larger arrays. This may make it
difficult to assign detectable RS image features to classes in the GIS. There is inevitably difficulty in
matching RS images to other thematic data which has been derived from topographic map sources
or elsewhere - this is especially true in areas having varied relief. Goodenough (1988) found that it
was not uncommon to have displacements of 200 meters at the 1:50 000 scale. Other problems
include differences in land area shapes and sizes as well as the image interpretation problems
described in Chapter 4.
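The registration step described above can be illustrated with a common approach (not necessarily the one used by the packages named earlier): fitting a first-order affine transform to ground control points (GCPs) by least squares. The GCP coordinates below are invented for illustration.

```python
import numpy as np

# A sketch of image-to-ground registration by a first-order affine fit:
#   x = a0 + a1*col + a2*row,   y = b0 + b1*col + b2*row
# The ground control points are invented for illustration.
gcps_img = np.array([[10, 10], [500, 20], [30, 480], [510, 500]], dtype=float)
gcps_gnd = np.array([[3000, 9200], [12800, 9400], [3400, 18600], [13000, 19000]],
                    dtype=float)

# design matrix [1, col, row]; least-squares fit of each ground axis
A = np.column_stack([np.ones(len(gcps_img)), gcps_img])
coef_x, *_ = np.linalg.lstsq(A, gcps_gnd[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, gcps_gnd[:, 1], rcond=None)

def pixel_to_ground(col, row):
    """Map an image (col, row) position to ground (x, y) coordinates."""
    p = np.array([1.0, col, row])
    return float(p @ coef_x), float(p @ coef_y)

print(pixel_to_ground(250, 250))
```

In areas of varied relief a first-order fit like this is exactly where the displacements Goodenough reports come from: a single affine cannot absorb terrain distortion, which is why higher-order or terrain-corrected methods are needed.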
Some authors argue that there has been very little success in integrating RS into GIS. Young and
Green (1987) say generally that this is because of differences between the potential and the
operational realization of this potential, and more specifically Wilkinson and Fisher (1987) note that
too much RS data is available at a resolution which is not reliable for realistic GIS. Robinson Barker
(1988) puts the lack of success down to a cooling off from an initial period of great interest in the
mid-1970s and to government indecision and inaction. There was also “too much technology and
data” and very sophisticated techniques were needed so that even now low cost interactive RS data
does not exist. Finally, RS data is in gridded (raster) format whilst the majority of GIS work in vector
format. It is important to bring attention to these limitations, not only to warn the potential user, but
also to show that there is still a huge amount of research necessary to ensure reliable integration,
and it could well be that the future of RS depends upon its ability to integrate successfully with GIS.
A bird's eye view of the world, as depicted on a mapped surface, reveals that the surface consists of
either points, lines or 2D areas which are cartographically called polygons. Thus in Figure 6.7 (a)
roads would be lines, houses are usually points and gardens or fields are polygons. All information
captured by any method shown in section 6.4.1.1 must be capable of being displayed, and therefore
must be appropriately encoded to show any of these three forms. There are two basic organizational
modes which the computer may work in to display spatial forms, i.e. vector or raster mode:
A. Vector Mode. Here a single point can represent any
mapped object such as a telephone box, a building or a settlement, i.e. depending on the
scale of the map. A series of points can be defined and joined to show a line - this might
represent a field boundary, road, river, etc. Lines too can be defined and joined so as to
enclose an area (a polygon) which might represent any 2D feature such as a field or a lake.
Digitizing in the vector mode can be extremely accurate, e.g. in representing non-straight
lines, since many points can be recorded
around a curve. The vector mode is usually employed where it is necessary to integrate
manual and computer graphics techniques and where annotations are frequently required.
Because vector modes use quite complex data structures, the technology is expensive, as is
the associated software.
B. Raster Mode. Here the whole mapped surface is composed of a grid of cells which form a
matrix of rows and columns. The size of each cell determines the resolution (or detail) of the
mapped surface. Very small cells are referred to as pixels (as in RS imaging), and each cell or
pixel is a data element showing, by digital encoding or by colour coding of the final map, the
value of the attribute being mapped.
Raster graphics are usually used where it is necessary to integrate topographical and
thematic map data, either together or with RS data. The main problem of this mode is that
the use of cells means that recognizable structures can be lost and there can be a serious loss
of information. However, each of the two modes will have several advantages over the other
(Table 6.2) and this means that they are best seen as complementary rather than
competitive. Though in some ways the issue of structure mode is critical, because once
established it is difficult to change, GIS are increasingly able to handle data in both vector
and raster modes.
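The loss of structure that raster cells introduce can be demonstrated with a small, self-contained sketch (the triangular "field" and the cell sizes are invented for illustration): the polygon is rasterized by testing each cell centre, and the coarser the cells, the worse the estimate of the field's area.

```python
# A small illustration of the vector/raster trade-off: a vector polygon
# (a triangle, here) is rasterized by testing each cell centre; coarse
# cells distort the shape and its measured area.
def inside_field(px, py):
    # a triangular field with vertices (0,0), (10,0), (0,10); true area = 50
    return px >= 0 and py >= 0 and px + py <= 10

def raster_area(cell_size):
    """Area of the field as seen by a raster of square cells."""
    steps = int(10 / cell_size)
    count = 0
    for i in range(steps):
        for j in range(steps):
            cx = (i + 0.5) * cell_size          # cell centre
            cy = (j + 0.5) * cell_size
            if inside_field(cx, cy):
                count += 1
    return count * cell_size ** 2

for cell in (2.0, 0.1):
    print(f"cell size {cell}: raster area {raster_area(cell):.2f} (true area 50)")
```

With large cells the diagonal boundary becomes a staircase and whole boundary cells are counted in or out; as the cells shrink towards pixels, the raster area converges on the true vector area, at the cost of far more data.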
Remote sensing provides a vast and growing volume of
information about the Earth, and geographic information systems (GIS) are a
methodology for handling all of this geographic data. The marriage of the two
disciplines has allowed us to carry out large-scale analyses of the Earth's surface
and, at the same time, provide increasingly detailed knowledge on many planetary
variables and improve our understanding of its functioning. These analyses are
central to the study of global change.
Successful application demands attention to data
quality, and a capacity to deal with any errors that arise. It is in this particular area,
within the research line of GIS and remote sensing methods and applications, that
CREAF has the highest degree of expertise.
Spatial technologies are these days being used for several purposes which include mapping the spread of
diseases, discovery of natural resources, monitoring of natural disasters, monitoring of soil and vegetation
conditions etc. All this has helped a great deal towards enhancing and accomplishing the Millennium
Development Goals.
In the past two or three decades our capacity to survey and map the global environment has seen a
“makeover” through the use of Geographic Information Systems (GIS), Remote Sensing (RS) and Global
Positioning System (GPS). GIS enables the storage, management and analysis of large
quantities of spatially distributed data associated with their respective geographic features, while
Remote Sensing gathers information about the surface of the earth from a distant platform,
usually a satellite or airborne sensor. The two merge when the remotely sensed data used for mapping
and spatial analysis is collected as reflected electromagnetic radiation, which is processed into a digital
image that can be overlaid with other spatial GIS data of the same geographic site. With their continuous
technological development and improvement, Remote Sensing information is increasingly being utilised
in undertaking socio-economic development and technological uplift of the country, in federal
ministries and provincial departments, public sector organisations, international agencies and private
sectors.
GIS and Remote Sensing, either individually or in combination, span a wide range of applications of varying
degrees of complexity. More complex applications take advantage of the analytical capabilities of GIS and
RS software. These complex applications might include classification of vegetation for predicting crop
yield or environmental impacts and modelling of surface water drainage patterns, some of which are
already being used in Africa. Such software is of great use in geological and mineral exploration, hazard
assessment, oceanography, agriculture and forestry, land degradation and environmental monitoring
around the world. Each sensor in Remote Sensing devices was designed with a specific purpose. The
design for optical sensors focuses on the spectral bands to be collected, while in radar imaging the
incidence angle and microwave band used play an important role in defining which applications the
sensor is best suited for.
Examples of areas in which satellite remote sensing technology has been broadly applied in Pakistan include:
Agriculture
Environmental monitoring
Mineral exploration
Telecommunication
Some of the above projects, like crop monitoring in agriculture, have been undertaken by the Space and Upper
Atmosphere Research Commission (SUPARCO) in the recent past. It is a good example for African
countries to follow.
In recent years, more than eight African countries have embraced the idea of spatial technology,
and even more are joining due to its numerous benefits for today's ever-growing population in
Africa. Examples of African countries that have already implemented GIS and RS are given
below.
Remote Sensing and GIS in Different African Countries
Generally in Africa, GIS and RS have been used in different fields. For weather monitoring, a software
package is used to generate daily weather data for Latin America and Africa for use with the Decision
Support System for Agrotechnology Transfer (DSSAT) crop model. On the health care front, which is an
important field today, Africa and other continents like Asia and America conduct vector arthropod
surveillance in order to control Anopheles mosquitoes. Communications through television and radio
systems also depend on RS and GIS to communicate information from one part of the world to
another. RS and GIS also boost communication through mobile devices like telephones. Here are some of
the applications of GIS and Remote Sensing in some of the African countries.
What Is Bioremediation?
Bioremediation is a branch of biotechnology that uses living organisms, such as microbes and bacteria, to remove contaminants, pollutants,
and toxins from soil, water, and other environments. Bioremediation may be used to
clean up problems such as oil spills or contaminated groundwater. It works by stimulating the growth of microbes that use
contaminants like oil, solvents, and pesticides for sources of food and energy. These
microbes convert contaminants into small amounts of water, as well as harmless
gases like carbon dioxide. Effective bioremediation requires the right combination of temperature, nutrients, and
foods. The absence of these elements may prolong the cleanup of contaminants.
Bioremediation can either be done "in situ", which is at the site of the contamination
itself, or "ex situ," which is a location away from the site. Ex situ bioremediation may
be necessary if the climate is too cold to sustain microbe activity, or if the soil is too
compact for nutrients to distribute evenly. It may require
excavating and cleaning the soil above ground, which may add significant costs to
the process.
The bioremediation process may take anywhere from several months to several years
to complete, depending on variables such as the size of the contaminated area, the
concentration of contaminants, temperature, and soil density. Because bioremediation
often takes place underground, where amendments and microbes can be pumped in
to clean up contaminants, it tends to disrupt the site far less than other cleanup
methodologies.
The bioremediation process creates relatively few harmful byproducts (mainly due to
the fact that contaminants and pollutants are converted into water and harmless
gases like carbon dioxide). Finally, bioremediation is cheaper than most cleanup
methods because it does not require substantial equipment or labor.
Example of Bioremediation
In 1989, the Exxon Valdez oil tanker ran aground off the coast of Alaska; the tanker
ended up spilling approximately 11 million gallons of oil. Around this same time,
bioremediation was gaining traction as a viable option for oil cleanups. The EPA and
Exxon Mobil Corporation (XOM) both began testing different compounds. Initial
tests showed promise, and fertilizers that stimulate oil-degrading microbes were ultimately used, in
more than 2,000 applications to the affected areas. By mid-1992, the cleanup was
considered complete, and the fertilizer had degraded nearly all the oil compounds.
Some contaminants, such as high molecular weight PAHs, are not readily amenable
to bioremediation alone; it must be used with other physical and chemical treatment methods for complete
remediation, and hence there is a need for more research in this area. Efforts need to be
made to combine bioremediation with other relevant techniques that can sustain effective
treatment and be adapted at larger scales for sustainable waste recycling and polluted soil restoration.
BIOREMEDIATION: THE POLLUTION SOLUTION?
Over the past few months, Rebecca Philp, a PhD student from the Pirbright Institute, has been
working at the Microbiology Society as our Public Affairs intern. While researching for a policy
briefing, Rebecca learnt a lot about bioremediation. She explains a little about it in this blog.
The global population continues to rise at an astonishing rate, with estimates suggesting it will
be in excess of 9 billion in 2050. The intensive agricultural and industrial systems needed to
support such a large number of people will inevitably cause an accumulation of soil, water and
air pollution. Estimates have attributed pollution to 62 million deaths each year, 40% of the
global total, while the World Health Organization (WHO) have reported that around 7 million
people are killed each year from the air they breathe. Water systems fare little better, with an
estimated 70% of industrial waste dumped into surrounding water courses. The world generates
1.3 billion tonnes of rubbish every year, the majority of which is stored in landfill sites or dumped
untreated. Micro-organisms can break down many organic
compounds and absorb inorganic substances. Currently, microbes are used to clean up pollution
by biologically degrading pollutants into non-toxic substances. This can involve either aerobic or anaerobic
micro-organisms that often use this breakdown as an energy source. There are three categories of
bioremediation techniques: in situ land treatment for soil and groundwater; biofiltration of the
air; and bioreactors, used predominantly for water treatment.
Soil
Industrial soils can be polluted by a variety of sources, such as chemical spillages, or the
accumulation of heavy metals from industrial emissions. Agricultural soils can become
contaminated due to pesticide use or via the heavy metals contained within agricultural
products.
A visible example of where bioremediation has been used to good effect can be found in
London’s Olympic Park. The grounds that held the 2012 Olympics had previously been heavily
polluted, after hundreds of years of industrial activity. Bioremediation cleaned 1.7 million cubic
metres of heavily polluted soil, turning this brownfield site into one containing sports facilities
and parkland. Groundwater polluted with ammonia was
cleaned using a new bioremediation technique that saw archaeal microbes breaking down the
ammonia into harmless nitrogen gas. The converted park marked the London 2012 Olympic and
Paralympic Games as the “greenest” and most sustainable games ever held, only possible
with bioremediation techniques.
While some soil cleaning techniques require the introduction of new microbes, ‘biostimulation’
encourages the growth of micro-organisms
already present. Natural biodegradation processes can be limited by many factors, including
nutrient availability. Biostimulation can
overcome these limitations, providing microbes with the resources they need, which increases
the rate of degradation.
Cleaning up oil-polluted soil is an example of where stimulating microbial growth can be used to
good effect. Research has shown that poultry droppings can be used as a biostimulating agent,
providing nitrogen and phosphorous to the system, which stimulates the natural growth rate of
oil-degrading bacteria. Systems like these may prove cheaper and more environmentally friendly
than conventional remediation methods.
Air
Polluted air is a by-product of many industrial
processes. While chemical scrubbing has been used to clean gases emitted from chimneys, the
newer technique of ‘biofiltration’ is helping to clean industrial gases. This method involves
passing polluted air over a replaceable culture medium containing micro-organisms that degrade
contaminants into products such as carbon dioxide, water or salts. Biofiltration is the only
one of these approaches that treats pollution in the gas phase.
Water
In the UK, access to clean, potable water and modern sanitation is something we take for
granted. However, there are billions of people on Earth for which this is a luxury. The WHO
estimate that each year 842,000 people die as a result of diarrhoeal diseases, many of which
could be prevented if they had access to clean water and proper sanitation. Around 2.6 billion
people lack any sanitation, with over 200 million tons of human waste untreated every year.
Sewage treatment plants are the largest and most important bioremediation enterprise in the
world. In the UK, 11 billion litres of wastewater are collected and treated every day. Major
components of raw sewage are suspended solids, organic matter, nitrogen and phosphorus.
Wastewater entering a treatment plant is aerated to provide oxygen to bacteria that degrade
organic material and pollutants. Microbes consume the organic contaminants and bind the less
soluble fractions, which can then be filtered off. Toxic ammonia is reduced to nitrogen gas and
released harmlessly into the air.
The Future
Bioremediation is not a new technique, but as our knowledge of the underlying microbial
reactions grows, our ability to use them to our advantage increases. Frequently, bioremediation
requires fewer resources and less energy than conventional technology, and doesn’t accumulate
hazardous by-products as waste. Bioremediation has technical and cost advantages, although it
can often take more time to carry out than traditional methods.
Bioremediation can be tailored to the needs of the polluted site in question and the specific
microbes needed to break down the pollutant are encouraged by selecting the limiting factor
needed to promote their growth. This tailoring may be further improved by using synthetic
biology tools to pre-adapt microbes to the pollution in the environment to which they are to be
added.
Pollution is a threat to our health and damages the environment, affecting wildlife and the
sustainability of our planet. Damage to our soils affects our ability to grow food, summarised
in our policy briefing on Food Security. Bioremediation can help to reduce and remove the
pollution we produce, to provide clean water, air and healthy soils for future generations.
Principles of Bioremediation
Microorganisms are well suited to the task of contaminant
destruction because they possess enzymes that allow them to use environmental
contaminants as food and because they are so small that they are able to contact
contaminants easily. In situ bioremediation extends the
purpose that microorganisms have served in nature for billions of years: the
breakdown of complex human, animal, and plant wastes so that life can continue
from one generation to the next. Without the activity of microorganisms, the earth
would literally be buried in wastes, and the nutrients necessary for the continuation
of life would remain locked up in them.
Whether microorganisms succeed in destroying man-made contaminants in
the subsurface depends on three factors: the type of organisms, the type of
contaminant, and the geological and chemical conditions at the contaminated site.
This chapter explains how these three factors influence the outcome of a subsurface
bioremediation project. It begins by reviewing
what types of organisms play a role in in situ bioremediation. Then, it evaluates which
contaminants these organisms can destroy when given the nutrients and other
chemicals that will enable them to destroy the contaminants. The bioremediation
systems in use today rely on microbes native to the contaminated site,
encouraging them to work by supplying them with the optimum levels of nutrients
and other chemicals essential for their metabolism. Thus, today's bioremediation
systems are limited by the capabilities of the native microbes. However, researchers
are investigating ways to extend those capabilities.
Human activities produce a tremendous variety of byproducts. Agriculture, mining, manufacturing and
other industrial processes leave organic and inorganic residual compounds behind. Some are inert and
harmless, but many are toxic and highly destructive to the environment, particularly the soil and water.
Bioremediation technology is invaluable for reclaiming polluted soil and water. In the simplest terms,
bioremediation is a waste management process using live organisms to neutralize or remove harmful contaminants.
Bioremediation is an environmental science that amplifies natural biological actions to remedy or
remediate polluted groundwater and contaminated soil. Rather than using expensive environmental
remediation equipment to remove untreated toxic materials and dispose of them elsewhere,
bioremediation treats the contamination in place using natural biological action.
Microbes are tiny organisms naturally found in the environment. These bacterial microorganisms are
nature’s helpers in decomposing, recycling and rectifying imbalanced chemical conditions in soil and
water. For countless years, nature has been correcting itself, while humans continue to display a
profound ability to make a mess and ignore their damage. But now, science has found an effective way to
remediate bad soil and groundwater conditions by applying natural organic substances and using their
inherent properties.
Bioremediation is a
technique using naturally occurring organisms to attack hazardous materials and change them into less
toxic substances. Often, highly contaminated sites can become toxin-free using proper bioremediation
techniques.
How Bioremediation Works
The bioremediation process is a biological process that stimulates helpful microbes to use harmful
contaminants as their source of food and energy. Certain microorganisms eat toxic chemicals and
pathogens, digesting them and eliminating them by changing their composition into harmless gases like
ethane and carbon dioxide. Some contaminated soil and water conditions already have the right
counter-microbes. Here, human intervention can speed up the natural remediation by boosting microbial action.
In other cases where the right microbes are low in numbers or entirely absent, bioremediation is
introduced by adding amendments — microbial actors like fungi and aerobic bacteria that are mixed into
the soil or water. This simple process is called bioaugmentation, and it’s highly effective to correct
conditions quickly, as long as the right environmental conditions are present. Critical conditions for
bioremediation include:
When all these conditions are in the right proportions, microbes grow at enormous rates. If the optimum
conditions are off-balance, microbial action is too slow or can die off altogether, and the contaminants
remain until nature eventually restores a balance. Re-balancing can take a long time in highly polluted
conditions. But proper bioremediation processes rectify most situations in a relatively short time.
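The difference between balanced and off-balance conditions can be caricatured with a simple first-order decay model (the rate constants and concentrations below are invented for illustration, not measured values): concentration falls as C(t) = C0·exp(−kt), with the rate k high when conditions suit the microbes and low when they are off-balance.

```python
import math

# An illustrative first-order decay model of contaminant breakdown (the
# rate constants are invented, not field measurements): concentration
# follows C(t) = C0 * exp(-k*t).
def years_to_clean(c0, target, k):
    """Years for concentration c0 to fall to target at decay rate k per year."""
    return math.log(c0 / target) / k

c0, target = 1000.0, 10.0   # e.g. mg/kg of a contaminant, down to a safe level
print(f"balanced conditions (k=2.0/yr): {years_to_clean(c0, target, 2.0):.1f} years")
print(f"off-balance conditions (k=0.05/yr): {years_to_clean(c0, target, 0.05):.1f} years")
```

The same hundredfold cleanup takes a couple of years in one case and most of a century in the other, which is exactly the gap that biostimulation and bioaugmentation aim to close.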
Oxygen has a strong effect on bioremediation. Some microbes thrive on air, while others are hindered
when exposed to excessive oxygen. This effect depends entirely on what particular toxin is being
remediated and what type of microbe is being encouraged. There are two groups or processes of oxygen
use in bioremediation:
Aerobic is the presence of oxygen needed for microbial development. In contaminated soil
conditions, regularly tilling the soil is one aerobic enhancement method. This technique is also
a main activity in composting to oxygenate helpful fungi. Aerobic action is also introduced
mechanically through passive bioventing or by forcing compressed air into soil or under the water.
Anaerobic is the absence or reduction of oxygen in water or soil. This bioremediation form is
used where the target microbes are hindered by excessive oxygen.
Bioremediation Classes
There are two main classifications of bioremediation. This refers to where remediation is carried out, not
how it is done:
In situ, where all bioremediation work is done right at the contamination site. This can be polluted
soil that’s treated without unnecessary and expensive removal, or it can be contaminated
groundwater that’s remediated at its point of origin. In situ is the preferred bioremediation
method, as it requires far less physical work and eliminates spreading contaminants through
is less desirable. It involves the big job of excavating polluted soil and trucking it offsite. In the
case of contaminated water, ex situ is rare, except for pumping groundwater to the surface
treatment site, three technique classes can be applied. One is landfarming, where soil is spread
and biologically decontaminated. Another is composting, which is an age-old process. The third
class involves biopiles: a hybrid of stacking material in silos, then composting as a biological
treatment.
Bioremediation technique classes are the prescribed physical activities or strategies used in microbial
remedies. The overall process starts with isolating contaminated site conditions and characterizing what
resident microbes exist. Scientists observe how these microbes already interact with the pollutants, then
conduct lab testing to map out colonization requirements. Catabolic activity is studied in the lab, from
which a field plan is developed. Once that's implemented, the bioremediation process is monitored and adjusted until the site is restored.
Bioremediation Strategies
Bioremediation strategies plan how the field work is done. There are different technique applications that
depend on the site’s saturation degree and what contaminants need removal. They also depend on site
conditions such as soil composition, compaction and groundwater tables, as well as runoff characteristics
and whether in situ work is possible, or if the contaminated material requires ex situ removal.
Thanks to today’s advanced technology, most polluted properties can be treated onsite. There are three
main bioremediation strategies, each with individually designed equipment. The three applications are:
Bioventing is the most common approach. This process involves drilling small-diameter wells into
the soil that allow air ingress and passive ventilation, through which ground gases produced by
microbial action are released. This approach can be used for both soil and groundwater
problems, as it lets oxygen and nutrient rates be controlled by adjusting the vent rate.
Biosparging involves high-pressure air injection forced into the soil or under the groundwater
table. This process increases oxygen concentration and enhances biological activity. Biosparging is
highly effective and affordable, compared to excavating and tilling contaminated soil or
pumping groundwater.
Bioaugmentation introduces cultured microbial species to the site. Augmentation works in conjunction with both bioventing and biosparging
applications, but has limitations: non-indigenous microbes are not usually compatible with native site conditions.
There are other bioremediation strategies for contaminated soil and groundwater sites. Oil and
petroleum waste is a big problem in many spots. So is off-gassing of methane produced by biological
action. Most regulatory bodies are strict about adding other pollutants into the environment, which is a
major design consideration.
Oil is lighter than water and notoriously floats on the surface, creating a hazard for runoff and secondary
pollution; oil/water separators capture this floating oil before it leaves the site. Methane gas is smelly and highly offensive when released in large quantities. This frequently
happens when contaminated soil is stirred, but it also occurs passively through bioventing and biosparging.
Air strippers work to pull air from soil and clean it before releasing it back into the atmosphere.
This remediation assistance prevents polluted air from escaping the soil and spreading where
it can't be contained.
Soil vapor extraction is a process where contaminated gases are collected from the soil and
dissipated through mechanical devices. This technique is often used alongside biosparging.
Like oil/water separators and air strippers, soil vapor extractors are specialized pieces of equipment.
Bioremediation Uses
Bioremediation has become the main choice for contaminated site recovery in America. It’s commonly
used around the world for all sorts of situations where previous human activity has left the location
damaged and unusable without remediation. As the country's population grows, there are fewer available
landfills to relocate polluted material. This makes bioremediation very attractive, and advancing equipment technology keeps making it more affordable.
Contaminants in polluted soil and water cover a broad range of organic and inorganic compounds. They
also cover bacteriological and even radioactive parameters. Some of the uses for bioremediation include:
Petroleum stations can have corroded underground tanks. Gasoline and diesel fuel
leach into the ground and remain long after the station's service life has expired.
Industrial sites can discharge contaminated effluent. Heavy metals like lead and chromium are tough to remediate, but many
other industrial pollutants commonly leach into the soil and groundwater and can be cleaned up through
bioremediation efforts.
Onsite sanitation systems contaminate soil and groundwater when septic tanks and disposal
fields fail. These sanitary system overflows are highly responsive to biological treatment.
Mine site tailings can be extremely toxic. Bioremediation efforts have proved very successful in restoring tailings sites.
Stormwater runoff also responds well to biological treatment. This includes petroleum discharges and even road salts.
Benefits of Bioremediation
The biggest benefit from using bioremediation processes is its contribution to the environment.
Bioremediation uses nature to fix nature. Properly applied by knowledgeable people using specialized
equipment designed for bioremediation, this is the safest and least invasive soil and groundwater cleanup
available.
Bioremediation works for organic pathogens, arsenic, fluoride, nitrate, volatile organic compounds,
metals and many other pollutants like ammonia and phosphates. It's also effective for cleaning up insecticides and herbicides.
There are certain specialized pieces of bioremediation equipment available. Some of it takes
knowledgeable operation by trained and skilled people, but much bioremediation equipment is
relatively easy to use. Training and maintenance service is easily available from the right suppliers.
This specialized equipment is also relatively inexpensive when compared to heavy machinery
and trucks required for excavating and hauling off polluted soil, and it avoids the
complicated pumps and reservoirs otherwise needed for decontaminating groundwater. There are
pre-piped turnkey operations that are factory tested and ready to use in the field.
They’re available with air sparging, biosparging and soil vapor extraction systems.
These systems also handle air stripping and oil/water separation. Complete custom-built systems
set the standard for the entire industry. These systems
include standard air sparging and soil vapor extraction. There are dual-phase
extraction systems with thermal catalytic oxidizers, along with liquid- and vapor-phase treatment options.
Dual-phase recovery systems fill the gap. They do two jobs in one by using a vacuum
blower and a moisture separator. Gauges, NEMA IV control panels and lever
valves come standard, along with vessels, oxidizers and manifolds with flow indicators. These can be conveniently
trailer-mounted.
Soil vapor extraction systems include a blower and vacuum pump. All components
are fully integrated on marine-grade aluminum skids. They can also be trailer-mounted.
Air sparging systems have both a compressor and blower. Heat exchangers are
available if required. All controls, gauges and indicators can be custom-ordered, and the units are
low-maintenance.
Enhanced oil/water separators are used above ground for surface spill cleanup, with capacities of up to 500 GPM.
Noise pollution is unwanted or excessive sound that can have harmful effects on human health, wildlife, and environmental quality. It is commonly generated inside many industrial
facilities and some other workplaces, but it also comes from highway, railway, and air traffic and from construction activities. Sound is measured in terms of sound pressure level (SPL) on the decibel (dB) scale, with levels
between 120 dB and 140 dB causing pain (the pain threshold). The ambient
SPL in a library is about 35 dB, while that inside a moving bus or subway
train is considerably higher; perceived loudness also depends on the listener's distance from the
source.
Because the decibel scale is logarithmic, an increase of 10 dB represents a tenfold increase in sound intensity, 20 dB a hundredfold increase,
and so on. When sound intensity is doubled, on the other hand, the SPL
increases by only 3 dB. For example, if a construction drill causes a noise
level of about 90 dB, then two identical drills operating side by side will
cause a noise level of 93 dB. On the other hand, when two sounds that
differ by more than about 15 dB are combined, the weaker sound is
masked (or drowned out) by the louder sound. For example, if an 80-dB
source operates next to a 95-dB source, the
combined SPL of those two sources will be measured as 95 dB; the less
intense source will not be noticeable.
High-frequency sounds are perceived as being
louder than low-frequency sounds of the same amplitude. For this reason,
weighting networks in sound-level meters serve to match meter readings with the sensitivity of the
human ear and the relative loudness of various sounds. The so-called A-
weighted scale (dBA) is the one most widely used for environmental noise. A C-weighted scale (dBC) is sometimes
used for impact noise levels, such as gunfire, and tends to be more
accurate than dBA for the perceived loudness of sounds with low
frequency components.
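The decibel arithmetic described above (doubling a source adds about 3 dB, and a much quieter source is masked by a louder one) follows from adding sound intensities rather than decibel values. A minimal sketch in Python; the helper name `combined_spl` is illustrative:

```python
import math

def combined_spl(*levels_db):
    """Combine several sound pressure levels (dB) by summing their
    underlying intensities, then converting back to the decibel scale."""
    total_intensity = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_intensity)

# Two identical 90-dB drills side by side: intensity doubles, SPL rises ~3 dB.
print(round(combined_spl(90, 90), 1))   # 93.0
# An 80-dB source next to a 95-dB source: the weaker sound is masked.
print(round(combined_spl(80, 95), 1))   # 95.1
```

The second result shows why the combined level of an 80-dB and a 95-dB source reads as essentially 95 dB: the quieter source contributes almost nothing on the intensity scale.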
Noise levels generally vary with time, so noise measurement data are
reported as time-averaged values, and there are several ways to do this. For example, the results of a set of repeated
sound-level measurements may be reported as L90 = 75 dBA, meaning that
the levels were equal to or higher than 75 dBA for 90 percent of the time. Another measure, the day-night average sound
level (DNL or Ldn), accounts for the fact that people are more sensitive to
noise during the night, so a 10-dBA penalty is added to SPL values that are
measured between 10 PM and 7 AM. DNL measurements are very useful for
describing overall community exposure to noise, such as aircraft noise around airports.
Noise is more than a mere nuisance. At certain levels and durations of
exposure, it can cause physical damage to the eardrum and the sensitive
hair cells of the inner ear and result in temporary or permanent hearing
loss, known as noise-induced hearing loss. Hearing loss does not usually
occur at SPLs below 80 dBA (eight-hour exposure levels are best kept
below 85 dBA), but most people repeatedly exposed to more than 105
dBA will have some degree of permanent hearing loss.
Children living in areas with high levels of noise pollution may suffer
from stress and other problems. Noise pollution also harms wildlife, which uses sound to
mate, communicate, navigate, find food, or avoid predators, and thus can
threaten animals' survival. Large portions of the world's oceans are polluted with chaotic sounds from ships, seismic
tests, and oil drills. Some of the loudest sounds in
the sea are from naval sonar devices, whose noise can travel hundreds of
miles through the water and is associated with mass strandings of whales
and dolphins.
Noise is regulated in the
United States under the Occupational Safety and Health Act of 1970 and
the Noise Control Act of 1972. Under these acts, the Occupational Safety
and Health Administration limits workplace noise exposure. When exposure occurs at different levels and
intervals during the day, the total exposure or dose (D) of noise is
obtained from the relation D = (C1/T1 + C2/T2 + ... + Cn/Tn) x 100, where C is the actual time of exposure and T is the allowable time of exposure at a given
level. Using this formula, the maximum allowable daily noise dose will be
100 percent. For indoor noise, noise criterion (NC) curves specify the maximum
allowable level of noise in octave bands over the entire audio spectrum.
The complete set of 11 curves specifies noise criteria for a broad range of
listening environments.
Newer preferred noise criterion (PNC) curves set
goals for noise levels for a variety of different purposes. Part of the
acoustical design of a room is choosing the appropriate PNC
curve; in the event that the sound level exceeds PNC limits, sound-
absorbing treatments can be applied. Alternatively, a steady low-level background sound, such
as static or rushing air, placed in the room, can mask the sounds of
intermittent conversation and machinery that would otherwise be distracting
to the ears of people working nearby. This type of device is often used in
open-plan offices. In very noisy settings, workers can wear ear protectors,
which are held over the ears in the same manner as an earmuff. By using
standard earmuff protectors, a reduction in
sound level can be attained ranging typically from about 10 dB at 100 Hz
to more than 30 dB at higher frequencies.
Outdoor noise ordinances generally require that the
outside noise level fall within acceptable limits. These limits are generally
set separately for daytime hours, during evening hours, and at night during sleeping hours. Because
of refraction in the atmosphere owing to the nighttime temperature
inversion, sound can carry farther at night. Highway noise barriers work by blocking the direct path otherwise
necessary for the sound to reach the observer. Another requirement for
this type of barrier is that it must also limit the amount of transmitted
sound passing through it.
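The daily noise dose D = (C1/T1 + ... + Cn/Tn) x 100 discussed in the regulation section above can be sketched in a few lines. The allowable-time rule used here (an 8-hour limit at 90 dBA with a 5-dB exchange rate) is the standard OSHA criterion, stated as an assumption since the text does not spell it out; the function names are illustrative:

```python
def allowable_hours(level_dba):
    """OSHA-style permissible exposure time (hours) at a given sound level,
    assuming a 90-dBA, 8-hour criterion with a 5-dB exchange rate."""
    return 8 / 2 ** ((level_dba - 90) / 5)

def noise_dose(exposures):
    """Total daily noise dose D (percent) from (level_dBA, hours) pairs:
    D = (C1/T1 + ... + Cn/Tn) x 100. A dose over 100% exceeds the limit."""
    return 100 * sum(hours / allowable_hours(level) for level, hours in exposures)

# A worker spends 4 h at 90 dBA and 2 h at 95 dBA:
# D = (4/8 + 2/4) x 100 = 100 percent, exactly the maximum allowable dose.
print(noise_dose([(90, 4), (95, 2)]))   # 100.0
```

Note how the 5-dB exchange rate halves the allowable time for each 5-dBA increase: 8 hours at 90 dBA, 4 hours at 95 dBA, 2 hours at 100 dBA, and so on.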
Drivers honking their horns, groups of workers drilling the road surface, aircraft flying over us
in the sky... noise, noise and more noise. Cities have become the epicentre of a type of
pollution, acoustic pollution, which, despite being invisible (and despite the coronavirus crisis
having reduced it so much that we almost came to miss it), is severely damaging to human beings.
Not only does it hurt humans, it is bad for animals, too. According to the National Park
Service (NPS) in the United States, noise pollution has an enormous environmental impact
and does serious damage to wildlife. Experts say noise pollution can interfere with breeding
cycles and rearing, and it has even hastened the extinction of some species.
Not all sound is considered noise pollution. The World Health Organization (WHO) defines
noise above 65 decibels (dB) as noise pollution. To be precise, noise becomes harmful when
it exceeds 75 dB and is painful above 120 dB. Consequently, the WHO
recommends noise levels be kept below 65 dB during the day and indicates that restful
sleep requires nighttime ambient noise below 30 dB.
There are many sources of noise pollution, but here are some of the main ones:
Traffic noise
Traffic noise accounts for most polluting noise in cities. For example, a car horn produces 90
dB and a bus around 100 dB.
Air traffic
There are fewer aircraft flying over cities than there are cars on the roads, but the impact is
greater: a single aircraft produces 130 dB.
Construction sites
Building and car park construction and road and pavement resurfacing works are very noisy.
Nightlife
Bars, restaurants and terraces that spill outside when the weather is good can produce more
than 100 dB. This includes noise from pubs and clubs.
Animals
Noise made by animals can go unnoticed, but a howling or barking dog, for example, can
produce around 60-80 dB.
As well as damaging our hearing by causing tinnitus or deafness, constant loud noise
can damage human health in many ways, particularly in the very young and the very old:
Physical
Respiratory agitation, racing pulse, high blood pressure and headaches have all been linked to noise.
Psychological
Noise can cause attacks of stress, fatigue, depression, anxiety and hysteria in both humans
and animals.
Sleep and behavioural disorders
Noise above 45 dB stops you from falling asleep or sleeping properly. Remember that
according to the World Health Organization it should be no more than 30 dB. Loud noise can
also have latent effects on our behaviour, causing irritability and aggressiveness.
Memory and concentration
Noise may affect people's ability to focus, which can lead to low performance over time. It is
also bad for memory, making it hard to study.
Interestingly, our ears need more than 16 hours' rest to make up for two hours of exposure
to 100 dB.
International bodies like the WHO agree that awareness of noise pollution is essential to
beat this invisible enemy. For example: avoid very noisy leisure activities, opt for
alternative means of transport such as bicycles or electric vehicles over taking the car, do
your housework at recommended times, insulate homes with noise-absorbing materials, etc.
Governments can also manage noise by applying
preventive and corrective measures (mandatory separation between residential zones and
sources of noise like airports, fines for exceeding noise limits, etc.), installing noise
insulation in new buildings, creating pedestrian areas where traffic is only allowed to enter
to offload goods at certain times, and replacing traditional asphalt with more efficient options
that reduce traffic noise.
Noise Pollution
Noise pollution can cause health problems for people and wildlife, both on land and in the
sea. From traffic noise to rock concerts, loud or inescapable sounds can cause hearing loss,
stress, and high blood pressure. Noise from ships and human activities in the ocean is
harmful to whales and dolphins that depend on echolocation to survive.
Noise pollution is an invisible danger. It cannot be seen, but it is present nonetheless, both
on land and under the sea. Noise pollution is considered to be any unwanted or disturbing
sound that affects the health and well-being of humans and other organisms.
Sound is measured in decibels. There are many sounds in the environment, from rustling
leaves (20 to 30 decibels) to a thunderclap (120 decibels) to the wail of a siren (120 to 140
decibels). Sounds that reach 85 decibels or higher can harm a person’s ears. Sound sources
that exceed this threshold include familiar things, such as power lawn mowers (90 decibels),
subway trains (90 to 115 decibels), and loud rock concerts (110 to 120 decibels).
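Because the decibel scale is logarithmic, the intensity gap between these everyday sounds is larger than the numbers suggest. A small sketch (the function name is illustrative) comparing a 120-dB rock concert with the 85-dB level at which hearing damage can begin:

```python
def intensity_ratio(level1_db, level2_db):
    """How many times more intense level2 is than level1; each 10 dB
    on the decibel scale represents a tenfold increase in intensity."""
    return 10 ** ((level2_db - level1_db) / 10)

# A 120-dB rock concert vs. the 85-dB threshold where ear damage can begin:
print(round(intensity_ratio(85, 120)))   # 3162
# A 90-dB lawn mower vs. an 80-dB source: exactly ten times more intense.
print(intensity_ratio(80, 90))           # 10.0
```

So a loud concert delivers more than three thousand times the sound intensity of the level at which hearing damage starts, even though the decibel figures differ by only 35.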
Noise pollution impacts millions of people on a daily basis. The most common health
problem it causes is Noise Induced Hearing Loss (NIHL). Exposure to loud noise can also
cause high blood pressure, heart disease, sleep disturbances, and stress. These health
problems can affect all age groups, especially children. Many children who live near noisy
airports or streets have been found to suffer from stress and other problems, such as
impairments in memory, attention, and reading skill.
Noise pollution also impacts the health and well-being of wildlife. Studies have shown that
loud noises cause caterpillars’ hearts to beat faster and bluebirds to have fewer chicks.
Animals use sound for a variety of reasons, including to navigate, find food, attract mates,
and avoid predators. Noise pollution makes it difficult for them to accomplish these tasks,
which affects their ability to survive.
Increasing noise is not only affecting animals on land, it is also a growing problem for those
that live in the ocean. Ships, oil drills, sonar devices, and seismic tests have made the once
tranquil marine environment loud and chaotic. Whales and dolphins are particularly
affected. These marine mammals rely on echolocation to communicate,
navigate, feed, and find mates, and excess noise interferes with their ability to effectively
echolocate.
Some of the loudest underwater noise comes from naval sonar devices. Sonar, like
echolocation, works by sending pulses of sound down into the depths of the ocean to
bounce off an object and return an echo to the ship, indicating the object's location.
Sonar sounds can be as loud as 235 decibels and travel hundreds of miles under water,
interfering with whales’ ability to use echolocation. Research has shown that sonar can
cause mass strandings of whales on beaches and alter the feeding behavior of endangered
blue whales (Balaenoptera musculus). Environmental groups are urging the U.S. Navy to stop
or reduce its use of sonar for military training.
Seismic surveys also produce loud blasts of sound within the ocean. Ships looking for deep-
sea oil or gas deposits tow devices called air guns and shoot pulses of sound down to the
ocean floor. The sound blasts can damage the ears of marine animals and cause serious
injury. Scientists believe this noise may also be contributing to the altered behavior of
whales.
Among those researching the effects of noise pollution is Michel Andre, a bioacoustics
researcher in Spain who is recording ocean sounds using instruments called hydrophones.
His project, LIDO (Listening to the Deep Ocean Environment), collects data at 22 different
locations. Back in the lab, computers identify the sounds of human activities as well as 26
species of whales and dolphins. The analysis aims to determine the effects that underwater
noise is having on these animals. Andre hopes his project will find ways to protect marine
life from harmful noise.
Limnology is the study of inland waters - lakes (both freshwater and saline), reservoirs, rivers,
streams, wetlands, and groundwater - as ecological systems interacting with their drainage
basins and the atmosphere. The limnological discipline integrates the functional relationships of
growth, adaptation, nutrient cycles, and biological productivity with species composition, and
describes and evaluates how physical, chemical, and biological environments regulate these
relationships.
The word limnology is derived from the Greek limne - marsh, pond and Latin limnaea - thing
pertaining to a marsh. Stated simply, limnology is the study of the structural and functional
interrelationships of organisms of inland waters as affected by their dynamic physical, chemical, and biotic
environments.
Freshwater ecology is the study of the structure, function, and change of organisms in fresh
waters as affected by their dynamic physical, chemical, and biotic environments. Saline waters
(greater than about 3 g of salt per liter) are included when they occur inland.
Freshwater biology is the study of the biological characteristics and interactions of organisms of
fresh waters. This study is largely restricted to the organisms themselves, such as their biology,
life histories, populations, or communities. Limnology, more broadly, integrates the couplings of
inland aquatic ecosystems with the drainage basin, movements of water through the drainage
basin, and biogeochemical changes that occur en route, within standing (lentic) waters, and in
exchanges with the atmosphere. The lake ecosystem is intimately coupled with its drainage area
and atmosphere, and with its running (lotic) waters and ground waters that flow and seep into it.
Understanding these couplings is a primary objective of limnology because of the premier importance of fresh water for the well-being of humankind. The greater our understanding, the higher the probability of predicting
how these ecosystems respond to changing environmental variables.
Definition
Our inland waters are vital and important resources. They provide us with drinking water,
recreation, bird and wildlife viewing, fishing, land protection, and so much
more. Limnology is the study of inland waters and their many different aspects. The word
comes from the Greek limne, which means marsh or pond. But limnology is so much more
than that. Limnology covers all inland waters, which may be lakes, rivers, and streams, but
also reservoirs, groundwater, and wetlands. These are often freshwater systems, but
some, such as saline lakes, are salty.
Inland waters are diverse and fascinating places. Limnologists, or those who study
limnology, need to be familiar with many different aspects of inland waters and their
relationships with other water systems, including our atmosphere. For example, limnologists
may study:
Water flow
Pollution
Ecosystem structure
Light influences
Nutrient cycles
Sediments
Bacteria
Human influences
Ecosystems
Animal communities
Limnology incorporates many scientific disciplines into one, including physics, chemistry, and
biology. While the main thread of limnology is water, these water systems are
interconnected, host plant and animal life, and both influence and interact with weather
patterns.
Limnologists often create models to help predict how certain water systems will function
under given conditions. They may also interact with politicians to help guide policy, and they
may be utilized during times of crisis, such as after a pollution event or catastrophic storm.
We interact with inland waters on a daily basis through our drinking water, weather, and
other means, so despite the oceans making up a whopping 96.5% of the water on Earth,
inland waters are disproportionately important to us. Limnology also overlaps with
other sciences and studies. One major branch of limnology is freshwater ecology. This
branch specifically studies ecological systems and processes in freshwater environments, meaning
any waters with a salinity below about 3 ppt (parts per thousand). Limnologists in this branch study
things such as nutrient cycling, structure of the ecosystem, the physical and chemical
Another large branch of limnology is freshwater biology. Limnologists in this branch study
the organisms that live in fresh waters. This is different from freshwater ecology because freshwater biology focuses on the
organisms themselves rather than on whole ecosystems and processes.
The word limnology comes from two Greek words: 'limne', meaning lake or
submerged body of water, and 'logos', meaning knowledge. The
search for knowledge about lakes is therefore the main topic of limnology. From the origin
of this branch to the present day, various scientists have defined limnology in
different ways.
Below are some definitions:
Limnology is a branch of science that deals with the study of the biological,
chemical, and physical properties of all types of inland aquatic ecosystems (Brezonik, 1996; Wetzel, 2003).
According to Wetzel (2003), in the broadest sense, limnology is the acquisition of knowledge about the
structural and functional interrelationships of inland waters.
Limnology is the scientific study of the world's inland water bodies such as lakes,
artificial reservoirs, rivers, ponds, wetlands, saline lakes, and coastal bays and
wetlands.
According to F. A. Forel (1892), limnology is the oceanography of lakes.
Lind (1989) defined limnology as non-marine aquatic ecology.
According to Margalef (1983), limnology is the ecology of non-marine waters.
Another definition describes limnology as the scientific discussion of the processes and conversion of energy and matter in a lake.
According to Welch (1952), limnology is that branch of science which deals with the
biological productivity of inland waters and with all the causal influences which
determine it.
Inland waters vary in their physical, chemical, biological, and climatic
characteristics, and these influences determine the nature and extent of biological
production. Because inland water bodies differ so widely, the quality and quantity of
production differ as well.
Historically, the term limnology has been associated with lakes, and the term
rheology has been applied to the science of flowing waters. Currently, the term
rheology has been dropped from limnological usage; it now refers to work
done on the flow of matter, including materials like oil and pigment.
Inland seas, lakes, and rivers share some properties as liquid media. However, the sea is
far larger and older than inland waters. Inland water bodies are scattered and often isolated, so the
diversity and expansion of plants and animals become more limited. Seawater contains about 35 g of
salt per liter of water, the main ingredient of which is sodium chloride (NaCl).
Inland freshwater, on the other hand, can contain as little as 0.01 g of salt per liter of
water. In many cases, inland saltwater lakes have a higher salt concentration than
seawater. Such an ecosystem is of an unusual type and thus becomes the focus of
its own limnological study. In the case of flowing waters, the chemical processes
differ yet again.
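The salt concentrations discussed above suggest a rough classification of inland waters by salinity. This is an illustrative sketch only: the cutoffs (roughly 3 g/L between fresh and saline, 35 g/L for average seawater) are approximate assumptions, not formal limnological definitions:

```python
def classify_inland_water(salinity_g_per_l):
    """Rough classification of an inland water body by dissolved salt
    content (grams per liter), using an approximate ~3 g/L boundary
    between fresh and saline water and ~35 g/L for average seawater."""
    if salinity_g_per_l < 3:
        return "fresh"
    elif salinity_g_per_l <= 35:
        return "saline"
    else:
        return "hypersaline"   # saltier than average seawater

print(classify_inland_water(0.05))   # fresh  (typical river or lake)
print(classify_inland_water(35))     # saline
print(classify_inland_water(300))    # hypersaline (e.g., a brine lake)
```

The "hypersaline" branch captures the unusual inland lakes noted above whose salt concentration exceeds that of seawater.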
The development of limnology can be traced through the history of its evolution and through its
subject matter, institutions, and researchers. Several early accounts describe the history of
limnology, and Talling (2005) recently reviewed the field's research history. These research
activities shed light on how the discipline grew.
The discovery and initial discussion of marine plankton by Müller in 1845 aroused
wide interest in aquatic life. J. Leslie (1838) was the first to
examine the thermal structure of deep lakes, the action of air, and
internal waves (Goldman and Horne, 1983). Morren and Morren also made early
contributions. Forbes was the first to call the lake a microcosm; his
research paper "The Lake as a Microcosm" describes the lake as a self-contained
system of interacting organisms.
However, F. A. Forel's (1901) work is regarded as the first book on
limnology. Forel researched the biology, physics, and chemistry of Lake Geneva (Lac Léman) throughout his career.
In the early twentieth century, many limnological field stations and laboratories
were established near lakes. For example, the Otto Zacharias Limnological
Research Institute in Germany was established in 1901 in Plön, and it has
remained an active research site to this day.
During this time, the research work of Thienemann (1882-1960) and his contemporaries,
building on Weber's theory (1906), provided insights into oligotrophic and
eutrophic lake types, fundamental concepts in limnology. The classification of lakes based on nutrients was the first step in the
development of limnology. Birge and Juday (1911) provide an idea of the types of
lakes. In this case, they consider the productivity of lake organic matter, the depth
of the lake, the morphology of the lake, and the interrelationship with
dissolved oxygen.
L. Agassiz (1850) was a pioneer in the development of limnology in North America.
Birge (1851-1950) and Juday (1871-1944) studied the effects of heat and chemical
stratification in lakes and their trends (Juday and Birge, 1933). Moreover, they did comparative studies
across many lakes.
There are important differences between the early stages of the development of
limnology in North America and Europe: North American researchers emphasized the physical and chemical
cycles of the system, while European researchers studied the biological community
(Margalef, 1983). For example, Birge and Juday relied on extensive physical and chemical
measurements in their comparative work on lakes.
A laboratory was established at Windermere in 1931 to support the Freshwater
Biological Association, which applied comparative techniques to both freshwater and saltwater ponds
used for food production.
In the United States and Europe, laboratories established near lakes focused
on local aquatic ecosystems and collected scientific data. These studies have largely concerned
temperate regions. Thienemann's research showed that the lakes of Java, Sumatra, and Bali
differ from temperate lakes; he described lakes rich in high-density humic material. Central and South American,
North American, and European influences are not the same. In South America,
researchers from the Max Planck Institute (Sioli, 1975) and national institutes studied the Amazon and other
significant rivers and deltas, while Bonetto (1985, 1986) and others studied rivers such as the Uruguay and the
Bermejo, along with deep and shallow lakes (Bidley, 1981). Tropical limnology has
played a significant role in research on African lakes such as Lake Victoria (Talling,
1985) and other lakes (Talling, 1989). Talling and Lemoalle later presented a broader
synthesis, and further studies covered Lake Chad (Carmouze et al., 1983) and Lake George (Ganf, 1974; Viner, 1975, 1977).
The International Biological Programme (IBP) was also important for limnology. It established more dynamic and comparative
studies, including work on large drainage dams in South America and Africa (Van der Heide, 1982). In Spain, 100
reservoirs were studied, which opened the door to a theory of reservoir types; the study of
artificial reservoirs reveals the processes that take place in them, and Margalef's studies were central to this work.
Extensive research over the last 30 years has led to the emergence of various
theories in limnology. Major twentieth-century syntheses include Hutchinson (1957, 1967, 1975, 1993),
Whipple (1927), Welch (1935, 1948), Ruttner (1954), Dussart (1966), Hynes (1972),
Golterman (1975), Wetzel (1975), Margalef (1983, 1991, 1994), and Goldman and Horne (1983). Research on
wetlands expanded considerably from the early 1970s. Several themes marked this
development:
Phytoplankton succession and the processes influencing it, in both spatial and temporal terms; and
climate and hydrology in different geographical settings and their effects on aquatic ecosystems
(Straškraba, 1973; Le Cren and Lowe-McConnell, 1980; Talling and Lemoalle, 1998).
Over the past few decades, several studies have been conducted on fisheries at the ecosystem
level, especially on lake morphometry and on the structure of artificial reservoirs and their
ichthyofauna (Barthem and Goulding, 1997). Such research has been playing a role
in the formation of various ideas in theoretical ecology and its application. At the
ecosystem level, river hydrology and its interaction with floodplains and lakes (Neiff,
1996; Junk, 1997), wetland control techniques (Mitsch, 1996), comparative studies of reservoirs
(Straškraba et al., 1996), saline lakes (Williams, 1996), the interaction between
terrestrial systems and aquatic systems (Decamps, 1996), and research on the
ecology of large and small rivers (Bonetto, 1994; Walker, 1995) have led to
notable advances, and the remarkable work of Henderson-Sellers (1994) and Cooke et al. (1996) has
advanced lake management and restoration.
Publications
Journals in the field include:
Advances in Limnology
Aquatic Conservation
Aquatic Ecology
Freshwater Biology
Hydrobiologia
Limnetica
Limnologica
Aquaculture
Aquatic Botany
Ecology
Freshwater Reviews
The last decade of the twentieth century has seen conceptual advances in
limnology. From the knowledge of these processes, it can be deduced that a lake is
not an isolated inland body of water but depends on its drainage basin;
the lake's response differs with changes even in remote areas of the basin.
Like other sciences, the study of limnology is essentially a search for principles:
principles that involve several processes and management strategies of broad applicability.
For example, comparing the hydrology of rivers, lakes, and reservoirs on
fundamental functional aspects shows that there are some fundamental practical
aspects that influence the life cycle of aquatic organisms and their extent and
biomass.
Other important factors are the physiological study of phytoplankton and the
analysis of its response to changes in light intensity due to currents. This new
approach sheds light on hydrodynamics and its effects on the vertical structure of
the water column, and it has opened wide theoretical and practical doors in limnology research.
A notable aspect of this approach is that it is closely related to recent applied limnology.
Emissions around various types of waste reservoirs have resulted in acid rain, leading to a
gradual deterioration of inland water bodies. These damaging processes can be
reversed by changing and managing the inputs to inland waters. The introduction of exotic
species has brought about many structural changes in the aquatic ecosystem, as has
eutrophication. Inland waters can be managed for a variety of purposes with little investment through the
comparative study of different lakes and reservoirs from the point of view of their origin.
Conservation biology, said to be a "mission-oriented crisis discipline" (Soulé 1986), is a multidisciplinary science that has developed
to address the loss of biological diversity. Conservation biology has two central goals: 1. to evaluate human impacts on biological
diversity and 2. to develop practical approaches to prevent the extinction of species (Soulé 1986, Wilson 1992). The field seeks to
integrate conservation policy with theories from the fields of ecology, demography, taxonomy, and genetics. The principles
underlying each of these disciplines have direct implications for the management of species and ecosystems, captive breeding and
reintroduction, and habitat restoration.
The concept of conservation biology was introduced by Dasmann (1968) and Ehrenfeld (1970). Soulé & Wilcox's (1980)
contribution, Conservation Biology: An Evolutionary Ecological Perspective, served as an impetus for the development of the
discipline. Over the next six years, many scientists began to refer to themselves as conservation biologists. Conservation Biology:
The Science of Scarcity and Diversity was published, a Society for Conservation Biology formed, and a journal was established (Soulé
1986).
Several factors contributed to the development of the field. Scientists began to realize that virtually all natural systems have been
damaged by what Diamond (1986) referred to as the "Evil Quartet": habitat loss and fragmentation, overharvesting, introduced
predators and competitors, and the indirect effects of these threats on ecological interactions. None of the traditional applied
disciplines, such as wildlife management, agriculture, forestry and fisheries, were comprehensive enough by themselves to
address critical threats to biological diversity (Primack 1993). Also, these traditional applied disciplines often overlooked
threatened species that were of little economic or aesthetic value. Theories and field studies in community ecology, island
biogeography, and population ecology were subjects of major investigation and development in the 1960s and 1970s, and while
these disciplines have direct relevance to conservation, they traditionally emphasized the study of species in their natural
environments, in the absence of human activity. The growing separation of "applied" and "basic" disciplines prohibited the
exchange of new ideas and information between various academic assemblages and management circles (Soulé 1980).
Conservation biology as a discipline aims to provide answers to specific questions that can be applied to management decisions.
The main goal is to establish workable methods for preserving species and their biological communities. Specific methods have
been developed for determining the best strategies for protecting threatened species, designing nature reserves, initiating
breeding programs to maintain genetic variability in small populations, and reconciling conservation concerns with the needs of
local people (Primack 1993). For this to be successful, communication among all sectors of the conservation community is
necessary.
The interface between theory and practice in conservation biology, especially from the point of view of resource managers, has
been somewhat neglected (Soulé 1986). Because we do not understand community and ecosystem structure and function well
enough to make reliable predictions, uncertainty has inhibited scientists from providing concrete answers to managers. The
availability of statistical and computational tools has been integral in the development of analytical methods critical to addressing
the issue of uncertainty in conservation biology. Management tools such as population viability analysis (PVA), Bayesian statistics,
and decision analysis have been developed to provide "objective" methods for making conservation decisions. These approaches
have been key in the transformation of conservation biology from an idea to a discipline.
PVA is a process used to evaluate the likelihood that a population will persist for some particular time in a particular environment.
Gilpin and Soulé (1986) conceived of population vulnerability analysis as an integrative approach to evaluate the full range of forces
impinging on populations and to make determinations about viability. PVAs have become a cornerstone of conservation biology,
and it is likely that their importance will increase in the future. The precise role of PVA in conservation biology is still emerging.
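The simulation logic behind a PVA can be illustrated with a short sketch. The model below is a deliberately simplified illustration, not any published PVA: it assumes logistic growth with environmental noise on the growth rate, and all parameter values (growth rate, carrying capacity, noise level, quasi-extinction threshold) are invented for demonstration.

```python
import math
import random

def pva(n0, years=100, runs=2000, r=0.1, k=500.0, sd=0.3,
        quasi_ext=10.0, seed=1):
    """Estimate persistence probability: the fraction of stochastic
    projections that never fall below the quasi-extinction threshold.
    All parameters are illustrative, not drawn from any real species."""
    rng = random.Random(seed)
    persisted = 0
    for _ in range(runs):
        n = float(n0)
        ok = True
        for _ in range(years):
            # logistic growth rate perturbed by environmental noise
            growth = r * (1.0 - n / k) + rng.gauss(0.0, sd)
            n *= math.exp(growth)
            if n < quasi_ext:
                ok = False
                break
        persisted += ok
    return persisted / runs
```

Comparing a small starting population with a large one (e.g. `pva(20)` versus `pva(300)`) reproduces the qualitative point: small populations are far more vulnerable to chance fluctuations, even when the deterministic model predicts growth for both.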
In the 1970s, empirical studies and ecological and genetic theory converged on the idea that a species becomes exceptionally
vulnerable to extinction when it includes only a few small populations (MacArthur & Wilson 1967, Richter-Dyn & Goel 1972, Leigh
1975). The observation that once a population was reduced below a certain threshold, it began to dwindle toward extinction led to
the concept of minimum viable population size (MVP), the smallest number of individuals necessary to prevent a population from
going extinct. The concept of MVP officially emerged in response to an injunction from the United States Congress to the US
Forest Service to maintain "viable populations" of all native vertebrate species in National Forests (National Forest Management
Act of 1976, 16 USC 1600-1614; Gilpin & Soulé 1986). The concept encompasses theories that had been developed and tested to
varying degrees in the fields of population genetics and demography. The critical feature of MVP is that it attaches a quantitative threshold to the otherwise qualitative notion of population viability.
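Under stated assumptions, an MVP-style threshold can be estimated by scanning initial population sizes until a persistence criterion is met. The sketch below is purely illustrative: it models demographic stochasticity only (chance birth and death events), and the survival and birth probabilities, time horizon, and 95% criterion are all invented for demonstration.

```python
import random

def persistence(n0, years=50, runs=500, survival=0.8, births=0.25, seed=2):
    """Fraction of simulated populations that avoid extinction under
    individual-level (demographic) stochasticity. Rates are illustrative."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(runs):
        n = n0
        for _ in range(years):
            # each individual survives with probability `survival`,
            # then each survivor produces a recruit with probability `births`
            n = sum(1 for _ in range(n) if rng.random() < survival)
            n += sum(1 for _ in range(n) if rng.random() < births)
            if n == 0:
                break
        alive += n > 0
    return alive / runs

def mvp_estimate(target=0.95, sizes=(5, 10, 20, 40, 80, 160)):
    """Smallest trial size whose estimated persistence meets the target."""
    for n0 in sizes:
        if persistence(n0) >= target:
            return n0
    return None
```

In practice, conservation biologists pair such simulations with sensitivity analyses, since the estimated threshold depends strongly on the assumed vital rates.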
MVP remains a tenuous concept among conservation biologists. In light of the complex and dynamic nature of single species
population dynamics, conservation biologists have frowned upon the "magic number" concept. They argue that the job of
conservation biologists should be to recommend or provide more than just the minimum number necessary for a species'
persistence (Soulé 1987). Yet the term has not been abandoned and actually remains a central theme in conservation biology. As
human population growth continues to encroach upon the habitat of endangered and threatened species, the MVP concept is
likely to become a critical tool for conservation biologists to assure the continued existence of species.
Decision analysis, which was developed for guiding business decisions under uncertainty, has been proposed as a useful tool for
endangered species management (Raiffa 1968, Behn & Vaupel 1982, Maguire 1986). Statistical approaches make explicit the logic
by which a decision is reached under conditions of uncertainty. Mace & Lande (1991) and the International Union for Conservation
of Nature (IUCN) have attempted to apply decision analysis theory to put the MVP and PVA concepts into practice for determining species' threat status.
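The basic machinery of decision analysis, choosing the action with the best probability-weighted outcome, can be shown in a few lines. The actions, probabilities, and outcome values below are entirely hypothetical and are not drawn from Maguire (1986) or the IUCN criteria; they only illustrate how explicit numbers make the logic of a decision auditable.

```python
# Hypothetical management options for a declining population.
# Each action maps to (probability, resulting population) branches;
# every number here is invented for illustration.
actions = {
    "do nothing":         [(0.40, 0), (0.60, 50)],
    "captive breeding":   [(0.10, 0), (0.50, 80), (0.40, 150)],
    "habitat protection": [(0.20, 0), (0.80, 120)],
}

def expected_outcome(branches):
    """Probability-weighted average outcome of one action."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * size for p, size in branches)

for name, branches in actions.items():
    print(f"{name}: expected population = {expected_outcome(branches):.0f}")
best = max(actions, key=lambda a: expected_outcome(actions[a]))
print("preferred action:", best)
```

A risk-averse manager might instead maximize a utility that heavily penalizes the extinction branch; swapping the objective function is a one-line change, which is precisely what makes the method's assumptions explicit and debatable.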
Conservation biology has become a burgeoning discipline since it originated in the early 1980s. Theories from the fields of island
biogeography, genetics, demography, and population ecology have been broadly applied to the design and management of
reserves, captive breeding programs, and the classification of endangered species. Since 1980 we have witnessed the rapid growth of the field.
Nonetheless, the course of development of the discipline has not altogether been smooth sailing; lack of adequate funding
remains a critical problem. The financial and institutional supports for conservation biology, in both its research and educational
roles, need to be strengthened (Soulé 1986). Furthermore, while some advances have been made in the realm of interdisciplinary
cooperation and communication between scientists and managers, significant progress is necessary before the original goals of the discipline can be realized. It has become clear that PVA is not currently a viable method for predicting the precise time to extinction for a species.
Further, requiring quantitative data for conservation decisions may unduly place the burden of proof on scientists in a manner
detrimental to the species of concern. PVA is useful, however, for comparing the relative extinction risks among species and populations.
Similarly, the MVP concept has thus far been limited in its potential for application to conservation decisions. Because lack of
genetic variability does not generally pose extinction risks for large populations, the concept is only relevant to small populations.
However, even for small populations, a temporary reduction below any MVP does not necessarily imply a high probability of
extinction. Consensus among conservation biologists about the selection of appropriate assumptions for estimating effective
population size and about the timeframe under which we are concerned about extinction, offers potential for the use of MVP as a practical management tool.
Because conservation decisions are often confounded by uncertainty, decision analysis appears to be a particularly useful method
for conservation biologists. The IUCN classification scheme offers a risk-averse approach to species classification in its use of
multiple criteria, wherein data would typically be available to evaluate at least one of the criteria. However, additional analyses are
necessary to develop and refine analytical tools suggested by the IUCN as status determination criteria.
Until these issues are resolved, the status of conservation biology as a predictive science will remain in serious doubt (Soulé 1986).
Given the imperfect nature of the analytical tools integral to the field of conservation biology, the apparent gap between theory
and practice, and the continued loss of biodiversity, what is the future for conservation biology? The models of today may
undoubtedly become the "broken stick models . . . and other strange and wonderful debris" that Soulé (1987) envisions as littering
the field of mathematical population biology. Nonetheless, population models will continue to evolve as critical tools to
conservation biologists.
The gap between theory and practice is narrowing as a function of the prominence of conservation biology as a field of study.
Because the field is interdisciplinary, it necessarily unites basic and applied scientists with natural resource managers. Scientists will
continue to work with policy makers in developing appropriate and workable approaches to species conservation.
A central theme in conservation biology is developing compromises between conservation priorities and human needs. However,
the precise role of conservation biologists as advocates has yet to be formalized. Soulé himself absolves scientists of taking on an advocacy role: "Most biologists and most economists are not trained to be advocates. They're trained to think and teach, to encourage students and support and advance their disciplines. So to expect that most scientists will turn themselves into effective advocates is unrealistic."
Instead, the role of the conservation biologist remains simply to advocate for good science and to make salient findings available
to managers and scientists in other fields. Advocating "values" under the auspices of doing science undermines the objectivity of
science. The distinction between advocacy and science should be clear for conservation biology to persist as a legitimate discipline.
Finally, the dichotomy referred to by Caughley (1994) as the "small population paradigm," which needs more empirical evidence,
and the "declining population paradigm," which needs more theoretical development, has generated substantial debate among
conservation biologists about where the field is going. Caughley pointed out that many of the theoretical underpinnings of
conservation biology are misguided in that they treat an effect, such as small population size, as if it were a cause. He suggested
that conservation efforts should instead be focused on determining causes of population declines and the means by which agents
of a decline can be identified (Caughley 1994). This idea has reoriented many theoreticians to consider the broader scope of their
work and has encouraged field biologists to more closely align their research to conservation-related questions. Thus, the stage
has been set for the future development of both the theoretical constructs and the natural history investigations critical to the field.
An ecosystem comprises living and non-living components that interact with each other,
such as plants and animals with nutrients, water, air and sunlight. Ecosystems range in size
from a few square meters to millions of square kilometers. There are no set ecosystem
boundaries, rather they are defined by the particular component(s) that biologists are
interested in. For example, a biologist who wants to know how residential development has
affected the fish in a stream ecosystem might study the small streams that feed into a large
stream as well as the surrounding land. Such an ecosystem would cover many square kilometers. Ecologists increasingly study such whole landscapes, and this trend increases the probability that we will protect large-scale ecological processes.
Biodiversity is the variety of all forms of life, and it is essential to the existence of the environments in which species can exist; these include ecosystems of all types. Biodiversity provides or supports the core benefits that humans derive from their environment, which we depend on for food, air, and water security, and multiple other natural benefits.
Stressors and drivers of change
The growing human population, and the land development that comes with it, put pressure on many species.
Habitat loss is a challenge for virtually all species, as humans convert natural habitats to other uses and push species toward extinction.
Outdoor recreation is a popular activity for humans, but high numbers of visitors to an area can damage plant life.
Invasive species can also drive native species to extinction. Some invasive species that are found in the U.S., such as kudzu and the Emerald Ash Borer Beetle, can completely alter ecosystems, affecting overall biodiversity.
All forms of pollution, from chemicals to nutrient loading, can also pose threats to biodiversity, including genetic diversity.
o Genes regulate all biological processes on the planet and increase the ability of species to adapt. Many medicines are derived from wild species: the anti-tumor agent Taxol from the Pacific yew tree, the anti-malarial artemisinin from sweet wormwood, and the cardiac drug digoxin from the foxglove plant. The drug ziconotide, which has been highly effective in relieving nerve pain and severe pain in cancer patients, is derived from the venom of cone snails. Without this diversity, treatments for nerve pain, congestive heart failure, and multiple other illnesses may never have been discovered.
o As conversion of habitats and subsequent losses in diversity take place,
the potential for losing cures for some of the world's most troubling
ailments increases.
In addition to the many medicinal benefits from biodiversity, human health can benefit directly from contact with nature, which has been linked to increases in life satisfaction and happiness. Protecting biodiversity preserves the many benefits that this diversity provides for all species: highly diverse ecosystems, a steady food supply, and multiple other benefits.
What Is Environmental Justice?
Environmental justice (EJ) is the fair treatment and meaningful involvement of all people, regardless of race, color, national origin, or income, with respect to the development, implementation, and enforcement of federal, state, and local environmental laws, regulations, and policies. Meaningful involvement requires effective access to decision makers for all, and the ability in all communities to influence decisions that affect their environment and health.
Fair treatment means no group of people should bear a disproportionate share of the negative environmental consequences resulting from industrial, governmental, and commercial operations or policies.
Meaningful involvement means:
Decision makers will seek out and facilitate the involvement of those
potentially affected.
EPA's goal is to provide an environment where all people enjoy the same degree of
protection from environmental and health hazards and equal access to the decision-
making process to maintain a healthy environment in which to live, learn, and work.
EPA's environmental justice mandate extends to all of the Agency's work, including:
setting standards
permitting facilities
awarding grants
issuing licenses and regulations
The Office of Environmental
Justice (OEJ) coordinates the Agency's efforts to integrate environmental justice into
all policies, programs, and activities. OEJ's mission is to facilitate Agency efforts to
protect the environment and public health in minority, low-income, tribal and other vulnerable communities by integrating environmental justice into all programs, policies and activities.
In 1994, Executive Order 12898 directed federal agencies to identify and address disproportionately high and adverse human health or environmental effects of their actions on minority and low-income populations.
The Presidential Memorandum accompanying the order underscores certain
provisions of existing law that can help ensure that all communities and persons across this nation live in a safe and healthful environment. The order also established an Interagency Working Group on Environmental Justice (EJ IWG) chaired by the EPA Administrator and comprised of the heads of 11 departments or agencies and several White House offices. The EJ IWG's membership has since expanded.
The statutes that EPA implements provide the Agency with authority to consider and
address environmental justice concerns. These laws encompass the breadth of the Agency's activities, including:
Setting standards
Permitting facilities
Making grants
Public health
Cumulative impacts
Social costs
Welfare impacts
Moreover, some statutory provisions, such as under the Toxic Substances Control
Act, explicitly direct the Agency to target low-income populations for assistance.
In all cases, the way in which the Agency chooses to implement and enforce its programs can affect environmental justice outcomes.
Integrating EJ at EPA
Since OEJ was created, there have been significant efforts across EPA to integrate
environmental justice into the Agency's day-to-day operations through its programs, policies and activities, and to support the cross-agency strategy on making a visible difference in environmentally overburdened, underserved, and economically distressed communities.
Every regional and headquarters office has an environmental justice coordinator who serves as a point of contact for communities and organizations seeking to find out more about Agency efforts to address environmental justice.
The environmental justice movement arose in the United States in the early 1980s due to a local dispute over toxic waste dumping near an African-American neighborhood. The movement emphasized from the beginning that environmental problems cannot be solved without unveiling the practices maintaining social injustices.
The North American debate on environmentalism and justice has developed via deep
contradictions, which reflects the delicate historical nature of the issue. Racial and other
social questions are often intentionally avoided by dedicated nature conservationists and
this frames the whole tradition, initially established through the struggles for nature parks
and wilderness areas. The history of the Western idea of nature is part of the history of the
white middle class that had learned to appreciate the esthetic value of wilderness. Purified
and thus white areas of nature therefore symbolize the areas of white power. This was
made clear during the formative years of nature conservation in the USA when the
indigenous First Nations were forced to leave their homelands, which overlapped with the new parks.
Environmental history in North America is rooted in the expansive colonial control over the
resources of newly settled areas. The European colonization of the continent turned into a
cruel genocide of the First Nations. The frontier of the settlers progressed through the
wilderness and this was considered synonymous with the dawn of civilization. The colonial
success was completed by the transatlantic slave trade and later immigration to urban
ghettos colored by unfair divisions of welfare. Environmental justice issues turn therefore
repeatedly into questions of environmental racism. The evidence that environmental risks tend to accumulate on ethnic minorities starkly reminds North Americans of their history.
The historically specific sense of justice, built on the awareness of the interethnic violence
behind the founding of modern North America, is present in the continuous re-articulation
of social and environmental inequalities, both local and global. The environmental justice
issues are accordingly dealt with in two main forums. They are routinely taken care of by law
experts in courtrooms while the critical alternatives are presented by the activist networks
worrying about local–global injustices. The pragmatic lawyers and the forward-looking
activists share with many Europeans the ideal of just decision making and the belief in
change for the better. However, despite the common background, the practical conditions differ between the two continents. The environmental justice movement addresses the concentration of environmental burdens in certain communities and works to ensure a healthy environment for all.
Environment: Definition
When you think about the environment, your mind might conjure up images of rambling hills and forests, but in everyday terms the environment would include your home, place of work, schools, and community parks. These are the places you spend your time, and they play a big role in your overall health and well-being.
Those involved in the movement called environmental justice feel that a healthy
environment is a necessary component of a healthy life. In this lesson, we will learn about
environmental justice and its efforts to make everyone's environment clean, safe and
healthy.
Environmental Justice
Environmental justice is the fair treatment and meaningful involvement of all people regardless of race, color, national origin, or income with respect to the development, implementation, and enforcement of environmental laws, regulations, and policies. In other words, your health should not suffer because of the environment in which you live, work, or play.
The concept of environmental justice began as a movement in the 1980s due to the
realization that a disproportionate number of polluting industries, power plants, and waste
disposal areas were located near low-income or minority communities. The movement was
set in place to ensure fair distribution of environmental burdens among all people regardless
of their background.
The environmental burdens addressed by environmental justice cover many aspects of community life. These burdens can include anything that harms a community or its residents. For instance, one environmental justice issue is food access: low-income or minority communities often lack supermarkets or other sources of healthy and affordable foods.
Another issue is inadequate transportation. While public transportation may be available in
urban areas, policies must be monitored to avoid cuts in service and fare hikes that make it harder for residents to get around.
Air and water pollution are major environmental justice issues. Because many lower-income
or minority communities are located near industrial plants or waste disposal sites, air and water quality in these areas often suffers.
These communities may also contain older and unsafe homes. Older homes are more likely
to have lead-based paint that can chip and find its way into the dust and soil surrounding the
home, leading to illness. These houses may also be prone to structural problems, mold, or pest infestations.
In 1991, principles of environmental justice were adopted at the First National People of Color Environmental Leadership Summit:
WE, THE PEOPLE OF COLOR, gathered together at this multinational People of Color Environmental Leadership Summit, to begin to build a national and international movement of all peoples of color to fight the destruction and taking of our lands and communities, do hereby re-establish our spiritual interdependence to the sacredness of our Mother Earth; to respect and celebrate each of our cultures, languages and beliefs about the natural world and our roles in healing ourselves; to ensure environmental justice; to promote economic alternatives which would contribute to the development of environmentally safe livelihoods; and, to secure our political, economic and cultural liberation that has been denied for over 500 years of colonization and oppression, resulting in the poisoning of our communities and land and the genocide of our peoples, do affirm and adopt these Principles of Environmental Justice (EJ):
1) Environmental Justice affirms the sacredness of Mother Earth, ecological unity and the
interdependence of all species, and the right to be free from ecological destruction.
2) Environmental Justice demands that public policy be based on mutual respect and
justice for all peoples, free from any form of discrimination or bias.
3) Environmental Justice mandates the right to ethical, balanced and responsible uses of
land and renewable resources in the interest of a sustainable planet for humans and other
living things.
4) Environmental Justice calls for universal protection from nuclear testing, extraction,
production and disposal of toxic/hazardous wastes and poisons and nuclear testing that
threaten the fundamental right to clean air, land, water, and food.
5) Environmental Justice affirms the fundamental right to political, economic, cultural and environmental self-determination of all peoples.
6) Environmental Justice demands the cessation of the production of all toxins, hazardous
wastes, and radioactive materials, and that all past and current producers be held strictly
accountable to the people for detoxification and the containment at the point of
production.
7) Environmental Justice demands the right to participate as equal partners at every level of decision-making, including needs assessment, planning, implementation, enforcement and evaluation.
8) Environmental Justice affirms the right of all workers to a safe and healthy work environment without being forced to choose between an unsafe livelihood and unemployment. It also affirms the right of those who work at home to be free from
environmental hazards.
9) Environmental Justice protects the right of victims of environmental injustice to receive full compensation and reparations for damages as well as quality health care.
10) Environmental Justice considers governmental acts of environmental injustice a violation of international law, the Universal Declaration On Human Rights, and the United Nations Convention on Genocide.
11) Environmental Justice must recognize a special legal and natural relationship of Native
Peoples to the U.S. government through treaties, agreements, compacts, and covenants affirming sovereignty and self-determination.
12) Environmental Justice affirms the need for urban and rural ecological policies to clean
up and rebuild our cities and rural areas in balance with nature, honoring the cultural
integrity of all our communities, and providing fair access for all to the full range of
resources.
13) Environmental Justice calls for the strict enforcement of principles of informed
consent, and a halt to the testing of experimental reproductive and medical procedures and vaccinations on people of color.
14) Environmental Justice opposes the destructive operations of multi-national corporations.
16) Environmental Justice calls for the education of present and future generations which
emphasizes social and environmental issues, based on our experience and an appreciation of our diverse cultural perspectives.
17) Environmental Justice requires that we, as individuals, make personal and consumer
choices to consume as little of Mother Earth's resources and to produce as little waste as
possible; and make the conscious decision to challenge and reprioritize our lifestyles to
ensure the health of the natural world for present and future generations.
Abstract
This study tested the effects of intergenerational justice, ecological justice and global justice beliefs, and examined processes that might explain the relation between justice beliefs and pro-environmental behavior. Feelings of responsibility and moral anger mediated the effects, with the former being the stronger mediator. The results are discussed in light of current societal debate, and policy recommendations are exemplified.
CalRecycle recognizes the varied cultural strengths in California, and we acknowledge the
different communication, environmental health, and economic needs within the state. We understand that barriers, such as complex government processes and limited English language skills, may hinder full participation in important environmental programs. We also recognize that many Californians live in the midst of multiple sources of pollution, and some people and communities are disproportionately affected.
Overall Improvements
Increase protection of public health and safety, and the environment, within disadvantaged
communities.
Expand our awareness of, and services to, Californians' varied cultures.
Ensure our vision for solid waste recycling infrastructure includes minimizing negative impacts on disadvantaged communities.
Highlight each person’s responsibility to preserve the earth’s natural and cultural resources
and to protect equal access, rights, and enjoyment for future generations.
Participation in Decisions
Engage communities early in the decision-making process, prior to the actual point when decisions are being made, so they have a say in
decisions that affect their well-being. This includes working with local enforcement agencies,
planning departments, cities, and counties for information sharing about local-level decisions.
Resources
Provide information and resources to communities, businesses and consumers, about reducing waste and increasing reuse, recycling, and
composting.
Ensure Environmental Justice interests are prioritized in CalRecycle grant funding decisions