
ENVIRONMENTAL MONITORING

by Ruth McDonald

www.tdmebooks.com

© 2020 Tritech Digital Media

All rights reserved. No portion of this book may be reproduced in any form without
permission from the publisher, except as permitted by U.S. copyright law. For permissions
contact:
[email protected]

Ebook ISBN: 978-1-7996-9792-3

Published by:
Tritech Digital Media
3585 S Vermont Ave, #7367
Los Angeles, CA, US 90007
Website: www.tdmebooks.com
Contents

1. Detection and Monitoring of Pollutants
2. Biological Treatment for Wastewater
3. Health Impacts of Water Pollution
4. Effects of Soil Pollution
5. Measures of Radioactive Pollution
6. Major Causes of Biodiversity

Detection and Monitoring of Pollutants

A wide range of biological methods is already in use to detect pollution incidents and for the continuous monitoring of pollutants. Long-established
measures include: counting the number of plant, animal and microbial species,
counting the numbers of individuals in those species or analysing the levels
of oxygen, methane or other compounds in water. More recently, biological
detection methods using biosensors and immunoassays have been developed
and are now being commercialised. Most biosensors are a combination of
biological and electronic devices - often built onto a microchip.
The biological component might be simply an enzyme or antibody, or
even a colony of bacteria, a membrane, neural receptor, or an entire organism.
Immobilised on a substrate, their properties change in response to some
environmental effect in a way that is electronically or optically detectable.
It is then possible to make quantitative measurements of pollutants with
extreme precision or to very high sensitivities. The sensors can be designed
to be very selective, or sensitive to a broad range of compounds. For example,
a wide range of herbicides can be detected in river water using algal-based biosensors: the stress the herbicides inflict on the organisms is measured as changes in the optical properties of the algae's chlorophyll.
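As a purely illustrative sketch of how such an optical readout might be turned into a number, the Python snippet below fits a straight-line calibration to hypothetical fluorescence data and inverts it to estimate a sample's herbicide concentration. Every value, and the assumption of a linear response, is invented for illustration; real assays are calibrated against certified standards.

# Illustrative only: converting an algal-biosensor optical signal into a
# herbicide concentration via a linear calibration curve. All numbers are
# hypothetical, as is the assumption of a linear dose-response.
import numpy as np

# Hypothetical calibration standards: herbicide concentration (ug/L) versus
# the measured change in chlorophyll fluorescence (arbitrary units).
concentrations = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
fluorescence_change = np.array([0.02, 0.11, 0.21, 0.43, 0.82])

# Fit signal = slope * concentration + intercept.
slope, intercept = np.polyfit(concentrations, fluorescence_change, 1)

def estimate_concentration(signal: float) -> float:
    """Invert the calibration line to estimate concentration from a reading."""
    return (signal - intercept) / slope

sample_signal = 0.30  # hypothetical reading from a river-water sample
print(f"Estimated herbicide level: {estimate_concentration(sample_signal):.2f} ug/L")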
Microbial biosensors are micro-organisms which produce a reaction upon
contact with the substance to be sensed. Usually they produce light but cease
to do so upon contact with substances which are toxic to them. Both naturally occurring light-emitting microorganisms and specially developed ones are used. Positively acting bacterial biosensors have been constructed which
start emitting light upon contact (and subsequent reaction) with a specific
pollutant. In the USA such a light emitting bacterium has been approved for
the detection of polyhalogenated aromatic hydrocarbons in field tests.
Immunoassays use labelled antibodies (complex proteins produced in biological
response to specific agents) and enzymes to measure pollutant levels.
If a pollutant is present, the antibody attaches itself to it, and the label makes it detectable through colour change, fluorescence or radioactivity. Immunoassays of various types have been developed for the continuous, automated and inexpensive monitoring of pesticides such as dieldrin and parathion. The nature of these techniques, whose results can be as simple as a colour change, makes them particularly suitable for highly sensitive field testing where the time and large equipment needed for more traditional testing are impractical. Their use is, however, limited to pollutants which can trigger biological antibodies. If the pollutants are too reactive, they will either destroy the antibody or suppress its activity, and with it the effectiveness of the test.
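As a sketch of how a colour-change readout becomes a concentration, the hypothetical Python example below interpolates a sample's absorbance against an assumed standard curve. All figures are invented, and real immunoassays are usually fitted with a four-parameter logistic model rather than simple interpolation.

import numpy as np

# Hypothetical standards for a direct-format assay: dieldrin concentration
# (ng/mL) versus colour absorbance. Invented numbers, for illustration only.
standard_conc = np.array([0.0, 0.1, 0.5, 1.0, 5.0, 10.0])
absorbance = np.array([0.05, 0.12, 0.35, 0.55, 1.10, 1.35])

def conc_from_absorbance(a: float) -> float:
    # np.interp needs increasing x values; absorbance is monotonic here,
    # so we can interpolate concentration as a function of absorbance.
    return float(np.interp(a, absorbance, standard_conc))

print(f"Sample at A = 0.70 -> roughly {conc_from_absorbance(0.70):.1f} ng/mL dieldrin")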

DETECTION AND MONITORING OF MICROORGANISMS USED FOR BIOREMEDIATION
When laboratory-grown micro-organisms are inoculated into a bioremediation site (bioaugmentation), it often becomes necessary to monitor their presence and/or multiplication to check the progress of the process. This is especially important, and often mandatory, when genetically modified micro-organisms are involved. The traditional technique to detect the presence of micro-organisms in soil is direct plating on selective media.
This is greatly facilitated if the organism contains a marker which can be selected for. Newer techniques include the above-mentioned immunological and light-based bioreporter techniques. The spatial distribution of specific microorganisms in a sample can be determined microscopically and non-invasively by using fluorescent in situ hybridisation (FISH). The most sensitive and specific technique, which is increasingly being used, is the direct isolation and amplification of DNA from soil.

DETECTION AND MONITORING OF ECOLOGICAL EFFECTS


Bioremediation is aimed at improving the quality of the environment by
removing pollutants. However, the disappearance of the original pollutant is
not the only criterion by which the success of a bioremediation operation is determined. Metabolites even more toxic than the original pollutant may be produced, or the biodegrading bacterium may cause diseases or produce
substances that are harmful to useful micro-organisms, plants, animals or
humans.
All these negative effects are, of course, excluded as much as possible in
advance by getting as familiar as possible with the organism through extensive
literature searches and microcosm studies in which the bioremediation process
is simulated in the laboratory. To avoid unexpected effects, especially after the release of a new member of the ecosystem, such as a genetically modified organism, monitoring of the ecological effects of a bioremediation operation may be required.
The problem with monitoring ecological effects is what to monitor.
Numerous ecological effects are possible but not all of them may be relevant
or permanent or even the result of the bioremediation operation. The
parameters to be monitored are usually determined case-by-case. Monitoring
techniques may include all of those mentioned in the two previous subsections
on detection and monitoring.

GENETIC ENGINEERING
Recombinant DNA technology has had amazing repercussions in the last
few years. Molecular biologists have mapped entire genomes, many new
medicines have been developed and introduced and agriculturists are
producing plants with novel types of disease resistance that could not be
achieved through conventional breeding. Several of the previously mentioned
examples like the amylose-free potato and the indigo-producing bacterium
also involve the use of organisms genetically modified by recombinant DNA
technology.
Many enzymes are routinely produced by genetically modified organisms
too. Given the overwhelming diversity of species, biomolecules and metabolic
pathways on this planet, genetic engineering can in principle be a very
powerful tool in creating environmentally friendlier alternatives for products
and processes that presently pollute the environment or exhaust its non-
renewable resources. Politics, economics and society will ultimately determine
which scientific possibilities will become reality.
Nowadays organisms can also be supplemented with additional genetic
properties for the biodegradation of specific pollutants if naturally occurring
organisms are not able to do that job properly or not quickly enough. By
combining different metabolic abilities in the same micro-organism, bottlenecks in environmental cleanup may be circumvented. Until now this has not been done on any significant scale, mainly because in most cases naturally occurring organisms which are able to clean up a polluted site can be found or selected for.
Examples have been found where soil bacteria have developed new
properties in response to the introduction of xenobiotics (that is, manmade
chemicals that are normally not found in nature). In some cases they even
appear to have acquired properties from other species. In the USA some
genetically modified bacteria have been approved for bioremediation purposes
but large scale applications have not yet been reported.
In Europe only controlled field tests have been authorised. Because genetic engineering can create new organisms that may never be produced by spontaneous or selection-driven evolution, concerns exist about the unpredictability of their possible interactions with the ecosystem.
Genetically modified organisms which are properly kept within the
confines of their approved production facilities are much less a concern than
genetically modified organisms which are meant to be released into the
environment like disease resistant plants or soil bacteria for bioremediation.
The possible ecological effects of the latter are even more difficult to
evaluate due to the fact that it is well known that soil bacteria frequently
exchange genetic material (also between species).
This together with the fact that we know little about the great majority of
soil inhabiting bacterial species, makes it almost impossible to predict the
fate of every DNA copy of a newly introduced genetic property in a soil
bacterium.
If the extra DNA is derived from another soil bacterium, it may on the
other hand be reasonable to argue that the genetically modified bacterium
might also have evolved spontaneously some day due to the frequent exchange
of genetic material in the soil.

LEGISLATION
Regulation to ensure safe application of novel or modified organisms in
the environment is important, not least to maintain public confidence. The
European Union has two Directives on the contained use of genetically
modified micro-organisms, and on the deliberate release of genetically
modified organisms into the environment. These have been implemented in
the national legislation of most EU Member States. They require that a detailed
experimental protocol, including assessment of potential risks, is approved
by competent authorities before a genetically modified organism is released
into the environment.
In some countries the nature, and sometimes even the site, of the release has to be published in the local press. After several years' experience using the legislation, the procedures involved are now being revised. Amendments to clarify and revise Directive 90/219/EEC were published in December 1998.
The aim of the European Commission is to maintain the EU's competitiveness globally, both in research and commercial applications, without compromising safety.

PUBLIC OPINION, DIALOGUE AND DEBATE


Although traditional biotechnology is already of great value to bioremediation, and modern biotechnology may enhance this even further, there are no recent data on what Europeans specifically think about environmental biotechnology. Generally speaking, Europeans tend to take
an “optimistic” view of the developments they expect from modern
biotechnology, according to the most recent European Commission public
opinion survey which was published in 1997.
Unfortunately this survey did not investigate the attitude of the public
towards environmental biotechnology. The only environmentally related
question in this survey was whether people believed that modern
biotechnology would substantially reduce environmental pollution, which
47 % did. Whether or not this is only wishful thinking remains to be
determined. Ultimately, hard proof expressed in the form of improved
environmental parameters will be needed for full acceptance of environmental
biotechnology.
Conferences, public debates, seminars and round table meetings have been
held to bring people from the public, government, environmental
organisations, science and industry together to discuss critical issues. These
lively debates do not always lead to consensus, but they can provide a fuller appreciation of all the aspects of a particular issue, facilitating a better understanding of the problems involved. A recent example is the workshop 'How can biotechnology benefit the environment?'. Public information aimed at advancing dialogue and debate is
provided by many organisations. A compilation of these can be found in the
handbook of information sources which has been published by and can be
ordered from the EFB Taskgroup on Public perceptions of Biotechnology.

Environmental biotechnology has a history extending back into the last century.
As the need is better appreciated to move towards less destructive patterns
of economic activity, while maintaining improvement of social conditions in
spite of increasing population, the role of biotechnology grows as a tool for
remediation and environmentally sensitive industry. Already, the technology
has been proven in a number of areas and future developments promise to
widen its scope. Some of the new techniques now under consideration make
use of genetically modified organisms designed to deal efficiently with
specific tasks.
As with all situations where there is to be a release of new technology into
the environment, concerns exist. There is a potential for biotechnology to
make a further major contribution to protection and remediation of the
environment. Hence biotechnology is well positioned to contribute to the
development of a more sustainable society. As we move into the next millennium this will become even more vitally important, as populations, urbanisation and industrialisation continue to climb.

More Sustainable Industrial Processes Through the use of Enzymes


The leather processing industry has introduced enzymes to replace harsh
chemicals traditionally used for cleaning the hide. In textile production,
enzymes have superseded chemicals for bleaching, including the “stone
washing” of jeans. Chlorine consumption by the pulp and paper industry
may soon also be reduced considerably by the use of enzymes. The grease
and protein digesting enzymes in washing powders significantly reduce the
quantity of detergents needed for a given washing effect.
They also mean that the washing temperature can be reduced. Lowering the temperature by 20°C saves more than a third of the energy used by the machine. Since in many Western European countries up to 5% of household energy consumption is used for washing, these molecules have made a significant contribution to energy conservation.
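A back-of-the-envelope check of these figures in Python: only the 5% share and the one-third saving come from the text, while the annual consumption figure is an assumption for illustration.

household_energy_kwh = 4000.0   # assumed annual household energy use (placeholder)
washing_share = 0.05            # "up to 5%" of household energy (from the text)
savings_fraction = 1.0 / 3.0    # "more than a third" saved at lower temperature

washing_kwh = household_energy_kwh * washing_share   # 200 kWh/yr under these assumptions
saved_kwh = washing_kwh * savings_fraction           # roughly 67 kWh/yr
print(f"Washing: {washing_kwh:.0f} kWh/yr, of which ~{saved_kwh:.0f} kWh/yr saved")
print(f"That is about {100 * washing_share * savings_fraction:.1f}% of total household energy")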

Biotechnological Solutions for Pollution


Pigs and chickens cannot utilize phosphate from phytate in their feed,
which therefore ends up in their manure. By adding the enzyme phytase to
their feed the amount of phosphate which is excreted by these animals can
be reduced by more than 30 %. In South Africa bacteria are used for the
isolation of gold from gold-ore. This so-called biomining saves an enormous
amount of smelting energy and generates much less waste. The chemical
production of indigo, the dye used for blue jeans, takes eight steps and requires very toxic chemicals as well as special protection measures for the process operators and the environment. The biotechnological production of indigo,
which uses a genetically modified bacterium containing the right enzymes,
takes only three steps, proceeds in water, uses simple raw materials like sugar
and salts and generates only indigo, carbon dioxide and biomass which is
biodegradable.
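To make the phytase figure quoted above concrete, here is the trivial arithmetic in Python; the baseline quantity is an invented placeholder, and only the 30% reduction comes from the text.

excreted_phosphate_kg = 100.0   # assumed baseline phosphate excretion (placeholder)
reduction = 0.30                # "more than 30%" reduction with phytase (from the text)

with_phytase_kg = excreted_phosphate_kg * (1.0 - reduction)
print(f"Phosphate in manure: {excreted_phosphate_kg:.0f} kg -> "
      f"about {with_phytase_kg:.0f} kg with phytase-supplemented feed")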

AGRICULTURAL AND
ENVIRONMENTAL BIOTECHNOLOGY
Biotechnology refers to the practical application of modern laboratory
techniques such as recombinant DNA. Although most biotechnology research
has been medical, more and more is being undertaken for agricultural and
environmental uses. The term biological control usually refers to solutions
found in nature that can replace synthetic chemicals in the environment.
Since many biological control processes are being improved through
biotechnology, it has become impractical to distinguish between the two.
With proper attention to risk management, both natural and laboratory-
enhanced “green technologies” have the potential to protect our wildlife
heritage and reduce our chemical contamination of the environment.

ENVIRONMENTAL MONITORING
Both heavy metals and persistent organic pollutants (POPs) are capable
of accumulating in plants and animals. As they move up the food chain,
these contaminants often increase in concentration and become more harmful
to wildlife. Many species of bacteria, blue-green algae, mosses, ferns, and
dicots are capable of absorbing POPs and heavy metals. By following the
concentration of contaminants in specific plants, changes in environmental
contamination can be monitored.
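The build-up described here is essentially compounding: if the concentration multiplies by some factor at each step up the food chain, a handful of trophic levels produces a large increase. The Python sketch below uses invented numbers (a base level of 0.1 ng/g and a factor of 5 per level) purely to show the shape of the effect.

base_conc = 0.1            # assumed POP concentration at the base of the chain (ng/g)
factor_per_level = 5.0     # assumed biomagnification factor per trophic level

conc = base_conc
for organism in ["algae", "zooplankton", "small fish", "predatory fish"]:
    conc *= factor_per_level   # concentration compounds at each trophic step
    print(f"{organism:>15}: {conc:7.1f} ng/g")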

BIOREMEDIATION AND WASTE MANAGEMENT


Many of the organisms useful for environmental monitoring can also be
used for the biological destruction, detoxification and harvesting of POPs
and heavy metals. The term for these biologically mediated clean-up functions
is bioremediation.
POPs and hydrocarbons can be destroyed by seeding soils with white rot
fungi that degrade the contaminants. Heavy metals like lead can be removed
by cultivating Indian mustard (Brassica juncea) on the contaminated soil after tying up the polluting element with the chelating agent EDTA. The
harvested mustard stems and leaves are disposed of as hazardous waste. Water
can be cleaned by a community of bacteria, blue-green algae, plants, and fish
housed in an infrastructure known as a Living Machine. The organisms
sequester heavy metals and break down various organic compounds that make
the water unsuitable for wildlife habitat and human consumption.

BIOSUBSTITUTION AND INTEGRATED PEST MANAGEMENT


Biosubstitution is the replacement of synthetic chemicals with biological
alternatives. For example, Bacillus thuringiensis (BT) is a soil bacterium that
produces toxins that are short-lived in the environment and are non-toxic to
people, wildlife, aquatic life and most beneficial insects. BT toxins are
incorporated into biopesticidal products for agriculture and home gardens,
and BT genes have been inserted into many food crops so they can produce
their own insecticidal toxins.
Plants (and animals) like BT corn are called genetically modified
organisms, or GMOs. Another example of biosubstitution is the application
of baculoviruses that infect and kill parasitic caterpillars. Genetic modification
of the baculoviruses holds great promise for increasing their usefulness as
natural pesticides.
Integrated pest management (IPM) is the application of environmentally
friendly cultivation practices that reduce the need for synthetic pesticides. In
IPM, pesticides are applied only when harmful stages of the pests emerge.
The cultivation of pest-resistant crop strains and the rotation of crop species
help avoid the buildup of parasite populations. Biosubstitution is the most
recent tool to be added to IPM for more sustainable agriculture.

REDUCING THE RISK OF NEW BIOTECHNOLOGY


All technologies can produce unanticipated negative repercussions.
Assessing the environmental safety of biotechnologies can be especially
difficult. New risks must be identified, quantified, and evaluated for their
potential impact. GMOs may compete or cross with unmodified varieties,
they may become weeds, or they may make pests hardier than ever by inducing
new resistance to naturally occurring pesticides.
Special consideration needs to be given to the release of GMOs in
“centers of origin,” as Mexico is for corn and China for soybeans. Complex
genetic interactions between indigenous varieties and their wild relatives
(such as corn and teosinte) may be altered by inadvertent crossing with GMOs. The successful use of biotechnology for agricultural and
environmental applications will require vigilance and on-going management
of the risks.

IMPACT ON THE ECOSYSTEM


Pollution affects all species in the contaminated ecosystem, not just the
large, conspicuous animals at the top of the food chain. When we use new
biotechnology to solve environmental problems, we should also consider its
potential impact on the entire ecosystem. With thoughtful use, biotechnology
can have many positive effects on the diversity of species and habitats.
Only by considering the ecosystem as a whole can we protect nature and
shrink the growing list of endangered and threatened species. For instance,
monarch butterflies overwinter in central Mexico. Today, deforestation and
excessive tourism destroy their habitat. In the future, toxic chemicals like pesticides from surrounding agriculture will threaten their existence (unless we harness the potential of biotechnology to reduce chemical contamination of the biosphere).

PRINCIPAL DIRECTIONS OF
BIOTECHNOLOGICAL DEVELOPMENT
In a recent review of agricultural technology, the Office of Technology
Assessment (OTA) of the US Congress defined biotechnology to include
'any technique that uses living organisms or processes to make or modify products, to improve plants or animals or to develop micro-organisms for specific uses - it focuses upon recombinant DNA and cell fusion technologies'.
Longworth concurs with this definition, with the significant addition of
tissue-culture techniques, an aspect of biotechnology which has great
economic, and therefore social and political, potential impact. Despite debates
about whether these or any other definitions are adequate, they do touch on
the main aspects of biotechnology which deserve to be considered.
The aspect of this new biotechnology which most captures the imagination
and stirs the greatest controversy is gene splicing or recombinant DNA
techniques, which inspire researchers to consider the possibilities of producing reproducible animals and plants markedly different from existing species, referred to as transgenic species. Already in higher animals and
plants recombinant DNA has produced transgenic forms which are being
commercially exploited.

Their agricultural significance is, however, so far limited, and commercial application is concentrated in highly profitable pharmaceutical and
horticultural markets. Genes have been introduced into several animal species
which alter their protein synthesis to enable transgenic sheep to produce
insulin in their milk, and rabbits to produce interferon. A completely different
application has originated in Denmark for salmon, where it has proved
possible to introduce germplasm which enables the salmon's physiology to
handle heavy metals, which are normally toxic, so opening up new locations
for farm fisheries.
Much current research is directed to conferring disease immunity on
animals, and holds out the prospect of widespread commercial application.
In plants one achievement has been the transfer of genetic resistance to
antibiotics in the petunia, and another has been the introduction of storage-
protein genes from French bean plants into tobacco plants.
As yet commercial progress with recombinant-DNA technology in plants
appears limited, but extensive opportunities beckon, particularly because plant
research is less restricted by the ethical and animal-welfare concerns which
apply to research on transgenic animals.
The most important commercial developments based on gene splicing
have so far occurred with genetically much simpler microorganisms, and it
is with these that the greatest short- to medium-run commercial potential lies.
Already genetically engineered micro-organisms are producing a variety of
hormones, vaccines, enzymes and other proteins. Important examples are
the production of insulin, vaccines for neo-natal diarrhoea in calves and piglets, and the bovine growth hormone BST, identical to that produced in cows, which can stimulate a 20 per cent increase in milk yield. BST is already
a source of problems for legislators in the European Community, and has
provoked strong reactions from the media and milk consumers.
As far as larger animals, and cattle in particular, are concerned, it is
developments in embryo transfer and in many processes for manipulating
reproduction which hold out the prospect of continuing increases in yields,
feed conversion efficiency and general economic efficiency. Already,
apparently over 1 per cent of dairy calves in the USA are from embryo
transplants, despite the high costs still associated with this procedure.
Widespread adoption of these sophisticated technologies would further
distance livestock farming from its traditional rural simplicity, and from the
natural mating of animals to produce offspring. It would place technical
demands upon the operators which favour large-scale company farms capable
of supporting a range of highly trained specialists. Longworth identifies new
techniques of tissue and cell culture as having 'the potential for enormous
advances in crop improvement in the next couple of decades'.
Those techniques 'can both increase the genetic diversity and greatly
increase selection efficiency', and they permit innumerable plants to be
reproduced asexually from single cells or small pieces of tissue. The particular
technique known as 'callus culture' has been used for many years to clone
highly valued horticultural plants such as orchids, and cloning of cuttings is
widely practised for tree crops such as tea and palm oil as well as by millions
of gardeners for garden plants.
According to Longworth 'cell culture' of single cells has unexpectedly,
and so far inexplicably, resulted in plants with different properties being
regenerated from the same clump of parent tissue. Sugar cane, maize and
potato plants regenerated in this way have been found which are resistant to
important pathogens. This application of cell culture with the capacity to
generate vast numbers of seedlings rapidly has the potential to simplify greatly
the hitherto laborious procedures of plant breeding and selection. However,
in terms of current and immediate commercial importance, it is through micro-
organisms that biotechnology has its greatest impact. Microbial fermentation
processes have been of great commercial significance for centuries, for
example in bread, wine and cheese production, and there is the prospect of
considerable development. Two recent examples of new processes indicate
the sorts of impact that such developments can have.
In the late 1960s genetically engineered bacteria were developed which
were able to digest corn starch to produce high-fructose corn syrup (HFCS)
and which left as a residue corn gluten, which is now an important protein
feed for livestock. HFCS has made considerable inroads in the United States
and in Japan. It has largely replaced sugar as a sweetener in Coca Cola as
well as many other food and drink products. This has helped depress sugar
prices to Third World growers.
In the European Community steps have been taken to prevent HFCS and
other new powerful sweeteners from being produced and from undermining
the market for domestically grown beet sugar, which is supported by the
Common Agricultural Policy. A second important application of microbial
fermentation has been the production of ethanol from sugar cane in Brazil
and from maize in the USA.
The commercial viability of these processes is critically dependent upon
the price of oil as the main non-renewable source of fuel, and, to date, massive
subsidies have been required to maintain the Brazilian and USA ethanol
programmes. Longworth states, however, that a new biotechnology,
Sucrotech, is being patented, which will not only reduce the cost of producing
ethanol from sugar cane, but will simultaneously produce fructose at a cost
which will be competitive with HFCS and may thereby reclaim part of the
sweetener market for sugar cane.
That such possibilities are in prospect indicates how volatile the future
might be; biotechnology has tilted the competitive balance from sugar cane
to maize, causing economic pressure and even disruption to sugar-cane-
dependent economies and may in future switch it back again. In the long run
the capacity to ferment fuels microbially from renewable agricultural
feedstocks points to an important long-term reorientation of agriculture if
non-renewable oil becomes uncompetitively expensive as a fuel for cars and
feedstock for certain chemicals.
It suggests that eventually there will be an increased emphasis on
agricultural production of industrial feedstocks at the same time as continually
increasing food output will be required to feed the expanding world
population. All this will require considerable increases in agricultural
productivity, to which biotechnology will increasingly contribute, but it will
at the same time impose great strains on the natural environment and the
structure of agriculture.

PRIVATE-SECTOR CONTROL OF BIOTECHNOLOGY DEVELOPMENT
It has been the role of the public sector to undertake research and
development of agricultural technology as a public service to firms which
might profit from translating that R & D into a commercial product or process,
to farmers who might profit from adoption of the technology, and perhaps
most importantly to consumers at home and abroad who benefited from the
lower prices resulting from greater abundance. This was particularly true of
the phase of change dominated by improvements in plant and animal breeding,
the 'old' biotechnology.
While the public sector had a role in basic research for chemical and
mechanical technologies, the benefits of research expenditure in these areas
were easier to capture by private companies investing in R & D, so that
increasingly the public sector has taken a smaller role in these areas although
maintaining a strong regulatory role in regard to agricultural chemicals in
particular.

Where it is impossible to prevent others from escaping payment for the
research costs, either because the product is easily copied (seeds which can
be regenerated by farmers or other firms) or because proposals for new
methods can be readily implemented, private firms are understandably
unwilling to invest. Machines, insecticides, fungicides and other manufactured
inputs do lend themselves more readily to private exploitation. Nevertheless
the returns to R & D in these products do depend upon the degree of difficulty
potential competitors would have in copying the product. In some cases there
are inherent technical difficulties in copying the process, or it would be
prohibitively expensive, but in other cases it is the ability to obtain patent
protection which creates legal barriers to potential competitors' ability to
become 'free-riders' and which protects incentives to private investment in R
& D.
Traditionally, however, it has been impossible for plant and animal breeders
to obtain patent rights for their products, which is one reason why public-
sector R & D has remained so important in this area. Under the European Patent Convention (EPC) of 1973, for example, plant varieties have been judged unable to satisfy one of the key criteria for a patent, namely proof of 'an inventive step'.
The application of standard breeding practices to generate new varieties
by crossing existing plants or animal strains has not been deemed to be
invention. Thus under the EPC one set of exclusions from patentability is 'plant or animal varieties or essentially biological processes for the production of plants or animals; this provision shall not apply to microbiological processes or the products thereof'. In the absence of patent
rights plant-breeding firms in particular have worked hard to obtain other
means of protection and royalties for their products.
The history of the development of Plant Breeders' Rights (PBR) is presented
by Mooney, and is of particular interest because of the concerns he expresses
about the consequences of allowing the basic genetic stock, which underpins
agriculture and hence our whole society, from becoming private rather than
public property. While Mooney's concerns are expressed in relation to 'old'
biotechnology varieties of plants, they are of particular importance with
respect to the products of the 'new' biotechnology.
For bio-engineered products are capable of meeting the inventive-step
criterion of patentability, and both micro-organisms and transgenic animals
have now been patented in the USA. Before examining developments in the
seed industry it is worth touching upon the implications of changes in research
policy which stress increasing reliance upon private R & D to develop and
exploit the new biotechnology and other technologies. In the UK the government has decided that it should withdraw from funding what it terms 'near-market
research', that is, research beyond the basic phase and which is preparation
for commercial exploitation. This policy is based in part upon the arguments
that industry should be investing more heavily in R & D, and that research
strategies at the near-market stage should be driven by assessments of likely
commercial success which can best be made by the firms involved, and will
therefore lead to greater efficiency in allocating research funds. This has led
to the closure of a number of government-financed agricultural research
institutes, the scaling down of others and the sale of the National Seeds
Organisation by auction to Unilever; Unilever won against competition with
BP and other major public companies.
This deliberate attempt to switch an increasing proportion of agricultural
R & D expenditure from the public to the private sector carries with it a
number of risks. In the first place there is a controversy about whether there
is under-investment in R & D so that the returns to extra expenditure are
high, or whether the converse is the case.
If there is under-investment, then withdrawal of public support for applied
research will exacerbate this, as the private sector will only undertake R & D
expenditure on those products and processes from which exclusive benefits
can be captured by the investor. Thus in relation to biotechnology in the
USA, where again public support for applied biotechnology research is
relatively small, Stallman and Schmid (1987) argue that there will be emphasis
on technologies which are applied in factory conditions where secrecy and
control can be maintained.
For technologies which cannot be confined to factories, such as seeds,
Stallman and Schmid state that 'Firms are also considering mechanisms which
"scramble" the genome of a plant in the second generation', in order to prevent
farmers from reproducing seed with the enhanced, engineered characteristics.
Clearly such actions are designed to frustrate the maximum spread of benefits
from the technology and to maximize private profit for companies investing
in research. Another facet of commercially orientated biotechnology research
is that it will aim at the most important crops and livestock products, those
produced by the largest farming units, and those which are most heavily
subsidized.
In the latter case this will worsen the budgetary problems of adjusting
agricultural policy in OECD countries. The probable neglect of minor crops,
difficult habitats and small farmers means that the public sector will have a
defined role in agricultural technology research, but one in which it will be
relegated to the second division and where it is unlikely to prove successful
in terms of the commercial yardstick of rates of return which is increasingly
emphasised by public-research policy.

REGULATION OF
AGRICULTURAL BIOTECHNOLOGY
For traditionally bred plants, regulators rely on plant breeders to conduct
appropriate safety testing and to be the first line of defence against genetic
alterations that might prove dangerous. The same should be true for
agricultural products developed using biotechnology. There is no reason for
regulators to treat biotech plants any differently than traditionally bred plants,
particularly given the fact that biotechnology provides greater control over
gene manipulation.
Despite fears about gene manipulation, traditional crossbreeding has altered
plant genes and improved the human diet for the past 30 years. In fact, people
worldwide safely consume these plants every day. Examples include wheat,
corn, potatoes, tomatoes, and countless other staples of the American diet. In
1984, the biotechnology industry began experimenting with “gene-spliced
plants” — the more advanced approach to gene manipulation that we now
call “biotechnology.”
USDA regulates the release of all genetically engineered agricultural plants
under statutes giving USDA’s Animal and Plant Health Inspection Service
(APHIS) the authority to regulate plants that may be or may become weeds
or other nuisances — what the statutes call “plant pests.” Although the rules
apply in a general sense to novel or exotic varieties of both gene-spliced and
conventional plants, APHIS typically only requires field testing of
conventional plants that are new to a particular U.S. ecosystem (transplanted
from another continent, for example). However, all genetically engineered
agricultural plants face a higher regulatory bar — making it more difficult to
expand the use of biotechnology.
Genetically engineered crops must be field tested under APHIS
regulations prior to commercialisation. For example, a new variety of corn
produced with conventional hybridisation requires no government-
mandated field testing, but all new varieties of genetically engineered corn
do, even though there is no logical reason for the regulatory disparity. For
most genetically engineered plants, APHIS requires the company producing
the plants to submit notice detailing the gene or genes that have been inserted,
where the plant will be tested, and other relevant characteristics of the plant
prior to receiving permission to conduct the field trials. Once the company
completes field testing, APHIS reviews the results and makes a determination
on whether or not the product should be “deregulated” and can be released
into the market.

REGULATORY SCHEME
At the time, the White House Office of Science and Technology Policy
began crafting a framework wherein existing federal agencies would regulate
genetically engineered organisms on the basis of their characteristics, not
the method of production — thus wisely deferring to the scientific
community’s judgement that regulation ought to address the products, not
the process of biotechnology.
Engineered organisms would not require extra scrutiny simply because
genetic engineering produced them (the process). Instead, they would be
subject to heightened scrutiny only if the individual organisms expressed
characteristics that posed some conceptually heightened risk (the product).
The federal government divided regulatory jurisdiction among agencies
already involved in agricultural, food, and environmental regulation.
These include the United States Department of Agriculture (USDA), the
Environmental Protection Agency (EPA), and the Food and Drug
Administration (FDA). While each of these agencies considers the
characteristics of individual products in their regulation, only FDA followed
the general scientific thinking that genetically engineered and non-genetically
engineered products should be regulated similarly. Both USDA and EPA
automatically subject all genetically engineered plants as a class to premarket
approval requirements not ordinarily applied to conventionally bred plants.

ENVIRONMENTAL PROTECTION AGENCY


EPA regulates plants that are modified to produce substances that act
like pesticides: that is, substances used by a plant to protect itself from
pests, such as insects, viruses, and fungi. EPA’s proposed rule for these
“plant-incorporated protectants” is not yet finalized. In the interim, however,
EPA has regulated such plants by applying its proposed guidelines, which
are functionally similar to rules for the registration of synthetic chemical
pesticides. Again, biotech products face higher regulatory hurdles, even
though plant-incorporated protectants developed through conventional
breeding are exempt from these requirements. FDA is responsible for
ensuring that food items, including foods derived from genetically modified
plants, are safe to eat. Following the general regulatory framework that
emphasises product regulation rather than process regulation, FDA rightly does not treat foods derived from genetically engineered plants as inherently unsafe. Food producers are not required to seek premarket approval from
FDA unless there is a substantive reason to believe that the novel trait (or
traits) in the food poses a safety question.
The initial determination of safety is left to the producer, but FDA has
encouraged producers to consult with agency scientists prior to marketing a
food produced with biotechnology to ensure that the appropriate determination
is made. Recently, FDA published a proposed rule that would require
producers to notify the agency at least 120 days before marketing, which
may be a signal that FDA could abandon its more reasonable approach in the
future.
In addition, FDA does not require labelling of foods derived from
biotechnology unless the genetic insertions so alter the food that the common
name no longer describes it adequately. Examples of this phenomenon would
include such alterations of the food product that would raise a safety issue or
change the product’s nutritional composition or its storage or preparation
characteristics.
While U.S. consumers do not appear to be strongly opposed to biotech
foods (they, in fact, seem rather indifferent), a strong anti-biotechnology
movement has arisen in several European countries over the last few years.
The European Union (EU) has established strong restrictions on the
commercial planting of genetically engineered plants, and European food
processors and retailers are reluctant to import harvested agricultural products
derived from biotechnology — thus jeopardising the marketability of U.S.
commodity grain exports. And the EU is now negotiating for strong
restrictions on agricultural biotech products in important international
agreements governing food safety and environmental protection. Very strong
restrictions were included in the Biosafety Protocol, finalized in January 2000.
Perhaps more important, though, are the Codex Alimentarius Commission
standards for food safety. Without convincing scientific evidence that
genetically engineered crop plants pose a heightened environmental or human
health risk, restrictions on agricultural biotech imports could be challenged
under the General Agreement on Tariffs and Trade and the World Trade
Organisation. Thus the EU has resorted to justifying its decisions on the
basis of a risk-management philosophy known as the “precautionary
principle,” and the EU advocates inclusion of the precautionary principle in
international environmental and food safety agreements (such as the Codex
Alimentarius), as well as within GATT itself. Its inclusion would give nations
greater latitude in restricting imports of U.S. agricultural products.
No single definition exists for the precautionary principle, but its general
meaning is that when an activity raises threats of harm to human health or
the environment, regulatory measures should be taken to prevent or restrict
the activity even if the risks have not been demonstrated scientifically.
Although the EU asserts that the precautionary principle is an unbiased risk-
management philosophy, critics have noted that its lack of definition and
evidentiary standards makes it all too easy to abuse for the purpose of masking
trade protectionism, and that its very approach to risk management is
inherently flawed and may, in fact, increase net risk. Nevertheless, the
precautionary principle has already been included in several international
environmental treaties, including the Convention on Biological Diversity
and the Biosafety Protocol.

LABELLING ISSUES
Some activists, however, argue that the government should mandate the
labelling of all genetically engineered foods. They assert that consumers have
a “right to know” how their foods have been altered, and that a mandatory
label would best allow consumers to choose between genetically engineered
and conventional foods. Biotechnology advocates have argued against
mandatory labelling because such requirements raise food costs — something that mostly harms lower-income Americans and people on fixed budgets.
Perhaps more important, while biotech products are not substantially
different from other products, special labels would likely make consumers
think these products were more dangerous. Hence, rather than serving
educational or “right to know” purposes, such labels promise to simply
confuse consumers.
A government-mandated label on all genetically engineered foods also
would raise important First Amendment free speech issues. In 1996, the U.S.
Court of Appeals, in International Dairy Foods Association et al. v. Amestoy,
ruled unconstitutional a Vermont statute requiring the labelling of dairy
products derived from cows treated with a genetically engineered growth
hormone, noting that food labelling cannot be mandated simply because some
people would like to have the information. “Absent … some indication that
this information bears on a reasonable concern for human health or safety or
some other sufficiently substantial governmental concern, the manufacturers
cannot be compelled to disclose it.”
In other words, to be constitutional, labelling mandates must be based in
science and confined to requiring disclosure of information that is relevant to
health or nutrition. Furthermore, consumers need not rely on mandatory labelling
of biotech foods to truly have a choice. Real-world examples show that market
forces are fully capable of supplying information about process attributes
(including kosher and organic production standards) that consumers truly
demand. The same can be said about non-biotech foods, and the FDA
recently published proposed guidelines to assist producers in voluntarily
labelling both genetically engineered and non-genetically engineered foods.
Additionally, the USDA’s newly published rule for organic certification
necessarily excludes biotech products from organic food production.
Consequently, consumers wishing to purchase non-biotech foods need
only look for certified organic products. Even as underdeveloped nations
clamor for biotechnology applications, and as countries like China continue
to experiment with and use agricultural biotechnology, opponents of
agricultural biotechnology in the West, particularly Europe, attack it as an
unnatural process that will destroy the world, not better it. They argue that
biotechnology should be heavily regulated, if not banned. Already, however,
genetically engineered plants are subject to strict regulatory oversight that is
equal to or greater than that advocated by the vast majority of scientific
specialists.
Additional regulation will slow down research and development of
genetically engineered crops, keep beneficial products off the market, and raise
the cost of products that do make it to consumers. Furthermore, the inclusion
of similar restrictions — or inclusion of the precautionary principle— in
international agreements will greatly impact the international trade of
agricultural goods and delay their introduction into the marketplace. Each of
these problems could prevent this technology’s benefits from being introduced
to industrialized nations and, more important, the developing world.

US FEDERAL REGULATIONS FOR
AGRICULTURAL BIOTECHNOLOGY
Regulatory systems for the products of agricultural biotechnology have
been in existence since the mid- to late 1980s within the United States. The
regulatory approach to the safety evaluation of plants developed using rDNA
technology has evolved in the best interests of research scientists, industry
and the general public. The agricultural products of rDNA technology, such
as GM foods and crops, may require approvals from up to three regulatory agencies: the US FDA, the USDA and the US EPA, depending upon the
characteristics exhibited by the GM plant, its proposed use and introduced
traits.
The same standards of safety are applied to all products regardless of the
technology used in their development. The US FDA is responsible for ensuring
the human safety of all new foods and food components, including products
developed using rDNA technology, under the Federal Food, Drug, and
Cosmetic Act (FFDCA). The USDA evaluates the potential of a GM plant to
become a plant pest following its environmental introduction under the
Federal Plant Pest Act (FPPA).
The US EPA evaluates pesticides, including plant systems modified to express pesticidal substances (e.g. for insect protection or virus resistance), under the Federal
Insecticide, Fungicide, and Rodenticide Act (FIFRA). As a result, the
expression of an insecticidal protein in a food crop would undergo review by
the USDA, US EPA and US FDA; a GM food crop exhibiting a modified oil
content would be evaluated by the USDA and US FDA; and a non-food
horticultural plant developed using rDNA technology for any other purpose
(e.g. flower colour) would be subject to review by the USDA alone.
In most instances, obtaining all necessary approvals for the
commercialisation of an agricultural crop developed using rDNA technology
takes a decade or more. However, the exact amount of time required will
depend on the need to confirm performance, to evaluate characteristics of
the food, environmental effects, and to produce the required amount of
seed before the product can be distributed and commercially grown by
farmers. Up to five years of field trials (5-10 generations of plants) are
required for the developer of a new plant variety to collect sufficient data
to meet the reporting requirements of the USDA.
An additional five months to two years may be required for the US FDA,
USDA and/or US EPA to complete all necessary product consultations,
reviews and approvals. Approval for the first commercial planting of a GM
food crop was not issued until 1995. Since 1995, more than 40 new agricultural
crops developed using rDNA technology have received approval for
commercial planting within the United States. In 1999, approximately 72
per cent of the total 39.9 million hectares (more than 98 million acres) of
GM crops grown worldwide were planted in the United States. Herbicide-tolerant soybeans (54 per cent), Bt corn (19 per cent) and herbicide-tolerant canola (9 per cent) accounted for approximately 82 per cent of the GM plants
cultivated. With an increasing number of agricultural biotechnology products
reaching later stages of commercial development, it is anticipated that the
overall area planted with GM crops will continue to rise.
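The quoted percentages can be cross-checked with simple arithmetic, as in the Python sketch below; all inputs are the figures given above, so the outputs carry the same degree of approximation.

global_gm_mha = 39.9                       # million hectares of GM crops worldwide, 1999
us_share = 0.72                            # share planted in the United States

us_gm_mha = global_gm_mha * us_share       # roughly 28.7 million hectares
crop_shares = {"HT soybean": 0.54, "Bt corn": 0.19, "HT canola": 0.09}

print(f"US GM area, 1999: about {us_gm_mha:.1f} million ha")
print(f"Top three crop types: {100 * sum(crop_shares.values()):.0f}% of GM plantings")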
The regulatory approach to evaluating the human health and environmental
safety of GM crops within the United States is best described as a science-
based, case-by-case assessment of hazards and risks. This approach has
provided the flexibility required to reduce the regulatory burden placed on
products that have been determined to be of low risk or concern. All agencies
involved in the regulation of plants developed using rDNA continue to
implement and develop policies based on recommendations made within
the Coordinated Framework. In the future, the US FDA, USDA and the US
EPA will be dedicating additional resources towards communicating how
GM food and food components are regulated within the United States, and
how these regulations function to be protective of both human health and the
environment.

FOOD AND DRUG ADMINISTRATION (US FDA)

AUTHORITY
The US FDA is responsible for ensuring the safety and wholesomeness of
all food and food components, including the products of rDNA technology,
under the FFDCA. The US FDA has the authority for the immediate removal
of any product from the market that poses potential risk to public health or
that is being sold without all necessary regulatory approvals. As a result, a
legal burden is placed on developers and food manufacturers to ensure the
commodities utilised and foods available to consumers are safe and in
compliance with all legal requirements of the FFDCA.
In order to understand the regulatory approach followed by the US FDA
in the safety evaluation of GM crops, it is useful to consider food and food
safety from a historical context. People had been consuming foods derived
from agricultural crops for many years prior to the existence of any food
laws or regulations within the United States.
Based on this experience, agricultural crops have been accepted as being
safe for consumption as food, without additional testing to demonstrate
their safety. As long as the new crop variety has exhibited similar agronomic
properties, and an appropriate taste and appearance, it has been considered
safe to consume. As a result, most foods consumed today, in particular
whole foods (i.e. fruits, grains and vegetables) and conventional foods,
have not been subject to any kind of premarket review or approval by the
US FDA.
Nevertheless, food scientists have a good understanding that many of the
commonly consumed agricultural crops contain natural toxicants (e.g.
tomatine in tomatoes, solanine in potatoes, cucurbitacin in cucumber,
psoralens in celery, etc.).
As a result, new plant varieties may be subject to routine chemical
analyses to ensure that none of these substances is present at potentially
harmful levels. This type of general approach has been used in assessing
the safety of thousands of new plant varieties that have been developed
over a number of decades of crop breeding without compromising the safety
of whole foods.

ROLE
Consistent with recommendations within the Coordinated Framework,
the US FDA considered existing provisions of the FFDCA to be sufficient
for the regulation of foods and food components developed using rDNA
technology. It was concluded that the scientific and regulatory issues posed
by the products of rDNA technology were not significantly different from
those posed by conventional products.
As a result, GM foods and food components have been subject to the
same standards of safety as already exist for the regulation of other foods
and food components under the FFDCA. In order to better communicate
interpretations of existing provisions of the FFDCA as they relate to the
safety evaluation of foods derived from new plant varieties, including
the products of rDNA technology, the US FDA released a policy statement
in 1992 entitled 'Statement of policy: foods derived from new plant
varieties'.
The US FDA has considered the use of genetic modification (i.e. rDNA
technology) in the development of new plant varieties to represent a
continuum of conventional plant breeding practices (e.g. mutagenesis,
hybridisation, protoplast fusion, etc.). As a result, all new plant varieties, not just those developed using rDNA technology, have been evaluated based on an objective analysis of the characteristics of the food or its components, and not on the method of production.

DEFINITION AND SCOPE OF BIOENGINEERED FOODS


The recently proposed rule of the US FDA concerns 'bioengineered foods'
which have been defined as 'foods derived from plant varieties that are
developed using in vitro manipulations of DNA (generally referred to as
rDNA technology)'. As a result, the proposed rule has a much narrower focus
than the 1992 US FDA Statement of Policy.
The US FDA has explained the need for a change in emphasis based on
their expectations that many of the new plant varieties exhibit a greater
potential to 'contain substances that are significantly different from, or that
are present in food at a significantly different level than before'. As a result, the substances present in foods and food components derived from new plants developed using rDNA technology are less likely to be considered GRAS (generally recognised as safe) and will therefore require pre-market approval from the US FDA.

SAFETY AND NUTRITIONAL EVALUATION


The primary objective of the safety and nutritional evaluation is to demonstrate that the food derived from a new plant variety is as safe and nutritious as foods already consumed as part of the diet. For new plant
varieties, including those developed using rDNA technology, a science-based
approach is used to focus the evaluation on the demonstrated characteristics
of the food or food component.
The evaluation of a GM food or food component typically involves
reviewing information or data on any newly introduced substances, the known
levels of toxicants, as well as the nutritional composition of the plant following
modification.
Substances that raise safety concerns (e.g. toxicants, allergens) would be
subject to more extensive evaluation, since both intended and unintended
changes may affect the levels of toxicants and nutrients in a food following
the modification. Guidance for performing a safety and nutritional evaluation
was provided in the 1992 US FDA Statement of Policy, in a series of flow
charts and text that cover:
• The crop that has been modified;
• Source(s) of the introduced genetic material;
• New substances intentionally added to the food as a result of the
genetic modification (e.g. proteins, but also fatty acids, and
carbohydrates).
Documentation required to support the evaluation typically includes:
• The purpose or intended technical effect of the modification on the
plant, together with a description of the various applications or uses;
• A molecular characterisation of the modification, including the
identities, sources and functions of introduced genetic material;
• Information on the expressed protein products encoded by introduced
genes;
• Information relating to the known or suspected allergenicity and
toxicity of any expressed gene products;
• For foods known to cause allergy, information on whether the
endogenous allergens have been altered by the genetic modification;
• Information on the compositional and nutritional characteristics of
the foods, including anti-nutrients;
• In some instances, comparative results of feeding studies involving
the foods derived from plants modified using rDNA technology and
the non-modified counterpart.
In performing its evaluation, the US FDA is particularly interested in
identifying inherent toxicants and known or potential allergens, assessing
the concentration and bioavailability of essential nutrients, and evaluating
the safety and nutritional value of any newly introduced proteins and the
identity, composition and nutritional value of modified carbohydrates, fats
and oils.
If additional questions of safety remain following this evaluation, further
toxicological studies may need to be performed. It is recognised that absolute
assurance of the safety of any food does not exist. As a result, the goal of the
safety evaluation is to establish a reasonable certainty of no harm under
anticipated conditions of consumption.
With this in mind, experience with the existing food supply has provided
the basis for evaluating the safety of new food or food components. Both the
Food Advisory Committee and the Center for Veterinary Medicine have been
extensively involved in the development of approaches for the safety and
nutritional evaluation of foods and food components derived from new plant
varieties, including those developed using rDNA technology.

PRODUCT CHARACTERISATION
Product characterisation takes into consideration information relating to
the modified food crop, the introduced genetic material and its expression
product, and acceptable levels of inherent plant toxicants and nutrients. All
characteristics of the gene insert must be known, including the source(s),
size, number of insertion sites, promoter regions, and marker sequences. It
must be established that the transferred genetic material does not come from
a pathogenic source, a known source of allergens, or a known toxicant-
producing source. The introduced genetic material should be well
characterised to ensure that the introduced gene sequences do not encode
harmful substances and are stably inserted within the plant genome to
minimise any
potential opportunity for undesired genetic rearrangement. Analytical data
are required to evaluate the nutritional composition, the levels of any known
toxicants, anti-nutritional and allergenic substances, and the safety-in-use of
antibiotic resistance marker genes.
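By way of illustration only, the insert characteristics listed above can be
collected in a simple record; the field names and the example values below
are hypothetical and do not represent an official US FDA data format.

    # Illustrative record of the gene-insert characteristics discussed above.
    # Field names and example values are hypothetical, not a US FDA format.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class GeneInsert:
        source_organism: str   # must not be pathogenic, allergenic or toxicant-producing
        size_bp: int           # size of the inserted sequence, in base pairs
        insertion_sites: int   # number of insertion sites in the plant genome
        promoter: str          # promoter region driving expression
        marker_genes: List[str] = field(default_factory=list)  # selectable markers used
        stably_integrated: bool = False  # evidence of stable insertion

    insert = GeneInsert(source_organism="Bacillus thuringiensis",  # hypothetical example
                        size_bp=3500, insertion_sites=1, promoter="CaMV 35S",
                        marker_genes=["nptII"], stably_integrated=True)
    print(insert)
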
Any new substances introduced into crops through rDNA technology (e.g.
proteins, fatty acids, carbohydrates) will be subject to pre-market review as
food additives by the US FDA, unless substantially similar to substances
already safely consumed as part of foods, or considered GRAS. To
date, substances that have been added to foods through rDNA technology
have been previously consumed or have been determined to be substantially
similar to substances already consumed as a part of the diet.
As such, introduced substances have been considered exempt from the
requirement for pre-market approval as food additives by the US FDA. A
more rigorous safety evaluation of a GM crop is warranted if the introduced
gene sequence(s) has not been fully characterised, the nutritional composition
has been significantly altered, antibiotic resistance marker genes have been
used during its development, or if an allergenic protein or toxicant has been
detected at levels higher than what is typically observed in edible varieties of
the same crop species.
In any event, determinations as to the safety of substances that have been
introduced into new plant varieties through rDNA technology are made on a
case-by-case basis. Although an evaluation of the introduced gene sequences
and expression product(s) provides assurance as to their safety, further studies
may be required to predict whether unexpected effects may result following
their interaction with other genes within the plant.
In addition, the product characterisation of a GM plant involves assessing
sequence homology to known toxicants and allergens, thermal and digestive
stability, and if required, the results of both in vitro and in vivo assays to
demonstrate lack of toxicity.

COMPOSITIONAL ANALYSIS
The results of field trials performed over several years serve to characterise
the phenotypic and agronomic characteristics exhibited by the plant (e.g.
height, colour, leaf orientation, susceptibility to disease, root strength, vigour,
fruit or grain size, yield, etc.), as well as to provide the materials required for
the compositional analysis. Any anomalies in the phenotypic or agronomic
characteristics exhibited by a plant may result in a requirement for additional
information. Protein, fat, fibre, starch, amino acid, fatty acid, ash and sugar
levels are determined, as well as the levels of anti-nutrients, natural toxicants
or known allergens.
Studies of the nutritional composition are performed to determine whether
the levels of any key nutrients, vitamins or minerals have been altered as a
result of the genetic modification. Based on the results of these studies, a
determination is made as to whether the phenotypic and agronomic
characteristics of a GM crop or the concentrations of inherent constituents
fall within ranges typical of its conventional counterpart.
If inserting a new gene causes no change in any of the assessed parameters,
the US FDA can conclude with reasonable assurance that the GM crop is as
safe as the conventional crop. If the levels of essential nutrients or inherent
toxicants are found to be significantly different in the GM crop, the US FDA
may recommend additional action prior to commercialisation, such as
obtaining food additive status, or the use of specific labels to alert consumers
of an altered nutritional content, etc.
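As a rough sketch of the comparison just described, measured analyte levels
in a GM line can be checked against the ranges typical of the conventional
counterpart; all names, values and ranges below are hypothetical, and real
assessments rely on curated crop composition data.

    # Illustrative only: flag analytes in a GM line whose measured levels fall
    # outside the range typical of the conventional counterpart. All names,
    # values and ranges are hypothetical.
    CONVENTIONAL_RANGES = {      # analyte: (low, high), g per 100 g dry weight
        "protein":   (35.0, 43.0),
        "total_fat": (16.0, 22.0),
        "fibre":     (3.0, 6.0),
    }

    def out_of_range(measured):
        """Return analytes whose measured level lies outside the typical range."""
        flags = []
        for analyte, level in measured.items():
            low, high = CONVENTIONAL_RANGES[analyte]
            if not low <= level <= high:
                flags.append((analyte, level, (low, high)))
        return flags

    gm_line = {"protein": 39.1, "total_fat": 24.5, "fibre": 4.2}  # hypothetical assays
    for analyte, level, rng in out_of_range(gm_line):
        print(f"{analyte}: {level} outside typical range {rng}; further review needed")
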

ALLERGENICITY
In consultation with scientific experts in the areas of food safety, food
allergy, immunology, biotechnology and diagnostics, the US FDA published
guidelines for assessing the allergenicity of GM foods or food components
in 1994. The approach to assessment is multi-faceted, incorporating data
regarding the origin of the genetic material, and the biochemical,
immunological and physicochemical properties of the expressed protein.
The overall assessment relies on the fact that all known food allergens
are proteins and that, notwithstanding the many properties shared by
allergenic and non-allergenic food proteins, food allergens tend to exhibit a
number of common characteristics: they have a molecular weight of over
10 000 Da; they represent more than 1 per cent of the total protein content
of the food; they demonstrate resistance to heat, acid treatment, proteolysis
and digestion; and they are recognised by IgE.
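The four properties above lend themselves to a simple weight-of-evidence
tally. The following is a minimal sketch, not a validated allergenicity assay;
the thresholds come from the text, while the field names and the example
protein are invented.

    # Minimal weight-of-evidence tally using the four properties listed above.
    # Thresholds come from the text; field names and the example protein are
    # invented. This is not a validated allergenicity test.
    from dataclasses import dataclass

    @dataclass
    class ProteinProfile:
        molecular_weight_da: float         # e.g. from mass spectrometry
        percent_of_total_protein: float    # expression level in the food
        resists_heat_acid_digestion: bool  # e.g. simulated gastric fluid assay
        ige_reactive: bool                 # immunoblot against patient sera

    def allergen_like_traits(p):
        """Count how many of the four allergen-associated traits are present."""
        return sum([p.molecular_weight_da > 10_000,
                    p.percent_of_total_protein > 1.0,
                    p.resists_heat_acid_digestion,
                    p.ige_reactive])

    novel = ProteinProfile(28_500, 0.02, False, False)  # hypothetical introduced protein
    print(allergen_like_traits(novel), "of 4 allergen-associated traits present")
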
For gene sequences derived from known allergenic sources (e.g. peanuts),
the developers of GM plants are expected to demonstrate that allergenic proteins
have not been introduced into the food. For assessment purposes, it is assumed
that any genetic material derived from a known allergenic source will encode
for an allergen. To demonstrate otherwise, the amino acid sequence of an
expressed protein must be compared with that of known allergens using protein
sequence databases. Furthermore, in vitro and/or in vivo immunologic analyses
using the sera of allergic patients sensitive to the source of the genetic material
may need to be performed to determine whether or not a potentially allergenic
protein is being expressed in the GM food. Some GM foods may be modified
to express genes from a source that is not known to be allergenic when
consumed.
Under these circumstances, the US FDA follows a similar decision tree-
based approach to determining the allergenic potential of the expression
product. In assessing these proteins, should any amino acid sequence exhibit
homology with a known allergen, the expressed protein would then be
evaluated in immunologic tests using the sera of patients known to be allergic
to the identified homologous protein. Regardless of the origin of the genetic
material, physicochemical studies are performed in vitro to provide
information concerning the expected stability of the expressed protein.
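As an illustration of the sequence comparison step, a naive screen can look
for short stretches of identity between the expressed protein and known
allergens. The six-residue window is one criterion commonly cited in FAO/
WHO-style assessments; the sequences below are invented, and real screening
uses curated allergen databases.

    # Naive illustration of screening an expressed protein for short stretches
    # of identity with known allergen sequences. The six-residue window is one
    # criterion commonly cited in FAO/WHO-style assessments; the sequences
    # below are invented, and real screening uses curated allergen databases.
    KNOWN_ALLERGENS = {
        "hypothetical_allergen_1": "MKLVAAGITQEWRRNSTPLVDD",
        "hypothetical_allergen_2": "GGSEQFILKRATPNNWQQCY",
    }

    def shared_peptides(candidate, window=6):
        """Yield (allergen, peptide) for every identical stretch of `window` residues."""
        for name, seq in KNOWN_ALLERGENS.items():
            for i in range(len(candidate) - window + 1):
                peptide = candidate[i:i + window]
                if peptide in seq:
                    yield name, peptide

    novel_protein = "TTRAGITQEWRKKLM"  # invented sequence for the expressed protein
    for allergen, peptide in shared_peptides(novel_protein):
        print(f"shares {peptide} with {allergen}; immunologic testing indicated")
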
All known food allergens tend to be resistant to digestive degradation, as
demonstrated in simulated gastric fluid models, or to decomposition under
conditions of food processing. In making a determination regarding a GM
food, it is the totality of the biochemical, immunological and physicochemical
properties of the introduced protein that provides guidance as to the allergenic
potential of such a protein being expressed in food. The level of protein
expressed, produced and consumed as a part of the diet is also a primary
indicator of the allergenic potential, since nearly all food allergens are known
to be major proteins in their respective foods.
If the results of any of these studies suggest an allergenic potential, the
US FDA may recommend further scientific evaluation, require special
labelling to alert sensitive consumers, or alternatively caution the developer
about proceeding with the development of a particular GM food. Most
recently, a joint FAO/WHO Expert Consultation on Allergenicity of Foods
Derived from Biotechnology recommended a revised decision tree.
This decision-tree strategy was modified from the previous FAO/WHO
version to include a revised definition of sequence homology for gene product
comparison; a greater emphasis on serum testing, even with gene products
without homology to known allergens and not derived from an allergenic
source; and animal models to assess potential allergenicity, despite
acknowledgement by the Expert Consultation that these models are currently
under development and at present, not predictive of food allergies in humans.
More recently, the ad-hoc Open-Ended Working Group on Allergenicity
established by the ad-hoc Intergovernmental Codex Task Force on Foods
Derived from Biotechnology, considered the FAO/WHO strategy in drafting
an approach to assessing the potential allergenicity of foods derived from
rDNA plants.
The Codex Working Group recognised the absence of a definitive
predictive test for allergenicity in humans to a newly expressed protein and
recommended an integrated, stepwise assessment strategy. The strategy
recommended by the Working Group, and accepted by the Codex Task Force
for inclusion in the draft forwarded for final adoption by the Codex
Alimentarius Commission, is consistent with that of FAO/WHO. However,
the Working Group suggested that some of the modifications included in
FAO/WHO could contribute to the overall weight of evidence of any
conclusion of potential allergenicity (e.g., allergen specific serum depositories,
animal models), pending development and validation.

ANTIBIOTIC RESISTANCE
The use of antibiotic-resistance genes as selectable markers has been
common practice in the development of new plant varieties using rDNA
technology. Concerns relate to the potential transfer of antibiotic-resistance
genes from GM plants to pathogens in the environment or to the gut of humans
consuming foods or food components. Issues relating to the use of antibiotic
resistance genes were identified in the 1992 US FDA Statement of Policy,
as well as discussed in additional guidance entitled Guidance for
Industry: Use of Antibiotic Resistance Marker Genes in Transgenic Plants.
The guidance provided within these documents was established in
consultation with experts in the fields of microbiology, medicine, food safety,
bacterial and mycotic diseases, and includes suggestions with respect to the
continued safe use of antibiotic-resistance marker genes by the developers
of new plant varieties.
The use of marker genes that encode resistance to clinically important
antibiotics has raised questions as to whether their presence in food could
reduce the effectiveness of oral doses of the antibiotic or whether the gene
present in the DNA could be transferred to pathogenic microbes, rendering
them resistant to treatment with the antibiotic.
The risk of transfer of antibiotic-resistance genes from plants to
microorganisms pathogenic to humans, however, is considered to be
minimal, if not negligible. Furthermore, the potential risks
are becoming less of a concern as more developers are beginning to research
the use of alternative technologies (e.g. non-resistance-based markers) in
plant breeding. The conclusions with respect to the safe use of antibiotic-
resistance marker genes are consistent with the findings of other national
and international food safety organisations.

CONSULTATION AND FILING PROCESS


The submission of a Pre-market Biotechnology Notification (PBN) has
recently been proposed as a mandatory requirement for the commercialisation
of bioengineered foods and food ingredients within the United States. Minimal
differences exist between the information to be submitted as a part of a PBN
and that previously presented in voluntary consultations with the US FDA.
As proposed, developers are still being encouraged to consult with the US
FDA as early and as often as necessary in the development of a bioengineered
food, such that any potential scientific or regulatory concerns can be identified
and addressed prior to the submission of the PBN. Guidelines for performing
consultations with the US FDA were released in a publication entitled
Guidance On Consultation Procedures: Foods Derived from New Plant
Varieties in 1997.
The publication recommended an approach for developers to proceed by
submitting a request for consultation, outlined the internal process by which
all requests would be handled, and included additional guidance as to the
type of safety and nutritional information to be presented during consultation
with the US FDA.
Once sufficient safety and nutritional information has accumulated to
demonstrate that a product is safe and in compliance with the FFDCA,
developers typically schedule a consultation to present their scientific findings
and conclusions to the US FDA. Consultations prior to notification not only
serve to keep the US FDA informed of advances made in the application of
rDNA technology in food production, but also keep the developers of
bioengineered foods aware of emerging safety, nutritional or regulatory
concerns of the US FDA.
Consultations are considered complete when all safety and regulatory
concerns between the US FDA and the developer have been resolved. The
US FDA proposes to perform an initial evaluation of a PBN within 15 days
of receipt to determine completeness, at which point, if considered complete,
the PBN will be filed, and a response can be expected within 120 days.
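Assuming, for illustration, that the 120-day clock runs from the day the
PBN is filed as complete, the review milestones described above can be
computed as follows; the dates are hypothetical.

    # Illustrative timeline for the proposed PBN review described above:
    # an initial completeness evaluation within 15 days of receipt, and a
    # response within 120 days once filed. The start date is hypothetical, and
    # treating the 120 days as running from filing is an assumption here.
    from datetime import date, timedelta

    def pbn_milestones(received):
        completeness_due = received + timedelta(days=15)       # initial evaluation
        response_due = completeness_due + timedelta(days=120)  # if filed as complete
        return completeness_due, response_due

    check, response = pbn_milestones(date(2003, 1, 6))
    print("completeness check by:", check.isoformat())
    print("response expected by:", response.isoformat())
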
The US FDA does not issue a product approval per se, but informs the
developer by letter that:
• The evaluation period has been extended;
• The notice is not complete and why;
• It has no further questions 'at this time' based on the information that
has been presented.

LABELLING
The FFDCA defines what information must be disclosed to consumers on
a food label, such as the common or usual name, and other limitations
concerning the representations or claims that can be made or suggested about
a food product. All foods must be labelled truthfully and not be misleading
to consumers. Taking this into consideration, the FFDCA does not stipulate
the disclosure of information on the basis of consumer desire to know.
Labelling may be considered misleading if it fails to reveal material facts in
light of representations that are made with respect to a product.
The labelling of foods derived from new plant varieties, including plants
developed using rDNA technology, was originally addressed in the 1992 US
FDA Statement of Policy, and most recently discussed in Draft Guidance for
Industry for the voluntary labelling of bioengineered foods. To date, the US
FDA is not aware of any information that would distinguish foods developed
using rDNA technology (e.g. bioengineered foods) as a class from foods
developed through other methods of conventional plant breeding, and as
such, has not considered the method of development a material fact requiring
disclosure on product labels. Nevertheless, after extensive consultation,
including thousands of written comments and a series of public meetings,
the US FDA has observed 'a general agreement that providing more
information to consumers about bioengineered foods would be useful'.

REQUIREMENTS FOR LABELLING


Special labelling is required if the composition of the bioengineered food
differs significantly from its conventional counterpart. For example, for a
food that has been genetically modified to contain a new major sweetener, a
new common or usual name or other labelling may be required. Similarly, if
a GM food contains an allergen that consumers would not expect to be present
in that food, special labelling may be necessary to alert sensitive consumers.
If a protein commonly associated with an allergic reaction (e.g. peanut
protein) is transferred to another food through genetic modification, the US
FDA would evaluate whether labelling would provide sufficient consumer
protection. If labelling would not be considered to provide a sufficient level
of protection, the US FDA would take appropriate steps to ensure the GM
food would not be marketed. Therefore, current policy requires a GM food
to be labelled when the resulting product poses a safety issue or is substantially
different from its conventional counterpart, and as a result, could be considered
to pose a misrepresentation to consumers. The 1992 US FDA Statement of
Policy and the recent Draft Guidelines do not consider the use of rDNA
technology in the development of food products to be a material fact requiring
specific disclosure on the label.
Rather it is a method of development, similar to other methods of plant
breeding, which have not required disclosure on the label. Bioengineered
foods cannot be distinguished compositionally from foods modified through
more conventional methods and thus do not require specific disclosure through
labelling. The Draft Guidelines reaffirm the US FDA position that
bioengineered foods do not require special labelling.

VOLUNTARY LABELLING
To provide guiding principles for voluntary labelling, in recognition of
the desire of certain manufacturers to label foods as produced either with or
without bioengineering, the US FDA published a 'Draft Guidance for Industry:
voluntary labelling indicating whether the foods have or have not been
developed using bioengineering'. Emphasising that the use of rDNA
technology 'is not a material fact', the US FDA recognises that some consumers
want disclosure of bioengineered content and that some manufacturers wish
to provide it. In response, Draft Guidance was issued with suggestions
concerning the use of labelling statements that are not considered misleading.

EVALUATION OF GENETICALLY
MODIFIED FOOD CROPS
Genetic modification, otherwise referred to as recombinant DNA (rDNA)
technology or gene-splicing, has proven to be a more precise, predictable
and better understood method for the manipulation of genetic material than
previously attained through conventional plant breeding. To date, agricultural
applications of the technology have involved the insertion of genes for
desirable agronomic traits (e.g. herbicide tolerance, insect resistance) into a
variety of crop plants, and from a variety of biological sources.
Examples include soybeans modified with gene sequences from a
Streptomyces species encoding enzymes that confer herbicide tolerance, and
corn plants modified to express the insecticidal protein of an indigenous soil
microorganism, Bacillus thuringiensis (Bt). A growing body of evidence
suggests that the technology may be used to make enhancements to not only
the agronomic properties, but the food, nutritional, industrial and medicinal
attributes of genetically modified (GM) crops.
Regulatory supervision of rDNA technology and its products has been in
place for a longer period of time in the United States than in most other parts
of the world. The methods and approaches established to evaluate the safety
of products developed using rDNA technology continue to evolve in response
to the increasing availability of new scientific information.
As our understanding of the potential applications of the technology is
broadened, the safety of products developed using rDNA technology and the
potential effects of introduced gene sequences on human health or the
environment will be more closely scrutinised. In fact, much of the knowledge
acquired during the commercialisation of the products of rDNA technology
in agriculture is now finding application in evaluating the safety of products
developed through more conventional means.
The objective of this chapter is to provide the reader with an overview of
the significant events leading up to the present, science-based, regulatory
framework that exists for the safety evaluation of GM food crops within the
United States. An attempt has been made to discuss concerns over the
sufficiency of existing regulations, as well as to highlight recent initiatives
taken by federal regulatory agencies to address them. Through better
communication of how the regulatory process functions within the United
States, it is anticipated that current and future applications of rDNA technology
in agriculture will be met with a greater level of understanding and acceptance.

HISTORICAL PERSPECTIVE
rDNA technology was first developed in the 1970s. The initial response
of the scientific community, including members of the National Academy of
Sciences (NAS), to the prospects of rDNA technology, was to postpone any
further research involving the technology until the potential risks to human
health and the environment could be evaluated.
Researchers attending the International Conference on Recombinant DNA
Molecules in 1975, otherwise known as the Asilomar Conference, tried to
establish a scientific consensus on how best to self-regulate emerging
applications of the technology. The conditions and restrictions that were
proposed at this conference have formed the basis by which federal guidelines
and policies for rDNA technology research were drafted within the United
States.

NATIONAL INSTITUTES OF HEALTH (NIH)


The National Institutes of Health (NIH) was the first federal regulatory
agency to publish its interest in evaluating the safety of rDNA technology
in 1976, in the form of guidelines for the conduct of research. Because of the
uncertainties that existed at the time, all research into the potential applications
of rDNA technology was limited to the confines of federally funded
laboratories under NIH control. After continued research, and a more careful
assessment and monitoring of the risks, a set of less restrictive guidelines
was published in 1978.
However, the environmental release of organisms developed using rDNA
technology outside the confines of controlled laboratory conditions was
prohibited unless otherwise approved by the NIH director. In the early 1980s,
the NIH established an rDNA Advisory Committee (RAC) to review all data
and experience gained with applications of the technology under its control.
Based on recommendations of the RAC, a more relaxed set of research
guidelines was published by the NIH in 1983.
The NIH approved the first environmental release of an organism
developed using rDNA technology (ice-minus strain of Pseudomonas) in
1983. In response, they were criticised for failing to prepare a statement or
assessment of the environmental impact of their regulatory decision as
required under the National Environmental Policy Act (NEPA). Once the
legal controversy had subsided, all responsibility that the NIH had for
regulating the environmental introduction of GM organisms was relinquished.
Nevertheless, NIH guidelines continue to be referenced in assessing the safety
of rDNA research performed within industry, federal and other state
laboratories. However, it was unclear which federal regulatory agencies
would be responsible for ensuring the safety of the products developed using
rDNA technology.

OFFICE OF SCIENCE AND TECHNOLOGY POLICY (OSTP)


In response to a need for clarification, the Office of Science and Technology
Policy (OSTP) began work on the development of a policy to establish a
federal regulatory framework for evaluating the safety of products developed
using rDNA technology. Following an opportunity for public comment, OSTP
published a final version of the 'Coordinated Framework for Regulation of
Biotechnology' (Coordinated Framework) in 1986.
The policy provided the basis by which federal regulatory agencies became
involved in evaluating the safety of products at later stages of commercial
development at that time. According to the Coordinated Framework, the
products of rDNA technology should be regulated on the basis of the unique
characteristics and features that they exhibit, not their method of production.
The products of rDNA technology were considered to pose risks to human
health and the environment similar to those posed by conventional products
already regulated within the United States.
As a result, no new federal regulatory agencies or regulations were required.
The Coordinated Framework did not, however, rule out the possibility of the
development of new guidelines, procedures, criteria or even regulations to
supplement or alter the scope of existing statutes for the products of rDNA
technology. The Coordinated Framework identified three federal regulatory
agencies within the United States: the US Food and Drug Administration
(US FDA), the US Department of Agriculture (USDA) and the US
Environmental Protection Agency (US EPA), as having primary
responsibilities for evaluating the products of rDNA technology under
development at that time.
In 1992, the OSTP released another document entitled, 'Exercise of Federal
Oversight within the Scope of Statutory Authority: Planned Introductions of
Biotechnology Products Into the Environment', outlining the proper basis by
which federal regulatory agencies were expected to exercise their regulatory
authority. As with conventional products, dependent upon the intended use
and function, more than one federal regulatory agency may share an interest
in evaluating the safety of a product developed using rDNA technology. If
more than one federal regulatory agency has an interest, lead agencies are
identified as being responsible for coordinating activities to limit any potential
duplication of efforts. Although federal regulatory agencies worked
independently of one another, it was realised that close working relationships
would need to be established in order to evaluate effectively the safety of
products developed using rDNA technology.
Recently, the OSTP teamed up with the White House Council on
Environmental Quality (CEQ) to perform a six-month inter-agency evaluation
of the federal regulatory agency responsibilities in evaluating the
environmental safety of products developed using rDNA technology. A case-
study approach for a variety of different classes of products developed using
rDNA technology was used to evaluate the level of federal regulatory agency
involvement, to identify strengths, weaknesses and areas of potential
improvement. The review concluded that none of the previously approved
products of rDNA technology has had any significant negative impact on the
environment. Although all the case studies were published, OSTP/CEQ failed
to reach a consensus on issues relating to the relevant strengths and
weaknesses of the existing regulatory structure within the time allotted for
the completion of its review. A review of the case studies published provides
a comprehensive interpretation of the responsibilities of each federal
regulatory agency in ensuring the safety of the products developed using
rDNA technology.

NATIONAL ACADEMY OF SCIENCES (NAS)


The National Academy of Sciences (NAS), and its operating arm, the
National Research Council (NRC), have served as a primary source of
scientific, technological, human health and environmental policy advice
during the development of regulatory approaches for the safety evaluation of
products developed using rDNA technology within the United States. In 1987,
the NRC published a report concerning the potential human health and
environmental hazards associated with the commercial introduction of GM
organisms, entitled Introduction of Recombinant DNA-Engineered Organisms
into the Environment: Key Issues. The risks associated with the introduction
of GM organisms were considered to be essentially the same in kind as those
associated with unmodified organisms.
In other words, rDNA technology did not appear to introduce any unique
risks as compared to the products that had been developed using more
conventional methods of genetic modification. In reaching these conclusions,
the NRC performed an evaluation of the similarities and differences in the
properties exhibited by products developed using a variety of different
techniques.
To this day, the conclusions of this report continue to be referenced by the
regulators and developers of GM crop varieties worldwide. A subsequent
NRC report, entitled Field Testing Genetically Modified Organisms:
Framework for Decisions, also reached similar conclusions, but provided
additional guidance as to how regulatory decisions concerning the introduction
of GM organisms should be made. The NRC recommended that regulatory
decisions concerning the introduction of GM organisms should be made on
a case-by-case basis.
Consistent with the Coordinated Framework, the NRC did not consider
the nature of the process used for the genetic modification of an organism to
be a useful criterion for determining whether a product requires less or more
regulatory oversight. As a result, no valid reason existed to regulate organisms
genetically modified via modern techniques (e.g. rDNA technology) any
differently from organisms genetically modified via more conventional means.
Similar conclusions have been published in the reports of international
standards-setting organisations. In retrospect, within both reports, the NRC
acknowledged that modern and conventional methods of genetic modification
are not without risks to human health or the environment, and as a result,
neither could be considered inherently more risky.
As a result, regulatory decisions concerning the safety of the products of
rDNA technology need only to take into consideration the specific
characteristics exhibited by a particular GM organism and the environment
in which it is to be introduced, and not the method by which it has been
produced.
The NRC has further articulated their conclusions into what is now
commonly referred to as the 'Concept of Familiarity'. Although familiarity
with the characteristics of a particular organism or the environment to which
it will be introduced would not necessarily mean it was safe, it can be expected
to provide a sufficient amount of information to allow for a judgement to be
made of the risks.
For example, familiarity with a new GM plant variety could be established
based on comparisons between characteristics of the parent line or other
crop species exhibiting similar traits, as well as through the results of actual
field tests involving the GM plant. These principles were further elaborated
upon by the Organisation for Economic Co-operation and Development
(OECD), and as a result, have been referenced in the development of
regulatory policies for evaluating the safety of GM crops on a global basis.
More recently, a committee established by the NRC published a report
entitled Genetically Modified Pest-Protected Plants: Science and Regulation,
based on a review of all scientific and regulatory data collected during the
regulatory approval process for GM crops within the United States. The
primary objective was to assess independently the effectiveness of existing
and proposed regulations for the safety evaluation of GM crops expressing
plant pesticides (i.e. plant-incorporated protectants).
No new evidence was identified to suggest plants expressing plant
pesticides posed any greater risk to human health or the environment as a
result of their genetic modification. In fact, the NRC concluded, 'with careful
planning and appropriate regulatory oversight, commercial cultivation of GM
plants is not expected to pose higher risks and may pose less risk than other
commonly used chemical and biological pest-management techniques'.
However, the NRC report included requests for federal regulatory agencies
to further strengthen the current regulatory approval process through better
coordination and communication between agencies, on-going investment in
the research and monitoring of potential human health (e.g. allergenicity)
and environmental impacts (e.g. insect resistance), and by providing greater
access to information evaluated in support of regulatory decisions.

MODERN AGRICULTURAL
BIOTECHNOLOGY
In 1973, Cohen and Boyer transferred a gene from one organism into
another. In 1982, the first biotech plant, an antibiotic-resistant tobacco, was
developed. In January 1983, at a meeting of genetic researchers in Miami,
three different teams reported success in using Agrobacterium tumefaciens,
a bacterium, to carry new genes into plant cells, heralding the dawn of modern
agricultural biotechnology. Agrobacterium tumefaciens, described as a
“natural genetic engineer”, splices its own genes into host plant cells. This
pathogenic bacterium was now converted into a pack mule, to carry new,
foreign genes into plant cells, minus the disease and this became the most
common means of producing Genetically Engineered Organisms (GEOs).
Field tests for GE crops resistant to pests and pathogens were first
conducted in the US, in 1985. A coordinated framework for the regulation
of GEOs was established, and the first GE tobacco was released, in 1986.
The US Department of Agriculture published guidelines for field trials of
GE crops in 1991. On approval from the Food and Drug Administration,
Flavr Savr, the first GE tomato, with a longer shelf life, was on the US
market in 1994. During 1995-96, GE soybean, corn and cotton were approved
for commercialisation, in the US.
A number of GE crops, developed for pest, pathogen and herbicide
resistance, are now commercially cultivated in several countries. Rice with
pro-vitamin A, higher iron content, or human milk proteins and potatoes
with high protein content are in various stages of development. Tobacco
plants producing functional human haemoglobin and bacteria that produce
human insulin have been developed, as well as plants with vaccines against
rabies and other viral diseases. Food grain crops that withstand drought and
salinity are high priority research, and so are those for high yield. A GE
tobacco plant detoxifies soils contaminated by explosive residues, providing a
solution to a frustrating environmental problem in countries ravaged by armed
conflict. Now more than 70 biotech agricultural crops have been approved
for use in North America, including varieties of soybeans, cotton, canola,
corn, potatoes, squash, tomatoes and papaya.
About six million farmers in some 17 countries now cultivate GE crops on
about 125 million acres, a 30-fold increase over 1996. By the end of 2002,
six GE crops planted in the US (soybeans, corn, cotton, papaya, squash
and canola) produced an additional four billion pounds of food and fibre on
the same acreage, improved farm income by US $ 1.5 billion and reduced
pesticide use by 46 million pounds.
In 2003 in the US, 80 per cent of soybean acres will be planted with biotech
varieties. A number of activist groups vehemently attack GE technology and
its products on grounds of safety to humans and the environment, and costs
of technology transfer and its reach to the needy. Products of agricultural
biotechnology bear the brunt of this ill-informed, unscientific and prejudiced
onslaught much more than GE products related to health or industry.
In 2001, the European Community released results of a 15-year study,
costing US $ 64 million, and involving more than 400 research teams and 81
projects. This report concluded that GE products pose no more risk to human
health or the environment than conventional crops. So far, extensive and
intensive research on the probable risks of GE technology has not brought
out any adverse effects, and none of the fears expressed by anti-tech activists
has been proved even marginally. Nevertheless, caution, and the examination
of issues case by case, remains the watchword of technologists, who are aware
of their responsibilities.
For a number of products, technology transfer is free of costs for developing
countries, as for example Golden Rice, the rice with pro-vitamin A. It is the
responsibility of the Governments of the respective countries to bear the
costs of developing local varieties and to deliver the products to the needy at
an affordable cost. In India, three GE varieties of pest resistant cotton were
approved for commercialisation, a year ago. Approval for other GE varieties
of cotton and GE mustard was deferred twice by the Genetic Engineering
Approval Committee (GEAC), the highest authority on the issue. In India,
the level of public awareness of the realisable benefits and probable risks of
GE products is abysmally low.
The functioning of the GEAC leaves much to be desired. Taking advantage
of this hazy situation, mixing up economic, social and political issues with
science, and even using such vagaries of nature as severe drought, several
groups with vested interests have created mistrust, confusion and fear. Trashing
a very promising technology this way does not augur well for the future of
Indian agriculture. This results only in denying the benefits of technology to
the farmers and consumers.

THE FUTURE OF
AGRICULTURAL BIOTECHNOLOGY
There are divergent views about the rate of uptake of new agricultural
biotechnologies and their impact. Kalter and Tauer and the US Office of
Technology Assessment anticipate comparatively rapid take-up, while others
(Buckwell and Moxey, and Farrington) do not foresee much impact until the
next century, that is until ten to fifteen years have elapsed. The caution of the
latter commentators is probably justified, for there are various hurdles to be
jumped before biotechnologies can gain acceptance. One hurdle to be
overcome is the economic one.
Self-evidently there must be some economic advantage to farmers or agro-
industry from adopting the technology. Frequently what captures the scientific
imagination turns out to be economically unviable in use, although it is only
with use that efficiency can be improved and costs brought down. A more
demanding test still for biotechnologies will be to obtain legislative approval
and public acceptance. These are intimately connected. In the case of
machinery developments, provided safety standards are observed there are
no legal impediments to developing new and improved machines, since there
are no obvious problems of hazard to the public, and public acceptance
therefore follows.
Chemical insecticides, fungicides and herbicides do, however, pose much
greater problems because of concerns with toxicity, and elaborate testing
and licensing procedures have evolved after a relatively haphazard set of
procedures in the 1950s and 1960s, when widespread use was made of DDT
and organo-phosphorous compounds without adequate recognition of their
toxicity. In significant part, because of the problems which have been
experienced with agro-chemicals, the licensing of biotechnologies will
inevitably be based on testing at least as stringent as that which now exists
for chemicals.
Indeed the licensing procedures are likely to be more stringent because of
public and scientific concerns. Public acceptance of biotechnology in the
food chain will be made more difficult by confusion about differences between
biotechnologies. In the early 1980s there was a crescendo of concern about
the use of steroids to promote faster liveweight gain in calves and cattle. The
public outcry eventually led to the banning of such hormones for meat animals;
but the legacy is a profound distrust of and hostility to new products such as
synthetic Bovine Somatotropin (BST), which has the capacity to increase the
milk yield of those dairy cows which are 'relatively' deficient in what is a
naturally occurring hormone.
Although initial scientific results are favourable and only minor doubts
remain, the major influence leading the European Commission to impose a
moratorium on the use of BST until at least the end of 1990 is concern about
public perception; the Commission has stated: 'It would be a serious setback
to producers and to the Community's milk policy were present [positive]
trends in consumption to be reversed as a result of adverse consumer reaction.'
With newly bioengineered plants and animals, scientific and public concerns
are emerging which are of a different order from those associated with
previous biotechnology.
One relates to the possibility that crop failures may become more frequent
if biotechnology leads to a reduction in genetic diversity in crops being grown;
such a tendency has already been observed in relation to the Green Revolution
technology for wheat. There are also concerns about upsetting the balance of
nature in unforeseen ways, akin to the unanticipated consequences of
introducing rabbits into Australia and then trying to control these by
introducing myxomatosis. Thus questions arise such as what happens if
herbicide resistance introduced into a commercial crop plant transfers itself
by cross pollination into a closely related weed species, or indeed if the
herbicide-resistant crop should colonise wild habitats.
There are, of course, more lurid and absurd notions bandied about in the
popular press which are similar in nature to the attacks made on Darwinism.
The questions, both absurd and real, raised in relation to agricultural
biotechnology will undoubtedly slow the rate at which it is adopted.
Nevertheless instances of adoption are increasing: BST in the USA, a
bioengineered baker's yeast in the UK, bioengineered sheep producing insulin,
etc. As these become more widespread and increasingly affect lower-valued,
bulk agricultural products, so the impacts of biotechnology on structural
change will increase. The balance of economic power will switch increasingly
to industry, to high-technology large farms and against smaller farmers in
disadvantaged regions and countries. That, unfortunately, is a seemingly
inevitable consequence of what we consider to be economic progress.

Biological Treatment for Wastewater

Industrial wastewater treatment covers the mechanisms and processes used
to treat waters that have been contaminated in some way by man's industrial
or commercial activities prior to their release into the environment or re-use.
Most industries produce some wet waste, although recent trends in the
developed world have been to minimise such production or recycle such
waste within the production process. However, many industries remain
dependent on processes that produce a water-based waste stream.
Biological treatment comprises processes which employ aerobic or anaerobic
microorganisms and result in decanted effluents and separated sludge containing
microbial mass together with pollutants. Biological treatment processes are also
used in combination and/or in conjunction with mechanical and advanced unit
operations.
Increasing water pollution and the pathetic state of rivers worldwide
demand wastewater plants that can handle aerobic degradation of waste.
Eco-friendliness means, in essence, effective pollution control both in
the air and on the ground – yet while belching smokestacks are now rarely
seen, pollution control on the ground is still a serious cause for concern for
industry, and as a result effluent or wastewater treatment has become
mandatory for all industries.
Industrial wastewater treatment is a group of unit processes designed to
separate, modify, remove, and destroy undesirable substances carried by
wastewater from industrial sources. United States governmental regulations
have been issued that involve volatile organic substances, designated priority
pollutants; aquatic toxicity as defined by a bioassay; and in some cases
nitrogen and phosphorus. As a result, sophisticated technology and process
controls have been developed for industrial wastewater treatment.
Wastewater streams that are toxic or refractory should be treated at the
source, and there are a number of technologies available. For example, wet
air oxidation of organic materials at high temperature and pressure (2000 lb/
in.² or about 13.8 megapascals, and 550°F or 288°C) is restricted to very high
concentrations of these substances. Macroreticular (macroporous) resins are
specific for the removal of particular organic materials, and the resin is
regenerated and used again. Membrane processes, particularly reverse
osmosis, are high-pressure operations in which water passes through a
semipermeable membrane, leaving the contaminants in a concentrate.
Pretreatment and primary treatment processes address the problems of
equalization, neutralization, removal of oil and grease, removal of suspended
solids, and precipitation of heavy metals.
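As a quick sanity check on the wet air oxidation conditions quoted above,
the figures can be converted between unit systems; the snippet below is
purely illustrative.

    # Quick unit check (illustrative only) of the conditions quoted above:
    # 2000 psi and 550 degrees Fahrenheit.
    PSI_TO_KPA = 6.894757  # kilopascals per pound-force per square inch

    pressure_mpa = 2000.0 * PSI_TO_KPA / 1000.0  # psi -> megapascals
    temp_c = (550.0 - 32.0) * 5.0 / 9.0          # Fahrenheit -> Celsius

    print(f"2000 psi = {pressure_mpa:.1f} MPa")  # about 13.8 MPa
    print(f"550 F = {temp_c:.0f} C")             # about 288 C
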
Aerobic biological treatment is employed for the removal of biodegradable
organics. An aerated lagoon system is applicable (where large land areas are
available) for treating nontoxic wastewaters, such as those generated by pulp and
paper mills. Fixed-film processes include the trickling filter and the rotating
biological contactor. In these processes, a biofilm is generated on a surface,
usually plastic. As the wastewater passes over the film, organics diffuse into
the film, where they are biodegraded. Anaerobic processes are sometimes
employed before aerobic processes for the treatment of high-strength, readily
degradable wastewaters. The primary advantages of the anaerobic process are
low sludge production and the generation of energy in the form of methane
(CH4) gas.
Biological processes can remove only degradable organics. Nondegradable
organics can be present in the influent wastewater or be generated as oxidation
by-products in the biological process. Many of these organics are toxic to
aquatic life and must be removed from the effluent before discharge. The
most common technology to achieve this objective is adsorption on activated
carbon.
In some cases, toxic and refractory organics can be pretreated by chemical
oxidation using ozone, catalyzed hydrogen peroxide, or advanced oxidation
processes. In this case the objective is not mineralization of the organics but
detoxification and enhanced biodegradability.
Biological nitrogen removal, both nitrification and denitrification, is
employed for removal of ammonia from wastewaters. While this process is
predictable in the case of municipal wastewaters, many industrial wastewaters
are inhibitory to the nitrifying organisms.
Volatile organics can be removed by air or steam stripping. Air stripping
is achieved by using packed or tray towers in which air and water counterflow
through the tower. In steam stripping, the liquid effluent from the column is
separated as an azeotropic mixture.
Virtually all of the processes employed for industrial wastewater treatment
generate a sludge that requires some means of disposal. In general, the
processes employed for thickening and dewatering are the same as those
used in municipal wastewater treatment. Waste activated sludge is usually
stabilized by aerobic digestion in which the degradable solids are oxidized
by prolonged aeration.
Most landfill leachates have high and variable concentrations of organic
and inorganic substances. All municipal and most industrial landfill leachates
are amenable to biological treatment and can be treated anaerobically or
aerobically, depending on the effluent quality desired. Activated carbon has
been employed to remove nondegradable organics. In Europe, some plants
employ reverse osmosis to produce a high-quality effluent.

Aerobic Treatment System


An aerobic treatment system or ATS, often called (incorrectly) an aerobic
septic system, is a small-scale sewage treatment system similar to a septic tank
system, but which uses an aerobic process for digestion rather than the
anaerobic process used in septic systems. These systems are commonly found
in rural areas where public sewers are not available, and may be used for a
single residence or for a small group of homes.

Process
The ATS process generally consists of the following phases:
• Pre-treatment stage to remove large solids and other undesirable
substances from the wastewater.
• Aeration stage, where the aerobic bacteria digest the biological
wastes in the wastewater.
• Settling stage to allow any undigested solids to settle. This forms
a sludge which must be periodically removed from the system.
• Disinfecting stage, where chlorine or similar disinfectant is mixed
with the water, to produce an antiseptic output.
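Purely as an illustration of how the four stages fit together, the treatment
train can be sketched as an ordered pipeline; the removal fractions used here
are invented for demonstration and are not design values.

    # Illustrative pipeline for the four ATS stages listed above. The removal
    # fractions are invented for demonstration and are NOT design values.
    def pre_treatment(w):   # remove large solids
        w["solids_mg_l"] *= 0.5
        return w

    def aeration(w):        # aerobic bacteria digest biological waste
        w["bod_mg_l"] *= 0.1
        return w

    def settling(w):        # undigested solids settle out as sludge
        w["solids_mg_l"] *= 0.2
        return w

    def disinfection(w):    # chlorine or similar disinfectant is mixed in
        w["disinfected"] = True
        return w

    wastewater = {"solids_mg_l": 200.0, "bod_mg_l": 300.0, "disinfected": False}
    for stage in (pre_treatment, aeration, settling, disinfection):
        wastewater = stage(wastewater)
    print(wastewater)
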

Comparison to Traditional Septic Systems


The aeration stage and the disinfecting stage are, from the user's point of
view, the primary differences from a traditional septic system. These stages
increase the initial cost of the aerobic system, and also the maintenance
requirements over the passive septic system. The aerator requires a supply of
electricity to
drive the air pump, and the disinfectant must be periodically renewed. On
the positive side, the aerobic system produces a much higher-quality effluent;
the leach field can be far smaller than that of a similar-capacity septic
system; and the output can be discharged in areas too environmentally
sensitive for septic system output. Some aerobic systems recycle the effluent
through a sprinkler system, using it to water the lawn.
Aerobic systems are similar to conventional septic systems in that they
both use natural processes to treat wastewater. But unlike septic anaerobic
treatment, the aerobic treatment process requires oxygen. Aerobic treatment
units, therefore, use a mechanism to inject and circulate air inside the treatment
tank. This mechanism requires electricity to operate.
For this reason, aerobic systems cost more to operate and need more routine
maintenance than most septic systems. However, when properly operated
and maintained, aerobic systems can provide a high quality wastewater
treatment alternative to septic systems.

Why Use Aerobic Treatment?


Aerobic Treatment Units (ATUs) have proven to be a reliable technology
when properly designed, constructed and maintained; with proper monitoring
and maintenance, their performance is consistent.
ATUs may enable development or use of difficult sites. They can remedy
existing malfunctioning systems and they can be a good option for homes in
environmentally sensitive areas.

Process Description
Aerobic systems treat wastewater using natural processes that require
oxygen. Bacteria that thrive in oxygen-rich environments work to break down
and digest the wastewater inside the aerobic treatment unit.
Like most onsite systems, aerobic systems treat the wastewater in stages.
Sometimes the wastewater receives pretreatment before it enters the aerobic
unit, and the treated wastewater leaving the unit requires additional treatment
or disinfection before being returned to the environment. Such a variety of
designs exists for home aerobic units and systems that it is impossible to
describe a typical system. Instead, it is more practical to discuss how some
common design features of aerobic systems work and the different stages of
aerobic treatment.

Pretreatment
Some aerobic systems include a pretreatment step to reduce the amount
of solids in the wastewater going into the aerobic unit. Solids include greases,
oils, toilet paper, and other materials that are put down the drain or flushed
into the system. Too much solid material can clog the unit and prevent
effective treatment. Some pretreatment methods include a septic tank, a
primary settling compartment in the pretreatment unit, or a trash trap.
Pretreatment is optional but can greatly improve a unit’s performance.

Aerobic Treatment Units


The main function of the aerobic treatment unit is to collect and treat
household wastewater, which includes all water from toilets, bathtubs,
showers, sinks, and laundry. Aerobic units themselves come in many sizes
and shapes: rectangular, conical, and some that defy classification.
There are two typical aerobic treatment designs: (1) suspended growth units
and (2) attached growth units.
The process most aerobic units use to treat wastewater is referred to as
suspended growth. These units include a main compartment called an aeration
chamber in which air is mixed with the wastewater. Since most home aerobic
units are buried underground like septic tanks, the air must be forced into the
aeration chamber by an air blower or through liquid agitation.
The forced air mixes with wastewater in the aeration chamber, and the
oxygen supports the growth of aerobic bacteria that digest the solids in the
wastewater. This mixture of wastewater and oxygen is called the mixed
liquor.
The treatment occurring in the mixed liquor is referred to as suspended
growth because the bacteria grow as they are suspended in the liquid,
unattached to any surface.
Unfortunately, the bacteria cannot digest all of the solids in the mixed
liquor, and these solids eventually settle out as sludge. Many aerobic units
include a secondary chamber called a settling chamber or clarifier where
excess solids can settle. Other designs allow the sludge to accumulate at the
bottom of the tank.
In aerobic units designed with a separate settling compartment, the sludge
returns to the aeration chamber (either by gravity or by a pumping device).
The sludge contains bacteria that also aid in the treatment process. Although,
in theory, the aerobic treatment process should eventually be able to consume
the sludge completely, in practice, the sludge does build up in most units and
will need to be pumped out periodically so that solids don’t clog the unit.
An alternative design for aerobic treatment is the attached growth system.
These units treat wastewater by taking a surface made of material that the
bacteria can attach to, and then exposing that surface alternately to wastewater
and air. This is done either by rotating the surface in and out of the wastewater
or by dosing the wastewater onto the surface. Pretreatment is required. The
air needed for the process either is naturally present or is supplied
mechanically.
Attached growth systems, such as trickling filters and rotating disks, are
less common than suspended growth systems, but have certain advantages.
For example, there is no need for mixing, and solids are less likely to be
washed out of the system during periods of heavy household water use.

Flow Design
The way in which wastewater is received by the aerobic unit, and the rate
at which it flows through, differ from design to design. Continuous flow designs simply
allow the wastewater to flow through the unit at the same rate that it leaves
the home. Other designs employ devices (such as pretreatment tanks, surge
chambers, and baffles) to control the amount of the incoming flow. Batch
process designs use pumps or siphons to control the amount of wastewater
in the aeration tank and/or to discharge the treated wastewater in controlled
amounts after a certain period.
Controlling the flow of wastewater helps to protect the treatment process.
When too much wastewater is flushed into the system all at once, it can
become overburdened, and the quality of treatment can suffer. The
disadvantage of mechanical flow control devices is that, like all mechanical
components, they need maintenance and run the risk of malfunctioning.

Disinfection
Some units have the disinfection process incorporated into the unit design.
In some cases, disinfection may be the only treatment required of the
wastewater from an aerobic unit before the water is released into the
environment. Added costs for disinfectants, such as chlorine, should be taken
into account with aerobic units.

Design Criteria
Aerobic units should be large enough to allow enough time for the solids
to settle and for the wastewater to be treated. The daily wastewater volume
is usually determined by the number of bedrooms in the house. Flows in
Arizona for individual homes are up to 140 gallons per day (gpd) per bedroom.
The needed size of an aerobic unit is often estimated the same way the
size of a septic tank is estimated, by the number of bedrooms (not bathrooms)
in the house. It is assumed that each person will use approximately 50 to
100 gallons of water per day, and that each bedroom can accommodate two
people. When calculated this way, a three-bedroom house will require a unit
with a capacity of 300 to 600 gallons per day.
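
As a rough illustration of this sizing arithmetic, the rule of thumb can be
written out as a short Python sketch. The function name and defaults below
are purely illustrative and are not taken from any design code:

    # Rough aerobic-unit sizing sketch based on the bedroom rule of thumb
    # described above: two people per bedroom, 50 to 100 gallons per
    # person per day.

    def estimate_unit_capacity_gpd(bedrooms, gpd_per_person=(50, 100),
                                   people_per_bedroom=2):
        """Return the (low, high) design flow in gallons per day."""
        low, high = gpd_per_person
        people = bedrooms * people_per_bedroom
        return people * low, people * high

    # A three-bedroom house: 3 x 2 people x 50-100 gpd = 300-600 gpd,
    # matching the range given in the text.
    print(estimate_unit_capacity_gpd(3))  # (300, 600)
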
Some health departments require that aerobic units be sized at least as
large as a septic tank in case the aerobic unit malfunctions and oxygen doesn’t
mix with the wastewater. In such cases, the aerobic unit will work as a septic
tank, which will, at least, provide partial treatment for the wastewater.
Lower temperatures tend to slow down most biological processes, and
higher temperatures tend to speed them up. The aerobic process itself creates
heat, which, along with the heat from the electrical components, may help to
keep the treatment process active. However, cold weather can have adverse
effects on the performance of aerobic units.
In one 1977 study of aerobic units, bulking of the sludge seemed to occur
when the temperature of the mixed liquor fell below 15 degrees Celsius (59
degrees Fahrenheit). Problems can sometimes be avoided by insulating
around the units.
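
As a minimal sketch, the 15 degrees Celsius figure reported in the study
above could serve as an alarm threshold; the check below assumes exactly
that and nothing more:

    # Minimal cold-weather check for an aerobic unit, assuming the 15 C
    # mixed-liquor threshold reported in the 1977 study is used as the
    # alarm point.

    BULKING_THRESHOLD_C = 15.0

    def bulking_risk(mixed_liquor_temp_c):
        """Flag temperatures at which sludge bulking was observed."""
        return mixed_liquor_temp_c < BULKING_THRESHOLD_C

    print(bulking_risk(12.0))  # True: consider insulating around the unit
    print(bulking_risk(20.0))  # False
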

Operation and Maintenance


Aerobic treatment systems are not accepted in all areas. Regulations for
onsite systems can vary from state to state and county to county. A major
reason that aerobic systems are not more widely used is concern about
improper operation and maintenance by homeowners. Aerobic systems
require regular maintenance, and abuse or neglect can easily lead to system
failure.
A typical inspection should include removing the unit’s cover and checking
its general appearance. Check pipes and the inside of the aeration chamber
and note the appearance of the wastewater inside the unit, including its color
and odor. If the unit includes a chlorinator, this too will need to be checked and
may need cleaning. Samples may be taken of the mixed liquor from the
aeration chamber, as well as the final treated wastewater. Check to see that
all mechanical parts, alarms, and controls are in working order, and that
solids are pumped from the system if needed.
It is important that mechanical components in aerobic systems receive
regular inspection and maintenance. For example, air compressors sometimes
need to be oiled, and vanes, filters, and seals may need to be replaced.
Malfunctions are common during the first few months after installation. In
most cases, homeowners do not have the expertise to inspect, repair, and
maintain their own systems.
Most aerobic units have controls that can be switched on and off by the
homeowner in case of emergency. Aerobic units also are required to have
alarms to alert the homeowner of malfunctions. Depending on the design of
the system, controls and alarms can be located either inside or outside the
home, and alarms can be visible, audible, or both.
Homeowners should make sure that controls and alarms are always
protected from corrosion, and that the aerobic unit is turned back on if there
is a power outage or if it is turned off temporarily.
To assure homeowners that they are receiving a reputable aerobic treatment
unit (ATU), most states require the system to be approved by NSF International
(formerly the National
Sanitation Foundation). NSF has tested aerobic units according to the
requirements of ANSI/NSF Standard 40. NSF is a nonprofit organization
devoted to the protection of the environment through the development of
product standards, product evaluations, research, education and training.
The American National Standards Institute (ANSI) is the recognized
accreditation body in the U.S. for organizations that develop consumer standards
and for those that provide independent product evaluations. NSF is accredited
by ANSI for both of these areas of service.

Anaerobic Digestion
Anaerobic digestion (AD) is the harnessed and contained, naturally
occurring process of anaerobic decomposition. An anaerobic digester is an
industrial system that harnesses this natural process to treat waste, produce
biogas that can be used to power electricity generators, provide heat and
produce soil improving material. Anaerobic digesters have been around for a
long time and they are commonly used for sewage treatment or for managing
animal waste. Increasing environmental pressures on waste disposal have
increased the use of AD as a process for reducing waste volumes and
generating useful byproducts. It is a fairly simple process that can greatly
reduce the amount of organic matter which might otherwise end up in landfills
or waste incinerators.
Almost any organic material can be processed in this manner. This includes
biodegradable waste materials such as waste paper, grass clippings, leftover
food, sewage and animal waste. Anaerobic digesters can also be fed with
specially grown energy crops to boost biodegradable content and hence
increase biogas production. After sorting or screening to remove inorganic
or hazardous materials such as metals and plastics, the material to be processed
is often shredded, minced, or hydrocrushed to increase the surface area
available to microbes in the digesters and hence increase the speed of
digestion. The material is then fed into an airtight digester often with extra
water added depending on the digestion process and feedstock.
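
The energy potential of a digester can be estimated with rule-of-thumb
yields. The figures in the sketch below (about 100 cubic metres of biogas per
tonne of food waste, roughly 60 per cent methane, and about 10 kWh of heat
per cubic metre of methane) are illustrative assumptions, not values from
this chapter:

    # Back-of-the-envelope biogas energy estimate for an anaerobic
    # digester. All yield figures are illustrative assumptions: roughly
    # 100 m3 of biogas per tonne of food waste, ~60% methane content,
    # and ~10 kWh of heat per m3 of methane (its approximate heating
    # value).

    BIOGAS_M3_PER_TONNE = 100.0
    METHANE_FRACTION = 0.6
    KWH_PER_M3_CH4 = 10.0

    def biogas_energy_kwh(feedstock_tonnes):
        biogas_m3 = feedstock_tonnes * BIOGAS_M3_PER_TONNE
        methane_m3 = biogas_m3 * METHANE_FRACTION
        return methane_m3 * KWH_PER_M3_CH4

    # 10 tonnes of food waste: ~6,000 kWh of raw fuel energy, before
    # generator losses.
    print(biogas_energy_kwh(10))  # 6000.0
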

Microbial Fuel Cell


A microbial fuel cell (MFC) or biological fuel cell is a device in which a
chemical, typically glucose, is converted to electric power by means of bacteria
on the anode side. Power outputs are usually small, on the order of a
milliwatt, and there are currently no commercially available
applications. However, some hope to use them in the future to build glucose-
powered pacemakers that would need no other power supply than the glucose
present in the bloodstream.
A microbial fuel cell is a device that converts chemical energy to electrical
energy by the catalytic reaction of microorganisms. A typical microbial fuel
cell consists of anode and cathode compartments separated by a cation specific
membrane. In the anode compartment, fuel is oxidized by microorganisms,
generating electrons and protons. Electrons are transferred to the cathode
compartment through an external circuit, and the protons through the membrane. Electrons and
protons are consumed in the cathode compartment reducing oxygen to water.
In general, there are two types of microbial fuel cell: mediator and mediator-
less microbial fuel cells. Biological fuel cells take glucose and methanol from
food scraps and convert them into hydrogen and food for the bacteria.

Mediator Microbial Fuel Cell


Most microbial cells are electrochemically inactive. The electron
transfer from microbial cells to the electrode is facilitated by mediators such
as potassium ferricyanide, thionine, methyl viologen, humic
acid, neutral red and so on (Delaney et al., 1984; Lithgow et al., 1986). Most
of the mediators available are expensive and toxic.

Mediator-less Microbial Fuel Cell


Mediator-less microbial fuel cells have been engineered at the Korea
Institute of Science and Technology by a team led by Kim Byung Hong. A
mediator-less microbial fuel cell does not require a mediator but uses
electrochemically active bacteria to transfer electrons to the electrode
(electrons are carried directly from the bacterial respiratory enzyme to the
electrode). Among the electrochemically active bacteria are Shewanella
putrefaciens (Kim et al., 1999a), Aeromonas hydrophila (Cuong et al., 2003),
and others.
Mediator-less MFCs are a much more recent development and due to this
the factors that affect optimum operation, such as the bacteria used in the
system, the type of ion membrane, and the system conditions such as
temperature, are not particularly well understood. Bacteria in mediator-less
MFCs typically have electrochemically-active redox enzymes such as
cytochromes on their outer membrane that can transfer electrons to external
materials.

Generating Electricity
When micro-organisms consume a substrate such as sugar under aerobic
conditions, they produce carbon dioxide and water. When oxygen is not
present, however, they produce carbon dioxide, protons and electrons, as
described below for sucrose:
C12H22O11 + 13H2O → 12CO2 + 48H+ + 48e-
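
The electron yield in this equation can be turned into a theoretical charge
using the Faraday constant. The sketch below is simple stoichiometry; the
gram quantity and the one-milliampere comparison are illustrative:

    # Theoretical charge available from the sucrose half-reaction above:
    # each mole of C12H22O11 yields 48 moles of electrons.

    FARADAY_C_PER_MOL = 96485.0   # charge per mole of electrons
    SUCROSE_G_PER_MOL = 342.3     # molar mass of sucrose
    ELECTRONS_PER_MOLECULE = 48   # from the balanced equation above

    def charge_coulombs(sucrose_grams):
        moles = sucrose_grams / SUCROSE_G_PER_MOL
        return moles * ELECTRONS_PER_MOLECULE * FARADAY_C_PER_MOL

    q = charge_coulombs(1.0)           # one gram of sucrose
    print(round(q))                    # ~13,530 coulombs
    # At the ~1 mA currents typical of small MFCs, this charge would in
    # theory sustain operation for about 157 days.
    print(round(q / 0.001 / 86400))    # ~157 days
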
Microbial fuel cells use inorganic mediators to tap into the electron
transport chain of cells and steal these electrons that are produced. The
mediator crosses the outer cell lipid membrane and cell wall; it then
begins to liberate electrons from the electron transport chain that would
normally be taken up by oxygen or other intermediates. The now reduced
mediator exits the cell laden with electrons that it shuttles to an electrode
where it deposits them; this electrode becomes the electrogenic anode
(negatively charged electrode). The release of the electrons means that the
mediator returns to its original oxidised state ready to repeat the process. It is
important to note that this can only happen under anaerobic conditions; if
oxygen is present, it will collect all the electrons, as it has a greater
electronegativity than the mediator. A number of mediators have been
suggested for use in microbial fuel cells; these include neutral red, methylene
blue, thionine and resorufin.
This is the principle behind generating a flow of electrons from most micro-
organisms. In order to turn this into a usable supply of electricity, the process
has to be accommodated within a fuel cell.
In order to generate a useful current it is necessary to create a complete
circuit, not just shuttle electrons to a single point.
The mediator and micro-organism, in this case yeast, are mixed together
in a solution to which is added a suitable substrate such as glucose. This
mixture is placed in a sealed chamber to stop oxygen entering, thus forcing
the micro-organism to use anaerobic respiration. An electrode is placed in
the solution that will act as the anode as described previously.
In the second chamber of the MFC is another solution and electrode. This
electrode, called the cathode, is positively charged and is the equivalent of
the oxygen sink at the end of the electron transport chain only now it is
external to the biological cell. The solution is an oxidizing agent that picks
up the electrons at the cathode. As with the electron transport chain in the
yeast cell, this could be one of a number of molecules, such as oxygen;
however, oxygen is not particularly practical as it would require large volumes
of circulating gas. A more convenient option is to use a solution of a solid
oxidizing agent.
Connecting the two electrodes is a wire; completing the circuit and
connecting the two chambers is a salt bridge or ion exchange membrane.
This last feature allows the protons produced, as described in the equation
above, to pass from the anode chamber to the cathode chamber.
The reduced mediator carries electrons from the cell to the electrode, where
it is oxidised as it deposits the electrons. These then flow across the wire to
the second electrode, which acts as an electron sink; from here they pass to
an oxidising material.

Uses

Power Generation
Microbial fuel cells have a number of potential uses. The first and most
obvious is harvesting the electricity produced for a power source. Virtually
any organic material could be used to ‘feed’ the fuel cell. MFCs could be
installed in wastewater treatment plants. The bacteria would consume waste
material from the water and produce supplementary power for the plant. The
gains to be made from doing this are that MFCs are a very clean and efficient
method of energy production. A fuel cell's emissions are well below
regulations. MFCs also use energy much more efficiently than standard
combustion engines, which are limited by the Carnot cycle. In theory an MFC
is capable of energy efficiency far beyond 50% (17).
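
For comparison, the Carnot bound on a heat engine is easy to compute. The
temperatures below are illustrative; the point is only that MFCs, being
electrochemical rather than thermal devices, are not subject to this limit:

    # Carnot efficiency limit for a heat engine:
    # eta <= 1 - T_cold / T_hot, with temperatures in kelvin.
    # MFCs are electrochemical devices, not heat engines, so this bound
    # does not apply to them.

    def carnot_limit(t_hot_k, t_cold_k):
        return 1.0 - t_cold_k / t_hot_k

    # An engine running between ~900 K and ~300 K (illustrative figures)
    # can never exceed ~67% efficiency, and real engines fall well short.
    print(round(carnot_limit(900.0, 300.0), 2))  # 0.67
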
However, MFCs do not have to be used on a large scale; it has even been
suggested that MFCs could be implanted in the body to be employed as a
power source for a pacemaker, a microsensor or a microactuator. The MFC
would take glucose from the blood stream, or possibly other substrates
contained in the body, and use this to generate electricity to power these
devices. The advantage of using an MFC in this situation, as opposed to a
normal battery, is that it uses a renewable form of energy and would not need
to be recharged like a standard battery would. Further to this, MFCs can be
built very small, and they operate well in mild conditions, 20 °C to 40 °C and
at a pH of around 7 (19).

Further Uses
The electricity from fuel cells can be harnessed in applications
such as EcoBots, Gastrobots and biosensors. Since the current generated from a
microbial fuel cell is directly proportional to the strength of wastewater used
as the fuel, an MFC can be used to measure the strength of wastewater. The
strength of wastewater is commonly evaluated as biochemical oxygen demand
(BOD) values. BOD values are determined by incubating samples for 5 days
with a proper source of microbes, usually activated sludge collected from
sewage works. When BOD values are used as a real-time control parameter, 5
days' incubation is too long.
An MFC-type BOD sensor can be used to measure real-time BOD values.
Oxygen and nitrate are preferred electron acceptors over the electrode,
reducing current generation from an MFC, so MFC-type BOD sensors
underestimate BOD values in the presence of these electron acceptors. This
can be avoided by inhibiting aerobic and nitrate respiration in the MFC using
terminal oxidase inhibitors such as cyanide and azide, which improves the
performance of the fuel cell as a BOD sensor. This type of BOD sensor is
commercially available.
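
A sketch of the proportionality this sensor relies on is given below. The
calibration constant is a made-up value; a real sensor would be calibrated
against samples of known BOD:

    # Linear calibration sketch for an MFC-type BOD sensor: current is
    # taken to be proportional to wastewater strength. The sensitivity
    # below is a hypothetical value for illustration only.

    MA_PER_MG_L_BOD = 0.02  # assumed sensitivity, mA per mg/L of BOD

    def bod_from_current(current_ma):
        """Estimate BOD (mg/L) from the measured MFC current (mA)."""
        return current_ma / MA_PER_MG_L_BOD

    # A 4 mA reading would indicate ~200 mg/L BOD under this assumed
    # calibration, available in minutes rather than after a 5-day test.
    print(bod_from_current(4.0))  # 200.0
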

WASTE MANAGEMENT TECHNIQUES


Managing municipal waste, industrial waste and commercial waste has
traditionally consisted of collection, followed by disposal. Depending upon
the type of waste and the area, a level of processing may follow collection.
This processing may be to reduce the hazard of the waste, recover material
for recycling, produce energy from the waste, or reduce it in volume for
more efficient disposal.
Collection methods vary widely between different countries and regions,
and it would be impossible to describe them all. For example, in Australia
most urban domestic households have a 240 litre (63.4 gallon) bin that is
emptied weekly by the local council. Many areas, especially those in less
developed areas, do not have a formal waste-collection system in place.
In Canadian urban centres curbside collection is the most common method
of disposal, whereby the city collects waste, recyclables, and/or organics on a
scheduled basis from residential areas. In rural areas people dispose of
their waste at transfer stations. Waste collected is then transported to a regional
landfill.
Disposal methods also vary widely. In Australia, the most common method
of disposal of solid waste is to landfills, because it is a large country with a
low-density population. By contrast, in Japan it is more common for waste
to be incinerated, because the country is smaller and land is scarce.

Landfill
Disposing of waste in a landfill is the most traditional method of waste
disposal, and it remains a common practice in most countries. Historically,
landfills were often established in disused quarries, mining voids or borrow
pits. Running a landfill that minimises environmental problems can be a
hygienic and relatively inexpensive method of disposing of waste materials;
however, a more efficient method of disposal will almost surely be needed in
time as less land becomes available for such purposes.
Older or poorly managed landfills can create a number of adverse
environmental impacts, including wind-blown litter, attraction of vermin and
pollutants such as leachate, which can leach into and pollute groundwater
and rivers. Another product of landfills containing harmful wastes is landfill
gas, mostly composed of methane and carbon dioxide, which is produced as
the waste breaks down anaerobically.
Characteristics of a modern landfill include methods to contain leachate,
such as clay or plastic liners. Disposed waste should be compacted
and covered to avoid attracting mice and rats and to prevent wind-blown
litter. Many landfills also have a landfill gas extraction system installed after
closure to extract the gas generated by the decomposing waste materials.
This gas is often burnt in a gas engine to generate electricity. Even flaring the
gas off is a better environmental outcome than allowing it to escape to the
atmosphere, as this consumes the methane, which is a far stronger greenhouse
gas than carbon dioxide. Some of it can be tapped for use as a fuel.
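
The benefit of flaring can be put in carbon dioxide equivalent terms. The
100-year global warming potential of methane used below (28 times CO2) is
a commonly cited figure, taken here as an assumption:

    # CO2-equivalent benefit of flaring landfill methane rather than
    # venting it. The 100-year global warming potential (GWP) of methane
    # is assumed to be ~28x CO2; combustion turns each tonne of CH4 into
    # 44/16 = 2.75 tonnes of CO2.

    GWP_CH4 = 28.0                   # assumed 100-year GWP of methane
    CO2_PER_CH4_MASS = 44.0 / 16.0   # mass of CO2 produced per mass of CH4

    def flaring_benefit_tonnes_co2e(methane_tonnes):
        vented = methane_tonnes * GWP_CH4
        flared = methane_tonnes * CO2_PER_CH4_MASS
        return vented - flared

    # Flaring 1 tonne of methane: ~28 t CO2e avoided versus ~2.75 t CO2
    # emitted, a net saving of about 25 t CO2e.
    print(round(flaring_benefit_tonnes_co2e(1.0), 2))  # 25.25
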
Many local authorities, especially in urban areas, have found it difficult to
establish new landfills due to opposition from owners of adjacent land. Few
people want a landfill in their local neighborhood. As a result, solid waste
disposal in these areas has become more expensive as material must be
transported further away for disposal.
Some oppose the use of landfills in any way, anywhere, arguing that the
logical end result of landfill operations is that it will eventually leave a
drastically polluted planet with no canyons, and no wild space. Some futurists
have stated that landfills will be the “mines of the future”: as some resources
become more scarce, they will become valuable enough that it would be
necessary to ‘mine’ them from landfills where these materials were previously
discarded as valueless.
This fact, as well as growing concern about the impacts of excessive
materials consumption, has given rise to efforts to minimise the amount of
waste sent to landfill in many areas. These efforts include taxing or levying
waste sent to landfill, recycling the materials, converting material to energy,
designing products that require less material, and legislation mandating that
manufacturers are responsible for final packaging and materials disposal costs
(as in the manufacturers setting up and funding the “Grüne Punkt” in Germany
to achieve that end). A related subject is that of industrial ecology, where the
material flows between industries are studied. The by-products of one industry
may be a useful commodity to another, leading to a reduced materials
wastestream.

Incineration
Incineration is the process of destroying waste material by burning it.
Incineration is often alternatively named “Energy-from-waste” (EfW) or
“waste-to-energy”; this is misleading as there are other ways of recovering
energy from waste that do not involve directly burning it.
Incineration is carried out both on a small scale by individuals, and on a
large scale by industry. It is recognised as a practical method of disposing of
hazardous waste materials, such as biological medical waste. Many entities
now refer to disposal of wastes by exposure to high temperatures as thermal
treatment (however this also includes gasification and pyrolysis). This concept
encompasses recovery of metals and energy from municipal solid waste
(MSW) as well as safe disposal of the remaining ash and reduction of the
volume of waste.
Though classic incineration is still widely used in many areas, especially
developing countries, incineration as a waste management tool is becoming
controversial for several reasons.
First, it may be a poor use of many waste materials because it destroys not
only the raw material, but also all of the energy, water, and other natural
resources used to produce it. Some energy can be reclaimed as electricity by
using the combustion to create steam to drive an electrical generator, but
even the best incinerator can only recover a fraction of the caloric value of
fuel materials.
Second, incineration of municipal solid wastes does produce significant
amounts of dioxin and furan emissions to the atmosphere. Dioxins and furans
are considered by many to be serious health hazards. However, advances in
emission control designs and very stringent new governmental regulations
have caused large reductions in the amount of dioxins and furans produced
by waste-to-energy plants. The U.S. Environmental Protection Agency (EPA)
and the European Union have taken the lead in mandating very strict emission
standards for incineration of wastes.
Incineration also produces large amounts of ash requiring safe disposal so
as not to contaminate underground aquifers. Until recently, safe disposal of
incinerator ash was a major problem. In the mid-1990s, experiments in France
and Germany used electric plasma torches to melt incinerator ash into inert
glassy pebbles, valuable in concrete production. Incinerator ash has also been
chemically separated into lye and other useful chemicals. This process, plasma
arc waste disposal, is now operated commercially, and is used to convert
existing waste and landfill into power generating gas and construction rubble.
An incineration technique that avoids ash disposal problems is the
incorporation of the ash into portland cement kilns, which also saves fuel, a
double benefit.

Composting and Anaerobic Digestion


Waste materials that are organic in nature, such as plant material, food
scraps, and paper products, are increasingly being recycled. These materials
are put through a composting and/or digestion system to control the biological
process to decompose the organic matter and kill pathogens. The resulting
stabilized organic material is then recycled as mulch or compost for
agricultural or landscaping purposes. There is a large variety of composting
and digestion methods and technologies, varying in complexity from simple
windrow composting of shredded plant material, to automated enclosed-vessel
digestion of mixed domestic waste. These methods of biological
decomposition are differentiated as being aerobic in composting methods or
anaerobic in digestion methods, although hybrids of the two methods also
exist.

Examples of Composting and Anaerobic Digestion Programs


The Green Bin Program, a form of organic recycling used in Toronto,
Ontario and surrounding municipalities including Markham, Ontario, Canada,
makes use of anaerobic digestion to reduce the amount of garbage shipped
to Michigan, in the United States. This is the newest facet of the 3-stream
waste management system that has been implemented in the town and is
another step towards the goal of diverting 70% of current waste away from the
landfills. Green Bins allow any organic waste that in the past would have
formed landfill waste to be composted and turned into nutrient-rich soil.
Examples of waste products for the Green Bin are food products and scraps,
soiled papers and sanitary napkins. Currently Markham, like the other
municipalities in the Greater Toronto Area, ships all of its waste to Michigan
at a cost of $22 CAN per tonne (metric ton, 1000 kg).
The Green Bin Program is currently being studied by other Municipalities
in the province of Ontario as a way of diverting waste away from the landfills.
Notably, Toronto and Ottawa are in the preliminary stages of adopting a similar
program.
The City of Edmonton, Alberta, Canada has adopted large-scale
composting to deal with its urban waste. Its composting facility is the largest
of its type in the world, representing 35 per cent of Canada’s centralised
composting capacity. The $100-million co-composter allows Edmonton to
recycle 65 per cent of its residential waste. The co-composter itself is 38,690
square metres in size, equivalent to 8 football fields. It’s designed to process
200,000 tonnes of residential solid waste per year and 22,500 dry tonnes of
biosolids, turning them into 80,000 tonnes of compost annually.
In mechanical biological treatment (MBT), the “biological” element refers to
either anaerobic digestion or composting. Anaerobic digestion breaks down
the biodegradable component
of the waste to produce biogas and soil conditioner. The biogas can be used
to generate renewable energy. More advanced processes such as the ArrowBio
Process enable high rates of gas and green energy production without the
production of refuse-derived fuel (RDF). This is facilitated by processing the waste in water.


The biological element can also refer to a composting stage. Here the organic component
is treated with aerobic microorganisms. They break down the waste into
carbon dioxide and compost. There is no green energy produced by systems
simply employing composting.
MBT is gaining increased recognition in countries with changing waste
management markets such as the UK and Australia, where WSN
Environmental Solutions has taken a leading role in developing MBT plants.

Pyrolysis and Gasification
Pyrolysis and gasification are two related forms of thermal treatment in which
materials are heated to high temperatures with limited oxygen. The process
typically occurs in a sealed vessel under high pressure. Converting material
to energy this way is more efficient than direct incineration, with more energy
able to be recovered and used.
Pyrolysis of solid waste converts the material into solid, liquid and gas
products. The liquid oil and gas can be burnt to produce energy or refined
into other products. The solid residue (char) can be further refined into
products such as activated carbon.
Gasification is used to convert organic materials directly into a synthetic
gas (syngas) composed of carbon monoxide and hydrogen. The gas is then
burnt to produce electricity and steam. Gasification is used in biomass power
stations to produce renewable energy and heat.
Plasma gasification is the gasification of matter in an oxygen-starved
environment to decompose waste material into its basic molecular structure.
Plasma gasification does not combust waste as incinerators do. It converts
organic waste into a fuel gas that still contains all the chemical and heat
energy from the waste. It converts inorganic waste into an inert vitrified
glass.
Plasma is considered a fourth state of matter, the other three being gas,
liquid, and solid. Electricity is fed to a torch, which has two electrodes, creating
an arc. Inert gas is passed through the arc, heating the process gas to internal
temperatures as high as 13,000 °C (25,000 °F). The temperature a metre
from the torch can be as high as ~4000 °C (~8,000 °F). Because of these
high temperatures the waste is completely destroyed and broken down into
its basic elemental components. There are no tars or furans. At these high
temperatures all metals become molten and flow out the bottom of the reactor.
Inorganics such as silica, soil, concrete, glass, gravel, etc. are vitrified into
glass and flow out the bottom of the reactor. There is no ash remaining to go
back to a landfill.
The plasma reactor does not discriminate between types of waste. It can
process any type of waste. The only variable is the amount of energy that it
takes to destroy the waste. Consequently, no sorting of waste is necessary
and any type of waste, other than nuclear waste, can be processed.
The reactors are large and operate at a slightly negative pressure, meaning
that the feed system is simplified because the gas does not tend to escape.
The gas has to be pulled from the reactor by the suction of the compressor.
Each reactor can process 20 tonnes per hour (t/h) compared to 3 t/h for typical
gasifiers. Because of the size and negative pressure, the feed system can
handle bundles of material up to 1 metre in size. This means that whole
drums or bags of waste can be fed directly into the reactor making the system
ideal for large scale production.
The gas coming out of a plasma gasifier is lower in trace contaminants
than with any kind of incinerator or other gasifier. Because the process starts
with lower emissions out of the reactor, it is able to achieve significantly
lower stack emissions. The gasifier is also insensitive to the amount of moisture
in the waste: the moisture consumes energy to vaporise and can impact the
capacity and economics; however, it will not affect the process itself.
Gas from the plasma reactor can be burned to produce electricity or can
be synthesised into ethanol to contribute to automotive fuel.

TREATMENT IN THE
RECEIVING ENVIRONMENT
Many processes in a wastewater treatment plant are designed to mimic
the natural treatment processes that occur in the environment, whether that
environment is a natural water body or the ground. If not overloaded, bacteria
in the environment will consume organic contaminants, although this will
reduce the levels of oxygen in the water and may significantly change the
overall ecology of the receiving water. Native bacterial populations feed on
the organic contaminants, and the numbers of disease-causing microorganisms
are reduced by natural environmental conditions such as predation, exposure
to ultraviolet radiation, etc.
Consequently, in cases where the receiving environment provides a high
level of dilution, a high degree of wastewater treatment may not be required.
However, recent evidence has demonstrated that very low levels of certain
contaminants in wastewater, including hormones (from animal husbandry
and residue from human birth control pills) and synthetic materials such as
phthalates that mimic hormones in their action, can have an unpredictable
adverse impact on the natural biota and potentially on humans if the water is
re-used for drinking water. In the US and EU, uncontrolled discharges of
wastewater to the environment are not permitted under law, and strict water
quality requirements must be met. A significant threat in the coming decades
will be the increasing uncontrolled discharges of wastewater within rapidly
developing countries.

Worldwide Shortfall of Sewage Treatment


Viewed from a worldwide perspective there is inadequate sewage treatment
capacity, especially in lesser developed countries. This circumstance has
existed since at least the 1970s and is due to overpopulation, the water crisis
and the expense of constructing wastewater treatment systems. The result of
inadequate sewage treatment is significant mortality increases from (mostly)
preventable diseases; moreover, this mortality impact is particularly high
among infants and other children in underdeveloped countries, particularly
on the continents of Africa and Asia. In many developing countries the bulk
of domestic and industrial wastewater is discharged without any treatment
or after primary treatment only.
Water utilities in developing countries are chronically underfunded because
of low water tariffs, the absence of sanitation tariffs in many cases, low
billing efficiency (i.e. many users that are billed do not pay) and poor
operational efficiency (i.e. there are overly high levels of staff, there are high
physical losses, and many users have illegal connections and are thus not
being billed). In addition, wastewater treatment typically is the process within
the utility that receives the least attention, partly because enforcement of
environmental standards is poor.
As a result of all these factors, operation and maintenance of many
wastewater treatment plants is poor. This is evidenced by the frequent
breakdown of equipment, shutdown of electrically operated equipment due
to power outages or to reduce costs, and sedimentation due to lack of sludge
removal. Developing countries as diverse as Egypt, Algeria, China or
Colombia have invested substantial sums in wastewater treatment without
achieving a significant impact in terms of environmental improvement. Even
if wastewater treatment plants are properly operating, it can be argued that
the environmental impact is limited in cases where the assimilative capacity
of the receiving waters (oceans with strong currents or large rivers) is high, as
is often the case. Waterborne diseases that are prevalent in developing
countries, such as diarrhea, typhus and cholera, are caused primarily by poor
hygiene practices and poor disposal of wastewater. The public health impact
of the discharge of untreated wastewater is comparatively much lower.
Hygiene promotion, on-site sanitation and low-cost sanitation thus are likely
to have a much greater impact on public health than wastewater treatment.
Given the scarcity of financial resources in developing countries and the
poor track record of wastewater treatment plants, it could thus be argued that
investments should first be undertaken to evacuate wastewater from human
settlements and to promote good hygiene practices.
Only once this has been achieved should substantial funds be invested in
wastewater treatment plants. However, national legislation modeled on
standards in the US and the EU has often led to the prioritization of expensive
wastewater treatment without having much of an environmental impact. In
the year 2000, the United Nations established that 2.64 billion people
had inadequate access to sanitation, adequate sanitation being defined as
access to an improved latrine, a septic tank or a sewer.
This value represented 44 percent of the global population, but in Africa
and Asia approximately half of the population had no access whatsoever to
sanitation. There are few reliable figures on the share of the wastewater
collected in sewers that is being treated in developing countries. However,
in Latin America about 15% of collected wastewater passes through treatment
plants (with varying levels of actual treatment) and in Sub-Saharan Africa
almost none of the collected wastewater is treated.

Biological Treatment of Waste Water


Waste water in general, and sewage in particular, usually contains
hydrocarbons that need to be reduced to a low level or even to zero.
Biological treatment can eliminate these hydrocarbons. Certain bacteria are
able to ‘eat’ them and, in doing so, do the same thing that any living species
is doing - they ‘burn’ their ‘meal’ to produce energy that keeps them warm,
and lets them move and reproduce. In other words, it lets them live and gives
us clean water.

Bacteria for Waste Water Treatment


Since sewage consists of a great variety of organic and inorganic
components, no single bacterium can eat them all. A population of various
micro-organisms is the best ingredient for good degradation. There are,
however, certain groups of bacteria that are specialised to deal with certain
hydrocarbons, surrounding conditions and nutrition. They are closely related
to the way biological waste water treatment is performed.
'Aerobic' bacteria require oxygen to burn hydrocarbons. Aerobic treatment
technology can usually be recognised by the air blown into waste water basins,
making the whole basin bubble. Providing air this way requires compressors
that consume power, causing treatment costs to increase. Therefore, a
technology is desired that does not need compressors to blow air into the
waste water. Two different kinds of bacteria satisfy this need.
Waste water treatment in an 'anoxic' environment is performed with a
different type of aerobic bacterium.
No air or oxygen needs to be blown into the basins. Instead, these bacteria
obtain their oxygen from chemical groups such as nitrate (NO3-) or nitrite
(NO2-). These compounds are chemically reduced to molecular nitrogen (N2),
providing the bacteria with their oxygen.
The third group of bacteria and the related technology are called
'anaerobic': they do not need oxygen, and to many of them oxygen is, in
fact, a poison. In the early days of life on our planet, these bacteria dominated,
as the atmosphere contained very little oxygen. The oxygen later produced
as a waste gas by photosynthetic microbes led to the first major extinction of
species. Anaerobic bacteria instead release gases such as methane, which is
also linked to the greenhouse effect.

Biological Reactions
Aerobic bacteria need nitrogen and phosphate, as well as carbon and
oxygen, to live. Both elements are widespread in nature and known to be
important ingredients in any kind of manure. The optimum ratio of the
elements carbon, nitrogen and phosphorus in the nutrition of bacteria has
been determined as 100:5:1.
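
For a known carbon load, the nitrogen and phosphorus requirements follow
directly from this ratio; the sketch below is simply our own framing of that
arithmetic:

    # Nutrient dosing sketch for the 100:5:1 carbon:nitrogen:phosphorus
    # ratio given above.

    def nutrient_requirements_kg(carbon_kg):
        """Return (nitrogen_kg, phosphorus_kg) for a given carbon load."""
        nitrogen = carbon_kg * 5.0 / 100.0
        phosphorus = carbon_kg * 1.0 / 100.0
        return nitrogen, phosphorus

    # 500 kg of degradable carbon calls for roughly 25 kg of nitrogen
    # and 5 kg of phosphorus.
    print(nutrient_requirements_kg(500.0))  # (25.0, 5.0)
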

Compost
Biological activity similar to that in waste water treatment can be observed
in the production of compost. Organic waste is eaten by micro-organisms,
producing heat that stimulates them to eat more, reproduce and die. Death of
the biomass is the final stage in the process and is what makes compost a
good fertiliser.

WASTEWATER TREATMENT
Domestic wastewater treatment is the process of removing contaminants
from sewage. It includes physical, chemical and biological processes to
remove physical, chemical and biological contaminants. Its objective is to
produce a wastestream (or treated effluent) and a solid waste or sludge that are
suitable for discharge or reuse back into the environment. This material is
often inadvertently contaminated with toxic organic and inorganic
compounds. Sewage is created by residences, institutions, and commercial
and industrial establishments. It can be treated close to where it is created (in
septic tanks or onsite package plants and other aerobic treatment systems),
or collected and transported via a network of pipes and pump stations to a
municipal treatment plant. Sewage collection and treatment is typically subject
to local, state and federal regulations and standards (regulation and controls).
Industrial sources of wastewater often require specialized treatment processes.
Typically, sewage treatment involves three stages, called primary,
secondary and tertiary treatment. First, the solids are separated from the
wastewater stream. Then dissolved biological matter is progressively
converted into a solid mass by using indigenous, water-borne bacteria.
Finally, the biological solids are neutralized then disposed of or re-used,
and the treated water may be disinfected chemically or physically (for example
by lagooning and micro-filtration). The final effluent can be discharged into
a natural surface water body (stream, river or bay) or other environment
(wetland, golf course, greenway, etc.). Sewage is the liquid waste from toilets,
baths, showers, kitchens, etc. that is disposed of via sewers. In many areas
sewage also includes some liquid waste from industry and commerce.
In the UK, the waste from toilets is termed foul waste, the waste from
items such as basins, baths, kitchens is termed sullage water, and the industrial
and commercial waste is termed trade waste. The division of household water
drains into greywater and blackwater is becoming more common in the
developed world, with greywater being permitted to be used for watering
plants or recycled for flushing toilets. Much sewage also includes some surface
water from roofs or hard-standing areas.
Municipal wastewater therefore includes residential, commercial, and
industrial liquid waste discharges, and may include stormwater runoff. A
sewage system capable of handling stormwater is known as a combined system.
Such systems are usually avoided since they complicate, and thereby reduce,
the efficacy of sewage treatment plants owing to their seasonality. Storm drains
are preferred for this purpose. Sewerage systems that transport liquid waste
discharges and stormwater together to a common treatment facility are called
combined sewer systems.
The construction of combined sewers is a less common practice in the
U.S. and Canada than in the past and is no longer accepted within Building
Regulations in the UK and other European countries. Instead, liquid waste
and stormwater are collected and conveyed in separate sewer systems, referred
to as sanitary sewers and storm sewers in the U.S. and as foul sewers and
surface water sewers in the UK. Overflows from foul sewers designed to
relieve pressure from heavy rainfall are termed storm sewers or combined
sewer overflows.As rainfall runs over the surface of roofs and the ground, it
may pick up various contaminants including soil particles (sediment), heavy
metals, organic compounds, animal waste, and oil and grease. Some
jurisdictions require stormwater to receive some level of treatment before
being discharged directly into waterways. Examples of treatment processes
used for stormwater include sedimentation basins, wetlands, and vortex
separators (to remove coarse solids).The site where the process is conducted
is called a sewage treatment plant.
The flow scheme of a sewage treatment plant is generally the same for all
countries:
Mechanical treatment: influx (influent), removal of large objects, removal
of sand and grit, pre-precipitation.
Biological treatment: oxidation bed (oxidizing bed) or aerated system,
post-precipitation, effluent.
Chemical treatment: this step is usually combined with settling and other
processes to remove solids, such as filtration; the combination is referred to
in the US as physical-chemical treatment.

Treatment Stages
Primary Treatment
Primary treatment is designed to reduce oils, grease, fats, sand,
grit, and coarse (settleable) solids. This step is done entirely with machinery,
hence the name mechanical treatment.

Influx (Influent) and Removal of Large Objects
In the mechanical treatment, the influx (influent) of sewage water is
strained to remove all large objects that are deposited in the sewer system,
such as rags, sticks, condoms, sanitary towels (sanitary napkins) or tampons,
cans, fruit, etc. This is most commonly done with a manual or automated
mechanically raked screen. This type of waste is removed because it can
damage the sensitive equipment in the sewage treatment plant.

Sand and Grit Removal


This stage typically includes a sand or grit channel where the velocity of
the incoming wastewater is carefully controlled to allow sand, grit and stones
to settle while still keeping the majority of the organic material within the
flow. This equipment is called a detritor or sand catcher. Sand, grit and stones
need to be removed early in the process to avoid damage to pumps and other
equipment in the remaining treatment stages. Sometimes there is a sand
washer (grit classifier) followed by a conveyor that transports the sand to a
container for disposal. The contents from the sand catcher may be fed into
the incinerator in a sludge processing plant, but in many cases the sand and
grit is sent to a landfill.
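
The settling behaviour the channel exploits can be sketched with Stokes'
law, v = g d^2 (rho_s - rho_w) / (18 mu). This is an idealisation, strictly
valid only for small grains in laminar flow, and the particle and fluid values
below are assumptions:

    # Idealised settling velocity of a grit particle via Stokes' law.
    # Treat the result as an order-of-magnitude estimate only.

    G = 9.81            # gravitational acceleration, m/s^2
    RHO_SAND = 2650.0   # assumed sand grain density, kg/m^3
    RHO_WATER = 1000.0  # water density, kg/m^3
    MU_WATER = 1.0e-3   # dynamic viscosity of water at ~20 C, Pa.s

    def stokes_settling_velocity(diameter_m):
        return G * diameter_m**2 * (RHO_SAND - RHO_WATER) / (18.0 * MU_WATER)

    # A 0.2 mm grain settles at roughly 0.04 m/s, far faster than light
    # organic matter, which is why a controlled channel velocity drops
    # the grit while carrying the organics onward.
    print(round(stokes_settling_velocity(0.2e-3), 3))  # ~0.036
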

Screening and Maceration


The grit free liquid is then passed through fixed or rotating screens to
remove floating and larger material such as rags and smaller particulates
such as peas and corn. Screenings are collected and may be returned to the
sludge treatment plant or may be disposed of off site by landfilling or
incineration. Maceration, in which solids are cut into small particles through
the use of rotating knife edges mounted on a revolving cylinder, is used in
plants that are able to process this particulate waste.

Sedimentation
Many plants have a sedimentation stage where the sewage is allowed to
pass slowly through large tanks, commonly called “primary clarifiers” or
“primary sedimentation tanks”. The tanks are large enough that faecal solids
can settle and floating material such as grease and plastics can rise to the
surface and be skimmed off. The main purpose of the primary stage is to
produce a generally homogeneous liquid capable of being treated biologically
and a sludge that can be separately treated or processed. Primary settlement
tanks are usually equipped with mechanically driven scrapers that continually
drive the collected sludge towards a hopper in the base of the tank from
where it can be pumped to further sludge treatment stages.

Secondary Treatment
Secondary treatment is designed to substantially degrade the biological
content of the sewage, such as that derived from human waste, food waste,
soaps and detergent. The majority of municipal and industrial plants treat the
settled sewage liquor using aerobic biological processes. For this to be
effective, the biota require both oxygen and a substrate on which to live.
There are a number of ways in which this is done. In all these methods, the
bacteria and protozoa consume biodegradable soluble organic contaminants
(e.g. sugars, fats, organic short-chain carbon molecules, etc.) and bind much
of the less soluble fractions into floc particles. Secondary treatment systems
are classified as fixed film or suspended growth. In fixed film systems - such
as rock filters - the biomass grows on media and the sewage passes over its
surface. In suspended growth systems - such as activated sludge - the biomass
is well mixed with the sewage. Typically, suspended growth systems can be
operated in a smaller space than fixed film systems that treat the same amount
of water; however, fixed film systems are more able to cope with drastic
changes in the amount of biological material and can provide higher removal
rates for organic material and suspended solids than suspended growth
systems.

Roughing Filters
Roughing filters are intended to treat particularly strong or variable organic
loads, typically industrial, to allow them to then be treated by conventional
secondary treatment processes. They are typically tall, circular filters filled
with open synthetic filter media to which sewage is applied at a relatively
high rate. The design of the filters allows high hydraulic loading and a high
flow-through of air. On larger installations, air is forced through the media
using blowers. The resultant liquor is usually within the normal range for
conventional treatment processes.

Activated Sludge
Activated sludge plants use a variety of mechanisms and processes to use
dissolved oxygen to promote the growth of biological floc that substantially
removes organic material. It also traps particulate material and can, under
ideal conditions, convert ammonia to nitrite and nitrate, and ultimately to
nitrogen gas (see also denitrification).

Filter Beds (Oxidising Beds)
In older plants and
plants receiving more variable loads, trickling filter beds are used where the
settled sewage liquor is spread onto the surface of a deep bed made up of
coke (carbonised coal), limestone chips or specially fabricated plastic media.
Such media must have high surface areas to support the biofilms that form.
The liquor is distributed through perforated rotating arms radiating from a
central pivot.
The distributed liquor trickles through this bed and is collected in drains
at the base. These drains also provide a source of air which percolates up
through the bed, keeping it aerobic. Biological films of bacteria, protozoa
and fungi form on the media’s surfaces and eat or otherwise reduce the
organic content. This biofilm is grazed by insect larvae and worms which
help maintain an optimal thickness. Overloading of beds increases the
thickness of the film leading to clogging of the filter media and ponding on
the surface.
Moving Bed Biological Reactor


The Moving Bed Biological Reactor (MBBR) involves the addition of inert
media into existing activated sludge basins to provide active sites for biomass
attachment. This conversion results in a strictly attached growth system.
Advantages of attached growth systems include 1) maintaining a high density
of biomass population, 2) increasing the efficiency of the system without the
need for increasing the mixed liquor suspended solids (MLSS) concentration,
and 3) eliminating the cost of operating the return activated sludge (RAS) line.

Biological Aerated Filters


Biological Aerated (or Anoxic) Filter (BAF) combines filtration with
biological carbon reduction, nitrification or denitrification. BAF usually
includes a reactor filled with a filter media. The media is either in suspension
or supported by a gravel layer at the foot of the filter. The dual purpose of
this media is to support highly active biomass that is attached to it and to
filter suspended solids. Carbon reduction and ammonia conversion occur in
aerobic mode, sometimes in a single reactor, while nitrate conversion occurs
in anoxic mode. A BAF is operated in either an upflow or a downflow
configuration, depending on the design specified by the manufacturer.

Membrane Biological Reactors
Membrane Biological Reactors (MBR) include a semi-permeable membrane
barrier system either submerged
or in conjunction with an activated sludge process. This technology guarantees
removal of all suspended and some dissolved pollutants. The performance of
MBR systems is limited by the nutrient reduction efficiency of the
activated sludge process. The cost of building and operating an MBR is usually
higher than conventional wastewater treatment.

Secondary Sedimentation
The final step in the secondary treatment stage is to settle out the biological
floc or filter material and produce sewage water containing very low levels
of organic material and suspended matter.

Tertiary Treatment
Tertiary treatment provides a final stage to raise the effluent quality to the
standard required before it is discharged to the receiving environment (sea,
river, lake, ground, etc.) More than one tertiary treatment process may be
used at any treatment plant. If disinfection is practiced, it is always the final
process. Tertiary treatment is also called effluent polishing.
Filtration
Sand filtration removes much of the residual suspended matter. Filtration
over activated carbon removes residual toxins.

Lagooning
Lagooning provides
settlement and further biological improvement through storage in large man-
made ponds or lagoons. These lagoons are highly aerobic and colonization
by native macrophytes, especially reeds, is often encouraged. Small filter
feeding invertebrates such as Daphnia and species of Rotifera greatly assist
in treatment by removing fine particulates.

Health Impacts of Water Pollution

It is a well-known fact that clean water is absolutely essential for healthy
living. Adequate supply of fresh and clean drinking water is a basic need for
all human beings on the earth, yet it has been observed that millions of people
worldwide are deprived of this. Freshwater resources all over the world are
threatened not only by over exploitation and poor management but also by
ecological degradation. The main source of freshwater pollution can be
attributed to discharge of untreated waste, dumping of industrial effluent,
and run-off from agricultural fields.
Industrial growth, urbanization and the increasing use of synthetic organic
substances have serious and adverse impacts on freshwater bodies. It is a
generally accepted fact that the developed countries suffer from problems of
chemical discharge into the water sources mainly groundwater, while
developing countries face problems of agricultural run-off in water sources.
Pollutants such as chemicals in drinking water cause health problems
and lead to water-borne diseases, which can be prevented by measures
taken even at the household level.

GROUNDWATER AND ITS CONTAMINATION


Many areas of groundwater and surface water are now contaminated with
heavy metals, POPs (persistent organic pollutants), and nutrients that have
an adverse effect on health. Water-borne diseases and water-caused health
problems are mostly due to inadequate and incompetent management of water
resources. Safe water for all can only be assured when access, sustainability,
and equity can be guaranteed. Access can be defined as the number of people
who are guaranteed safe drinking water and sufficient quantities of it. There
has to be an effort to sustain it, and there has to be a fair and equal distribution
of water to all segments of the society. Urban areas generally have a higher
coverage of safe water than the rural areas. Even within an area there is
variation: areas that can pay for the services have access to safe water whereas
areas that cannot pay for the services have to make do with water from hand
pumps and other sources.
In the urban areas water gets contaminated in many different ways, some
of the most common reasons being leaky water pipe joints in areas where the
water pipe and sewage line pass close together. Sometimes the water gets
polluted at source due to various reasons and mainly due to inflow of sewage
into the source.
Pesticides. Run-off from farms, backyards, and golf courses contain
pesticides such as DDT that in turn contaminate the water. Leachate from
landfill sites is another major contaminating source. Its effects on the
ecosystems and health are endocrine and reproductive damage in wildlife.
Groundwater is susceptible to contamination, as pesticides are mobile in the
soil. It is a matter of concern as these chemicals are persistent in the soil and
water.
Sewage. Untreated or inadequately treated municipal sewage is a major
source of groundwater and surface water pollution in the developing countries.
The organic material that is discharged with municipal waste into the
watercourses uses substantial oxygen for biological degradation thereby
upsetting the ecological balance of rivers and lakes. Sewage also carries
microbial pathogens that are the cause of the spread of disease.
Nutrients. Domestic waste water, agricultural run-off (including fertilizer
run-off and manure from livestock operations), and industrial effluents
contain phosphorus and nitrogen, which increase the level of nutrients in
water bodies and can cause eutrophication in lakes and rivers and onward to
the coastal areas. The nitrates come mainly from the fertilizer that is added to
the fields. Excessive use of fertilizers causes nitrate contamination of
groundwater, with the result that nitrate levels in drinking water are often far
above the recommended safety levels. Good agricultural practices can help in
reducing the amount of nitrates in the soil and thereby lower its content in
the water.
Synthetic organics. Many of the 100 000 synthetic compounds in use
today are found in the aquatic environment and accumulate in the food chain.
POPs, or persistent organic pollutants, represent the most harmful element
for the ecosystem and for human health, for example, industrial chemicals
and agricultural pesticides. These chemicals can accumulate in fish and cause
serious damage to human health. Where pesticides are used on a large scale,
groundwater gets contaminated and this leads to the chemical contamination
of drinking water.
Acidification. Acidification of surface water, mainly lakes and reservoirs,
is one of the major environmental impacts of the long-range transport of
air pollutants such as sulphur dioxide from power plants, from other heavy
industry such as steel plants, and from motor vehicles. This problem is
particularly severe in the US and in parts of Europe.

CHEMICALS IN DRINKING WATER


Chemicals in water can be both naturally occurring or introduced by human
interference and can have serious health effects.
Fluoride. Fluoride in the water is essential for protection against dental
caries and weakening of the bones, but higher levels can have an adverse
effect on health. In India, high fluoride content is found naturally in the waters
in Rajasthan.
Arsenic. Arsenic occurs naturally, or its release is possibly aggravated by
the over-pumping of aquifers and by phosphorus from fertilizers. High
concentrations of arsenic in water can have an adverse effect on health. A
few years back, high concentrations of this element were found in drinking
water in six districts in West Bengal. A majority of people in the area were
found to be suffering from arsenic skin lesions. It was felt that the arsenic
contamination of the groundwater was due to natural causes. The government
is trying to provide an alternative drinking water source and a method by
which the arsenic content of the water can be removed.
Lead. Pipes, fittings, solder, and the service connections of some household
plumbing systems contain lead that contaminates the drinking water source.
Recreational use of water. Untreated sewage, industrial effluents, and
agricultural waste are often discharged into water bodies such as lakes,
rivers, and coastal areas, endangering their use for recreational purposes
such as swimming and canoeing.
Petrochemicals. Petrochemicals contaminate the groundwater from
underground petroleum storage tanks.
Other heavy metals. These contaminants come from mining waste and
tailings, landfills, or hazardous waste dumps.
Chlorinated solvents. Solvent-bearing effluents from metal and plastics
processing, fabric cleaning, and electronics and aircraft manufacturing are
often discharged and contaminate groundwater.

DISEASES
Water-borne diseases are infectious diseases spread primarily through
contaminated water. Though these diseases are spread either directly or
through flies or filth, water is the chief medium for their spread and hence
they are termed water-borne diseases. Most intestinal (enteric) diseases are
infectious and are transmitted through faecal waste. Pathogens, which include
viruses, bacteria, protozoa, and parasitic worms, are disease-producing agents
found in the faeces of infected persons. These diseases are more prevalent in
areas with poor sanitary conditions.
These pathogens travel through water sources and are transferred directly
by persons handling food and water. Since these diseases are highly
infectious, extreme care and hygiene should be maintained by people looking
after an infected patient. Hepatitis, cholera, dysentery, and typhoid are the
more common water-borne diseases that affect large populations in the tropical
regions. A large number of chemicals that either exist naturally in the land or
are added due to human activity dissolve in the water, thereby contaminating
it and leading to various diseases.
Pesticides. The organophosphates and carbamates present in pesticides
affect and damage the nervous system and can cause cancer. Some of the
pesticides contain carcinogens that exceed recommended levels. Others
contain organochlorines that cause reproductive and endocrine damage.
Lead. Lead is hazardous to health as it accumulates in the body and affects
the central nervous system. Children and pregnant women are most at risk.
Fluoride. Excess fluoride can cause yellowing of the teeth, damage to
the spinal cord, and other crippling conditions.
Nitrates. Drinking water that gets contaminated with nitrates can prove
fatal, especially to infants that drink formula milk, as nitrate restricts the
amount of oxygen that reaches the brain, causing ‘blue baby’ syndrome.
Nitrate is also linked to digestive tract cancers, and it causes algae to bloom,
resulting in eutrophication in surface water.
Petrochemicals. Benzene and other petrochemicals can cause cancer even
at low exposure levels.
Chlorinated solvents. These are linked to reproductive disorders and to
some cancers.
Arsenic. Arsenic poisoning through water can cause liver and nervous
system damage, vascular diseases and also skin cancer.
Other heavy metals. Heavy metals cause damage to the nervous system
and the kidney, and other metabolic disruptions.
Salts. Salts make fresh water unusable for drinking and irrigation
purposes.
Exposure to polluted water can cause diarrhoea, skin irritation, respiratory
problems, and other diseases, depending on the pollutant that is in the water
body. Stagnant water and other untreated water provide a habitat for the
mosquito and a host of other parasites and insects that cause a large number
of diseases especially in the tropical regions. Among these, malaria is
undoubtedly the most widely distributed and causes most damage to human
health.

PREVENTIVE MEASURES
Water-borne epidemics and health hazards in the aquatic environment are
mainly due to improper management of water resources. Proper management
of water resources has become the need of the hour as this would ultimately
lead to a cleaner and healthier environment. In order to prevent the spread of
water-borne infectious diseases, people should take adequate precautions. The
city water supply should be properly checked and necessary steps taken to
disinfect it. Water pipes should be regularly checked for leaks and cracks. At
home, the water should be boiled or filtered, or other necessary steps taken
to ensure that it is free from infection.

MINAMATA: ENVIRONMENTAL CONTAMINATION WITH METHYL MERCURY
In Minamata, Japan, inorganic mercury was used in the industrial
production of acetaldehyde. It was discharged into the nearby bay as waste
water and was ingested by organisms in the bottom sediments. Fish and other
creatures in the sea were soon contaminated and eventually residents of this
area who consumed the fish suffered from MeHg (methyl mercury)
intoxication, later known as the Minamata disease. The disease was first
detected in 1956 but mercury emissions continued until 1968. Even after
the emissions stopped, the bottom sediment of the polluted bay continued
to contain high levels of mercury.
Various measures were taken to deal with this disease. These included
environmental pollution control (cessation of the mercury-based process),
industrial effluent control, environmental restoration of the bay, and
restrictions on the intake of fish from the bay. In addition, research and
investigative activities were promoted assiduously, and compensation and
help were offered by the Japanese Government to all those affected by the
disease.
The Minamata disease proved a turning point towards progress in
environmental protection measures. This experience clearly showed that health
and environment considerations must be integrated into the process of
economic and industrial development from an early stage.

THE WATER-POLLUTION LADDER AND VALUE LEVELS
The levels of water quality for which the research team sought willingness-
to-pay estimates are “boatable,” “fishable,” and “swimmable.” These levels
are described in words and depicted graphically by means of a “water-quality
ladder”. Use of these categories, two of which are embodied in the law
mandating the national programme for water-pollution control, permitted
avoidance of the communications problems associated with describing water
quality in terms of the numerous abstract technical measures of pollution
(oxygen depletion, for example). Although the boatable-fishable-swimmable
categories are widely understood by the public, they did require further
specification to ensure that different people perceived them in a similar fashion.
Boatable water was defined as an intermediate level between water which
“has oil, raw sewage and other things in it, has no plant or animal life and
smells bad” on the one hand, and water which is of fishable quality on the
other.
Game fish such as bass and trout cannot tolerate water in which certain
rough fish such as carp and catfish flourish. In pretests, experiments were
made with two levels of fishable water, one for rough fish like carp and catfish
and the other for game fish like bass, but a single definition of fishable was
adopted as water “clean enough so that game fish like bass can live in it,”
under the assumption that the words “game fish” and “bass” had wide
recognition and denoted water of the quality that Congress had in mind.
Swimmable water appeared to present less difficulty for popular understanding
since the enforcement of water-quality standards for swimming by health
authorities has led to widespread awareness that swimming in polluted water
can cause illness. Because willingness-to-pay questions have to describe in
some detail the conditions of the “market” for the good, they are inevitably
longer than the usual survey research questions. Respondents quickly become
bored and restless if material is read to them without giving them frequent
opportunities to express judgements or to look at visual aids. The
questionnaire for this experiment was designed to be as interactive as possible
by interspersing the text with questions which required the respondents to use
the newly described water-quality categories. They were also handed a card
depicting the water-quality ladder which was referred to constantly during
the sequence of benefits questions.

WILLINGNESS-TO-PAY QUESTIONS AND ANSWERS


Questions about willingness to pay should seem realistic to respondents.
Accordingly, they were couched in terms of annual household payments in
higher prices and taxes because this is the way people do pay for
water-pollution control. A portion of each household’s annual federal tax
payment goes towards the expense of regulating water pollution and providing
construction grants for sewage-treatment plants. Local sewage taxes pay for
the maintenance of these plants. Those private users, such as manufacturing
plants, who incur pollution-control expenses ultimately pass much or all of
the cost along to consumers in higher prices. Thus, this payment method has
a ring of truth to the respondents. As explained earlier, “starting-point bias”
can be an important problem in bidding games and surveys. That is, a high
starting bid from an interviewer may elicit a higher bid from a respondent
than a low starting bid.
A major methodological innovation of the research reported in this chapter
is the development of a device for eliminating such a bias, the “payment
card.” In this technique, the respondent is given a card which contains a
menu of alternative amounts of payment beginning at $0 and increasing by a
fixed interval until an arbitrarily determined large amount is reached. When
the time comes to elicit the amount one is willing to pay, the respondent is
asked to pick a number from the card (or any number in between) which “is
the most you would be willing to pay in taxes and higher price each year”
(italics in the questionnaire) for a given level of water quality. Thus, the
interviewer suggests no bid at all. It turns out, however, that this presents
some problems of its own. In initial pretests, it was found that the respondents
had considerable difficulty in determining their willingness to pay when a
card was used which only presented various dollar amounts.
A number of them expressed embarrassment, confusion, or resentment at
the task, and some who gave amounts indicated they were very uncertain
about them. The problem lay with the lack of benchmarks for their estimates.
People are not normally aware of the total amounts they pay for public goods
even when that amount comes out of their taxes, nor do they know how
much such goods cost. Without a way of psychologically anchoring their
estimate in some manner, they were not able to arrive at meaningful estimates.
They needed benchmarks of some kind which would convey sufficient
information without biasing their responses. Their most appropriate
benchmarks for willingness to pay for water-pollution control would appear
to be the amounts they are already paying in higher prices and taxes for other
nonenvironmental, publicly provided goods and services.
Amounts were identified on the card for several such goods, and further
pretests were conducted, indicating that the benchmarks made the task
meaningful for most people. But the use of payment cards with benchmarks
raises the possibility of introducing its own kind of bias. Are the respondents
who gave amounts for water-pollution control using the benchmarks for
general orientation or are they basing their amounts directly on the benchmarks
themselves in some manner? In the former case, respondents would be giving
unique values for water quality; in the latter case, they would be giving values
for water quality relative to what they think they are paying for a particular
set of other public goods. If the latter case holds and their water-quality values
are sensitive to changes in the benchmark amounts, or to changes in the set
of public goods identified on the payment card, their validity as estimates of
consumer surplus for water quality is suspect. A test for this kind of bias
was conducted in the pretest by using different versions of the payment card
with the amounts paid for other publicly provided goods changed by modest
amounts. No bias was found, and so the “anchored” payment card was deemed
to be a suitable device for the full-scale experiment. Tests were also conducted
to attempt to discover if any of the other sorts of bias were inherent in the
questionnaire. Again, none was found.
A final point should be made regarding the payment card. What people
actually pay for publicly provided goods varies with their income. To correct
for this, four different payment cards were developed corresponding to four
income classes. At the appropriate point in the interview, the interviewer gave
the respondent the payment card for his or her income category, which had
been established by a prior question. The respondents valued three levels of
water quality which were described in words and depicted on the water quality
ladder. They were first asked how much they were willing to pay to maintain
national water quality at the boatable level. Subsequent questions asked them
their willingness to pay to raise overall water quality to fishable quality and
then to swimmable quality. The average willingness-to-pay amounts given by
respondents for the two higher levels consist of the amounts they offered for
the lower levels plus any additional amount they offered for the higher level.
Average annual amounts per household were then computed for those
respondents who answered the willingness-to-pay questions.
The most substantial benefit is for boatable water. The respondents are
willing to give about 20 percent more for fishable water than boatable water,
but only an additional 15 percent to make the water swimmable. The data
also permitted one to make a rough distinction between the type of recreation
and the intrinsic values discussed earlier. Since the willingness-to-pay
questions measure the overall value that respondents have for water quality,
the amount given by each respondent represents the combination of
recreational and intrinsic values held by that person. But it was possible to
tell from the questions whether a person actually engaged in water-based
recreation.
It was reasoned that the values expressed by the respondents who do not
engage in in-stream recreation should be almost purely intrinsic in nature. In
calculating the average willingness-to-pay amount for the nonrecreationists
alone, therefore, we get an approximation of the intrinsic value of water
quality. By subtracting this amount from the total the recreationists are willing
to pay, one can estimate, in a rough way, the portions of the recreationists’
benefits which are attributable to recreation and intrinsic values.
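A minimal sketch of this decomposition, in Python, using hypothetical per-household amounts chosen only to mirror the proportions reported below (the survey’s actual dollar figures are not reproduced here):

    # Hypothetical mean annual willingness-to-pay amounts (US$ per household).
    recreationists_total = 300.0     # mean WTP of respondents who use the water
    nonrecreationists_total = 135.0  # mean WTP of non-users: ~pure intrinsic value

    # Non-users' WTP approximates intrinsic value; subtracting it from the
    # recreationists' total isolates the portion attributable to recreation.
    intrinsic_value = nonrecreationists_total
    recreation_value = recreationists_total - intrinsic_value

    intrinsic_share = intrinsic_value / recreationists_total
    print(f"recreational portion: ${recreation_value:.0f} per year")
    print(f"intrinsic share: {intrinsic_share:.0%}")  # 45% with these inputs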
When this is done, it is found that intrinsic value constitutes about 45
percent of the total value for recreationists, 100 percent for the
nonrecreationists (by assumption), and about 55 percent for the sample as a
whole. If this is a correct reflection of reality, it is a major finding and may
have large implications for the future study of benefits from environmental
improvement. It was noted earlier that, while the sample of persons
interviewed was initially chosen at random, quite a few respondents failed to
give usable answers. Any aggregate national benefit estimate based on these
data therefore could not be put forward as accurate. Thus, I make such an
estimate simply to illustrate that the results of this experiment imply very
large values.
There are about 80 million households in the United States. Assume that
the sample results imply that to have high-quality recreational waters
throughout the country there is an annual willingness to pay of $200 per
household. This would imply a total willingness to pay of $16 billion.
According to the earlier discussion, this would divide about equally between
user and nonuser values. At first this might seem out of line with the value of
well under a billion dollars that was calculated for recreational fishing.
But this is not necessarily the case. Recall that that estimate is for a relatively
small increase in the nation’s fishable waters over the actual conditions of
the early 1970s, and that the estimate from the national survey is the value
people attach to making and maintaining the whole of the nation’s fresh
waters of high recreational quality where the alternative is almost total
degradation of most of the nation’s watercourses. In other words, both the
baselines and the routes of benefit accrual considered are different in the two
studies.
A somewhat closer comparison, though still not a perfect one, is between
the survey’s reported willingness to pay for an improvement from boatable
to fishable water and the largest value found in the fishing study for essentially
complete cleanup (in fishing terms) of the nation’s fresh water—roughly $1
billion. The objective of this experiment was not to produce an accurate
estimate of national benefits; rather, it was to test the feasibility of using a
macro approach to the estimation of water-quality benefits. In that, it
succeeded.

Effects of Soil Pollution

The effects of pollution on soil are quite alarming and can cause huge
disturbances in the ecological balance and health of living creatures on earth.
Some of the most serious soil pollution effects are mentioned below:
• Decrease in soil fertility and therefore a decrease in crop yield. After
all, how can one expect a contaminated soil to produce healthy
crops?
• Loss of soil and natural nutrients present in it. Plants also would not
thrive in such a soil, which would further result in soil erosion.
• Disturbance in the balance of flora and fauna residing in the soil.
• Increase in salinity of the soil, which therefore makes it unfit for
vegetation, thus making it useless and barren.
• Generally crops cannot grow and flourish in a polluted soil. Yet if
some crops manage to grow, then those would be poisonous enough
to cause serious health problems in people consuming them.
• Creation of toxic dust is another potential effect of soil
pollution.
• Foul smell due to industrial chemicals and gases might result in
headaches, fatigue, nausea, etc. in many people.
• Soil pollutants would bring in alteration in the soil structure, which
would lead to death of many essential organisms in it. This would
also affect the larger predators and compel them to move to other
places, once they lose their food supply.

PESTICIDES
The term “pesticide” is a composite term that includes all chemicals that
are used to kill or control pests. In agriculture, this includes herbicides (weeds),
insecticides (insects), fungicides (fungi), nematocides (nematodes), and
rodenticides (vertebrate poisons). A fundamental contributor to the Green
Revolution has been the development and application of pesticides for the
control of a wide variety of insect and weed pests that would
otherwise diminish the quantity and quality of food produce. The use of
pesticides coincides with the “chemical age” which has transformed society
since the 1950s.
In areas where intensive monoculture is practised, pesticides are used
as a standard method for pest control. Unfortunately, with the benefits of
chemistry have also come disbenefits, some so serious that they now threaten
the long-term survival of major ecosystems by disruption of predator-prey
relationships and loss of biodiversity. Also, pesticides can have significant
human health consequences. While agricultural use of chemicals is restricted
to a limited number of compounds, agriculture is one of the few activities
where chemicals are intentionally released into the environment because they
kill things.
Agricultural use of pesticides is a subset of the larger spectrum of industrial
chemicals used in modern society. The American Chemical Society database
indicates that there were some 13 million chemicals identified in 1993 with
some 500 000 new compounds being added annually. In the Great Lakes of
North America, for example, the International Joint Commission has estimated
that there are more than 200 chemicals of concern in water and sediments of
the Great Lakes ecosystem. Because the environmental burden of toxic
chemicals includes both agriculture and non-agricultural compounds, it is
difficult to separate the ecological and human health effects of pesticides
from those of industrial compounds that are intentionally or accidentally
released into the environment.
However, there is overwhelming evidence that agricultural use of pesticides
has a major impact on water quality and leads to serious environmental
consequences. Although the number of pesticides in use is very large, the
largest usage tends to be associated with a small number of pesticide products.
In a recent survey in the agricultural western provinces of Canada where
some fifty pesticides are in common use, 95% of the total pesticide application
is from nine separate herbicides. Although pesticide use is low to nil in
traditional and subsistence farming in Africa and Asia, environmental, public
health and water quality impacts of inappropriate and excessive use of
pesticides are widely documented. For example, Appelgren reports for
Lithuania that while pesticide pollution has diminished due to economic
factors, water pollution by pesticides is often caused by inadequate storage
and distribution of agrochemicals. In the United States, the US-EPA’s National
Pesticide Survey found that 10.4% of community wells and 4.2% of rural
wells contained detectible levels of one or more pesticides. In a study of
groundwater wells in agricultural southwestern Ontario (Canada), 35% of
the wells tested positive for pesticides on at least one occasion.
The impact on water quality by pesticides is associated with the following
factors:
• Active ingredient in the pesticide formulation.
• Contaminants that exist as impurities in the active ingredient.
• Additives that are mixed with the active ingredient (wetting agents,
diluents or solvents, extenders, adhesives, buffers, preservatives and
emulsifiers).
• Degradates that are formed during chemical, microbial or photochemical
degradation of the active ingredient.
In addition to use of pesticides in agriculture, silviculture also makes
extensive use of pesticides. In some countries, such as Canada, where one in
ten jobs is in the forest industry, control of forest pests, especially insects, is
considered by the industry to be essential. Insecticides are often sprayed by
aircraft over very large areas. Irrigated agriculture, especially in tropical and
subtropical environments, usually requires modification of the hydrological
regime which, in turn, creates habitat that is conducive to breeding of insects
such as mosquitoes which are responsible for a variety of vector-borne
diseases. In addition to pesticides used in the normal course of irrigated
agriculture, control of vector-borne diseases may require additional
application of insecticides such as DDT which have serious and widespread
ecological consequences. In order to address this problem, environmental
management methods to control breeding of disease vectors are being
developed and tested in many irrigation projects.

HISTORICAL DEVELOPMENT OF PESTICIDES


The history of pesticide development and use is the key to understanding
how and why pesticides have been an environmental threat to aquatic systems,
and why this threat is diminishing in developed countries and remains a
problem in many developing countries.

NORTH-SOUTH DILEMMA OVER PESTICIDE ECONOMICS


As noted above, the general progression of pesticide development has
moved from highly toxic, persistent and bioaccumulating pesticides such as
DDT, to pesticides that degrade rapidly in the environment and are less toxic
to non-target organisms. The developed countries have banned many of the
older pesticides due to potential toxic effects to man and/or their impacts on
ecosystems, in favour of more modern pesticide formulations. In the
developing countries, some of the older pesticides remain the cheapest to
produce and, for some purposes, remain highly effective as, for example, the
use of DDT for malaria control. Developing countries maintain that they
cannot afford, for reasons of cost and/or efficacy, to ban certain older
pesticides.
The dilemma of cost/efficacy versus ecological impacts, including long
range impacts via atmospheric transport, and access to modern pesticide
formulations at low cost remains a contentious global issue. In addition to
ecological impacts in countries of application, pesticides that have been long
banned in developed countries (such as DDT, toxaphene, etc.), are consistently
found in remote areas such as the high arctic. Chemicals that are applied in
tropical and subtropical countries are transported over long distances by global
circulation. The global situation has deteriorated to the point where many
countries are calling for a global convention on “POPs” (Persistent Organic
Pollutants) which are mainly chlorinated compounds that exhibit high levels
of toxicity, are persistent, and bioaccumulate. The list is not yet fixed; however,
“candidate” substances include several pesticides that are used extensively
in developing countries.

FATE AND EFFECTS OF PESTICIDES


Factors Affecting Pesticide Toxicity in Aquatic Systems
The ecological impacts of pesticides in water are determined by the
following criteria:
• Toxicity: Mammalian and non-mammalian toxicity usually expressed
as LD50 (“Lethal Dose”: concentration of the pesticide which will
kill half the test organisms over a specified test period). The lower
the LD50, the greater the toxicity; values of 0-10 are extremely toxic.
Drinking water and food guidelines are determined using a risk-based
assessment. Generally, Risk = Exposure (amount and/or duration) ×
Toxicity. Toxic response (effect) can be acute (death) or chronic (an
effect that does not cause death over the test period but which causes
observable effects in the test organism such as cancers and tumours,
reproductive failure, growth inhibition, teratogenic effects, etc.).
• Persistence: Measured as half-life (time required for the ambient
concentration to decrease by 50%). Persistence is determined by biotic
and abiotic degradational processes. Biotic processes are
biodegradation and metabolism; abiotic processes are mainly
hydrolysis, photolysis, and oxidation. Modern pesticides tend to have
short half-lives that reflect the period over which the pest needs to be
controlled (a numerical sketch follows this list).
• Degradates: The degradational process may lead to formation of
“degradates” which may have greater, equal or lesser toxicity than
the parent compound. As an example, DDT degrades to DDD and
DDE.
• Fate (Environmental): The environmental fate (behaviour) of a pesticide
is affected by the natural affinity of the chemical for one of four
environmental compartments: solid matter (mineral matter and
particulate organic carbon), liquid (solubility in surface and soil water),
gaseous form (volatilization), and biota. This behaviour is often referred
to as “partitioning” and involves, respectively, the determination of:
the soil sorption coefficient (KOC); solubility; Henry’s Constant (H);
and the n-octanol/water partition coefficient (KOW). These parameters
are well known for pesticides and are used to predict the environmental
fate of the pesticide.
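As a simple numerical illustration of the persistence criterion, the sketch below applies the standard first-order half-life model; the two half-lives are assumed values chosen for contrast, not measured values for any particular product:

    def remaining_fraction(days_elapsed: float, half_life_days: float) -> float:
        """First-order decay: fraction of the applied dose still present."""
        return 0.5 ** (days_elapsed / half_life_days)

    # Assumed half-lives: a short-lived modern herbicide (10 days) versus a
    # persistent organochlorine (about 10 years).
    for name, half_life in [("modern herbicide", 10.0), ("organochlorine", 3650.0)]:
        print(name, round(remaining_fraction(60.0, half_life), 3))
    # After 60 days roughly 1.6% of the short-lived compound remains,
    # against about 99% of the persistent one.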
An additional factor can be the presence of impurities in the pesticide
formulation but that are not part of the active ingredient. A recent example is
the case of TFM, a lampricide used in tributaries of the Great Lakes for
many years for the control of the sea lamprey. Although the environmental
fate of TFM has been well known for many years, recent research by
Munkittrick et al. has found that TFM formulation includes one or more
highly potent impurities that impact on the hormonal system of fish and
cause liver disease.

Human Health Effects of Pesticides


Perhaps the largest regional example of pesticide contamination affecting
human health is that of the Aral Sea region, where studies have linked the
effects of pesticides to “the level of oncological (cancer), pulmonary and
haematological morbidity, as well as on inborn deformities... and immune
system deficiencies”.
Human health effects are caused by:
• Skin contact: handling of pesticide products
• Inhalation: breathing of dust or spray
• Ingestion: pesticides consumed as a contaminant on/in food or in
water.
Farm workers have special risks associated with inhalation and skin contact
during preparation and application of pesticides to crops. However, for the
majority of the population, a principal vector is through ingestion of food that
is contaminated by pesticides. Degradation of water quality by pesticide run-
off has two principal human health impacts. The first is the consumption of
fish and shellfish that are contaminated by pesticides; this can be a particular
problem for subsistence fish economies that lie downstream of major
agricultural areas.
The second is the direct consumption of pesticide-contaminated water. WHO
has established drinking water guidelines for 33 pesticides. Many health and
environmental protection agencies have established “acceptable daily intake”
(ADI) values which indicate the maximum allowable daily ingestion over a
person’s lifetime without appreciable risk to the individual. For example, in a
recent paper by Wang and Lin studying substituted phenols,
tetrachlorohydroquinone, a toxic metabolite of the biocide pentachlorophenol,
was found to produce “significant and dose-dependent DNA damage”.
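A hedged sketch of how an ADI is typically converted into a drinking-water guideline value follows. The general form (ADI × body weight × allocation / daily consumption) is broadly that used in WHO guideline derivations, but the default parameters and the ADI below are illustrative assumptions, not values for any specific pesticide:

    def guideline_value_ug_per_l(adi_mg_per_kg_bw: float,
                                 body_weight_kg: float = 60.0,
                                 water_allocation: float = 0.10,
                                 consumption_l_per_day: float = 2.0) -> float:
        """Drinking-water guideline (ug/l) derived from an ADI (mg/kg bw/day)."""
        gv_mg_per_l = (adi_mg_per_kg_bw * body_weight_kg * water_allocation
                       / consumption_l_per_day)
        return gv_mg_per_l * 1000.0  # mg/l -> ug/l

    # Hypothetical pesticide with an ADI of 0.0005 mg/kg bw/day:
    print(round(guideline_value_ug_per_l(0.0005), 2))  # 1.5 ug/l

The allocation factor reflects the share of the ADI assigned to drinking water, since food usually accounts for most of the intake.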

Ecological Effects of Pesticides


Pesticides are included in a broad range of organic micropollutants that
have ecological impacts. Different categories of pesticides have different
types of effects on living organisms, which makes generalization difficult.
Although terrestrial impacts by pesticides do occur, the principal pathway
that causes ecological impacts is that of water contaminated by pesticide
run-off. The two principal mechanisms are bioconcentration and
biomagnification.

Bioconcentration
This is the movement of a chemical from the surrounding medium into
an organism. The primary “sink” for some pesticides is fatty tissue (“lipids”).
Some pesticides, such as DDT, are “lipophilic”, meaning that they are
soluble in, and accumulate in, fatty tissue such as edible fish tissue and
human fatty tissue. Other pesticides such as glyphosate are metabolized
and excreted.
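The strength of bioconcentration is commonly summarized as a bioconcentration factor (BCF), the ratio of the concentration in tissue to that in the surrounding water. A minimal sketch; the water concentration and the BCF below are assumed values, chosen only to be of the order reported for lipophilic organochlorines in fish:

    def tissue_concentration_ug_per_kg(c_water_ug_per_l: float,
                                       bcf_l_per_kg: float) -> float:
        """Tissue residue (ug/kg) = BCF (l/kg) x water concentration (ug/l)."""
        return bcf_l_per_kg * c_water_ug_per_l

    # Water at 0.001 ug/l with an assumed BCF of 50 000 l/kg:
    print(round(tissue_concentration_ug_per_kg(0.001, 50_000.0), 1))  # 50.0 ug/kg

Even a water concentration below typical detection limits can thus translate into a readily measurable residue in tissue, which is one reason biota sampling is discussed later as a monitoring medium.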

Biomagnification
This term describes the increasing concentration of a chemical as food energy
is transformed within the food chain. As smaller organisms are eaten by larger
organisms, the concentrations of pesticides and other chemicals are increasingly
magnified in tissue and other organs. Very high concentrations can be observed
in top predators, including man. The ecological effects of pesticides (and other
organic contaminants) are varied and are often inter-related. Effects at the
organism or ecological level are usually considered to be an early warning
indicator of potential human health impacts. The major types of effects are
listed below and will vary depending on the organism under investigation and
the type of pesticide. Different pesticides have markedly different effects on
aquatic life which makes generalization very difficult. The important point is
that many of these effects are chronic (not lethal), are often not noticed by
casual observers, yet have consequences for the entire food chain.
• Death of the organism.
• Cancers, tumours and lesions on fish and animals.
• Reproductive inhibition or failure.
• Suppression of immune system.
• Disruption of endocrine (hormonal) system.
• Cellular and DNA damage.
• Teratogenic effects (physical deformities such as hooked beaks on
birds).
• Poor fish health marked by low red to white blood cell ratio, excessive
slime on fish scales and gills, etc.
• Intergenerational effects (effects are not apparent until subsequent
generations of the organism).
• Other physiological effects such as egg shell thinning.
These effects are not necessarily caused solely by exposure to pesticides
or other organic contaminants, but may be associated with a combination of
environmental stresses such as eutrophication and pathogens. These
associated stresses need not be large to have a synergistic effect with organic
micropollutants. Ecological effects of pesticides extend beyond individual
organisms and can extend to ecosystems. Swedish work indicates that
application of pesticides is thought to be one of the most significant factors
affecting biodiversity. Jonsson et al. report that the continued decline of the
Swedish partridge population is linked to changes in land use and the use of
chemical weed control. Chemical weed control has the effect of reducing
habitat, decreasing the number of weed species, and of shifting the balance
of species in the plant community. Swedish studies also show the impact of
pesticides on soil fertility, including inhibition of nitrification with
concomitant reduced uptake of nitrogen by plants. These studies also suggest
that pesticides adversely affect soil micro-organisms which are responsible
for microbial degradation of plant matter (and of some pesticides), and for
soil structure.

NATURAL FACTORS THAT DEGRADE PESTICIDES


In addition to chemical and photochemical reactions, there are two principal
biological mechanisms that cause degradation of pesticides.
These are:
• Microbiological processes in soils and water
• Metabolism of pesticides that are ingested by organisms as part of
their food supply.
While both processes are beneficial in the sense that pesticide toxicity is
reduced, metabolic processes do cause adverse effects in, for example, fish. Energy
used to metabolize pesticides and other xenobiotics (foreign chemicals) is not
available for other body functions and can seriously impair growth and
reproduction of the organism.

Degradation of Pesticides in Soil


“Many pesticides dissipate rapidly in soils. This process is mineralization
and results in the conversion of the pesticide into simpler compounds such as
H2O, CO2, and NH3. While some of this process is a result of chemical
reactions such as hydrolysis and photolysis, microbiological catabolism and
metabolism is usually the major route of mineralization. Soil microbiota
utilize the pesticide as a source of carbon or other nutrients. Some chemicals
(for example 2,4-D) are quite rapidly broken down in soil while others are
less easily attacked. Some chemicals are very persistent and are only slowly
broken down (atrazine)”.

Process of Metabolism
Metabolism of pesticides in animals is an important mechanism by which
organisms protect themselves from the toxic effects of xenobiotics (foreign
chemicals) in their food supply. In the organism, the chemical is transformed
into a less toxic form and either excreted or stored in the organism. Different
organs, especially the liver, may be involved, depending on the chemical.
Enzymes play an important role in the metabolic process and the presence of
certain enzymes, especially “mixed” function oxygenases (MFOs) in liver,
is now used as an indicator that the organism has been exposed to foreign
chemicals.

REGIONAL EXAMPLES OF ECOLOGICAL EFFECTS
In Europe, the European Environment Agency cites a study by Galas et al.
that closely links the toxicity of Po River water to the zooplankton Daphnia
magna with run-off of agricultural pesticides. In the Great Lakes of North
America, bioaccumulation and magnification of chlorinated compounds in
what is, by global standards, a relatively clean aquatic system caused the
disappearance of top predators such as the eagle and the mink, and
deformities in several species of aquatic birds.
The World Wide Fund for Nature reports that a significant amount of an
estimated 190 000 tons of agricultural pesticides, plus additional loadings
of non-agricultural pesticides, released by riparian countries bordering the
North Sea is eventually transported into the North Sea by a combination
of riverine, groundwater, and atmospheric processes. WWF further reports
that the increased rate of disease, deformities and tumours in commercial
fish species in highly polluted areas of the North Sea and coastal waters of
the United Kingdom since the 1970s is consistent with effects known to be
caused by exposure to pesticides.

PESTICIDE MONITORING IN SURFACE WATER


Monitoring data for pesticides are generally poor in much of the world
and especially in developing countries. Key pesticides are included in the
monitoring schedule of most western countries, however the cost of analysis
and the necessity to sample at critical times of the year (linked to periods of
pesticide use) often preclude development of an extensive data set. Many
developing countries have difficulty carrying out organic chemical analysis
due to problems of inadequate facilities, impure reagents, and financial
constraints.
New techniques using immunoassay procedures for presence/absence of
specific pesticides may reduce costs and increase reliability. Immunoassay
tests are available for triazines, acid amides, carbamates, 2,4-D/phenoxy acid,
paraquat and aldrin. Data on pesticide residues in fish for lipophilic
compounds, and determination of the exposure and/or impact on fish of
lipophobic pesticides through liver and/or bile analysis, are mainly restricted
to research programmes. Hence, it is often difficult to determine the presence, pathways
and fate of the range of pesticides that are now used in large parts of the
world. In contrast, the ecosystemic impacts from older, organochlorine
pesticides such as DDT became readily apparent and resulted in the
banning of these compounds in many parts of the world for agricultural
purposes.
Older pesticides, together with other hydrophobic carcinogens such
as PAHs and PCBs, are poorly monitored when using water samples. As an
example, the range of concentration of suspended solids in rivers is often
between 100 and 1000 mg/l except during major run-off events when
concentrations can greatly exceed these values.
Tropical rivers that are unimpacted by development have very low
suspended sediment concentrations, but increasingly these are a rarity due to
agricultural expansion and deforestation in tropical countries. As an example,
approximately 67% of DDT is transported in association with suspended
matter at sediment concentrations as low as 100 mg/l, and increases to 93%
at 1000 mg/l of suspended sediment. Given the analytical problems of
inadequate detection levels and poor quality control in many laboratories of
the developing countries, plus the fact that recovery rates (part of the analytical
procedure) can vary from 50-150% for organic compounds, it follows that
monitoring data from water samples are usually a poor indication of the level
of pesticide pollution for compounds that are primarily associated with the
solid phase.
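The dominance of the solid phase follows from simple equilibrium partitioning between water and suspended particles. In the sketch below, the distribution coefficient Kd is an assumed value, back-calculated from the 67% figure above; it then approximately (though not exactly) reproduces the figure quoted for 1000 mg/l:

    def particulate_fraction(kd_l_per_kg: float, ss_mg_per_l: float) -> float:
        """Fraction of the total load carried on suspended solids at equilibrium."""
        ss_kg_per_l = ss_mg_per_l * 1.0e-6
        return kd_l_per_kg * ss_kg_per_l / (1.0 + kd_l_per_kg * ss_kg_per_l)

    KD_DDT = 2.0e4  # l/kg, assumed; back-calculated from the 67% figure above
    print(round(particulate_fraction(KD_DDT, 100.0), 2))   # ~0.67 at 100 mg/l
    print(round(particulate_fraction(KD_DDT, 1000.0), 2))  # ~0.95 at 1000 mg/l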
The number of NDs (Not Detectable) in many databases is almost certainly
an artifact of the wrong sampling medium (water) and, in some cases,
inadequate analytical facilities and procedures. Clearly, this makes pesticide
assessment in water difficult in large parts of the world. Experience suggests
that sediment-associated pesticide levels are often much higher than recorded,
and NDs are often quite misleading.
Some water quality agencies now use multi-media (water + sediment
+ biota) sampling in order to more accurately characterize pesticides in
the aquatic environment. Another problem is that analytical detection
levels in routine monitoring for certain pesticides may be too high to
determine presence/absence for protection of human health. Gilliom
noted that the US Geological Survey’s Pesticide Monitoring Network
[in 1984] had a detection limit of 0.05 µg/l for DDT, yet the aquatic
life criterion is 0.001 µg/l and the human health criterion is 0.0002
µg/l, both much less than the routine detection limit of the programme.
ND (not detectable) values, therefore, are not evidence that the chemical
is not present in concentrations that may be injurious to aquatic life and to
human health. That this analytical problem existed in the United States
suggests that the problem of producing water quality data that can be used
for human health protection from pesticides in developing countries must
be extremely serious. Additionally, detection limits are only one of many
analytical problems faced by environmental chemists when analysing for
organic contaminants.
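One practical consequence is that ND results are censored values (“less than the detection limit”), not zeros, and should be screened accordingly. A minimal sketch using the DDT limits quoted above; encoding an ND as None is an implementation choice made here for illustration:

    DETECTION_LIMIT = 0.05           # ug/l, routine programme limit quoted above
    AQUATIC_LIFE_CRITERION = 0.001   # ug/l
    HUMAN_HEALTH_CRITERION = 0.0002  # ug/l

    def screen(result_ug_per_l, criterion):
        """Classify a measurement against a criterion; None encodes an ND."""
        if result_ug_per_l is None:
            # The true value lies somewhere below the detection limit.
            if DETECTION_LIMIT > criterion:
                return "indeterminate: ND, but detection limit exceeds criterion"
            return "meets criterion"
        return "exceeds criterion" if result_ug_per_l > criterion else "meets criterion"

    print(screen(None, HUMAN_HEALTH_CRITERION))
    # -> indeterminate: an ND at this detection limit says nothing about compliance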
Even when one has good analytical values from surface water and/or
sediments, the interpretation of pesticide data is not straightforward. For
example, the persistence of organochlorine pesticides is such that the
detection of, say, DDT may well indicate only that:
• The chemical has been deposited through long-range transport from
some other part of the world, or
• It is a residual from the days when it was applied in that region.
In North America, for example, DDT is still routinely measured even
though it has not been used for almost two decades. The association of
organochlorine pesticides with sediment means that the ability of a river
basin to cleanse itself of these chemicals is partly a function of the length of
time it requires for fine-grained sediment to be transported through the basin.
Geomorphologists now know that the process of erosion and transport of
silts and clays is greatly complicated by sedimentation within the river system
and that this fine-grained material may take decades to be transported out of
the river basin. For sediment-associated and persistent pesticides that are
still in use in some countries, the presence of the compound in water and/or
sediments results from a combination of current and past use. As such, the
data make it difficult to determine the efficacy of policy decisions such as
restrictive use or bans.
Pesticide monitoring requires highly flexible field and laboratory
programmes that can respond to periods of pesticide application, which can
sample the most appropriate medium (water, sediment, biota), are able to
apply detection levels that have meaning for human health and ecosystem
protection, and which can discriminate between those pesticides which appear
as artifacts of historical use versus those that are in current use. For pesticides
that are highly soluble in water, monitoring must be closely linked to periods
of pesticide use.
In the United States where there have been major studies of the behaviour
of pesticide run-off, the triazines (atrazine and cyanazine) and alachlor
(chlorinated acetamide) are amongst the most widely used herbicides. These
are used mainly in the spring. Studies by Schottler et al. indicate that 55-
80% of the pesticide run-off occurred in the month of June.
The significance for monitoring is that many newer and soluble pesticides
can only be detected shortly after application; therefore, monitoring
programmes that are operated on a monthly or quarterly basis are unlikely to
be able to quantify the presence or determine the significance of pesticides
in surface waters.
Pesticides that have limited application are even less likely to be detected
in surface waters. The danger lies in the presumption by authorities that ND
(non-detectable) values implies that pesticides are absent. It may well only
mean that monitoring programmes failed to collect data at the appropriate
times or analysed the wrong media.

PESTICIDE MANAGEMENT AND CONTROL


Prediction of water quality impacts of pesticides and related land
management practices is an essential element of site-specific control options
and for the development of generic approaches for pesticide control.
Prediction tools are mainly in the form of models. Also, the key hydrological
processes that control infiltration and run-off, and erosion and sediment
transport, are controlling factors in the movement of pesticides.

The European Experience


The Netherlands National Institute of Public Health and Environmental
Protection concluded that “groundwater is threatened by pesticides in all
European states. This is obvious both from the available monitoring data
and calculations concerning pesticide load, soil sensitivity and leaching... It
has been calculated that on 65% of all agricultural land the EC standard for
the sum of pesticides (0.5 µg/l) will be exceeded. In approximately 25% of
the area this standard will be exceeded by more than 10 times...”
In recognition of pesticide abuse and of environmental and public health
impacts the European countries have adopted a variety of measures that
include the following:
• Reduction in use of pesticides (by up to 50% in some countries).
• Bans on certain active ingredients.
• Revised pesticide registration criteria.
• Training and licensing of individuals that apply pesticides.
• Reduction of dose and improved scheduling of pesticide application
to more effectively meet crop needs and to reduce preventative
spraying.
• Testing and approval of spraying apparatus.
• Limitations on aerial spraying.
• Environmental tax on pesticides.
• Promotion of the use of mechanical and biological alternatives to pesticides.
Elsewhere, as for example in Indonesia, a reduction in subsidies has reduced
the usage of pesticides and has increased the success of integrated pest
management programmes.

Pesticide Registration
Pesticide control is mainly carried out by a system of national
registration which limits the manufacture and/or sale of pesticide products
to those that have been approved. In developed countries, registration is
a formal process whereby pesticides are examined, in particular, for
mammalian toxicity (cancers, teratogenic and mutagenic effects, etc.) and
for a range of potential environmental effects based on the measured or
estimated environmental behaviour of the product based on its physico-
chemical properties. Most developing countries have limited capability
to carry out their own tests on pesticides and tend to adopt regulatory
criteria from the developed world. As our knowledge of the effects of
pesticides in the environment accumulates, it has become apparent that
many of the older pesticides have inadequate registration criteria and are
being re-evaluated.
As a consequence, the environmental effects of many of the older pesticides
are now recognized as so serious that they are banned from production or
sale in many countries. A dilemma in many developing countries is that many
older pesticides (e.g. DDT) are cheap and effective. Moreover, regulations
are often not enforced with the result that many pesticides that are, in fact,
banned, are openly sold and used in agricultural practice. The gap
between actual pesticide use and official policy on pesticide use is, in many
countries, wide. Regulatory control in many countries is ineffective without
a variety of other measures, such as education, incentives, etc.
The extent to which these are effective in developed versus developing
countries depends very much on:
• The ability of government to regulate effectively and levy taxes, and
• The ability or readiness of the farming community to understand
and act upon educational programmes.
The fundamental dilemma remains one of reconciling local and short-term
gain by the farmer (and manufacturer and/or importer) from the application
of an environmentally dangerous pesticide with the societal good of
limiting or banning its use. There is now such concern over environmental
and, in some instances, human health effects of excessive use and abuse of
pesticides, that there is active discussion within many governments of the
need to include a programme of pesticide reduction as part of a larger strategy
of sustainable agriculture. In 1992, Denmark, the Netherlands and Sweden
were the first of the 24 member states of the OECD to embark upon such a
programme.
The Netherlands is the world’s second largest exporter of agricultural
produce after the United States. In contrast, wood preservatives in the forest
sector account for 70% of Swedish pesticide use with agriculture using only
30%. As noted above, the lack of baseline data on pesticides in surface waters
of OECD countries, is a constraint in establishing baseline values against
which performance of the pesticide reduction programme can be measured.

DANISH EXAMPLE
In 1986 the Danish Government initiated an Action Plan for sustainable
agriculture which would reduce the use of pesticides, with two purposes:
• Safeguard Human Health: From the risks and adverse effects
associated with the use of pesticides, primarily by preventing intake
via food and drinking water.
• Protect the Environment: Both the non-target and beneficial organisms
found in the flora and fauna on cultivated land and in aquatic
environments.
The objective was to achieve a 50% reduction in the use of agricultural
pesticides by 1997 from the average amount of pesticides used during the
period 1981-85.
This was to be measured by:
• A decline in total sales (by weight) of the active ingredients, and
• A decrease in the frequency of application (a sketch of one such
frequency index follows below).
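Sales by weight can fall while spraying frequency does not, since farmers may simply switch to lower-dose products. The sketch below shows one way to track frequency independently of tonnage, along the lines of the treatment frequency index used in Denmark; the products, doses and area are hypothetical:

    # Active ingredient sold (kg) and its standard full dose (kg/ha).
    sales = {
        "herbicide A": {"sold_kg": 120_000.0, "dose_kg_per_ha": 1.00},
        "herbicide B": {"sold_kg": 15_000.0,  "dose_kg_per_ha": 0.05},
    }
    cultivated_area_ha = 2_500_000.0

    # Treatment frequency index: average number of full-dose treatments
    # per hectare of cultivated land.
    tfi = sum(p["sold_kg"] / p["dose_kg_per_ha"] for p in sales.values())
    tfi /= cultivated_area_ha
    print(round(tfi, 2))  # 0.17 with these hypothetical figures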
While the World Wide Fund for Nature reports that by 1993 sales of active
ingredients had been reduced by 30%, the application frequency had not
declined. The Danish legislation included a number of components although,
by 1993, not all had achieved comparable success.
Beginning with the use of DDT during the Second World War (the discovery
of whose insecticidal properties earned Dr. Paul Mueller a Nobel Prize in
1948), the pesticide industry has grown
rapidly, and at times even exponentially, up to the present day. Currently
over 2.5 million tons of such chemicals, worth over US $30 billion, are applied
to crops in every country in the world. Of this amount, 73% is produced by
just ten multinational agrochemical corporations; five countries, including
Germany, Britain and Switzerland, are the primary producing nations.
Encompassing insecticides, herbicides, parasitocides, nematocides, growth
regulators, fungicides, defoliants and desiccants among others, this wide-
ranging set of approximately 100,000 compounds, 7000 of which are
registered for use in Canada, has one thing in common: they are all designed
to kill one, or more often many, species of living organisms, usually in a
nonspecific manner. Estimates that less than five per cent of pesticide
formulations by volume reach intended target organisms may well be accurate,
considering the inevitability of drift, routine pesticide use as prevention
without prior confirmation of infestations, and incautious application.
So-called ‘inert’ portions of formulations, comprising up to 95% of many
products, are often quite toxic in themselves. Early in their history, the
development of resistance to chemical pesticides became a significant issue.
The rapid multiplication rates of single-celled and other simple organisms
make it clear that such a problem is inevitable, but the speed with which
resistance occurs has often surprised observers.
Resistance to DDT was a major problem only five years after its
introduction. Today multiple pesticide resistance is common, and new
pesticides, like new antibiotics, are regularly produced by industry to address
this problem. The World Health Organization (WHO) estimates today, in
figures that are widely accepted to be underestimates, that 200,000 people
are killed worldwide every year as a direct result of pesticide poisoning, up
from 30,000 in 1990. The WHO further estimates that at least 3 million
persons are poisoned annually, many of whom are children. A study in England
and Wales demonstrated that 50% of pesticide poisonings involved children
under the age of 10.[4] Pesticides can be remarkably persistent in biological
systems.
The US Environmental Protection Agency has conducted the National
Human Adipose Tissue Survey since 1976, measuring toxic compounds in
human fat. In 1982 this study found DDE, the primary metabolite of DDT, in
93% of samples.[5] A 1990 study of adipose tissue levels of toxic compounds
in autopsy specimens from elderly Texans found DDE, dieldrin, oxychlordane
and heptachlor epoxide in 100% of samples.[6] These findings are particularly
disturbing because DDT has been banned in the US since 1972.
Pesticides are also found far afield, in ecosystems considered pristine and
far from active pesticide use. Osprey eggs in the Queen Charlotte Islands,
polar bear fat in the high arctic, and the blubber of whales in all the oceans of
the world are contaminated with pesticide residues,[7] even though all these
creatures live far from point sources of pesticide application. Water and wind,
as well as the bodies of animals that serve as prey for others (including
humans) higher on the food chain, are the universal vectors for pesticide
dispersal. Highest on the food chain, human breast milk is of great concern
because of high levels of bio-accumulated pesticides. Breast milk of Inuit
women contains much higher pesticide levels than the milk of women in
southern Canada, raising concerns about this most intimate and crucial form
of human sustenance.
Two other factors make pesticides problematic for human and ecosystem
health. First, many pesticides are not persistent in human or other biological
systems. Therefore they may be difficult to measure in tissue or other samples
collected more than a few hours after exposure, although their biological
effects may persist for days, months or even years. Second, many pesticides
undoubtedly have additive or synergistic effects with one another, especially
when they belong to the same chemical class. Only recently have these two
issues been acknowledged by legislators, with the 1996 US Food Quality
Protection Act being the first major enactment anywhere in the world to
take the latter fact into consideration.
A further health issue regarding pesticides has emerged in the last decade.
This is the demonstration that many chemical compounds, among them many
pesticides, have hormone-like effects in biological systems, effects that were
previously unsuspected as occurring on such a wide scale. During the last
four years, the US Environmental Protection Agency (EPA) has been
designing a first-ever programme for analysing these effects.
The work has proceeded so slowly, however, despite a legislative mandate
to act speedily, that the EPA itself is now being sued for dragging its feet![10]
The Canadian government, lacking legislation for examining any adverse
effects of pesticides, relies on manufacturers to supply such evidence.

RATIONALE FOR A CAPE POSITION STATEMENT


As a result of the unquestioning trust that the population at large has placed
in the scientific community, the commercial sector and regulatory agencies
in the past, pesticides have become dispersed on a massive scale throughout
our global ecosystem, without adequate testing for adverse effects in humans.
In what has been called a massive, uncontrolled, global biochemical
experiment, they are now essentially universal in surface waters, soils and
biological systems. Because of their fundamentally toxic nature, pesticides
are unlikely to be absolved of their demonstrated negative role in the health
of humans and biological systems in general. In fact, it can be logically inferred
that their deleterious impact will eventually be shown to be far more extensive
than what is known at present, because so much research has yet to be done
on the full range of their potential toxicities.
Physicians are ill-trained to diagnose the adverse effects of pesticide
exposures. Because there is no mandatory requirement for reporting actual
or suspected pesticide poisonings, little confidence can be placed in many
aspects of estimates of the public health effects of pesticide accumulation in
local or regional ecosystems.
However there is growing evidence that the health of future generations
may be severely harmed by pesticides, alone or in combination with other
toxic chemicals now permeating the global ecosphere. The fetus and the
newborn child, in particular, appear to be uniquely sensitive to the harmful
effects of pesticides and other toxins. Children, it has often been said, are not
simply small adults.
They are beings with uniquely vulnerable physiological processes. They
incorporate ingested or inhaled substances into their growing bodies far more
avidly than adults; these substances can profoundly influence their unique
developmental processes, and induce disproportionately greater acute and
chronic toxicity.
For the above reasons, the Canadian Association of Physicians for the
Environment feels it is important for us, as concerned clinicians, to lay out
clearly what we believe to be the path for ethical scientists, medical or
otherwise, to advocate on behalf of Canadian society. We believe that in
such a statement, we must avoid ambiguity.
We believe that we should likewise avoid clinging to banal certitudes.
Instead, we must speak in a balanced and responsible way about the future
direction society must take to avoid a possible looming toxic tragedy.

STATEMENT
Reaching the goal of pesticide elimination cannot be accomplished without a
dramatically increased support programme for farmers and other growers who are
prepared to convert to sustainable growing practices, including cessation of pesticide
use.
We believe that the best means to accomplish the goal of eliminating routine
pesticide use is as follows:
• Through an immediate and substantive increase in funding and
practical support for research and information dissemination
concerning alternative, nontoxic methods of pest control, coupled with
strong market incentives for non-chemical lawn and garden care
contractors and product suppliers.
• Through the development of new and imaginative legislative
initiatives and clear-cut and substantive market incentives (including
tax shifting) to support and encourage the rapid expansion of organic
growing practices in all parts of the country and at all levels of
government. This must include an essentially cost-free, uniform,
nationwide certification process for new and already established
organic growing operations.
• Through the Federal government and its regulators immediately moving
towards a legislated end to cosmetic pesticide use within two years, as
recommended by the House of Commons Standing Committee on
Environment and Sustainable Development. (Cosmetic uses encompass
lawn and decorative garden management, and the noncommercial growing
of food crops.)
• Through the Federal government legislating, for the Pesticide
Management Regulatory Agency, an increasingly restrictive regulatory
framework governing the use of synthetic pesticides. This would begin
with the most toxic substances, but ultimately include all synthetic
chemical pesticides and ‘inerts’ unless needed for critical, short-term,
emergency situations.
Three initial steps in this direction must include:
– The immediate elimination of the most toxic pesticides, as
determined by an independent scientific panel;
– The rapid introduction of full disclosure of ALL ingredients in
pesticide formulations; and
– The establishment of an independent office for the collection
and public disclosure of all reports of proven or possible adverse
effects resulting from pesticide exposures.
• Through all government pesticide regulation reflecting the
following four essential elements:
– The precautionary principle (do not act without reasonable proof
of harmlessness);
– The principle of reverse onus (the producer bears responsibility
for safety);
– Zero discharge and residual contamination (no persistent
ecosystem residues); and
– Closed (clean) production processes.
• Finally through all levels of government working steadily towards the
abandonment of all synthetic pesticide use except in rare, urgent, critical
situations.
Measures of Radioactive Pollution

Radioactive substances are those which have the ability to emit high energy
particles, like alpha and beta particles and gamma rays. They are unstable in
nature and are continuously emitting these particles in order to gain some
stability. When we are talking about the effects of radioactive pollution, it
actually means the effects of these emissions on the environment and living
beings of the earth.

CAUSES OF RADIOACTIVE POLLUTION


Radioactive pollution is rising because of the increasing use of radioactive
materials. It arises mostly from the waste products left behind after the
use of radioactive substances. These materials are disposed of without any
precautionary measures to isolate the emissions, which then contaminate
the air, soil and water. Large amounts of radioactive waste are generated by
nuclear reactors used in nuclear power plants and for many other purposes.
Pollution may also occur during the extraction and refining of radioactive
materials. Nuclear accidents and nuclear explosions are two of the worst
man-made sources of radioactive pollution.

EFFECTS OF RADIOACTIVE POLLUTION ON THE ENVIRONMENT


When soil gets contaminated by radioactive substances, the contamination
is transferred to the plants growing in it. This can lead to mutations in the
plants’ DNA and affect their normal functioning. Some plants may die
after such exposure, while others may develop weak seeds. When any part of
a contaminated plant, including its fruit, is consumed by human beings,
it poses serious health risks. Radioactive emissions from nuclear
weapons are considered the most harmful to the environment, as they
remain in the atmosphere for as long as a hundred years and thus affect several
generations. Similarly, radioactive substances that flow down from the land
surface into water bodies remain there for years to come, causing
harm to aquatic animals. Thus radioactive pollution has
a destructive effect on the entire ecosystem.

EFFECTS OF RADIOACTIVE POLLUTION ON HUMAN BEINGS


The effects of radioactive pollution on human beings vary from mild
to severe, depending largely on the level of exposure to the emissions.
Among the emissions from these substances, the effects of alpha
particles are the mildest and gamma rays are the most dangerous. When the
human body is exposed to radiation, the radiation reacts with its biological molecules
and ions are formed in the process. This leads to the formation of a large
number of free radicals that destroy vital molecular components like proteins,
enzymes, nucleic acids, etc. Low levels of exposure on a small portion of the
body may just affect the cell membranes and cause mild skin irritation. Other
immediate effects of short span exposure of nuclear radiation are nausea,
vomiting, diarrhea, loss of hair and nails, bruises due to subcutaneous
bleeding, etc.
Long term exposure has far more serious health effects. The rapidly
growing cells like that of the skin, bone marrow, blood, intestines, and gonads,
are more sensitive towards radioactive emissions. On the other hand, cells
that do not undergo rapid division, such as bone cells, muscle cells and
nerve cells, are not damaged so easily. Radiation poses a serious threat to various
systems of the body that include the cardiac system, neurological system and
reproductive system. The radioactive rays can cause irreparable damage to
the DNA molecules and lead to a life-threatening condition. It causes genetic
mutations that promote the growth of cancerous cells in the body. People
with heavy radiation exposure are prone to skin cancer, lung cancer, thyroid
cancer, etc. The effects of genetic mutation tend to pass on to the future
generations as well.
In other words, if the parents are exposed to nuclear radiation, then their
child could be born with genetic birth defects and retardation. Most of these
effects of radioactive pollution do not show up immediately, but have severe
long-term health consequences. Therefore, it is high time that measures
are taken to minimize radioactive pollution. It can be controlled to a great
extent through the safe disposal of nuclear wastes. The radioactive
properties of waste material decrease with time, over periods that vary from a
few days to a few years. Until then, these materials have to be kept in
isolation, so that the environment and living beings are not exposed
to them.

RADIOACTIVE POLLUTION
Radioactive pollution can be defined as the release of radioactive
substances or high-energy particles into the air, water, or earth as a result of
human activity, either by accident or by design.
The sources of such waste include:
• Nuclear weapon testing or detonation;
• The nuclear fuel cycle, including the mining, separation, and
production of nuclear materials for use in nuclear power plants or
nuclear bombs;
• Accidental release of radioactive material from nuclear power plants.
Sometimes natural sources of radioactivity, such as radon gas emitted
from beneath the ground, are considered pollutants when they become
a threat to human health.
Since even a small amount of radiation exposure can have serious (and
cumulative) biological consequences, and since many radioactive wastes
remain toxic for centuries, radioactive pollution is a serious environmental
concern even though natural sources of radioactivity far exceed artificial
ones at present. The problem of radioactive pollution is compounded by the
difficulty in assessing its effects.
Radioactive waste may spread over a broad area quite rapidly and
irregularly from an abandoned dump into an aquifer and may not fully show
its effects upon humans and organisms for decades in the form of cancer or
other chronic diseases. Surface waters are a powerful factor causing the
migration of radionuclides across the territory of Belarus. For this reason it
is essential to take due account of the transit role of rivers in the
transportation of radionuclides, including transboundary transfer. In
watercourses and flowing water bodies concentrations of radionuclides are
decreasing every year, but they tend to accumulate in static water bodies
(lakes, ponds and reservoirs, especially in bottom sediments).
Due to the accident at the Chernobyl nuclear power station (CNPS), one
quarter of the country’s territory was contaminated by caesium-137 (23%),
strontium-90 (10%) and plutonium (about 2%). The monitoring data show
that the radiation situation in the Dnieper-Sozh and Pripyat basins is stable.
The annual average concentration of caesium-137 has decreased significantly
in both large and small rivers over the period observed since the early 1990s.
No exceedances of the national permissible levels for caesium-137 and
strontium-90 were observed. Caesium-137 concentrations in surface waters
are closely connected with the annual volume of river flow, as can be seen
from the increase in concentrations of caesium-137 in some rivers where
water supply was lower than the perennial average values.
The data analysis of concentration of caesium-137 during the spring flood
in the Pripyat basin shows that concentration of caesium-137 in a dissolved
form in the Pripyat basin remains at the level of the average indices for the
prior period. Yet the concentration of this radionuclide has considerably
increased in dredged sediments. This means that caesium-137 is washed out and transported by
flood flow with sediments. Since the radiation situation has stabilized,
transboundary transport of radioactive elements through river flow has
significantly decreased. Mainly it is the Pripyat that transports radionuclides,
in particular strontium-90, as they are washed out from the 30-kilometre
Chernobyl zone. Due attention is devoted to studies of the radiation state of
small rivers, which are tributaries of the Pripyat and the Sozh in the most
contaminated areas of the Gomel and Mogilev regions. Over the years
radioactivity in water tends to decrease.
The exception is dissolved strontium-90, which is a specific feature
of the CNPS zone. Annual data on the content of caesium-137 and strontium-
90 (in soluble and suspended forms) in the bottom sediments and water biota
suggest that bottom sediments and water biota are significant contributors to
the total radioactivity of surface water systems. A tendency towards a reduction
in radioactivity of bottom sediments and water biota is minor. During spring
high water and summer-autumn flood, migration of radionuclides into open
water systems occurs both in soluble and in absorbed forms on organic and
mineral carriers. The ratio between concentrations of caesium-137 and
strontium-90 at the end checkpoints of the Braginka and Senna Rivers suggests
that, starting from 1992-1993, the concentration of strontium-90 has begun to
exceed that of caesium-137. That phenomenon is characteristic for surface
watercourses close to the CNPS zone and is explained by an increase in
migration ability of strontium-90 due to its release from active particles.

RADIOACTIVE MATERIALS REGULATIONS


Many facilities use radioactive material (RAM) in diverse ways or have
radioactive wastes. Exit signs may contain tritium (radioactive hydrogen-3)
gas, and smoke detectors may contain radium. Other examples of industrial
uses of RAM include devices to measure the density of concrete or blacktop,
determine the thickness of paper and rolled steel as it is made, find cracks in
pipes or airplane surfaces, test the amount of lead in paint, or monitor the
flow of sludge through pipes at a sewage treatment plant.
Research facilities and academic institutions use RAM during the
development of new pharmaceuticals to “tag” certain molecules to follow
their progress through chemical or biological processes and in other research
activities. Medical facilities inject patients with RAM to diagnose medical
conditions and for therapeutic treatments. Medical facilities also use large
radiation sources for cancer treatment.
Radium paint was once used on aircraft instruments, naval compasses,
military vehicle instruments, and on clocks and watches to make the numerals
and lines glow in the dark. Naturally occurring radioactive material is found
as uranium in clay and bricks, granite, shale, or other rocks. It is also found
as radium in soils or as radium sulfate scales on some pipes and fittings from
the oil and gas industry and as the naturally radioactive constituent of
potassium, potassium-40.

Environmental Monitoring and Radon Gas


The Environmental Monitoring Unit (EMU) of the RPMWS is responsible
for the DEQ’s Radon Programme activities, conducted under the Environmental
Protection Agency’s State Indoor Radon Grant Programme. This programme
provides education on the radiological risks posed by public exposure to
radon gas and works closely with local health departments throughout the
state for outreach at the local level.
The EMU operates an environmental monitoring network around each of
Michigan’s nuclear power plant sites. The programme collects and performs
radioanalyses of several types of samples, including direct radiation, air,
surface water, precipitation, and milk from the environs of the nuclear plants.
Unit laboratory analyses also include samples collected by other section
staff during investigations of potentially contaminated sites, during emergency
response activities, and from routine staff compliance investigations.
NUCLEAR FACILITIES
The Nuclear Facilities Unit of the RPMWS develops and implements the
DEQ’s Nuclear Facilities Emergency Response Procedures and the nuclear
accident aspects of the Michigan Emergency Management Plan as they relate
to the DEQ’s responsibilities to respond to accidents or emergencies at any
of Michigan’s commercial nuclear power plants or to large-scale radiological
incidents. These efforts are conducted in cooperation with other state agencies
and under the overall emergency response coordination of the Michigan State
Police. Unit staff also interact with nuclear plant utility staff and staff of the
NRC concerning the day-to-day operations of nuclear power reactors to assure
radiological protection of the public and the environment.

Radioactive Materials Regulations: Waste Industrial Smoke Detectors


Remove any batteries from the detector and handle the battery as a universal
waste or under the applicable hazardous waste regulations for that company’s
hazardous waste generator status. The specific requirements a company must
follow depend on whether the smoke detectors are subject to the federal
nuclear regulations or constitute a hazardous waste. There are two
types of materials commonly found in used smoke detectors.

• The older models may contain non-exempt radium-226 sources that are
regulated under the state of Michigan. These detectors should not go to a
solid waste landfill but should be returned to the manufacturer or disposed
of as radioactive waste.
• The newer models contain a small americium source. The combined
smoke detector and americium source have a specific exemption in
the federal regulations. Large quantities such as resulting from a major
construction renovation project should not be disposed without first
checking with officials of the NRC or the WHMD radioactive
materials staff. If the smoke detectors are not recycled for metal, some
smoke detectors could be subject to the hazardous waste regulations
because the amount of metal in the detectors may fail the Toxicity
Characteristic Leaching Procedure.
Small quantity generators and large quantity generators cannot put
hazardous waste smoke detectors in the trash. Conditionally exempt small
quantity generators may dispose of smoke detectors in licensed solid waste
landfills if the landfill will accept them. If these smoke detectors are not
classified as a hazardous waste, then they may be sent to a licensed landfill.
However, companies should contact the landfill if disposing of large amounts
because the waste load may set off the landfill’s radiation detectors. Contact
the DEQ Waste and Hazardous Materials Division regarding potential safety
concerns when numerous smoke detectors are disposed of at the same time
or regarding nuclear regulations.
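The guidance above amounts to a small decision procedure. The following Python sketch is purely illustrative, paraphrasing the text above; the function name and category labels are invented, and any real disposal decision must be checked against the actual state and federal regulations.

    # Illustrative decision sketch for used smoke detector disposal,
    # paraphrasing the guidance above (not regulatory advice).
    def smoke_detector_disposal(source, generator_status,
                                hazardous_waste=False, large_quantity=False):
        if source == "radium-226":
            # Older, non-exempt sources regulated under the state of Michigan
            return "return to manufacturer or dispose of as radioactive waste"
        if source == "americium-241":
            if large_quantity:
                # e.g., a major construction renovation project
                return "check first with NRC or WHMD radioactive materials staff"
            if hazardous_waste and generator_status in ("SQG", "LQG"):
                # small and large quantity generators cannot trash these
                return "manage as hazardous waste"
            return "licensed solid waste landfill, if the landfill accepts them"
        return "contact the DEQ Waste and Hazardous Materials Division"

    print(smoke_detector_disposal("americium-241", "CESQG"))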

The Inspection and Mapping of Radioactive Sources


During the ten years since the Norwegian-Russian cooperation was
established in the form of the Group of Experts, a magnificent effort in
mapping the present environmental situation has been carried out, including
sources of radioactivity in the environment. On these missions water samples
were collected, along with sediments and living organisms, which have given
new knowledge about the present environmental status of the area. On two
occasions closer investigations were made at dumping places for radioactive
materials, and on the basis of this work the future risk of pollution has been evaluated.
At the moment an effort is being made to establish a co-ordinated programme
for surveying the northern sea areas, a system for exchanging data, and an
extension of a network for automatic metering in the county of Murmansk.
Experience from the cooperation in the Expert Group gave a solid foundation
for joint inspections and risk evaluation. In addition to the work done by the
Expert Group, inspections of different kinds of pollution in the Arctic are
carried out by the organization called AMAP (the Arctic Monitoring and
Assessment Programme). AMAP has established an international data
centre, located at the Norwegian Radiation Protection Authority (NRPA),
for the purpose of monitoring sources and levels of radioactive pollution.
The work of AMAP is close to the subjects investigated by the Expert Group,
resulting in close contact between the two organisations.

RISK EVALUATION
The joint projects between Norway and Russia include risk analyses of
potential accident scenarios at several nuclear plants in Russia. Substantial
amounts of radioactive waste were released from nuclear plants and directed
out into rivers and the ocean, causing severe pollution of the environment.
Today, the storage of radioactive waste at plants with poor security is a big
problem. There is great concern about the consequences if accidents take
place, and what the effects might be on the local environment and even on
other countries.
Calculations and risk evaluations have been made for potential
accidents at the nuclear plant at Majak in the Urals. Under the management of the
Expert group, fieldwork has been carried out twice, and potential accidents
have been considered.
Two of the most important potential scenarios at Majak are an explosion in
a container for high-level waste, and destruction of the lower dam at the plant.
Such a scenario has the potential for large releases of radioactive substances,
scattered by the river Ob to the Kara Sea, but the level of pollution would be
relatively limited. A similar evaluation is being made for the nuclear plant at
Krasnojarsk, close to the river Jenisej. Norwegian authorities have given special
attention to the nuclear plant at Kola, which lies 200 km from the
Norwegian border. Analyses have been made of the long-term effects of possible
accidents at this plant on the environment and population of north-west Russia.
The worst conceivable scenarios have been used in the modelling,
meaning large fallouts and bad weather conditions during an accident. The
results indicate that the negative effects would be long lasting.

Radioactive Mine Waste Polluting Colorado River


Water tests reveal that “uranium mill waste” leaching into the Colorado
River has made the water radioactive at “one-third the level considered
dangerous,” according to the San Diego Union Tribune (1/10). The mine’s
owner, Atlas Corporation, has declared bankruptcy, leaving the bulk of the
enormous cleanup costs to taxpayers. The huge pile of mine waste “sits 750
feet from the river,” and is leaking “an estimated 28,800 gallons of radioactive
pollution and toxic chemicals” into the river each day.

Types of Radiation
Radiation is classified as being ionizing or nonionizing. Both types can
be harmful to humans and other organisms.

NONIONIZING RADIATION
Nonionizing radiation is relatively long-wavelength electro-magnetic
radiation, such as radio waves, microwaves, visible radiation, ultraviolet
radiation, and very low-energy electromagnetic fields. Nonionizing radiation
is generally considered less dangerous than ionizing radiation. However, some
forms of nonionizing radiation, such as ultraviolet, can damage biological
molecules and cause health problems. Scientists do not yet fully understand
the longer-term health effects of some forms of nonionizing radiation, such
as that from very low-level electromagnetic fields (e.g., high-voltage power
lines), although the evidence to date suggests that the risks are extremely
small.

Ionizing Radiation
Ionizing radiation is the short wavelength radiation or particulate radiation
emitted by certain unstable isotopes during radioactive decay. There are about
70 naturally occurring radioactive isotopes, all of which emit some form of
ionizing radiation as
they decay from one isotope to another. A radioactive isotope typically decays
through a series of other isotopes until it reaches a stable one. As indicated
by its name, ionizing radiation can ionize the atoms or molecules with which
it interacts.
In other words, ionizing radiation can cause other atoms to release their
electrons. These free electrons can damage many biochemicals, such as
proteins, lipids, and nucleic acids (including DNA). When intense, this damage
can cause severe human health problems, including cancers, and even death.
Ionizing radiation can be either short-wavelength electromagnetic radiation
or particulate radiation.
Gamma radiation and X-radiation are short-wavelength electromagnetic
radiation. Alpha particles, beta particles, neutrons, and protons are particulate
radiation. Alpha particles, beta particles, and gamma rays are the most
commonly encountered forms of radioactive pollution. Alpha particles are
simply ionized helium nuclei, and consist of two protons and two neutrons.
Beta particles are electrons, which have a negative charge. Gamma radiation
is high-energy electromagnetic radiation. Scientists have devised various units
for measuring radioactivity.
A Curie (Ci) represents the rate of radioactive decay. One Curie is 3.7 ×
10¹⁰ radioactive disintegrations per second. A rad is a unit representing the
absorbed dose of radioactivity. One rad is equal to an absorbed energy dose
of 100 ergs per gram of radiated medium. One rad = 0.01 Grays. A rem is a
unit that measures the effectiveness of radioactivity in causing biological
damage. One rem is equal to one rad times a biological weighting factor. The
weighting factor is 1.0 for gamma radiation and beta particles, and it is 20
for alpha particles. One rem = 1000 millirem = 0.01 Sieverts. The radioactive
half-life is a measure of the persistence of radioactive material. The half-life
is the time required for one-half of an initial quantity of atoms of a radioactive
isotope to decay to a different isotope.
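These definitions make dose arithmetic straightforward. A minimal sketch in Python (the 0.5 rad absorbed dose is an invented example value; the weighting factors and conversions are those given above):

    # Converting an absorbed dose (rad) into a dose equivalent (rem, Sv)
    # using the biological weighting factors defined above.
    absorbed_dose_rad = 0.5            # invented example value; 1 rad = 0.01 Gray
    weighting_factors = {"gamma": 1.0, "beta": 1.0, "alpha": 20.0}

    for radiation, factor in weighting_factors.items():
        rem = absorbed_dose_rad * factor        # rem = rad x weighting factor
        print(f"{absorbed_dose_rad} rad of {radiation}: "
              f"{rem} rem = {rem * 1000:.0f} millirem = {rem * 0.01} Sv")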

Sources of Radioactive Pollution


In the United States, people are typically exposed to about 350 millirems
of ionizing radiation per year. On average, 82% of this radiation comes from
natural sources and 18% from anthropogenic sources (i.e., those associated
with human activities). The major natural source of radiation is radon gas,
which accounts for about 55% of the total radiation dose. The principal
anthropogenic sources of radioactivity are medical X-rays and nuclear
medicine. Radioactivity from the fallout of nuclear weapons testing and from
nuclear power plants makes up less than 0.5% of the total radiation dose, i.e.,
less than 2 millirems. Although the contribution to the total human radiation
dose is extremely small, radioactive isotopes released during previous
atmospheric testing of nuclear weapons will remain in the atmosphere for
the next 100 years.
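The percentages quoted above imply the following approximate annual doses; a simple arithmetic sketch:

    # Approximate annual dose breakdown implied by the figures above.
    total_mrem = 350.0
    print(f"natural sources (82%):       {0.82 * total_mrem:.0f} millirems")
    print(f"anthropogenic sources (18%): {0.18 * total_mrem:.0f} millirems")
    print(f"radon gas alone (55%):       {0.55 * total_mrem:.0f} millirems")
    print(f"fallout and nuclear power (<0.5%): under {0.005 * total_mrem:.0f} millirems")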

Lifestyle and Radiation Dose


People who live in certain regions are exposed to higher doses of radiation.
Residents of the Rocky Mountains of Colorado receive about 30 millirems
more cosmic radiation than people living at sea level. This is because the
atmosphere is thinner at higher elevations, and therefore less effective at
shielding the surface from cosmic radiation. Exposure to cosmic radiation is
also high while people are flying in an airplane, so pilots and flight attendants
have an enhanced, occupational exposure. In addition, residents of certain
regions receive higher doses of radiation from radon-222, due to local
geological anomalies.
Radon-222 is a colourless and odorless gas that results from the decay of
naturally occurring, radioactive isotopes of uranium. Radon-222 typically
enters buildings from their basement, or from certain mineral-containing
construction materials. Ironically, the trend towards improved home insulation
has increased the amount of radon-222 which remains trapped inside houses.
Personal lifestyle also influences the amount of radioactivity to which people
are exposed. Miners, who spend a lot of time underground, are exposed to
relatively high doses of radon-222 and consequently have relatively high
rates of lung cancer. Cigarette smokers expose their lungs to high levels of
radiation, since tobacco plants contain trace quantities of polonium-210, lead-
210, and radon-222. These radioactive isotopes come from the small amount
of uranium present in fertilizers used to promote tobacco growth.
Consequently, the lungs of a cigarette smoker are exposed to thousands of
additional millirems of radioactivity, although any associated hazards are
much less than those of tar and nicotine.

NUCLEAR WEAPONS TESTING


Nuclear weapons release enormous amounts of radioactive materials when
they are exploded. Most of the radioactive pollution from nuclear weapons
testing is from iodine-131, cesium-137, and strontium-90. Iodine-131 is the
least dangerous of these isotopes, although it has a relatively short half-life of
about eight days. Iodine-131 accumulates in the thyroid gland, and large
doses can cause thyroid cancer. Cesium-137 has a half-life of about 30 years.
It is chemically similar to potassium, and is distributed throughout the human
body. Based on the total amount of cesium already in the atmosphere, all
humans will receive about 27 millirems of radiation from cesium-137 over
their lifetime. Strontium-90 has a half-life of 38 years. It is chemically similar
to calcium and is deposited in bones. Strontium-90 is expelled from the body
very slowly, and the uptake of significant amounts increases the risks of
developing bone cancer or leukemia.
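These half-lives translate directly into how long the isotopes persist. The fraction of a radionuclide remaining after a time t follows from the half-life t½ as
fraction remaining = (1/2)^(t/t½)
For caesium-137 (t½ of about 30 years), one quarter of the original activity remains after 60 years and about one eighth after 90 years; for iodine-131 (t½ of about eight days), less than 0.1% remains after 80 days.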

NUCLEAR POWER PLANTS


Many environmentalists are critical of nuclear power generation. They
claim that there is an unacceptable risk of a catastrophic accident, and that
nuclear power plants generate large amounts of unmanageable nuclear waste.
The Nuclear Regulatory Commission has strict requirements regarding the amount
of radioactivity that can be released from a nuclear power reactor. In particular,
a nuclear reactor can expose an individual who lives on the fence line of the
power plant to no more than 10 millirems of radiation per year. Actual
measurements at U.S. nuclear power plants have shown that a person who
lived at the fence line would actually be exposed to much less than 10
millirems.
Thus, for a typical person who is exposed to about 350 millirems of
radiation per year from all other sources, much of which is natural background,
the proportion of radiation from nuclear power plants is extremely small. In
fact, even coal- and oil-fired power plants release small amounts of
radioactivity contained in their fuels. Although a nuclear power plant cannot
explode like an atomic bomb, accidents can result in serious radioactive
pollution. During the past 45 years, there have been a number of not-fully
controlled or uncontrolled fission reactions at nuclear power plants in the
United States and elsewhere, which have killed or injured power plant
workers. These accidents occurred in Los Alamos, New Mexico; Oak Ridge,
Tennessee; Richland, Washington; and Wood River Junction, Rhode Island.
The most famous case was the 1979 accident at the Three Mile Island
nuclear reactor in Pennsylvania, which received a great deal of attention in
the press. However, nuclear scientists have estimated that people living within
50 mi (80 km) of this reactor were exposed to less than two millirems of
radiation, most of it as iodine-131, a short-lived isotope. This exposure
constituted less than 1% of the total annual radiation dose of an average
person. However, these data do not mean that the accident at Three Mile
Island was not a serious one; fortunately, technicians were able to regain
control of the reactor before more devastating damage occurred, and the
reactor system was well contained so that only a relatively small amount of
radioactivity escaped to the ambient environment.
By far, the worst nuclear reactor accident occurred in 1986 in Chernobyl,
Ukraine. An uncontrolled build-up of heat resulted in a meltdown of the
reactor core and combustion of graphite moderator material in one of the
several generating units at Chernobyl, releasing more than 50 million Curies
of radioactivity to the ambient environment. The disaster killed 31 workers,
and resulted in the hospitalization of more than 500 other people from
radiation sickness.
According to the Ukrainian authorities, during the decade following the
Chernobyl disaster an estimated 10,000 people in Belarus, Russia, and Ukraine
died from cancers and other radiation-related diseases caused by the accident. In
addition to these relatively local effects, the atmosphere transported radiation
from Chernobyl into Europe and throughout the Northern Hemisphere. More
than 500,000 people in the vicinity of Chernobyl were exposed to dangerously
high doses of radiation, and more than 300,000 people were permanently
evacuated from the vicinity. Since radiation-related health problems may
appear decades after exposure, scientists expect that many thousands of
additional people will eventually suffer higher rates of thyroid cancer, bone
cancer, leukemia, and other radiation-related diseases.
Unfortunately, a cover-up of the explosion by responsible authorities,
including those in government, endangered even more people. Many local
residents did not know that they should flee the area as soon as possible, or
were not provided with the medical attention they needed. The large amount
of radioactive waste generated by nuclear power plants is another important
problem. This waste will remain radioactive for many thousands of years, so
technologists must design systems for extremely long-term storage. One
obvious problem is that the long-term reliability of the storage systems cannot
be fully assured, because they cannot be directly tested for the length of time
they will be used (i.e., for thousands of years). Another problem with nuclear
waste is that it will remain extremely dangerous for much longer than the
expected lifetimes of existing governments and social institutions. Thus, we
are making the societies of the following millennia, however they may be
structured, responsible for the safe storage of nuclear waste that is being
generated in such large quantities today.

Biological Effects of Radioactivity


The amount of injury caused by a radioactive isotope depends on its
physical half-life, and on how quickly it is absorbed and then excreted by an
organism. Most studies of the harmful effects of radiation have been
performed on single-celled organisms. Obviously, the situation is more
complex in humans and other multicellular organisms, because a single cell
damaged by radiation may indirectly affect other cells in the individual. The
most sensitive regions of the human body appear to be those which have
many actively dividing cells, such as the skin, gonads, intestine, and tissues
that grow blood cells (spleen, bone marrow, lymph organs). Radioactivity is
toxic because it forms ions when it reacts with biological molecules.
These ions can form free radicals, which damage proteins, membranes,
and nucleic acids. Radioactivity can damage DNA (deoxyribonucleic acid)
by destroying individual bases (particularly thymine), by breaking single
strands, by breaking double strands, by cross-linking different DNA strands,
and by cross-linking DNA and proteins. Damage to DNA can lead to cancers,
birth defects, and even death. However, cells have biochemical repair systems
which can reverse some of the damaging biological effects of low-level
exposures to radioactivity.
This allows the body to better tolerate radiation that is delivered at a low
dose rate, such as over a longer period of time. In fact, all humans are exposed
to radiation in extremely small doses throughout their life. The biological
effects of such small doses over such a long time are almost impossible to
measure, and are essentially unknown at present. There is, however, a
theoretical possibility that the small amount of radioactivity released into
the environment by normally operating nuclear power plants, and by previous
atmospheric testing of nuclear weapons, has slightly increased the incidence
of certain cancers in human populations. However, scientists have not been
able to conclusively show that such an effect has actually occurred. Currently,
there is disagreement among scientists about whether there is a threshold
dose for radiation damage to organisms.
In other words, is there a dose of radiation below which there are no harmful
biological effects? Some scientists maintain that there is no such threshold,
and that radiation at any dose carries a finite risk of causing some biological
damage. Furthermore, the damage caused by very low doses of radiation
may be cumulative, or additive to the damage caused by other harmful agents
to which humans are exposed. Other scientists maintain that there is a
threshold dose for radiation damage.
They believe that biological repair systems, which are presumably present
in all cells, can fix the biological damage caused by extremely low doses of
radiation. Thus, these scientists claim that the extremely low doses of radiation
to which humans are commonly exposed are not harmful. One of the most
informative studies of the harmful effects of radiation is a long-term
investigation of the survivors of the 1945 atomic blasts at Hiroshima and
Nagasaki by James Neel and his colleagues.
The survivors of these explosions had abnormally high rates of cancer,
leukemia, and other diseases. However, there seemed to be no detectable
effect on the occurrence of genetic defects in children of the survivors. The
radiation dose needed to cause heritable defects in humans is higher than
biologists originally expected. Radioactive pollution is an important
environmental problem. It could become much worse if extreme vigilance is
not exercised in the handling and use of radioactive materials, and in the design
and operation of nuclear power plants.

RADIOACTIVE
Atomic nuclei that are not stable tend to approach stable configuration(s)
by the process of radioactivity. Atoms are radioactive because the ratio of
neutrons to protons is not ideal. Through radioactive decay, the nucleus
approaches a more stable neutron to proton ratio. Radioactive decay releases
different types of energetic emissions. The three most common types of
radioactive emissions are alpha particles, beta particles, and gamma rays.
Fission also is a form of radioactive decay. Alpha (α) decay occurs when the
neutron to proton ratio is too low. Alpha decay emits an alpha particle, which
consists of two protons and two neutrons. This is the same as a helium nucleus
and often uses the same chemical symbol, ⁴₂He. Alpha particles are highly
ionizing (i.e., they deposit their energy over a short distance).
Since alpha particles lose energy over a short distance, they cannot travel
far in most media. For example, the range of a 5 MeV alpha particle in air is
only 3.5 cm. Consequently, alpha particles will not normally penetrate the
outermost layer of the skin. Therefore, alpha particles pose little external
radiation field hazard. Shielding of alpha particles is easily accomplished
with minimal amounts of shielding. Examples of alpha particle emitting radio-
nuclides include ²³⁸U, ²³⁹Pu, and ²⁴¹Am:
²³⁸₉₂U → ²³⁴₉₀Th + ⁴₂He
²³⁹₉₄Pu → ²³⁵₉₂U + ⁴₂He
²⁴¹₉₅Am → ²³⁷₉₃Np + ⁴₂He
After the emission of an α particle, the remaining daughter product will
be reduced by 4 in its mass number and by 2 in its atomic number, as can be
verified in the examples above.
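This bookkeeping rule is easy to check mechanically. A minimal Python sketch (the function name is invented; the nuclide data are the three examples above):

    # Alpha decay bookkeeping: the daughter nuclide keeps A - 4 and Z - 2.
    def alpha_decay(mass_number, atomic_number):
        return mass_number - 4, atomic_number - 2

    # The three examples above: U-238 (Z = 92), Pu-239 (Z = 94), Am-241 (Z = 95)
    for name, a, z in [("U-238", 238, 92), ("Pu-239", 239, 94), ("Am-241", 241, 95)]:
        da, dz = alpha_decay(a, z)
        print(f"{name} -> daughter with A = {da}, Z = {dz}, plus one alpha particle")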

Beta (β-) decay occurs when the neutron to proton ratio is too high. The
radioactive nucleus emits a beta particle, which is essentially an electron, in
order to bring this to a more favourable ratio. Beta particles are less ionizing
than alpha particles. The range of beta particles depends on the energy, and
some have enough to be of concern regarding external exposure. A 1 MeV
beta particle can travel approximately 12 feet in air. Energetic beta particles
can penetrate into the body and deposit dose to internal structures near the
surface. Since beta particles are less ionizing than alpha particles, greater
shielding is required. Low-Z materials are selected as beta particle shields to
limit the X-ray (bremsstrahlung) emissions associated with the slowing down
of beta particles as they travel in a medium. In β⁻ emission, the neutron to
proton ratio is reduced by converting a neutron into a proton:
¹₀n → ¹₁p + e⁻
The electron ejected is the β particle that is released. Thus β emission
results in an increase of the proton number, i.e. Z, by 1, but the mass number
A is unaltered. Example of β decay:
⁴⁰₁₉K → ⁴⁰₂₀Ca + β⁻
Gamma (γ) rays are not particulate radiation like alpha and beta particles, but
a form of high-energy electromagnetic wave. Gamma rays are the least ionizing
of the three forms discussed. A 1 MeV gamma ray can travel an average of
130 meters in air.
Since gamma radiation can travel far in air, it poses a significant external
radiation hazard. Further, if ingested, it may pose an internal radiation hazard.
Shielding of gamma rays is normally accomplished with high atomic number
materials such as lead. [Gamma rays are electro-magnetic radiations with
energies higher than X-rays. X-rays are produced when electrons of an atom
jump from one orbital location to another. The gamma rays are released
when an atomic nucleus releases its excess energy. It is clear from this that
nuclear transitions involve much larger energies than the atomic transitions.
In other words, energies of nuclear origin are many (10³ to 10⁶) times greater
than the energies of atomic origin].

Emission of γ-rays changes neither the mass number nor the atomic number.
If an atom is in an excited state, it comes to a stable state by emitting γ radiation.
Usually after α or β decay, the product nucleus is formed in an excited
state and it reaches a stable state after γ emission. There are several other
particles, like the neutron, proton, ³He, deuterium, etc., that may be liberated
in radioactivity. When a nucleus emits such particle(s) on the way to reaching
a stable configuration, it is said to decay. The emitted particle is associated
with the mode of decay.

Thus we have alpha decay, beta-decay, gamma decay, neutron-decay, etc. In


the case of gamma emission, the nucleus changes only in its energy state. By
particle emission, the nucleus changes into another, and so is said to be
transmuted or converted. Thus there is a reduction in the original quantity of
the substance during decay. There is no fixed time between two consecutive
emissions, but on average, radioactive decay of a substance takes place at a
rate which is proportional to the number of atoms present at a given time.
This is expressed in a well-known differential equation, called the radioactive
decay equation. An atom becomes radioactive if its nucleus suffers instability,
as said earlier. A nucleus may be radioactive due to instability set in while it was
formed in nature. This is called natural radioactivity, like that of 238U. When a
nucleus is disturbed or excited, say, by bombarding it with a particle or gamma
rays, its state of stability is altered and the altered system will become radioactive.
This is referred to as induced radioactivity or artificial radioactivity. There
could be many ways of putting a nucleus in a slightly or heavily unstable or
excited state. But the consequent radioactivity, which is a process of de-
excitation, is governed by common laws. The de-excitation may take place
quickly (say, in micro-seconds) or over a long period (in millions of years), in
a single step or in a series of many steps. Hence when we talk of the radioactivity
of a substance, we talk of the original radioactive material (parent), what fraction
of it gets converted in unit time, what are the particles released (emitted), how
much energy is released, what are the new materials (daughter products) formed,
radioactivity features of the daughter products, and of the end-product (stable)
as well. The rate at which the decay takes place is called activity.
RATE OF RADIOACTIVE DECAY


The nuclei of a given radioactive species have a definite probability of
decaying in unit time; this decay probability has a constant value characteristic
of the particular nuclide. It remains the same irrespective of the chemical or
physical state of the element at all readily accessible temperatures and
pressures. In a given specimen the rate of decay at any instant is always
directly proportional to the number of radioactive atoms of the nuclide under
consideration present at that instant. Thus, if N is the number of the particular
radioactive atoms (or nuclei) present at any time t, the decay rate is given by
dN/dt = −λN
where λ, called the decay constant of the radioactive nuclide, is a measure of
its decay probability in unit time. Upon integration between any arbitrary
zero time, when the number of radioactive nuclei of the specified kind present
is N0, and a time t later, when N of these nuclei remain, radioactive decay is
seen to be an exponential process, the actual decay rate being determined by
the decay constant λ and by the number of the particular nuclei present:
ln(N/N₀) = −λt, i.e., N = N₀e^(−λt)
Mean life: The reciprocal of the decay constant, represented by tₘ, is called
the mean life (or average life) of the radioactive species; thus,
tₘ = 1/λ
The mean life is equal to the average life expectancy of the radioactive
species. Half life: It is defined as the time required for the number of
radioactive nuclei of a given kind (or for their activity) to decay to half its
initial value. Because of the exponential nature of the decay, this time is
independent of the amount of the radionuclide present. It can be seen from
the equations given above, that the half life is given by,
t₁/₂ = (ln 2)/λ = 0.6931/λ, or t₁/₂ = 0.6931 tₘ
The half life is thus inversely proportional to decay constant and directly
proportional to mean life. The half-lives of known radioactive nuclides range
from a small fraction, e.g., about a millionth of a second to billions of years.
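The relations above can be computed directly. A minimal Python sketch, using the caesium-137 half-life of about 30 years quoted earlier in this chapter (the initial number of atoms is an invented example value):

    import math

    # Decay constant from the half-life: lambda = ln(2) / t_half
    t_half_years = 30.0                      # caesium-137, as quoted earlier
    decay_constant = math.log(2) / t_half_years
    mean_life = 1.0 / decay_constant         # t_m = 1/lambda
    print(f"mean life: {mean_life:.1f} years")

    # Exponential decay law: N(t) = N0 * exp(-lambda * t)
    n0 = 1.0e20                              # invented initial number of atoms
    for t in (30, 60, 90):
        fraction = math.exp(-decay_constant * t)
        print(f"after {t} years, {fraction:.3f} of the atoms remain")

    # Activity A = lambda * N, in becquerels (disintegrations per second)
    per_second = decay_constant / (365.25 * 24 * 3600)
    print(f"initial activity: {per_second * n0:.2e} Bq "
          f"= {per_second * n0 / 3.7e10:.2f} Ci")   # 1 Ci = 3.7e10 Bq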

UNITS TO EXPRESS RADIOACTIVITY EMISSION


There are several different units used to describe radiation and its effects.
The simplest unit is that of activity which is measured in number of
disintegrations per second (dps). One dps means a radioactive nucleus gives
off one particle or photon in one second. This unit, in the international system
of units (SI) system, is called the becquerel (Bq), which is equivalent to 1
dps. The other unit prevalent for activity is the Curie (Ci); 1 Ci = 3.7 × 10¹⁰ Bq.
These units do not distinguish between alpha, beta or gamma emissions.
units provide an understanding of the “strength” of the radioactive sample
but do not account for any of the properties of the radiation emitted. To
describe the degree of hazard to people from a particular radiation requires
other units.
Energy Levels: As mentioned earlier, quantum mechanics is needed to
understand and quantify atomic and subatomic features. The Quantum Theory
recognizes restrictions in the levels of energy acquired by a system. The
electrons in their orbits, or the nucleons in the shells are filled following
such restrictions. The energy of an electron depends on its orbit, and only
certain orbits are permitted by nature.
Similarly the nucleons within a nucleus occupy different energy states
that are permitted. We understand from this that an electron or a nucleon
cannot go to any arbitrary energy level. Thus when the system is specified,
the allowed energy states get specified. These are known as discrete energy
levels. Transition from one such level to the next lower level will involve
release of energy that is exactly the difference between these two levels, and
cannot be a fraction of the same. The energy is said to get quantised. The
electron orbits permitted for an atom are characteristic of that (species of)
atom, and when an electron jumps from one orbit to a lower one, an X-ray,
called a characteristic X-ray, emerges with energy equal to the difference
between the energies associated with the two orbits.
This even helps in identifying the atom from which the X-rays were ejected. A
similar situation applies to nuclei too. Though there are no orbits inside a nucleus for the
nucleons to move around, they have their energy states. A given radioactive
gamma-emitting nucleus will eject gamma rays (called gamma quanta)
characteristic of the nucleus. The energies taken by the nucleons within an
unexcited nucleus are called bound levels. Similarly, a nucleus could be raised
(excited) in its internal energy only to certain permitted levels. These levels are
called excited levels.
This varies from nucleus to nucleus, but is fixed for a given nucleus. When
the nucleus is unexcited, it is said to be in its ground state. The separation
between two level energies decreases as the energy increases. When a nucleus
is excited to a particular level, it de-excites to reach the ground state by emitting
a neutron, gamma quanta, or any other particle. Fission also is such a process.
The de-excitation could be in a single step, or in multiple steps involving a
series of particle emissions or gamma or both.
Major Causes of Biodiversity

Eight major causes of biodiversity loss are as follows:


1. Habitat Loss and Fragmentation 2. Over-exploitation for
Commercialization 3. Invasive Species 4. Pollution 5. Global Climate Change
6. Population Growth and Over-consumption 7. Illegal Wildlife Trade 8.
Species extinction.
1. Habitat Loss and Fragmentation: A habitat is the place where a plant
or animal naturally lives. Habitat loss is identified as the main threat to
85% of all species described as threatened or endangered. Factors
responsible for this are deforestation, fire, over-use and
urbanization.
2. Over-exploitation for Commercialization: Over-exploitation of
resources has caused more environmental degradation than it has
produced earnings. For example, shrimp farming in India, Thailand,
Ecuador and Indonesia results in wetland destruction, pollution of
coastal waters and degradation of coastal fisheries. Scientific studies
have concluded that the cost of the environmental degradation resulting
from shrimp farming was greater than the earnings from shrimp exports.
3. Invasive Species: Invasive species are ‘alien’ or ‘exotic’ species which
are introduced accidentally or intentionally by humans. These species
become established in their new environment and spread unchecked,
threatening the local biodiversity. These invasive alien species have
been identified as the second greatest threat to biodiversity after habitat
loss.
4. Pollution: Pollution is a major threat to biodiversity, and one of the
most difficult problems to overcome, since pollutants do not recognize
international boundaries. For example, agricultural run-off, which
contains a variety of fertilizers and pesticides, may seep into ground
water and rivers before ending up in the ocean. Atmospheric pollutants
drift with prevailing air currents and are deposited far from their
original source.
5. Global Climate Change: Many climatologists believe that the
greenhouse effect is likely to raise world temperatures by about 2°C
by 2030, with an accompanying sea-level rise of around 30-50 cm.
Global warming, coupled with human population growth and accelerating
rates of resource use, will bring further losses in biological
diversity. Vast areas of the world will be inundated, causing loss of
human life as well as of ecosystems.
6. Population Growth and Over-consumption: From a population of one
billion at the beginning of the 19th century, our species now numbers
more than six billion people. Such rapid population growth has meant
a rapid growth in the exploitation of natural resources: water, food
and minerals. Although there is evidence that our population growth
rate is beginning to slow down, it is clear that the exploitation of
natural resources is currently not sustainable. Added to this is the
fact that 25 per cent of the population consumes about 75 per cent of
the world’s natural resources. This problem of over-consumption is
one part of the broader issue of unsustainable use.
7. Illegal Wildlife Trade: The international trade in wild plants and
animals is enormous. Live animals are taken for the pet trade, or
their parts exported for medicines or food. Plants are also taken from
the wild for their horticultural or medicinal value.
8. Species extinction: Extinction is a natural process. The geological
record indicates that many hundreds of thousands of plant and animal
species have disappeared over the eras as they have failed to adapt to
changing conditions. Recent findings, however, indicate that the current
rate of species extinction is at least a hundred to a thousand times
higher than the natural rate.

CAUSES AND CONSEQUENCES OF BIODIVERSITY DECLINES
What natural and anthropogenic processes influence biodiversity,
ecosystem functioning, and ecosystem stability? How can ecology increase
our ability to understand and manage ecosystems?
Biodiversity is the diversity of life on Earth. This includes the richness
(number), evenness (equity of relative abundance), and composition (types)
of species, alleles, functional groups, or ecosystems. Biodiversity is rapidly
declining worldwide, and there is considerable evidence that ecosystem
functioning (e.g., productivity, nutrient cycling) and ecosystem stability (i.e.,
temporal invariability of productivity) depend on biodiversity. Thus,
biodiversity declines may diminish human wellbeing by decreasing the
services that ecosystems can provide for people (Millennium Ecosystem
Assessment 2005).
Although the causes and consequences of contemporary biodiversity
declines have been extensively explored in ecology, several questions deserve
further consideration. For example, what natural processes influence
biodiversity; what anthropogenic processes influence biodiversity; what are
the consequences of biodiversity declines? Thus far, these questions have
been considered separately within several ecological fields.
Here, I briefly describe previous progress in each of these fields and then
offer a conceptual and mechanistic synthesis across these fields. I conclude
by suggesting novel questions and hypotheses that could be considered in
future studies to increase our ability to understand, conserve, and restore
ecosystems.

What Natural Processes Influence Biodiversity?


Theoretical and empirical studies have identified a vast number of natural
processes that can potentially maintain biodiversity. Biodiversity can be
maintained by moderately intense disturbances that reduce dominance by
species that would otherwise competitively exclude subordinate species. For
example, selective grazing by bison can promote plant diversity in grasslands.
Additionally, biodiversity can be maintained by resource partitioning, when
species use different resources, or spatiotemporal partitioning, when species
use the same resources at different times and places. For instance, plant species
in the tundra can coexist by using different sources of nitrogen or use the
same sources of nitrogen at different times of the growing season or at different
soil depths. Furthermore, biodiversity can be maintained by interspecific
facilitation, which occurs when species positively influence one another by
increasing the availability of limiting resources, or by decreasing the limiting
effects of natural enemies or physical stresses. Although previous theoretical
and empirical studies have identified numerous processes that can maintain
biodiversity, ecologists and conservationists rarely know which of these
mechanisms actually maintains biodiversity at any particular time and place.
Thus, further investigation is needed to identify the natural processes that
actually maintain biodiversity in intact ecosystems.

What Anthropogenic Processes Influence Biodiversity?


Human actions have resulted in multiple changes on a global scale that
often drive contemporary biodiversity declines. In particular, land use changes,
exotic species invasions, nutrient enrichment, and climate change are often
considered some of the most ubiquitous and influential global ecosystem
changes. Unfortunately, the mechanisms by which global ecosystem changes
influence biodiversity and ecosystem processes, and the combined effects of
multiple changes, are often unclear. This greatly reduces the ability to predict
future changes in biodiversity and ecosystem processes. Therefore, further
investigation is needed to predict the consequences of global ecosystem
changes.
In some cases, human actions have promoted biodiversity. Conservation
strategies, such as creating parks to protect biodiversity hotspots, have been
effective but insufficient. For example, although biodiversity is often greater
inside than outside parks, species extinctions continue. Similarly, restoration
strategies, such as reinstating fire as a natural disturbance, have been effective
but insufficient. Specifically, biodiversity and ecosystem services are greater
in restored than in degraded ecosystems but lower in restored than in intact
remnant ecosystems. Despite the positive effects of conservation and
restoration efforts, biodiversity declines have not slowed. Thus, further
investigation is needed to determine new conservation and restoration
strategies.

What are the Consequences of Biodiversity Declines?


There is considerable evidence that contemporary biodiversity declines
will lead to subsequent declines in ecosystem functioning and ecosystem
stability. Biodiversity experiments have tested whether biodiversity declines
will influence ecosystem functioning or stability by manipulating some
component of biodiversity, such as the number of species, and measuring
various types of ecosystem functioning or stability.

Fig. Conceptual framework for considering the causes and consequences of biodiversity declines.
These studies have been conducted in lab, grassland, forest, marine, and
freshwater ecosystems. From these studies, it is clear that ecosystem
functioning often depends on species richness, species composition, and
functional group richness and can also depend on species evenness and genetic
diversity. Furthermore, stability often depends on species richness and species
composition. Thus, contemporary changes in biodiversity will likely lead to
subsequent changes in ecosystem properties. Further investigation at larger
spatiotemporal scales in managed ecosystems is needed to improve our
understanding of the consequences of biodiversity declines.

SYNTHESIZING BIODIVERSITY RESEARCH


A synthesis across four ecological fields may increase our ability to
understand, conserve, and restore ecosystems by providing a framework for
considering the causes and consequences of biodiversity declines. First,
maintenance of biodiversity research has focused on the effects of natural
processes on biodiversity. Second, biodiversity-stability research has focused
on the effects of biodiversity on various measures of stability. Third,
biodiversity-ecosystem functioning research has focused on the effects of
biodiversity on ecosystem functioning and how this relationship mediates
the effects of global ecosystem changes on human wellbeing.
Fourth, global change ecology has focused on the effects of global
ecosystem changes on biodiversity, ecosystem functioning, and stability.
Combining the relationships explored in each of these four fields produces
an inclusive framework and elucidates two novel questions: What natural
processes promote biodiversity, ecosystem functioning, and stability; do
global ecosystem changes influence ecosystems by altering these natural
processes?
What Natural Processes Promote Biodiversity, Ecosystem Functioning, and Stability?

Fig. Synthesis of mechanisms from three ecological fields (A-C) to identify natural processes
that promote biodiversity, ecosystem functioning, and ecosystem stability (D): Mechanisms in
bold on the left can be combined and described as stabilizing species interactions because all
of these mechanisms result from negative frequency-dependent natural processes.
The natural processes that are predicted to locally promote biodiversity,
ecosystem stability, and ecosystem functioning have commonly been
considered separately, but are quite congruent. Theoretical and empirical
studies have identified mechanisms that can promote biodiversity, ecosystem
stability, and ecosystem functioning. Interestingly, stabilizing species
interactions, which cause a species to limit itself more than it limits other
species, are predicted to promote biodiversity, ecosystem stability, and
ecosystem functioning. Previous studies have found that stabilizing species
interactions can promote biodiversity, ecosystem stability, and ecosystem
functioning.
Stabilizing species interactions occur when interspecific interactions (i.e.,
between individuals from different species) are more favourable than
intraspecific interactions (i.e., between individuals of the same species). This
results in a rare species advantage, common species disadvantage, or both.
Species interactions are stabilizing when interspecific resource competition
is less than intraspecific resource competition, interspecific apparent
competition is less than intraspecific apparent competition, interspecific
facilitation is greater than intraspecific facilitation, or some combination of
these mechanisms. For example, when species consume different resources
or consume the same resources at different times or places, resource
competition will be stronger between two individuals from the same species
than between two individuals from different species. Consequently, species
have an advantage when rare because competition is relatively weak and a
disadvantage when common because competition is relatively strong. This
can maintain biodiversity because it prevents any particular species from
competitively excluding all other species.
This can promote ecosystem stability in diverse ecosystems because it
results in species asynchrony, wherein decreases in the abundance of some
species are compensated for by increases in the abundance of other species.
This can promote ecosystem functioning in diverse ecosystems because it
results in overyielding, in which species perform better when they are rare
and other species are present than when they are common and other species
are absent.
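
A minimal numerical sketch can make this rare-species advantage concrete.
The discrete-time Lotka-Volterra competition model below is a standard
textbook formulation rather than a model from this text, and every parameter
value is an illustrative assumption; it shows that when intraspecific
competition exceeds interspecific competition, a species introduced at low
abundance recovers and the two species coexist.

```python
# Two-species Lotka-Volterra competition in discrete time (Euler steps).
# Stabilizing interaction: each species limits itself (a11, a22) more
# than it limits its competitor (a12, a21). All values are illustrative.
r1, r2 = 0.5, 0.5        # intrinsic growth rates
a11, a12 = 1.0, 0.6      # per-capita effects on species 1 (self, other)
a21, a22 = 0.6, 1.0      # per-capita effects on species 2 (other, self)
K1, K2 = 100.0, 100.0    # carrying capacities

n1, n2 = 1.0, 99.0       # species 1 starts rare, species 2 common
for _ in range(500):
    n1 += r1 * n1 * (1 - (a11 * n1 + a12 * n2) / K1)
    n2 += r2 * n2 * (1 - (a21 * n1 + a22 * n2) / K2)

print(round(n1, 1), round(n2, 1))  # ~62.5 62.5: the rare species recovered
```

Raising a12 or a21 above 1.0 removes the rare-species advantage, and the
model then predicts competitive exclusion rather than coexistence.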

Fig. A hypothesis tree that can be used to tease apart the relative
importance of various types of stabilizing species interactions
Future studies can be designed to determine the relative importance of
various types of stabilizing species interactions. The relative importance of
competition vs. facilitation can be determined by manipulating the density of
individuals. Competition is greater than facilitation when individuals perform
better at low than high density. Adding resources and removing natural
enemies can elucidate the relative importance of resources and natural
enemies, respectively. Ecologists have often focused on resource competition,
but recent studies suggest that facilitation and natural enemies have been
under-appreciated in ecology. Thus, further study is needed to determine which
types of stabilizing species interactions commonly promote biodiversity,
ecosystem functioning, and ecosystem stability.
Do Global Ecosystem Changes Influence Ecosystems by Altering these Natural Processes?

Fig. Hypothesized effects of anthropogenic and natural processes on stabilizing species interactions.
It may be possible to predict future changes in biodiversity, ecosystem
functioning, and ecosystem stability by considering how global ecosystem
changes are currently influencing stabilizing species interactions. The United
Nations is currently developing an Intergovernmental Science-Policy Platform
on Biodiversity and Ecosystem Services (IPBES) to monitor biodiversity
and ecosystem services worldwide (Marris 2010). The IPBES will be modeled
after the Intergovernmental Panel on Climate Change (IPCC), and there is
great potential for ecologists to borrow strategies that have been successfully
employed by climatologists. For example, climatologists have modeled the
effects of natural and anthropogenic processes on radiative forcing (i.e., the
change in the difference between the amount of radiation entering and exiting
Earth’s atmosphere) to determine the causes and consequences of climate
change (IPCC 2007). Radiative forcing is central to this discussion because
it is influenced by both natural and anthropogenic processes and it influences
many climate variables. Future ecological studies could take a similar
approach to determine the causes and consequences of changes in biodiversity.
Stabilizing species interactions are central to this discussion because they
can be influenced by both natural and anthropogenic processes, and they can
influence both biodiversity and ecosystem properties.
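
The radiative-forcing calculation that anchors this analogy can itself be
reduced to one line for a single driver such as CO2. The sketch below uses
the widely cited simplified expression of Myhre et al. (1998), brought in
here as an assumption for illustration rather than a formula from this text;
the concentration values are round, illustrative numbers.

```python
import math

def co2_radiative_forcing(c_ppm, c0_ppm=280.0):
    """Approximate CO2 radiative forcing (W/m^2) relative to a baseline,
    using dF = 5.35 * ln(C / C0) (Myhre et al. 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Illustrative: pre-industrial ~280 ppm vs. a round 400 ppm today.
print(round(co2_radiative_forcing(400.0), 2))  # ~1.91 W/m^2
```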

BIODIVERSITY LOSS
Loss of biodiversity or biodiversity loss is the ongoing extinction of species
worldwide, and also the local reduction or loss of species in a certain habitat
or ecological niche or biome. The latter phenomenon can be temporary or
permanent, depending on whether the environmental degradation that leads
to the loss is reversible through ecological restoration/ecological resilience
or effectively permanent (e.g. through land loss). Global extinction has so
far been proven to be irreversible.
Even though permanent global species loss is a more dramatic phenomenon
than regional changes in species composition, even minor changes from a
healthy stable state can have dramatic influence on the food web and the
food chain insofar as reductions in only one species can adversely affect the
entire chain (coextinction), leading to an overall reduction in biodiversity,
possible alternative stable states of an ecosystem notwithstanding. The
ecological benefits that biodiversity provides are correspondingly undermined
by its loss. Reduced biodiversity, in particular, leads to reduced ecosystem
services and eventually poses an immediate danger to food security, and to
humankind generally.

LOSS RATE
The current rate of global diversity loss is estimated to be 1,000 times
higher than the (naturally occurring) background extinction rate, and it is
expected to grow further in the coming years. Locally bounded loss rates can
be measured using species richness and its variation over time. Raw counts
may not be as ecologically relevant as relative or absolute abundances.
Taking relative frequencies into account, a considerable number of
biodiversity indexes have been developed. Besides richness, evenness and
heterogeneity are considered
to be the main dimensions along which diversity can be measured. As with all
diversity measures, it is essential to accurately classify the spatial and temporal
scope of the observation. “Definitions tend to become less precise as the
complexity of the subject increases and the associated spatial and temporal
scales widen.” Biodiversity itself is not a single concept but can be split
up into various scales (e.g. ecosystem diversity vs. habitat diversity, or
even biodiversity vs. habitat diversity) or different subcategories (e.g.
phylogenetic diversity, species diversity, genetic diversity, nucleotide
diversity). The
question of net loss in confined regions is often a matter of debate but longer
observation times are generally thought to be beneficial to loss estimates. To
compare rates between different geographic regions, latitudinal gradients in
species diversity should also be considered.
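
As a concrete sketch of such abundance-weighted indexes (the Shannon index
and Pielou's evenness are standard measures, not ones defined in this text,
and the community counts below are made up for illustration), richness,
diversity, and evenness can each be computed from a vector of species counts:

```python
import math

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over relative abundances."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def pielou_evenness(counts):
    """Evenness J' = H' / ln(richness); 1.0 means perfectly even."""
    richness = sum(1 for c in counts if c > 0)
    return shannon_diversity(counts) / math.log(richness)

# Two hypothetical communities with identical richness (4 species each):
even_community = [25, 25, 25, 25]
skewed_community = [97, 1, 1, 1]
print(round(shannon_diversity(even_community), 3))   # 1.386 (= ln 4)
print(round(shannon_diversity(skewed_community), 3)) # 0.168, far lower
print(round(pielou_evenness(skewed_community), 3))   # 0.121: strong dominance
```

Equal richness, very different diversity: this is why raw species counts can
mislead when communities differ in evenness.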

FACTORS
Major factors for biotic stress and the ensuing accelerating loss rate are,
amongst other threats:
1. Habitat loss and degradation
Land use intensification (and the ensuing land loss/habitat loss) has
been identified as a significant factor in the loss of ecological
services, due both to direct effects and to biodiversity loss.
2. Climate change through heat stress and drought stress
3. Excessive nutrient load and other forms of pollution
4. Over-exploitation and unsustainable use (e.g. unsustainable fishing
methods); we are currently using 25% more natural resources than the
planet can sustain
5. Invasive alien species that effectively compete for a niche, replacing
indigenous species.

INSECT LOSS
In 2017, various publications described a dramatic reduction in absolute
insect biomass and in the number of species in Germany and North America over
a period of 27 years. As possible reasons for the decline, the authors
highlight neonicotinoids and other agrochemicals. Writing in the journal PLOS
ONE, Hallmann, Sorg, et al. (2017) conclude that “the widespread insect
biomass decline is alarming.”

CAUSES OF RECENT DECLINES IN BIODIVERSITY
The major causes of biodiversity decline are land use changes, pollution,
changes in atmospheric CO2 concentrations, changes in the nitrogen cycle
and acid rain, climate alterations, and the introduction of exotic species, all
coincident with human population growth. For rainforests, the primary factor
is land conversion. Climate will probably change least in tropical regions,
and nitrogen problems are not as important because growth in rainforests is
usually limited more by low phosphorus levels than by nitrogen insufficiency.
The introduction of exotic species is also less of a problem than in temperate
areas because there is so much diversity in tropical forests that newcomers
have difficulty becoming established.

HUMAN POPULATION GROWTH
The geometric rise in human population levels during the twentieth century
is the fundamental cause of the loss of biodiversity. It exacerbates every
other factor having an impact on rainforests (not to mention other ecosystems).
It has led to an unceasing search for more arable land for food production
and livestock grazing, and for wood for fuel, construction, and energy.
Previously undisturbed areas (which may or may not be suitable for the
purposes to which they are put) are being transformed into agricultural
or pasture land, stripped of wood, or mined for resources to support the energy
needs of an ever-growing human population. Humans also tend to settle in
areas of high biodiversity, which often have relatively rich soils and other
attractions for human activities. This leads to great threats to biodiversity,
especially since many of these areas have numerous endemic species.
Balmford et al. (2001) have demonstrated that human population size in a
given tropical area correlates with the number of endangered species, and
that this pattern holds for every taxonomic group. Most of the other effects
mentioned below are either consequent to the human population expansion
or related to it.
The human population was approximately 600 million in 1700, and
one billion in 1800. It now exceeds six billion, and low estimates are
that it may reach 10 billion by the mid-21st century and 12 billion by 2100.
The question is whether many ecological aspects of biological systems can
be sustained under the pressure of such numbers. Can birds continue to
migrate, can larger organisms have space (habitat) to forage, can ecosystems
survive in anything like their present form, or are they doomed to
impoverishment and degradation?

HABITAT DESTRUCTION
Habitat destruction is the single most important cause of the loss of
rainforest biodiversity and is directly related to human population growth.
As rainforest land is converted to ranches, agricultural land (and then,
frequently, to degraded woodlands, scrubland, or desert), urban areas and
other human usages, habitat is lost for forest organisms. Many species are
widely distributed and thus, initially, habitat destruction may only reduce
local population numbers. Species which are local, endemic, or which have
specialized habitats are much more vulnerable to extinction, since once their
particular habitat is degraded or converted for human activity, they will
disappear. Most of the habitats being destroyed are those which contain the
highest levels of biodiversity, such as lowland tropical wet forests. In this
case, habitat loss is caused by clearing, selective logging, and burning.

POLLUTION
Industrial, agricultural and waste-based pollutants can have catastrophic
effects on many species. Those species which are more tolerant of pollution
will survive; those requiring pristine environments (water, air, food) will
not. Thus, pollution can act as a selective agent. Pollution of water in lakes
and rivers has degraded waters so that many freshwater ecosystems are dying.
Since almost 12% of animal species live in these ecosystems, and most
others depend on them to some degree, this is a very serious matter. In
developing countries approximately 90% of wastewater is discharged,
untreated, directly into waterways.

AGRICULTURE
The dramatic increase in the number of humans during the twentieth
century has instigated a concomitant growth in agriculture, and has led to
conversion of wildlands to croplands, massive diversions of water from lakes,
rivers and underground aquifers, and, at the same time, has polluted water
and land resources with pesticides, fertilizers, and animal wastes. The result
has been the destruction, disturbance or disabling of terrestrial ecosystems,
and polluted, oxygen-depleted and atrophied water resources. Formerly,
agriculture in different regions of the world was relatively independent and
local. Now, however, much of it has become part of the global exchange
economy and has caused significant changes in social organization.
Earlier agricultural systems were integrated with and co-evolved with
technologies, beliefs, myths and traditions as part of an integrated social
system. Generally, people planted a variety of crops in different areas, in the
hope of obtaining a reasonably stable food supply. These systems could only
be maintained at low population levels, and were relatively non-destructive
(but not always). More recently, agriculture has in many places lost its local
character, and has become incorporated into the global economy. This has
led to increased pressure on agricultural land for exchange commodities and
export goods. More land is being diverted from local food production to
“cash crops” for export and exchange; fewer types of crops are raised, and
each crop is raised in much greater quantities than before. Thus, ever more
land is converted from forest (and other natural systems) for agriculture for
export, rather than using land for subsistence crops.
The introduction of monocropping and the use of relatively few plants for
food and other uses – at the expense of the wide variety of plants and animals
utilized by earlier peoples and indigenous peoples – is responsible for a loss
of diversity and genetic variability. The native plants and animals adapted to
the local conditions are now being replaced with “foreign” (or “exotic”)
species which require special inputs of food and nutrients and large
quantities of water. Such exotic species frequently drive out native
species. There is pressure to conform in crop selection and agricultural
techniques, all driven by global markets and technologies.

GLOBAL WARMING
There is recent evidence that climate changes are having effects on tropical
forest ecology. Warming in general (as distinct from the effects of increasing
concentrations of CO2 and other greenhouse gases) can increase primary
productivity, yielding new plant biomass, increased organic litter, and
increased food supplies for animals and soil flora (decomposers). Temperature
changes can also alter the water cycle and the availability of nitrogen and
other nutrients. Basically, the temperature variations which are now occurring
affect all parts of forest ecosystems, some more than others. These interactions
are unimaginably complex. While warming may at first increase net primary
productivity (NPP), in the longer run, because plant biomass is increasing,
more nitrogen is taken up from the soil and sequestered in the plant bodies.
This leaves less nitrogen for the growth of additional plants, so the increase
in NPP over time (due to a rise in temperature or CO2 levels) will be limited
by nitrogen availability. The same is probably true of other mineral nutrients.
The consequences of warming-induced shifts in the distribution of nutrients
will not be seen rapidly, but perhaps only over many years. These events
may effect changes in species distribution and other ecosystem processes in
complex ways. We know little about the reactions of tropical forests, but
they may differ from those of temperate forests.
In tropical forests, warming may be more important because of its effects
on evapotranspiration and soil moisture levels than because of nutrient
redistribution or NPP (which is already very high because tropical
temperatures are close to the optimum range for photosynthesis and there is
so much available light energy). And warming will obviously act in concert
with other global or local changes – increases in atmospheric CO2 (which
may modify plant chemistry and the water balance of the forest) and land
clearing (which changes rainfall and local temperatures), for examples.
Root et al. (2003) have determined that more than 80% of plant and animal
species on which they gathered data had undergone temperature-related shifts
in physiology. Highland forests in Costa Rica have suffered losses of
amphibian and reptile populations which appear to be due to increased
warming of montane forests. The golden toad Bufo periglenes of Costa Rica
has become extinct, at least partly because of the decrease in mist frequency
in its cloud forest habitat. The changes in mists appear to be a consequence
of warming trends. Other suspected causes are alterations in juvenile growth
or maturation rates or sex ratios due to temperature shifts. Parmesan and
Yohe (2003), in a statistical analysis, determined that climate change had
biological effects on the 279 species which they examined.
The migratory patterns of some birds which live in both tropical and
temperate regions during the year seem to be shifting, which is dangerous
for these species, as they may arrive at their breeding or wintering grounds at
an inappropriate time. Or they may lose their essential interactions with plants
which they pollinate or their insect or plant food supplies.
Perhaps for these reasons, many migratory species are in decline, and
their inability to coordinate migratory cues with climatic actualities may be
partly to blame. The great tit, which still breeds at the same time as previously,
now misses much of its food supply because its plant food develops at an
earlier time of year, before the birds have arrived from their wintering grounds.
Also, as temperatures rise, some bird populations have shifted, with lowland
and foothill species moving into higher areas. The consequences for highland
bird populations are not yet clear. And many other organisms, both plant and
animal, are being affected by warming.
An increase in infectious diseases is another consequence of climate
change, since the causative agents are affected by humidity, temperature
change, and rainfall. Many species of frogs and lizards have declined or
disappeared, perhaps because of the increase in parasites occasioned by higher
temperatures. As warming continues, accelerating plant growth, pathogens
may spread more quickly because of the increased availability of vegetation
(a “density” effect) and because of increased humidity under heavier plant
cover.
As mentioned above, the fungus Phytophthora cinnamomi has demolished
many Eucalyptus forests in Australia. In addition, the geographical range of
pathogens can expand when the climate moderates, allowing pathogens to
find new, non-resistant hosts. On the other hand, a number of instances of
amphibian decline seem to be due to infections with chytrid fungi, which
flourish at cooler temperatures. An excellent review of this complex issue
may be found in Harvell et al. (2002).
There may be a link between augmented carbon dioxide levels and a marked
increase in the density of lianas in Amazonian forests. This relationship is
suggested by the fact that growth rates of lianas are highly sensitive to CO2
levels. As lianas become more dense, tree mortality rises, but mortality is not
equal among species because lianas preferentially grow on certain species.
Because of this, biodiversity may be reduced by increased mortality in some
species but not in others.
