Trends and challenges in risk assessment of environmental contaminants

Pages 195-218 | Received 01 Apr 2011, Accepted 14 Jun 2011, Published online: 06 Sep 2011

Abstract

Since its inception in the 1960s, risk assessment of environmental contaminants has evolved and expanded enormously. The present article reviews its origins, gives an overview of the current state of the art, and provides a future perspective by identifying critical knowledge gaps. Many of the risk assessment principles that were introduced between the 1960s and 1980s are still being applied today, and many contamination problems have been solved or reduced. However, new issues have arisen which pose a challenge to the field of risk assessment, e.g. engineered nanoparticles and cumulative exposures. New risk assessment techniques and approaches are required to address these issues, e.g. appropriate dose metrics for exposure to nanoparticles, tools to predict interaction effects in mixtures, and individual-based models that can predict cumulative exposures.

Introduction

Nowadays, risk assessment of environmental contaminants is a well-established discipline. Built upon disciplines such as chemistry, toxicology, epidemiology, and ecology, this discipline supports the assessment of human and ecological risks due to the emission of and exposure to environmental contaminants. Universities all over the world offer a wide range of BSc and MSc programmes to train young scientists in this relatively new discipline. They are trained to understand the processes underlying exposure and toxicity, the sophisticated computer models to predict fate, exposure, and risk, and the hefty guidance documents nowadays produced by regulatory agencies. Once graduated, these students find jobs in academia, consultancy, or (inter)national research institutes. The number of people working in this area is difficult to assess, but current membership numbers of organizations such as the Society for Environmental Toxicology and Chemistry (SETAC; 5500 members) and the Society of Toxicology (SoT; 6500 members) indicate that it must be substantial.

It has not always been like that. At the beginning of the twentieth century, almost nobody worried about the quality of the environment and its impact on human health. These were the times of unlimited industrial growth and increasing “chemicalisation” of society, especially just after the Second World War. Large-scale pollution problems such as the Minamata disease in Japan, first discovered in 1956, were treated as unfortunate incidents. The big turn came with the publication of Silent Spring by Carson (1962). In retrospect, Carson – who was trained as an ecologist – can be considered the first “science-based environmental activist”. She used her knowledge of the natural sciences to criticize developments in society that threatened human health and the environment.

We have come a long way since Rachel Carson’s Silent Spring (1962). Risk assessment has become a more structured process over the past 40–50 years, and is increasingly used to manage environmental contaminants. But are we there yet? Which problems did we solve? And which problems persist? These questions are addressed in this review. It describes the history of risk assessment of environmental contaminants, gives an overview of the current state of the art, and provides a future perspective. The focus is on the contribution of the natural and medical sciences to risk assessment of environmental contaminants and on the derivation of safe dose levels.

History

The safe dose concept

Risk assessment of contaminants was conceived when people started realizing that some substances can cause adverse health effects. Although this discovery is difficult to date, it is well known that the Romans were already aware that lead could cause serious health problems. However, they associated limited exposure with limited risk. What they did not realize was that everyday exposure to low dose levels can result in chronic lead poisoning (Lewis 1985). An important scientific benchmark in risk assessment of contaminants was the introduction of the dose concept by the Swiss physician Paracelsus, who lived in the fifteenth and sixteenth centuries: “All things are poison and nothing is without poison; only the dose permits something not to be poisonous” [translated from German]. This triggered the question on which the discipline of chemical risk assessment is based: “What is a safe dose level?”

The safe dose concept nowadays used in risk assessment of contaminants finds its origin in the field of occupational exposure. Workers are the first to develop adverse health effects because of their relatively high and chronic exposure levels. Examples include the development of silicosis in miners, scrotal cancers in chimney sweeps (Hamilton 1943), and “aniline cancer” in the dye industry (Kahl et al. 2000). Early efforts to limit exposures were largely ad hoc and voluntary (McClellan 1999). These efforts became more structured with the formation of organizations such as the American Conference of Governmental Industrial Hygienists (ACGIH) in 1938 and the British Occupational Hygiene Society (BOHS) in 1953. In 1941, the ACGIH established a Threshold Limit Values for Chemical Substances (TLV-CS) Committee. This group was charged with investigating, recommending, and annually reviewing exposure limits for chemical substances. In 1946, the organization adopted its first list of 148 exposure limits, then referred to as Maximum Allowable Concentrations. The term “threshold limit value” (TLV) was introduced in 1956; a threshold concentration “to which it is believed that nearly all workers may be repeatedly exposed, day after day, without adverse effect” (ACGIH 1998). The first documentation of the TLVs was published in 1962. Today’s list of TLVs includes 642 chemical substances and physical agents, as well as 38 biological exposure indices (BEIs) for selected chemicals (ACGIH 2011). The idea of setting safe exposure levels for chemicals was soon followed by other organizations with a regulatory mandate, e.g. the recommended exposure limits (RELs) of the US National Institute for Occupational Safety and Health (NIOSH), the permissible exposure limits (PELs) of the Occupational Safety and Health Administration (OSHA) in the US, and the Acceptable Daily Intakes (ADIs), first introduced by the Council of Europe in 1962 and later adopted by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) (Lu and Kacew 2002). Over time, the safe dose concept has expanded further, and nowadays it covers a wide range of exposure media (air, water, soil, sediment, food, etc.) and protection endpoints (workers, general public, children, ecosystems, agricultural production, etc.).

Derivation of safe dose levels

Safe dose levels were originally based on a threshold concept: there is a dose level below which adverse health effects do not occur (Figure 1). In experimental toxicology, this dose level is referred to as the No Observed Effect Level (NOEL). The threshold concept is well established in risk assessment because of its long history and a high degree of consensus (and confidence) among both scientists and the general public (Van der Heijden 2006). To this day, this concept forms the basis for the derivation of safe dose levels for many types of contaminants.

Figure 1. Threshold contaminants (dashed line) are assumed to have a threshold dose below which no adverse response occurs. It is assumed that non-threshold contaminants (solid line) have no threshold and that any exposure increases the probability of an adverse response.

In the 1940s, scientists discovered that radiation and genotoxic carcinogens can cause damage by a biological mechanism that is totally different from those producing other forms of toxicity. These scientists put forth what is referred to as the “non-threshold” hypothesis (Armitage and Doll 1954; IRLG 1979; Arora and Gardner 1994; Figure 1). It was hypothesized that any exposure to radiation or a genotoxic carcinogen that reaches its critical biological target, especially the genetic material, and interacts with it, can increase the probability (the risk) of developing cancer. There is no safe dose level. This resulted in a regulatory distinction between contaminants with and without a threshold.

Threshold contaminants

When deriving safe exposure levels, scientists had to deal with the fact that limited toxicity data were available, e.g. only for healthy adults or for laboratory animals. These data had to be extrapolated to safe exposure levels accounting for differences in sensitivity between laboratory animals and humans, and for variability in sensitivity within the human population. This introduced much uncertainty. For threshold contaminants, Lehman and Fitzhugh (1954) addressed this issue by introducing a default safety factor of 100 that was applied to the empirically derived NOEL. They reasoned that the safety factor allowed for several sources of uncertainty (Vermeire et al. 1999):

Interspecies variability (allowing extrapolation from the experimental animal to man);

Human variability (taking into account sensitive individuals of the population);

Possible synergistic effects (with eventual toxic outcome) of the many food additives and contaminants which man is exposed to.

In Europe, this scheme was adopted by the JECFA and by the Joint FAO/WHO Meeting on Pesticide Residues (JMPR) in 1961, and the safe level was defined as the ADI. The ADI has been defined as “the daily intake of a chemical which, during the entire lifetime, appears to be without appreciable risk on the basis of all known facts at the time” (Truhaut 1991). Alternative nomenclature around the world includes the minimal risk level (MRL), the tolerable daily intake/concentration (TDI/TC), the estimated-concentration-of-no-concern (ECNC), and the reference dose/concentration (RfD/RfC), depending on the regulatory agency (Table 1).

Table 1. Nomenclature for several human reference levels of exposure.

The default 100-fold assessment factor was soon rationalized as a product of two 10-fold assessment factors. The first 10-fold factor was considered to allow for interspecies differences and the second for human variability (WHO 1987; Dourson et al. 1996; Renwick and Lazarus 1998). However, advances in biochemistry and toxicology led scientists and regulators to realize that these two default factors would not cover the differences and the complexity of the wide range of metabolic fates and mechanisms of toxicity in laboratory species and in humans. Analyses were performed to investigate the validity of the assessment factors and to further refine them (Dourson and Stara 1983; Calabrese 1985; Hattis et al. 1987; Renwick 1991, 1993; Abdel-Rahman and Kadry 1995; Naumann and Weideman 1995; Renwick and Lazarus 1998; Burin and Saunders 1999). Notwithstanding these advancements, the original concept of an experimentally derived benchmark (e.g. NOEL) that is divided by a safety factor still holds today.
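The arithmetic of this approach is straightforward. The sketch below illustrates it with a hypothetical NOAEL and the default 10 × 10 factors; the numbers and the optional extra factor are illustrative assumptions, not values taken from any particular assessment.

```python
# Minimal sketch of deriving a reference value (ADI/RfD-style) from a NOAEL.
# All numbers are hypothetical.
def reference_dose(noael_mg_per_kg_day, interspecies=10.0, intraspecies=10.0,
                   extra=1.0):
    """Divide an experimental NOAEL by the product of the assessment
    (safety) factors: one for animal-to-human extrapolation, one for
    human variability, and an optional extra factor (e.g. for data gaps)."""
    return noael_mg_per_kg_day / (interspecies * intraspecies * extra)

# A hypothetical chronic NOAEL of 5 mg/kg bw/day with the default 100-fold
# factor gives a reference value of 0.05 mg/kg bw/day.
print(reference_dose(5.0))  # 0.05
```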

Non-threshold contaminants

The non-threshold contaminants caused a problem for regulatory agencies. How should they establish a safe dose level if there was no safe dose level? Initially, this problem was solved by defining zero exposure targets. It led to the famous Delaney Amendment in the US, a feature of the federal food law that forbids the deliberate introduction of carcinogens into food (Rodricks 2007). Other examples include the so-called “black lists” of substances for which the emission should be reduced to zero (Copius Peereboom and Reijnders 1986) and source-oriented measures such as the prescription of “best available technologies”. In practice, a zero concentration level was defined as a level that could not be detected, which – according to Rodricks (2007) – resulted in the odd rule that says: “If we can detect it, it is dangerous; but if we cannot detect it, it is not harmful.” But as analytical detection limits fell, regulations became stricter and more difficult to maintain. The regulators were saved by the introduction of the concept of a “virtually safe dose” by Mantel and Bryan (1961). They proposed to extrapolate the dose–response curve well below the empirical range to derive a dose that corresponds to an excess lifetime risk that was labeled “virtually safe.” Regulators of the US Food and Drug Administration (FDA) set the acceptable lifetime risk level at 1 in 1 million, and the probit model originally proposed by Mantel and Bryan (1961) was replaced by a linear no-threshold model, but the essence of low-dose extrapolation still holds today.
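The low-dose extrapolation that replaced the original probit approach can be sketched as a simple linear calculation; the point of departure and risk values below are hypothetical and serve only to show the mechanics.

```python
# Linear no-threshold sketch: below the point of departure (POD), excess risk
# is assumed proportional to dose. All numbers are hypothetical.
def slope_factor(pod_dose, pod_risk):
    """Slope of the straight line through the origin and the POD."""
    return pod_risk / pod_dose

def virtually_safe_dose(pod_dose, pod_risk, target_risk=1e-6):
    """Dose corresponding to the target excess lifetime risk."""
    return target_risk / slope_factor(pod_dose, pod_risk)

# A hypothetical POD of 1 mg/kg bw/day at 10% extra cancer risk and a
# 1-in-a-million target risk yield a virtually safe dose of 1e-5 mg/kg bw/day.
print(virtually_safe_dose(1.0, 0.10))  # 1e-05
```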

Environmental risk assessment

In the 1950s and 1960s, risk assessment of contaminants focused on issues such as food safety and occupational exposure. Although the potential adverse impact of environmental contaminants on human health was already recognized in the nineteenth century (Brimblecombe 1987; Markham 1994), most legislation – including provisions for human health risk assessment – was introduced only after large-scale pollution incidents in the 1970s and 1980s. Application of risk assessment principles to environmental contaminants implied that the existing methods had to be extended with tools to assess emissions, fate, and exposure. The application of computer models became more and more important because the number of contaminants requiring regulation increased rapidly, and measurement of emissions, environmental concentrations, and exposures was costly.

At approximately the same time, the gloomy prospect of Rachel Carson’s Silent Spring (1962) created the awareness that not only humans, but also other biological life forms were at risk from environmental contaminants. Ecosystem protection became an integral part of environmental policy with laws such as the US Federal Insecticide, Fungicide, and Rodenticide Act of 1972, the US Toxic Substances Control Act of 1976, and the US Clean Water Act of 1977. European states introduced similar legislation, such as the Pesticides Act (1962), the Nature Protection Act (1967), the Surface Water Pollution Act (1970), and the Environmental Substances Act (1985) in the Netherlands. A widely used tool in these laws was the prescription of environmental quality standards: environmental concentration levels at which no adverse impacts on the aquatic or terrestrial ecosystem were expected.

Scientific approaches to derive safe concentration levels for ecosystem protection can be traced back to Hart et al. (1945), who tentatively proposed a multiplication factor of 0.3 to derive a presumed harmless concentration of any substance from acute toxicity data. Multiplication factors soon became standardized at 0.1, 0.05, and 0.01, depending on chemical properties, available toxicity data, and the predicted endpoint (NTAC 1968; Warner 1976). These multiplication factors were derived mainly on the basis of convenience, and little or no data were available to support them. Later studies have shown that these early assessment factors are inappropriate for predicting chronic toxicity from acute toxicity information because of variation among chemicals and species (Giesy and Graney 1989).

Over time, multiplication factors in ecotoxicology were replaced by their inverse values, generally referred to as uncertainty, safety, or assessment factors. The toxicity value of the most sensitive species, typically a no observed effect concentration (NOEC) or LC50 value, is divided by this assessment factor to obtain an exposure level that is considered safe for the entire ecosystem. Examples include the 1000- to 1-fold assessment factors used by the US Office of Pollution Prevention and Toxics (OPPT) to set “concern levels” for new chemicals (Zeeman and Gilford 1993; Zeeman 1995), and those used in association with existing and new substances legislation within the European Union (EC 2003; Table 2). These assessment factors are typically derived based on professional experience, and are designed to ensure conservative outcomes in tiered risk assessment procedures (Nabholz et al. 1989; OECD 1989).

Table 2. Assessment factors used in the European Union to derive an aquatic predicted no effect concentration (PNEC) from a limited set of toxicity data (EC 2003).
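As a minimal numerical sketch of this division step (not a reproduction of the EC scheme summarized in Table 2), assume that only acute data for the three base-set trophic levels are available and that a correspondingly large assessment factor applies; the toxicity values below are hypothetical.

```python
# Hypothetical acute toxicity data (mg/L) for the three base-set trophic levels.
lc50_values = {"fish": 2.5, "daphnia": 0.8, "algae": 1.6}

def pnec_from_assessment_factor(toxicity_values, assessment_factor=1000.0):
    """Divide the lowest available toxicity value by the assessment factor
    to obtain a predicted no effect concentration (PNEC)."""
    return min(toxicity_values.values()) / assessment_factor

print(pnec_from_assessment_factor(lc50_values))  # 0.0008 mg/L
```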

The scientific basis of assessment factors improved considerably when statistical approaches were introduced to analyze toxicity data from different studies, resulting in mathematical relationships between effect levels, endpoints, and chemicals. Early examples are the derivation of acute-to-chronic ratios by Kenaga (1982) and intraspecies relationships by Dourson and Stara (1983). The statistical approaches also created alternatives to the application of fixed assessment factors. An influential statistical approach in the extrapolation of ecotoxicity data was introduced by Stephan et al. (1985) and Kooijman (1987). They assumed that interspecies differences in toxic endpoints can be described by a statistical distribution, now generally referred to as a species sensitivity distribution (SSD) (Posthuma et al. 2002). These distributions are estimated based on the outcomes of single-species toxicity tests and are used to estimate safe exposure levels for ecosystems. Although the SSD approach has been criticized for its limited ecological relevance (Forbes and Forbes 1993; Forbes and Calow 2002a, 2002b; Calow and Forbes 2003), it is widely applied by regulatory agencies to derive exposure levels that are considered safe for ecosystems (Traas 2001; EC 2003).
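A common practical implementation of the SSD approach is to fit a log-normal distribution to single-species toxicity data and read off the hazardous concentration for 5% of species (HC5). The sketch below assumes a log-normal SSD and uses hypothetical NOECs; it omits the confidence bounds that regulatory applications normally require.

```python
import numpy as np
from scipy import stats

# Hypothetical chronic NOECs (mg/L) for eight species.
noecs = np.array([0.3, 1.2, 0.8, 4.5, 2.1, 0.05, 6.0, 0.9])

# Fit a log-normal species sensitivity distribution (SSD) by estimating the
# mean and standard deviation of the log10-transformed NOECs.
log_noecs = np.log10(noecs)
mu, sigma = log_noecs.mean(), log_noecs.std(ddof=1)

# The HC5 is the 5th percentile of the fitted distribution, i.e. the
# concentration expected to be below the NOEC of 95% of species.
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
print(f"HC5 = {hc5:.3f} mg/L")
```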

Risk assessment paradigms

During the 1960s and 1970s, each regulatory body – and sometimes even different departments of the same regulatory body – had different approaches to assess risks. It was felt that a more consistent and harmonized approach would improve risk assessment quality and efficiency. In response, a structured risk assessment model was developed in the US in the 1970s and early 1980s (e.g. Cairns et al. 1979; IRLG 1979). It was inspired by the lessons learnt in the field of radiation protection (McClellan 1999). The new risk assessment model was formalized in 1983 by the US National Research Council and is known as the “Red Book” (officially Risk Assessment in the Federal Government: Managing the Process; NRC 1983). An important feature of the Red Book was the strict distinction between risk assessment (belonging to the scientific domain) and risk management (belonging to the policy domain). The risk assessment process was organized in four steps: hazard identification, dose–response assessment (also known as effect assessment), exposure assessment, and risk characterization (Figure 2). Today, these four steps form the basis for many books, courses, and educational programmes dealing with risk assessment.

Figure 2. The risk assessment process consists of hazard identification, dose–response assessment, exposure assessment and risk characterization.

Although the strict distinction between risk assessment and risk management was initially well received, it was also criticized. Social science research showed that the perception of risk is not only determined by risk assessment facts, but also by psychological, social, cultural, and political factors (Slovic 2003). In response, it was proposed to include stakeholders, decision-makers, and experts in the risk assessment process (NRC 1996). In a more recent publication, the NRC (2009) stresses the importance of stakeholder participation in the formative stages of the risk assessment process, i.e. the definition of an appropriate problem formulation.

State of the art

Risk assessment of environmental contaminants has expanded enormously over the last few decades. The number of environmental contaminants has increased dramatically, our knowledge of the underlying biological processes has grown, and the application domain of risk assessment methods has extended to include areas such as environmental life cycle impact assessment. This section gives a brief overview of the state of the art in risk assessment of environmental contaminants.

Human risk assessment

The principles of deriving a safe dose for humans are still more or less the same as 50 years ago. The distinction between threshold and non-threshold contaminants still holds, although it is increasingly being challenged. Opponents argue that information about the mode-of-action of a contaminant should be the basis for assessing its risk, and not the endpoint, e.g. cancer (Bogdanffy et al. 2001). It has furthermore been argued that dose-thresholds in a strict quantitative sense cannot exist and can only be translated into quantitative terms by defining thresholds in the effect-size of continuous end-points: “What size of an effect can the organism cope with and when is the effect adverse?” (Slob 1999). There are at least two non-carcinogenic pollutants for which a toxicity threshold cannot be established, i.e. lead and PM10 (Schwartz et al. 2001; Brown and Rhoads 2008). However, it is unlikely that the regulatory distinction between threshold and non-threshold contaminants will disappear, because of the legacy of NOEL toxicity tests and because convincing biological evidence for the existence of a threshold for genotoxic carcinogens is lacking.

When comparing risk assessment documents of the early days with more recent documents, one thing that catches the eye is the level of biological detail. The size of the documents has increased accordingly [e.g. 438 pages for the Agency for Toxic Substances and Disease Registry’s (ATSDR 2007) latest Toxicological Profile for Benzene]. More and more data have become available about the mechanisms involved in the fate of toxic substances in organisms (humans as well as laboratory animals), e.g. about absorption, transport, excretion, and, especially, metabolism. These data are used to evaluate issues such as the usefulness of data on laboratory animals for humans (interspecies extrapolation) and the identification of sensitive subgroups in the human population (intraspecies extrapolation). The mechanistic data are mainly used in a qualitative way; the quantitative procedure is still very similar to that introduced in the 1950s and 1960s.

One area of significant change is the replacement of the NOEL (or NOEC) by the benchmark dose as a starting point for deriving safe concentration levels. Although the term NOEL suggests that no effects occur, several studies have shown that the true effect level at the experimental NOEL may be considerable (Leisenring and Ryan 1992). Experimental NOEL values depend on the experimental design of the study, i.e. the number of animals per dose group and the spacing between the dose groups. This undesirable property of the NOEL has led researchers to develop more robust toxicity measures, e.g. the benchmark dose (Crump 1984). The benchmark dose concept uses all available toxicity data to estimate a dose–response relationship and, subsequently, an exposure level is determined that corresponds to a predefined effect or response level (e.g. ED10 or ED5). It is widely felt that the benchmark dose is superior to the NOEL because it is statistically more robust and uses all dose–response data. However, consensus is needed on how to determine an acceptable effect level for a benchmark dose.
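To make the procedure concrete, the sketch below fits a simple log-logistic dose–response model to hypothetical quantal data and solves for the dose producing 10% extra risk over background (a BMD10). The data, the model choice, and the 10% benchmark response are illustrative assumptions; regulatory benchmark dose software additionally reports a lower confidence bound (BMDL), which is omitted here.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

# Hypothetical quantal toxicity data: dose (mg/kg bw/day) and the observed
# fraction of animals responding in each dose group.
dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0])
response = np.array([0.02, 0.05, 0.15, 0.40, 0.85])

def log_logistic(d, background, ed50, slope):
    """Log-logistic dose-response model with a background response."""
    d = np.maximum(d, 1e-9)  # avoid division by zero at dose 0
    return background + (1.0 - background) / (1.0 + (ed50 / d) ** slope)

params, _ = curve_fit(log_logistic, dose, response, p0=[0.02, 10.0, 1.0])

def extra_risk(d):
    """Extra risk over background: (P(d) - P(0)) / (1 - P(0))."""
    p0 = log_logistic(0.0, *params)
    return (log_logistic(d, *params) - p0) / (1.0 - p0)

# The BMD10 is the dose producing 10% extra risk over background.
bmd10 = brentq(lambda d: extra_risk(d) - 0.10, 1e-6, dose.max())
print(f"BMD10 = {bmd10:.2f} mg/kg bw/day")
```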

Another major change in human health risk assessment is the increased focus on sensitive subgroups in the human population. Although it has always been recognized that groups such as children, the elderly, and pregnant women may be more sensitive to contaminants due to increased susceptibility or exposure, extensive data were generally lacking. One of the oldest cases is lead, for which separate standards were defined for adults and children (JECFA 1986). Initially, children were considered “young adults,” but this perception has changed over the last 20 years (Hines et al. 2010). The biological system of young children, its detoxification processes, and exposures, among other factors, are not those of adults, and, therefore, children may have greater or lesser sensitivity to exogenous agents. More generally, studies on interindividual variability in the metabolism of pharmaceuticals and xenobiotics have shown that variability in neonates, the elderly, ethnic (sub)populations, and people with genetic polymorphisms can be larger than the default factor of 10 routinely used in risk assessment (Dorne 2010).

Ecological risk assessment

The methods used in ecological risk assessment are of a more recent date than those used in human risk assessment. It is therefore not surprising that current methods are more or less the same as those outlined in the history section. Depending on data availability, safe exposure levels are derived using safety factors, SSDs, results from mesocosm studies, or a combination of these techniques. One area where progress has been made is the bioavailability of metals. Development of the Biotic Ligand Model (BLM) resulted in a predictive tool that can account for variations in metal toxicity using information on the chemistry of local water sources (Paquin et al. 2002). Another improvement was the inclusion of secondary poisoning as an endpoint in ecological risk assessment (Traas et al. 1996).

Integration of human and ecological risk assessment

Risk assessment of contaminants is often performed separately for humans and ecosystems. Although both fields have partly similar backgrounds, they have evolved rather independently over the last few decades. However, there is a growing awareness that integration of human and ecological risk assessment (HRA and ERA) may improve assessment quality and efficiency (WHO 2001; Suter et al. 2003). Integration of HRA and ERA provides several advantages, e.g. coherent expression of assessment results, inclusion of interdependencies between human health and ecosystems, and improvement of scientific quality and efficiency.

Many of the differences between HRA and ERA originate from a difference in protection goals, i.e. HRA aims to protect individual humans whereas ERA aims to protect species. The diversity in toxicokinetic and toxicodynamic processes is much larger in ERA, which limits the use of similar data and approaches. However, there are also commonalities. For each of the steps of the risk assessment process (Figure 2), integration options can be identified. Examples include:

The development of an integrated analysis plan for HRA and ERA, which defines the sources, stressors, endpoints, scale levels, methods, and conceptual models included in the assessment.

The integrated characterization of sources, emissions, fate, and exposure for human and ecological endpoints. Some pathways are unique to humans (e.g., contamination of food packaging) or ecological receptors (e.g., drinking from waste sumps), but there is much commonality. Exposure assessment may benefit from an exchange of knowledge between modelers of human and ecological (i.e. wildlife) exposure (Loos et al. 2010a).

The characterization of (common) modes of action of stressors in organisms can improve dose–response modeling, the use of biomarkers and indicators, and the extrapolation between observed and estimated endpoints. Both fields may benefit from an improved mechanistic understanding of toxicity processes that is triggered by the development and application of the novel “omics” techniques.

The use of similar procedures and methods to determine causation, combine lines of evidence, and present the results (characterization of risks). Further improvement is possible if HRA and ERA use consistent approaches to deal with uncertainty.

The most challenging integration option is the development of unified endpoints for HRA and ERA (Suter 2004).

Although integration of HRA and ERA has major benefits, there are also caveats and limitations. Munns et al. (2003) identified two potential issues, i.e. (1) the increased complexity of the assessment and (2) a potential associated increase in costs. The challenge will be to identify appropriate degrees and types of integration that capitalize on the benefits, and to identify areas where differentiated analysis and action are justified (Assmuth and Hildén 2008).

Uncertainty

Risk assessment is a knowledge-based process, i.e. scientific knowledge about the use, emission, fate, exposure, and/or effects of the environmental stressors is gathered and reviewed to produce a risk estimate. The outcome of the assessment is a qualitative or quantitative description of the risk, which is typically uncertain due to our limited knowledge about the physical, chemical, and biological processes underlying the risk. The term uncertainty is generally used as an overarching term to indicate that the results of a risk assessment are not necessarily true or applicable to a particular situation. Several authors have pointed out that there are many different types of uncertainty (Funtowicz and Ravetz 1990; Morgan and Henrion 1990; Renn 1998; Ragas et al. 2009a).

Uncertainty has always been recognized as an intrinsic property of risk assessment, e.g. through the use of safety factors. However, there has been an increasing number of attempts over the last few decades to address uncertainty more explicitly, e.g. by quantification. Traditional deterministic risk assessments suffer from the disadvantage that the likelihood of the predicted risk remains obscured behind a number of conservative or less conservative assumptions. To overcome this problem, probabilistic techniques, such as Monte Carlo simulation, were introduced (Hammersley and Handscomb 1964; NRC 1975; McKone and Ryan 1989). Monte Carlo simulation allows users to define uncertain or variable input parameters as probability distributions and propagates these into an output distribution, which provides insight into the likelihood and the range of possible risks. More refined probabilistic techniques, such as nested Monte Carlo simulation, allow the modeler to distinguish between variability and uncertainty (Frey 1992; Burmaster and Wilson 1996; Ragas et al. 2009b). Data on uncertainty can be used to prioritize research needs, whereas data on variability can be used to identify subpopulations or species at risk and the measures to reduce this risk. Probabilistic risk assessment is nowadays almost standard practice in risk assessment of environmental contaminants.
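A nested (two-dimensional) Monte Carlo simulation separates the two by sampling uncertain quantities in an outer loop and inter-individual variability in an inner loop. The sketch below does this for a deliberately simple, hypothetical intake model (concentration × consumption / body weight); the distributions and parameter values are assumptions chosen only to illustrate the structure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical model: daily intake = concentration * consumption / body weight.
# The outer loop samples uncertain quantities (here only the true mean
# concentration); the inner loop samples inter-individual variability.
n_outer, n_inner = 200, 1000
p99_intakes = np.empty(n_outer)

for i in range(n_outer):
    mean_conc = rng.lognormal(mean=np.log(0.5), sigma=0.3)                  # mg/kg food (uncertain)
    consumption = rng.lognormal(mean=np.log(0.2), sigma=0.4, size=n_inner)  # kg food/day (variable)
    body_weight = np.clip(rng.normal(loc=70.0, scale=12.0, size=n_inner), 40.0, None)  # kg (variable)
    intake = mean_conc * consumption / body_weight                          # mg/kg bw/day
    p99_intakes[i] = np.percentile(intake, 99)  # intake of a high-end (99th percentile) individual

# Uncertainty about the high-end intake, summarized as a 95% interval.
low, high = np.percentile(p99_intakes, [2.5, 97.5])
print(f"99th-percentile intake: {low:.4f} to {high:.4f} mg/kg bw/day (95% interval)")
```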

Although probabilistic risk assessment is promoted by many regulatory authorities, practical implementation of the results is often hampered by a lack of regulatory guidance on what levels of uncertainty and variability are acceptable. For example, is a 95% probability that 99% of the population does not exceed the ADI acceptable? There seems to be a deep-rooted reluctance among regulators to specify such criteria, especially in the human health sector. This may be caused by the fact that a protection level of 99% of the population, or similar, implies that part of the population is unprotected. Although it is generally acknowledged that full protection of every single individual is unfeasible, explicit quantification seems a step too far for most regulators who have to justify their decisions to the general public. The dilemma seems less problematic for ecological risk assessment. For example, safe concentration levels for aquatic and terrestrial ecosystems in Europe correspond to a level at which there is a 50% probability that the species-specific NOEC is not exceeded for 95% of the species.

Although probabilistic assessments are a step forward, they do not provide the full picture of uncertainty in risk assessment. A Monte Carlo simulation may show that parameter uncertainty is relatively small, but this result becomes meaningless if the assumptions on which the model is based are highly uncertain. A good uncertainty analysis should therefore thoroughly identify all sources of uncertainty at all levels of the assessment, i.e. the problem definition, the model assumptions and equations, the input scenario(s), and the input parameters (Van der Sluijs et al. 2005). An input/output analysis suffices only if it can be shown that other sources of uncertainty do not dominate. These other sources of uncertainty can be difficult to identify and quantify because standard and formalized techniques are lacking. Available approaches include scenario analysis, model comparison, expert elicitation, and the evaluation of fundamental uncertainty (Sørensen et al. 2010). The most problematic category is the uncertainties that we do not know exist, the unknown unknowns (Modica and Rustichini 1994). The insight that it is impossible to identify and quantify all sources of uncertainty in an assessment should provide a stimulus to always err on the side of caution in the production and use of environmental contaminants.

Legislation and regulation

Regulations involving risk assessment were first implemented in the domains of worker protection and food safety. In the 1960s and 1970s, this application domain was extended to include legislation that aimed to control contaminants in the environment. A distinction can be made between legislation that aims to safeguard the quality of the environment (i.e. water, air, and soil) and legislation that controls the production and use of chemicals. Risk assessment is also increasingly being used beyond the legislative domain, e.g. to determine priorities in environmental health management (Geelen et al. 2009) and in environmental life cycle analysis (Huijbregts et al. 2005).

In most cases, the authorities are responsible for performing the risk assessment. The increasing number of contaminants and level of risk assessment detail have resulted in an enormous workload for the authorities. Only a small fraction of environmental contaminants is assessed thoroughly because of a lack of data and capacity. This has led the European Union to introduce a new piece of legislation in which the “burden of proof” has shifted from the authorities to industry. The EU regulation on Registration, Evaluation, Authorization and Restriction of Chemical substances (REACH; EC 1907/2006) requires manufacturers and importers to gather information on the properties of their chemical substances, and to register the information in a central database run by the European Chemicals Agency (ECHA) in Helsinki. It is expected that REACH will have a huge impact on the way we deal with chemicals, but it is too early to say whether it is a success. The first phase of registration closed on November 30, 2010 and resulted in 3421 dossiers, of which 2247 were complete (ECHA 2011).

Future

Risk assessment of contaminants has developed since the 1950s into a normative discipline with a firm scientific basis. Risk assessments have become more detailed in response to our increasing mechanistic understanding. The disciplinary basis was enlarged by including areas such as eco(toxi)cology, environmental chemistry, and exposure science. Its application domain was widened to include areas such as environmental health indicators and environmental life cycle analysis. But are we there yet? Are we able to accurately assess the risks of environmental contaminants? And does our risk assessment tool box contain the right instruments? These questions are addressed below by discussing some recent issues and developments in risk assessment of environmental contaminants.

Nanotechnology

Nanotechnology makes it possible to manipulate matter on a molecular scale, resulting in unique properties that enable new applications and products. Engineered nanoparticles (Figure 3) are already widely used in consumer products, e.g. in sunscreens and socks, and it is expected that their production and use will increase spectacularly over the next decade (Xia et al. 2009). However, little is known about their potentially adverse impact on human health and the environment. Their specific properties may lead to environmental behavior, interactions, and effects that differ significantly from those known for traditional contaminants.

Figure 3. Transmission electron microscopy images of prepared mesoporous silica nanoparticles with mean outer diameter: (a) 20 nm, (b) 45 nm, and (c) 80 nm. Photo (d) is a scanning electron microscope image corresponding to (b). The insets are a high magnification of mesoporous silica particles.

The risks of engineered nanoparticles can be assessed following the traditional steps of hazard identification, exposure assessment, effect assessment, and risk characterization. The main problem is that exposure and effects are difficult to assess because data on physicochemical properties and toxicity are largely lacking. Furthermore, conventional prediction models such as multimedia fate models have limited relevance for nanoparticles because their behavior is driven by other physicochemical properties, e.g. size and surface area instead of Kow. Experiments with engineered nanoparticles are hampered by experimental problems (i.e. aggregation and precipitation) and by a lack of pure test materials and analytical detection methods. Toxicity studies struggle with the fact that conventional dose metrics, i.e. those expressed on a weight basis, are unsuitable, since the toxicity of engineered nanoparticles does not scale with weight but with other physicochemical properties such as surface area. Although clinical toxicity caused by engineered nanoparticles has not been reported yet, several laboratory tests have shown that nanoparticles can initiate adverse biological responses (Colvin 2003; Johnston et al. 2010; Scown et al. 2010). But the question is whether these tests were done at environmentally realistic concentration levels. Once emitted to the environment, nanoparticles tend to aggregate and deposit, particularly in the aquatic environment (Klaine et al. 2008). This suggests that sediment-dwelling and benthic organisms may be more prone to exposure than pelagic species (Johnston et al. 2010). Indirect toxicity, e.g. the inhibition of beneficial bacteria by silver nanoparticles released to the environment, may also be a reason for concern (Choi and Hu 2008).
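The dose-metric point can be illustrated with a back-of-the-envelope conversion for spherical, monodisperse particles, where surface area per unit mass equals 6 divided by density times diameter. The particle sizes and density below are hypothetical and chosen only to show how strongly the surface-area dose depends on particle size at a fixed mass dose.

```python
def surface_area_dose(mass_dose_mg, diameter_nm, density_g_cm3):
    """Convert a mass dose of spherical, monodisperse nanoparticles into a
    total particle surface area (cm^2), using SA/mass = 6 / (density * d)."""
    diameter_cm = diameter_nm * 1e-7                        # nm -> cm
    mass_g = mass_dose_mg * 1e-3                            # mg -> g
    specific_surface = 6.0 / (density_g_cm3 * diameter_cm)  # cm^2 per g
    return mass_g * specific_surface

# The same 1 mg mass dose corresponds to a tenfold difference in surface-area
# dose for hypothetical 20 nm and 200 nm silica particles (density ~2.2 g/cm^3).
print(round(surface_area_dose(1.0, 20, 2.2)))   # ~1364 cm^2
print(round(surface_area_dose(1.0, 200, 2.2)))  # ~136 cm^2
```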

The fact that engineered nanoparticles are already produced on such a large scale and released into the environment, even before we are able to accurately assess their potential risk, raises the question of whether current regulations are adequate. It was first thought that current legislation would suffice, but there is now a growing awareness that engineered nanoparticles require specific regulations (Eisenberg et al. 2010). A more precautionary approach seems appropriate, e.g. by means of source reduction, limiting biopersistence, arranging for low toxicity of degradation products, and controlling the size of particles that originate from cleaning, wear, tear, and corrosion (Reijnders 2006).

Hormesis

A topical issue in dose–response modeling is the concept of hormesis (Calabrese 2005; Calabrese and Baldwin 2005). Hormesis can be defined as a stimulatory effect (e.g. an increase in growth or fecundity) that typically extends 30–60% above control levels, and over a 10-fold range below the NOEC (Chapman 2002). Calabrese and co-workers claim that hormesis is the rule rather than the exception, which may have profound consequences for dose–response modeling and the derivation of safe exposure levels. It has been hypothesized that low doses of contaminants trigger repair mechanisms and that these repair mechanisms also fix the damage caused by other stressors. Opponents argue that the low-dose stimulation found in laboratory tests has limited relevance for real exposure situations where repair mechanisms may already be triggered by other stressors. A generally accepted mechanistic basis for hormesis is currently lacking, which impedes the use of hormesis as the principal dose–response default assumption in risk assessment (Kitchin and Drane 2005). As long as there is no general consensus among scientists and risk assessors on the relevance of hormesis, it seems unlikely that it will be implemented in HRA and ERA practice (Reijnders 2002).
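For readers unfamiliar with the shape of such a curve, the sketch below generates a purely illustrative biphasic dose–response: a modest low-dose stimulation superimposed on a conventional sigmoidal decline. It is not one of the published hormesis models, and all parameter values are arbitrary.

```python
import numpy as np

def biphasic_response(dose, control=100.0, stim_max=0.4, stim_ed50=0.1,
                      inhib_ed50=10.0, slope=2.0):
    """Purely illustrative hormetic (biphasic) dose-response: a low-dose
    stimulation term superimposed on a sigmoidal inhibition term."""
    stimulation = 1.0 + stim_max * dose / (stim_ed50 + dose)
    inhibition = 1.0 / (1.0 + (dose / inhib_ed50) ** slope)
    return control * stimulation * inhibition

doses = np.array([0.0, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
print(np.round(biphasic_response(doses), 1))
# Responses rise above the control level of 100 at low doses (hormesis)
# before declining at higher doses.
```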

Mechanistic information

The development and application of new technologies in toxicogenomics, epigenetics, and toxicokinetics are producing a wealth of data on toxic responses in organisms. Enzymes involved in the transport, metabolism, and excretion of toxicants are being identified, as well as the genes and other inheritable factors that control these enzymes. If progress continues at this rate, it will not be long before the molecular mechanisms underlying many toxic responses are unraveled. This is expected to have an unprecedented impact on toxicity testing and risk assessment. It could transform toxicity testing from a system based on whole-animal testing to one founded primarily on in vitro methods that evaluate changes in biologic processes using cells, cell lines, or cellular components (NRC 2007). One of the current challenges is to translate the results of in vitro tests to the organism level. This requires thorough understanding of the molecular and physiological mechanisms involved in the propagation of toxic responses. If the result of an in vitro test can be interpreted in a biologically meaningful way, the next step is to link it to a physiologically based pharmacokinetic (PBPK) model that predicts the dose at the target tissue. Taking it one step further, our improved mechanistic understanding could also be used to create a database of the most important molecular receptors involved in toxic responses. This database could be used to identify potential targets for new substances using quantitative structure–activity relationships.
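The in vitro to in vivo step is often approached by reverse dosimetry: finding the external dose at which an internal (plasma) concentration reaches the in vitro effect concentration. The sketch below uses a deliberately crude one-compartment steady-state relation in place of a full PBPK model; the substance properties are hypothetical.

```python
def oral_equivalent_dose(in_vitro_ec_um, mol_weight_g_mol, clearance_l_per_h_kg,
                         fraction_absorbed=1.0):
    """Reverse dosimetry with a crude one-compartment steady-state relation:
    Css = (dose rate * F) / CL, so dose rate = Css * CL / F, where Css is set
    equal to the in vitro effect concentration."""
    css_mg_per_l = in_vitro_ec_um * 1e-6 * mol_weight_g_mol * 1e3  # uM -> mg/L
    dose_rate_mg_per_kg_h = css_mg_per_l * clearance_l_per_h_kg / fraction_absorbed
    return dose_rate_mg_per_kg_h * 24.0                            # per hour -> per day

# Hypothetical substance: 5 uM in vitro effect concentration, 250 g/mol,
# clearance of 0.8 L/h per kg body weight, complete oral absorption.
print(oral_equivalent_dose(5.0, 250.0, 0.8), "mg/kg bw/day")  # 24.0
```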

Cost-benefit analysis

In cost-benefit analysis, the costs associated with reducing risks are weighed against the benefits in terms of improved health or ecosystem quality. Cost-benefit analysis was introduced in the 1990s in the US in response to the perception that risk assessment practice led to excessive and unnecessarily strict regulations (Mazurek 1996). An example is the US EPA drinking water standard for trihalomethane. In terms of risk, the baseline mortality risk per million individuals exposed to trihalomethane in drinking water is estimated to be about 420. Regulating at such a level of exposure is estimated to cost approximately $200,000 per premature death averted (Breyer 1993).

A disadvantage of traditional cost-benefit analyses is that the health effects (e.g., number of deaths) have to be weighed against monetary benefits. This always involves normative value judgments: how much is a life worth? Although pragmatic choices can be made, the issue of pricing life is a moral issue which triggers strong opinions (Funtowicz and Ravetz 1994; Ackerman and Heinzerling 2002). Another disadvantage of traditional cost-benefit analysis is that it does not account for uncertainty. The health benefits are often highly uncertain due to conservative assumptions in the risk assessment process, while the costs are often estimated on a less conservative basis. For example, a cost-benefit analysis for nationwide soil remediation operations in The Netherlands showed that the benefits outweigh the costs. However, when uncertainty in discount rates is considered, the results become ambiguous and it was shown that the costs could even outweigh the benefits (Van Wezel et al. 2008).

A more fruitful approach is to compare the impacts of alternative management strategies in order to identify the most efficient strategy. For example, Assmuth and Jalonen (2005) argue that the health risks from dioxins in fatty Baltic fish are probably exceeded by the health benefits of consuming such fish and fish oil, especially for people who are vulnerable to cardiovascular disorders. This implies that people should not be discouraged from eating fatty fish caught in the Baltic Sea, unless it can be replaced by an equally healthy substitute. Geelen et al. (2009) showed that the health impact of PM10 concentrations in The Netherlands, expressed in Disability Adjusted Life Years (DALYs), is much larger than the health impact of carcinogenic substances in air. In terms of health gain, it therefore seems to make sense to give priority to the reduction of PM10 emissions rather than carcinogenic substances.

Cumulative risk assessment

Risk assessment of environmental contaminants has traditionally focused on single stressors and single sources. This can partly be explained by the fact that the acute pollution problems of the twentieth century were caused by a limited number of contaminants which were emitted in relatively large quantities by a limited number of sources. Consequently, risk assessments focused on particular contaminants or sources. The main questions addressed were “Can this contaminant cause adverse effects?” or “Can this production plant cause adverse effects?” These assessments typically aim to capture the chain of events starting with the emission of the contaminant into the environment and ending with the emergence of adverse health effects. This approach has been very effective in reducing environmental problems that are obviously caused by single stressors and/or sources, e.g. lead, asbestos, and acid rain.

Now that the overt pollution problems have largely been solved, awareness is growing that exposure to single contaminants is the exception rather than the rule. One of the reasons is that some health problems are on the rise without an obvious explanation. It is estimated that 7.4% of married women in the US aged between 15 and 44 are infertile (Chandra et al. 2005). The diagnoses of autism, attention deficit and hyperactivity disorders (ADD and ADHD), and the prevalence of childhood brain cancer and acute lymphocytic leukemia have all increased over the past 30 years (Trasande and Landrigan 2004; Jahnke et al. 2005). Air pollution and early life exposure to environmental pollutants are commonly cited as possible causes. Furthermore, a range of health issues is suspected to be related to cumulative stress. For example, neuro-developmental disorders can be caused by heavy metals, dioxins, PCBs (polychlorinated biphenyls), and pesticides (Brent 2004); childhood cancer could be related to a number of physical, chemical, and biological agents (e.g. parental tobacco smoke, parental occupational exposure to solvents; Trasande and Landrigan 2004). As a consequence, several national and international institutions have recognized the need to evaluate risks from multiple stressors (NRC 1994; PCCRARM 1997; Mileson et al. 1999; US EPA 2003; WHO 2009). Although researchers and risk assessors have always recognized the need to address cumulative stress, progress has been slow because of lacking knowledge, experimental limitations, and scarce funding (Callahan and Sexton 2007).

Over the last few years, many reports on cumulative risk assessment have been published (Mileson et al. 1999; US EPA 2003, 2007; WHO 2009). However, relatively few case studies have been performed, and those that have been performed focus on a limited number of contaminants originating from the same process, e.g. drinking water disinfection by-products (DBPs; Teuschler et al. 2004), or having a similar mode of action, e.g. phthalates (NRC 2008) and organophosphorus pesticides (US EPA 2001, 2002, 2006). Conceptually, these assessments are comparable with conventional contaminant- or source-oriented assessments: the focus is on the emission source or a group of contaminants, and the aim is to describe the chain of events starting with the emission of the contaminant(s) and ending with the adverse health effects. These assessments are a step forward because mixture effects are explicitly addressed. However, they do not answer the main question underlying the increased incidence of some health problems: can these problems be explained by exposure to multiple stressors?

The main question underlying cumulative risk assessment is not whether a particular set of contaminants or a particular source causes adverse health effects, but how the cocktail of environmental stressors influences the health of humans and ecosystems. An example is a farmer who grows crops on metal-polluted soil, sprays pesticides, and lives in a house with high radon levels. These exposures are totally unrelated in terms of sources and substances, but the overall exposure may still pose a significant health threat to the farmer. And this threat will never be discovered if risk assessments focus exclusively on substances with a similar mode of action or substances that originate from the same source. If our aim is to reduce potential health problems caused by multiple stressors, we first need to assess how humans or ecosystems are exposed to multiple stressors, either concurrently, sequentially, or both, and subsequently we need to assess whether this exposure pattern may give rise to adverse health effects. True cumulative risk assessment therefore requires another perspective, i.e. that of the receptor: the exposed individual, population, or (eco)system. After all, it is the receptor that integrates multiple stressors over space and time, and it is the vulnerability or sensitivity of the receptor that determines whether adverse health effects will arise.

Accounting for cumulative exposures will have major implications for risk assessment. Examples include:

The number of possible combinations of exposures and stressors is virtually endless, and it will be unfeasible to assess all cumulative risks in detail. Instruments are needed to identify those cumulative exposure situations that may result in the highest risk.

The spatiotemporal activity patterns of the exposed organisms and the spatiotemporal distribution of the pollutants together determine cumulative exposure. Novel exposure assessment methods and models are needed that focus on the receptor (e.g. an individual or a wildlife organism), which integrates multiple exposures over space and time. Examples include person-oriented monitoring techniques (Lai et al. 2004) and individual-based exposure models such as the cumulative and aggregate risk evaluation system (CARES; ILSI 2008), LifeLine™ (LifeLine Group 2007), and EcoSPACE (Loos et al. 2010b).

Refined concepts and models are needed for risk assessment of chemical mixtures (Ragas et al. 2011a, 2011b). The current default models of concentration addition and response addition are a good starting point to assess the risk of contaminants with a (dis)similar mode of action, but these models ignore potential interaction effects (a minimal numerical sketch of the two default models is given after this list). Models that do account for interactions, e.g. the interaction-based hazard index developed by the US EPA (2000), are still in their infancy and lack a sound empirical and mechanistic basis (Ragas et al. 2011b).

A constant exposure regime is typically applied in toxicity tests. However, in real life, humans and other organisms are exposed to stressors that vary over time, chemical stressors as well as physical stressors (e.g. temperature). Especially in a cumulative exposure situation, the exposure regime may have a profound impact on the risk. The order of the exposure events (first exposure to substance A, then to substance B, or vice versa) can result in different effect levels, for example if one of the substances induces the metabolism of the other. This stresses the importance of mechanistic insight in risk assessment of cumulative exposures.
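As a minimal numerical sketch of the two default mixture models referred to above, the fragment below computes a hazard index under concentration (dose) addition and a combined probability under response addition (independent action). The exposures, reference values, and individual risks are hypothetical.

```python
import numpy as np

# Hypothetical daily exposures (mg/kg bw/day) and reference values for three substances.
exposure = np.array([0.02, 0.005, 0.01])
reference_dose = np.array([0.05, 0.01, 0.10])

# Concentration (dose) addition for similarly acting substances: sum the
# dose fractions, here expressed as a hazard index.
hazard_index = np.sum(exposure / reference_dose)

# Response addition (independent action) for dissimilarly acting substances:
# combine the individual response probabilities assuming independence.
individual_risks = np.array([1e-5, 4e-6, 2e-6])  # hypothetical excess lifetime risks
combined_risk = 1.0 - np.prod(1.0 - individual_risks)

print(f"Hazard index (concentration addition): {hazard_index:.2f}")  # 1.00
print(f"Combined risk (response addition): {combined_risk:.2e}")     # ~1.60e-05
```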

Although scientific tools and insights are important for the implementation of cumulative risk assessment, they are not sufficient. This can best be illustrated with the above-mentioned example of a farmer who grows crops on metal-polluted soil, sprays pesticides, and lives in a house with high radon levels. These exposures fall under different sectoral regulations, i.e. soil protection, pesticide registration, and safe building materials. However, protection of the farmer’s health requires an integrated approach that considers the effects of all exposures together. It can, therefore, be concluded that the identification and regulation of cumulative risks requires a broader regulatory context than currently provided. The focus should not be on a specific substance, mixture, or compartment, but on the integrated protection of human and ecosystem health from exposure to multiple stressors through different routes and at different moments in time.

Conclusions

Regulations for risk assessment of environmental contaminants were implemented during the 1960s and 1970s, after the adverse impacts of the uncontrolled large-scale production and use of chemicals became evident. Much was learned from the areas of workplace protection, food safety, and radiation protection. Risk assessment expanded considerably, e.g. with the inclusion of ecosystem protection and models that predict the environmental fate of contaminants. Since then, the risks of many environmental contaminants have been assessed and measures have been taken to reduce these risks. There are success stories to be told, e.g. the phase-out of lead in paints and petrol, the ban on the production of PCBs and chlorofluorocarbons (CFCs), and a substantial reduction in the emission of volatile organic chemicals.

But new issues have arisen, e.g. engineered nanoparticles, cumulative exposures, and the risks of emerging substances such as pharmaceuticals, endocrine disruptors, and flame retardants. Some of these issues can be addressed with existing methods and regulations. Other issues, e.g. engineered nanoparticles and cumulative risks, call for a reconsideration of current approaches. For example, risk assessment of engineered nanoparticles requires the development of new experimental techniques, dose metrics, and structure–activity relationships. Risk assessment of cumulative exposures requires a paradigm shift, i.e. the traditional focus on stressors, emission sources, and environmental compartments should be moved to the receptors, in science as well as in legislation. The activity pattern of the receptor determines its cumulative exposure profile and should be combined with personal toxicity and sensitivity data, e.g. data on genetic polymorphisms, to obtain an estimate of the cumulative risk.

This process will benefit from the increased mechanistic understanding that is triggered by new experimental techniques such as genomics, metabolomics, proteomics, and toxicity tests with in vitro cell line systems. These developments hold the promise of a future where the risk of single and multiple stressors can be predicted using a combination of emission data, environmental characteristics, in vitro toxicity data, personal characteristics (i.e. activity patterns and genetic data), and computer models that link these data based on mechanistic insights. This could be a future where the use of laboratory animals has become obsolete, or at least reduced to an absolute minimum, and where HRA and ERA are optimally integrated. Risk assessors and scientists can help to realize this future by actively promoting the development and implementation of new risk assessment techniques and regulations.

