The science of toxicology has a long and profound history, with roots in the ancient Greek and Roman empires, where physicians made early attempts to classify plants and to distinguish toxic from therapeutic ones. In the sixteenth century, the Swiss physician Paracelsus pioneered the use of chemicals and minerals in medicine. He is credited with the well-known phrase "The dose makes the poison." Although his original wording was somewhat different, the phrase emphasizes that any substance can be harmful to living organisms if the dose or concentration is high enough. Toxicologists believe that most chemicals, drugs, pollutants, and natural medicinal plants adhere to this principle, and Paracelsus is often referred to as the father of toxicology. The development of modern toxicology is largely attributed to the Spanish scientist Orfila, who is widely regarded as the father of modern toxicology. Although different sources define toxicology slightly differently, they all agree that it is the science that studies the adverse effects of chemical substances on living organisms, including humans, animals, and the environment. It also involves the diagnosis and treatment of possible exposure to toxic substances and toxicants. Toxicologists use scientific methods to predict how chemicals, harmful plants, and minerals can cause damage. Not only is there individual variability in response, but other variables such as the exposure level, route of exposure, duration of exposure, age, gender, and the environment are also taken into account by toxicologists in determining the extent of toxicity.
Branches of Toxicology
Today, toxicology has grown into a multifaceted discipline. We provide here a list of its branches. The list is not exhaustive, and there could be other branches as well, but it covers the most common ones with a brief description of each.
a. Aquatic Toxicology: Study of the effect of toxins such as chemical waste or natural material on aquatic organisms.
b. Chemical Toxicology: Involves the study of the structure and the mechanism of action of chemical toxicants.
c. Clinical Toxicology: Study of the amount of poison present in the body and of problems arising from drug overdose.
d. Developmental and Reproductive Toxicology: Study of the effects of toxins on offspring when a parent, primarily the mother, is exposed to a toxicant during conception or pregnancy. It also covers the multigenerational effects of toxic substances.
e. Ecotoxicology: Study of the effect of toxic substances in the ecosystem.
f. Environmental Toxicology: Study of the effects of pollutants naturally present in the environment, such as in air, water, and soil, on humans and other living organisms.
g. Forensic Toxicology: A topic within the general framework of forensic science that deals with the identification and quantification of poison, often leading to the determination of the cause of death.
h. Industrial Toxicology: Study of the effects of exposure to industrial waste and chemicals released from industries including, but not limited to, soot and other air pollutants.
i. Molecular Toxicology: Study of the cellular and molecular processes of toxicity.
j. Regulatory Toxicology: Study of toxicological processes based on the characteristics and guidelines of regulatory agencies.
k. Neurotoxicology: Study of the effects of toxicants on the brain and the nervous system.
l. Nutritional Toxicology: Study of food additives and nutritional habits, as well as the hazards posed by methods of food preparation.
m. Occupational Toxicology: Study of workplace-related health hazards, particularly in the chemical and mining industries.
n. Veterinary Toxicology: Study of the process of toxicity in animals.
o. Immunotoxicology: Study of how toxicants can damage the immune system.
p. Analytical Toxicology: Application of analytical chemistry methods in the quantitative and qualitative evaluation of toxic effects.
q. Mechanistic Toxicology: Similar to Chemical Toxicology, this branch deals with the mechanisms by which toxicants exert their effects.
Some other branches of toxicology mentioned in the literature are Behavioral Toxicology, Comparative Toxicology, and Genetic Toxicology. Clearly many of these branches are interrelated and cannot be studied in isolation.
Basic Elements of Toxicology
Toxicity is defined as any undesirable or adverse effect of an exogenous substance on humans, animals, or the environment. These substances include chemicals such as food additives, drugs and medicines, organic material such as plants, and inorganic material such as mercury and lead. A specific undesirable outcome, such as carcinogenicity or neurotoxicity, is called a toxicological endpoint. Outcomes of toxicology testing experiments can be continuous, such as changes in brain weight; qualitative, such as the presence or absence of a specific endpoint like cancer; or ordinal, such as low, moderate, or high. Toxicology tests have long used laboratory animals in bioassay experiments to perform in vivo studies and to determine the effects of toxicants, although in recent years the use of in silico approaches (Computational Toxicology) has become popular. Experiments may consist of a single exposure level, as in case-control studies, or may have several exposure levels. The exposure or concentration level, that is, the amount of the chemical used in the experiment, is called the dose or the dosage level. Clearly, the dose is a crucial determinant of the amount of toxicity, and the determination of an efficient dose regimen is an important problem in the design of toxicological experiments. Several other factors play important roles in the extent of toxicity. One is the route of exposure, which may be by injection, oral (mixed in the diet), dermal, or by inhalation. Others are the frequency of exposure (how often exposure occurs), the duration of exposure, and the excretion rate of the chemical, often measured by its half-life. Individuals respond differently to the same dose, and subject-specific variables such as age and gender add further variation to the outcome.
Toxicity is generally measured by the severity of the effect of the substance on the organism or the target tissue. The most fundamental measure of the toxicity of a substance is the LD50, the dose of the substance that is lethal to 50% of the subjects. In inhalation toxicity studies, air concentrations are used as exposure values, and the corresponding measure of toxicity is the LC50, the median lethal concentration. Similar measures are the ED50 and EC50, the effective dose or concentration of the chemical that produces an observable endpoint of interest in 50% of subjects. These measures have often been used to compare and classify chemicals. Clearly, 50% is a nominal and convenient value corresponding to the median, and other percentiles of interest may also be used. That is, in general, the LD100p and LC100p, where 0 < p < 1, denote the dose and concentration affecting a fraction p of the subjects. If, for example, p = 0.01, then the ED01 refers to the dose of the chemical that affects 1% of the subjects. Because humans are generally exposed to low levels of chemicals, much of the interest among toxicologists lies in studying behavior and toxicity in the low-dose region. In fact, a large-scale experiment conducted in the 1970s by the National Center for Toxicological Research (NCTR) of the Food and Drug Administration (FDA), reported by Staffa and Mehlman (1979), is often referred to as the ED01 study. In that experiment, over 24,000 mice of several strains were exposed to the known carcinogen 2-acetylaminofluorene (2-AAF) to study the effects of the chemical at low doses (see also Brown and Hoel, 1983a, b). However, the LD50 and LC50 have limited use, as they cannot be directly extrapolated across species or to low doses. In fact, their application as measures of toxicity has been criticized by many toxicologists (see Zbinden and Flury-Roversi, 1981; LeBeau, 1983). Alternative measures of toxicity are listed below:
a. Acceptable Daily Intake (ADI): For food additives and drugs.
b. Benchmark Dose (BMD): A dose of the toxin that produces a predetermined change (e.g., 5%) in the rate of an adverse effect.
c. Lowest-Observed-Effect-Level (LOEL): Lowest dose that causes an observable effect.
d. Lowest-Observed-Adverse-Effect-Level (LOAEL): Lowest dose that causes an observable adverse effect.
e. Maximum Tolerated Dose (MTD): Used mostly in chronic toxicity studies; the highest dose that does not produce unacceptable overt toxicity.
f. Median Toxic Dose (TD50): The dose causing a toxic effect in 50% of exposed individuals.
g. No Toxic Effect Level (NTEL): Largest dose with no observed effect.
h. No-Observed-Effect-Level (NOEL): Highest dose with no effect.
i. No-Observed-Adverse-Effect-Level (NOAEL): Largest experimental dose that produces no undesirable outcome.
j. Reference Dose (RfD): Daily acceptable dose that produces no risk of adverse effect.
k. Tolerable Daily (Weekly) Intakes: For contaminants and additives not consumed intentionally.
l. Reference Intake: Used mainly for nutrients.
There is a large body of literature in toxicology that describes the properties and applications of each of the abovementioned measures of toxicity. In addition, the measures are not independent, and many of them are interrelated. Several publications discuss some of these relationships. For example, Gaylor and Gold (1995) and Razzaghi and Gaylor (1996) discuss the relation between the TD50 and the MTD.
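Several of the measures above are simple functionals of a fitted dose-response curve. The sketch below is purely illustrative: it assumes a hypothetical logit/log10-dose model with made-up parameter values (not taken from any study) and shows how an LD50, a low-dose percentile such as the LD01, and a benchmark dose might be obtained by inverting the fitted curve.

```python
import math

# Hypothetical fitted dose-response model (illustrative parameters only):
#   logit P(d) = a + b * log10(d)
a, b = -4.0, 2.0

def response(d):
    """Response probability at dose d under the assumed model."""
    return 1.0 / (1.0 + math.exp(-(a + b * math.log10(d))))

def ld(frac):
    """Dose affecting a fraction `frac` of subjects, i.e., the LD_{100p}."""
    logit = math.log(frac / (1.0 - frac))
    return 10.0 ** ((logit - a) / b)

ld50 = ld(0.50)   # logit(0.5) = 0, so log10(LD50) = -a/b = 2, i.e., LD50 = 100
ld01 = ld(0.01)   # low-dose percentile of the same curve
bmd05 = ld(0.05)  # with a negligible background rate, the 5% benchmark dose
```

The same inversion applies to any percentile, which is why toxicologists can report an ED01 or a BMD from the one fitted curve; the hard statistical problems, discussed later in this book, concern how reliably the curve itself can be estimated in the low-dose region.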
Emergence of Statistical Models
Although statisticians have always played an important role in toxicological research and have contributed to the development of many toxicological results, the earliest evidence of modeling applications can perhaps be attributed to Bliss (1934), who used probit regression for dose-response modeling and the calculation of several toxicological measures. His research was concerned with controlling the insects that fed on grape leaves. He further developed the application of probit regression in Bliss (1935) and Bliss (1938). Later, Berkson (1944) applied logistic regression as an alternative to the probit model. The publication of the seminal book of Finney (1947), which collected the results of many of his earlier articles, contributed significantly to the use of statistical models in the toxicological sciences, especially with respect to calculating and estimating risk. From that point on, and especially in the 1970s and 1980s, a myriad of publications demonstrated the application of statistical models. At the same time, the development of new statistical methodologies in areas such as generalized linear models (McCullagh and Nelder, 1989) and experimental design (Box et al., 1978), and the demonstration of their application in the biological sciences, encouraged collaborative work between biostatisticians and toxicologists. In this respect, the interest in and promotion of research by regulatory agencies such as the FDA, the National Institutes of Health (NIH), and the Environmental Protection Agency (EPA) are noteworthy. The establishment of federal research centers, such as the NCTR by the FDA and the National Cancer Institute and the National Institute of Environmental Health Sciences within the NIH, played a major role.
The fundamental contributions of their statisticians were highly instrumental in the development of the application of statistical models and their extensions in toxicology. Their research and results were not only important and influential in the use of mathematical models, but, more importantly, encouraged numerous collaborative works between the government and researchers in academic institutions as well as private enterprises and industries, leading to a large number of publications with pivotal results. The seminal book of Collett (1991) is particularly noteworthy. Moreover, the advancement of computer technology and the ability to perform complex calculations further contributed to the creation of in silico toxicology, a type of toxicity assessment that uses computer models to simulate and visualize chemical toxicities. According to Raies and Bajic (2016), the modeling method is an important aspect of in silico toxicology, and the steps in building a prediction model include model generation, evaluation, and interpretation.
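The probit approach introduced by Bliss can be illustrated with a small numerical sketch. The grouped mortality data and the crude grid search below are entirely hypothetical; in practice the model would be fitted by iteratively reweighted least squares or a general-purpose optimizer, but the grid makes the maximum-likelihood idea explicit.

```python
import math

# Hypothetical grouped bioassay data: (dose, number exposed, number responding).
# These values are illustrative only, not taken from Bliss's experiments.
DATA = [(1.0, 50, 2), (2.0, 50, 10), (4.0, 50, 24), (8.0, 50, 40), (16.0, 50, 48)]

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_likelihood(a, b):
    """Binomial log-likelihood of the probit model P(d) = Phi(a + b*log10 d)."""
    ll = 0.0
    for dose, n, r in DATA:
        prob = min(max(norm_cdf(a + b * math.log10(dose)), 1e-12), 1.0 - 1e-12)
        ll += r * math.log(prob) + (n - r) * math.log(1.0 - prob)
    return ll

# Crude grid search for the maximum-likelihood estimates of (a, b).
a_hat, b_hat = max(
    ((i / 100.0, j / 100.0) for i in range(-500, 500, 5) for j in range(5, 800, 5)),
    key=lambda ab: log_likelihood(*ab),
)

# The probit is zero at the median lethal dose: log10(LD50) = -a/b.
ld50 = 10.0 ** (-a_hat / b_hat)
```

Replacing `norm_cdf` with the logistic function gives Berkson's logit alternative; the two links give very similar fits in the middle of the dose range and diverge mainly in the tails, which is exactly the low-dose region that matters for risk assessment.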
A publication of the National Research Council (NRC, 1983) points out that biostatistical models, particularly for the quantitative evaluation of toxicity, are crucial in toxicological research and can help in four ways:
- 1. Developing an overall testing strategy to set dose and exposure regimens.
- 2. Designing experiments optimally to extract maximum information.
- 3. Interpreting data.
- 4. Verifying the underlying biological assumptions.
Today, regulatory agencies and pharmaceutical industries rely heavily on the biostatistical models that provide estimates of health risk to humans and other organisms, and research is ongoing to explore models that improve the accuracy of such estimates.
Scope of This Book
This book is about statistical models. An attempt is made to present an account of the most commonly used mathematical and statistical models in toxicology; clearly, not every branch of toxicology is covered. Since the early 1970s, many statistical models have been developed for expressing toxicological processes mathematically. Hoel (2018) refers to this period as an "exciting time" because of the attention given by statisticians to the problem of estimating the human health risk of environmental and occupational exposures. Not only have the models enjoyed a high level of mathematical elegance and sophistication, but they have also been widely used (and for the most part are still being used) by industry and government regulatory agencies. Thus, in this book we are primarily concerned with models developed for the assessment of human health risk. Toxicologists have long used animal models and bioassay experiments to understand the mechanisms of toxicity and thereby estimate the risk of exposure. The focus of this book is therefore on statistical models in environmental toxicology that facilitate the assessment of risk, mainly in humans. For this purpose, the basic concepts and methods of risk assessment are described in Chapter 2. Since humans are rarely exposed to a single chemical in isolation, and exposure is often to a mixture of chemicals, models that assess the risk of chemical mixtures are discussed in Chapter 3. In Chapters 4 through 7, we present statistical models developed for risk estimation in different areas of environmental toxicology. The problem of modeling and risk analysis for cancer and exposure to carcinogenic substances is presented in Chapter 4. Note that the outcomes of carcinogenicity experiments are binary in nature, since the experimental unit either contracts cancer or remains cancer-free. In toxicology, methods for risk assessment of cancer endpoints differ from those for non-cancer endpoints.
In Chapter 5, we discuss models for developmental and reproductive toxicity risk assessment. These experiments are designed and conducted to assess the health risk for the offspring when the mother is exposed to a toxin during pregnancy. In these experiments, toxicologists observe whether or not the offspring suffers from any abnormality such as malformation or death. Consequently, except in cases where fetal weight is under consideration, the outcomes from developmental toxicity experiments are, for the most part, dichotomous in nature. Chapter 6 is devoted to describing the statistical models for risk assessment in continuous outcomes. Specifically in experiments designed to assess the effect of toxic substances on the brain and the nervous system, the outcomes are continuous in nature. Finally, in Chapter 7, we present statistical models for developmental neurotoxicity. These models are developed for assessing the risk and possible effect on the nervous system of the offspring when the mother is exposed to harmful substances during pregnancy.
It should be emphasized that, with the development of modern technology and the advent of the digital revolution, the protocols for some toxicity-testing experiments have been changed or modified. Thus, some of the models described in this book may no longer be in use several years from now. However, the mathematical elegance of these models alone makes them quite interesting to study. Moreover, statistical models for the more recent protocols have not yet been fully developed, and research in those areas is ongoing.
Berkson, J. (1944). Application of the logistic function to bioassay. Journal of the American Statistical Association, 39, 357-65.
Bliss, C. I. (1934). The method of probits. Science, 79, 38-9.
Bliss, C. I. (1935). The calculation of the dosage-mortality curve. Annals of Applied Biology, 22, 134-67.
Bliss, C. I. (1938). The determination of the dosage-mortality curve from small numbers. Quarterly Journal of Pharmacy and Pharmacology, 11, 192-216.
Box, G. E. P., Hunter, W. G., and Hunter, J. S. (1978). Statistics for experimenters: An introduction to design, data analysis, and model building. Wiley, New York.
Brown, K. G. and Hoel, D. G. (1983a). Multistage prediction of cancer in serially dosed animals with application to the ED01 study. Toxicological Sciences, 3, 470-77.
Brown, K. G. and Hoel, D. G. (1983b). Modeling time to tumor data: Analysis of the ED01 study. Toxicological Sciences, 3, 458-69.
Collett, D. (1991). Modelling binary data. Chapman and Hall, London.
Finney, D. J. (1947). Probit analysis, 1st edition. Cambridge University Press, Cambridge.
Gaylor, D. W. and Gold, L. S. (1995). Quick estimate of the regulatory virtually safe dose based on the maximum tolerated dose for rodent bioassays. Regulatory Toxicology and Pharmacology, 22, 57-63.
Hoel, D. G. (2018). Quantitative risk assessment in the 1970s: A personal remembrance. Dose-Response, 16, 1559325818803230. PMID: 30302069.
LeBeau, J. E. (1983). The role of the LD50 determination in drug safety evaluation. Regulatory Toxicology and Pharmacology, 3, 71-4.
McCullagh, P. and Nelder, J. (1989). Generalized linear models. Chapman and Hall, London.
National Research Council (1983). Risk assessment in the federal government: Managing the process. National Academy Press, Washington, DC.
Raies, A. B. and Bajic, V. B. (2016). In silico toxicology: Computational methods for the prediction of chemical toxicity. Wiley Interdisciplinary Reviews: Computational Molecular Sciences, 6, 147-72.
Razzaghi, M. and Gaylor, D. W. (1996). On the correlation coefficient between the TD50 and the MTD. Risk Analysis, 16, 107-13.
Staffa, J. A. and Mehlman, M. A., eds. (1979). Innovations in cancer risk assessment (ED01 study). Proceedings of a Symposium Sponsored by the National Center for Toxicological Research and the US Food and Drug Administration. Pathotox Publishers, Inc.
Zbinden, G. and Flury-Roversi, M. (1981). Significance of the LD50-test for the toxicological evaluation of chemical substances. Archives of Toxicology, 49, 99-103.