Historical Context: Computational Toxicology
Computational toxicology has its early roots in the combinatorial chemistry of the 1980s, in which rapid synthesis, or computer simulation, of large numbers of different but structurally related molecules or materials (assembled from building blocks) generated large compound libraries for initial screening of potential hits against molecular targets. Highly parallel or split-pool chemical synthesis made it possible to produce thousands to millions of compounds. Initially, thousands of compounds were present in mixtures (liquid or solid state), and deconvolution of the mixtures was accomplished through structural similarity categories and rank-order elimination algorithms based on targeted screening of structural analogs.

The key lessons from attempts to decipher potential safety concerns in large sets of structural data were that analog identification and categorization were crucial for unknowns, and that structural features were related to chemical-biological effects. Early on, structure-activity relationships (SAR) and quantitative structure-activity relationships (QSAR) were found to be useful for filling data gaps, and were particularly useful for rank ordering individual compounds in a series to begin selecting potential development leads. It was also generally recognized that there is a large difference between rank ordering compounds and actually predicting endpoints, because of the limited chemical space covered by computational models.

It was also determined that proper weighting of endpoint criteria was essential. These weighting criteria for toxicity later became the "filters" used to make decisions on potential drug candidates from analog series, for instance projected electrophilic metabolites as indicators of potential toxicity, and key physicochemical properties that predicted the absorption, distribution, metabolism, and excretion (ADME) behavior of chemicals.
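A minimal sketch of the kind of physicochemical filter described above, using the thresholds of Lipinski's rule of five (molecular weight ≤ 500, logP ≤ 5, H-bond donors ≤ 5, H-bond acceptors ≤ 10) as a stand-in for the proprietary filters of the era. The compound names and property values are hypothetical, chosen only to illustrate triaging an analog series:

```python
# Hypothetical analog series: name, molecular weight, logP,
# H-bond donor count, H-bond acceptor count.
COMPOUNDS = [
    {"name": "analog-1", "mw": 342.4, "logp": 2.1, "hbd": 2, "hba": 5},
    {"name": "analog-2", "mw": 612.7, "logp": 6.3, "hbd": 4, "hba": 9},
    {"name": "analog-3", "mw": 488.5, "logp": 4.8, "hbd": 1, "hba": 7},
]

def rule_of_five_violations(c):
    """Count Lipinski rule-of-five violations for one compound record."""
    return sum([
        c["mw"] > 500,    # molecular weight over 500 Da
        c["logp"] > 5,    # logP over 5
        c["hbd"] > 5,     # more than 5 H-bond donors
        c["hba"] > 10,    # more than 10 H-bond acceptors
    ])

def triage(compounds):
    """Rank compounds by violation count, fewest first -- a crude filter
    for ordering an analog series before lead selection."""
    return sorted(compounds, key=rule_of_five_violations)

if __name__ == "__main__":
    for c in triage(COMPOUNDS):
        print(c["name"], rule_of_five_violations(c))
```

As the text notes, such filters rank compounds within a series rather than predict endpoints: a compound with zero violations is merely a better-behaved candidate, not a predicted-safe one.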