Thursday, September 22, 2011

Can Mathematics Cure Cancer?



“I will also ask for an appropriation of an extra $100 million to launch an intensive campaign to find a cure for cancer, and I will ask later for whatever additional funds can effectively be used. The time has come in America when the same kind of concentrated effort that split the atom and took man to the moon should be turned toward conquering this dread disease. Let us make a total national commitment to achieve this goal.”
President Richard M. Nixon (State of the Union Address, 1971)
This year, 2011, is the fortieth anniversary of the War on Cancer. President Richard Nixon signed the National Cancer Act on December 23, 1971, inaugurating the “War on Cancer.” Since 1971 the War on Cancer has consumed an astonishing $200 billion, with the annual budget of the National Cancer Institute alone now over $5 billion. This is comparable to the annual rate of expenditure of the Manhattan Project, which invented the first atomic bombs and nuclear reactors between 1939 and 1945 (most of the work and expenditure fell between 1942 and July 16, 1945, the date of the first atomic bomb test, known as Trinity), sustained for forty years. The Manhattan Project consumed about $20 billion in 2011 dollars, about $2 billion in 1940s dollars, so the War on Cancer is roughly comparable to ten Manhattan Projects. The results have clearly been very disappointing. Since cancer is one of the leading causes of death, almost everyone would like to see much more impressive results than those achieved so far.
The War on Cancer was inspired in part by the spectacular success of the wartime Manhattan Project, the subsequent development of the hydrogen bomb (1945-1952), and the then-recent spectacular success of the Apollo Program (1962-1969). This inspired not only the War on Cancer but many other “new Manhattan Projects” such as research into tokamaks and inertial confinement fusion devices for fusion power. Like the War on Cancer, most of these “new Manhattan Projects” have yielded disappointing results, certainly nothing on the scale of the Manhattan Project or the Apollo Program. As discussed in the previous article “The Manhattan Project Considered as a Fluke,” the Manhattan Project appears to have been a fluke, atypical of major inventions and discoveries, especially in the success of the first full system tests: the Trinity test explosion on July 16, 1945 and the atomic bombings of Hiroshima and Nagasaki. Theoretical mathematical calculations and primitive numerical simulations seem to have been unusually successful in the case of the Manhattan Project compared to other breakthroughs. The Manhattan Project was probably quite unusual among major inventions and discoveries in other ways as well, but this is less clear. In terms of funding levels and the serious life-and-death nature of the goal, the War on Cancer is one of the closest analogs to the Manhattan Project among the many “new Manhattan Projects” of the last forty years.
With the widespread availability of extremely powerful computers, there are increasing attempts to apply mathematics and computational methods to biology and to cancer. There is a burgeoning field of “quantitative biology,” which includes its own section on the popular arxiv.org electronic preprint server. In many respects, this is an attempt to replicate the apparent success of theoretical mathematical calculations and early computer simulations in the Manhattan Project (1939-1945), the development of the hydrogen bomb (1945-1952), and the Apollo Program (1962-1969).
This article discusses the application of mathematics to the cure of cancer and the possible use of systems of smart drugs that perform simple mathematical calculations to identify and kill cancer cells. It then presents a possible mechanism, developed by the author several years ago, to selectively destroy cells with abnormal numbers of chromosomes (aneuploidy), a condition common in many forms of cancer.
What does forty years of failure mean?
Such long periods of repeated failure are common in the history of scientific and technological breakthroughs. In most cases, this repeated failure has reflected either a lack of fundamental knowledge or an incorrect assumption or group of assumptions that was widely, even universally, held. While these two categories are not sharply defined and blur together, in general a lack of fundamental knowledge means that the state of knowledge was simply far too primitive to solve the problem. An example of this would be the failure of alchemists over thousands of years to transform base metals into gold or produce an elixir of life, the two major goals of both Western and Eastern alchemy. Today we can, with great difficulty and at great cost, convert base metals into gold, while the elixir of life remains a distant dream. It is clear in retrospect that the alchemical theory that metals were a composition of mercury and sulphur was grossly in error, as were many other concepts of alchemy. Nonetheless, alchemists made many significant technological advances, including methods for creating alloys similar to gold, gold and gold-colored coatings, and the discovery of a wide variety of useful materials. These successes, unappreciated today, probably gave the alchemists a false confidence in their theories and knowledge.
The blurriness of the two categories is illustrated by asking what might have happened if the alchemists had questioned and abandoned the mercury-sulphur theory, which in various forms was widely held for many centuries. Loosely, the mercury-sulphur theory of metals postulated that metals were composed of mercury and sulphur in varying proportions; gold being mostly mercury, for example. This theory is frequently attributed to the Islamic alchemist Jabir ibn Hayyan (born in about 721 in Tus, Iran, died in about 815 in Kufa, Iraq), also known as Geber in Latin. Based on current knowledge, the alchemists would have had to abandon the mercury-sulphur theory and either isolate a radioactive material such as uranium or invent batteries and other electrical technologies leading to particle accelerators to have had any hope of achieving their goal, both of which require performing very different experiments from the ones alchemists typically did. Batteries, in particular, could have been invented many centuries before they came into widespread use in the early nineteenth century.
In many cases, in retrospect, it is clear that this pronounced lack of progress in solving a scientific or technological problem was due to an assumption or group of assumptions that were incorrect and widely held. Indeed, often the assumption was something viewed as self-evident, something “everyone knows,” obvious, firmly established by extensive evidence, and so forth. Only in retrospect is it “obvious” that the assumption or assumptions were in error and not well supported by evidence, experience, or logic as most believed. Hence, one should ask whether some widely held, seemingly sensible belief or group of beliefs, supported by “overwhelming evidence” in modern scientific jargon, in biology is not, in fact, wrong.
In principle, the Internet and specific new technologies such as HTML or wikis should make it easier for researchers to collaborate and to list all assumptions, both stated and unstated, in a research field and their interdependencies, including links to all supporting raw data, experiments, and logical arguments: something like a Biology and Cancer Assumptions Wikipedia, but more rigorous than Wikipedia. Identifying and evaluating assumptions can be done much more systematically and thoroughly with hypertext, databases, and other Internet and computer software than was possible a few years ago using books, research papers, and conference presentations.
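To make this concrete, a single machine-readable entry in such an assumptions database might look something like the following sketch, written as a GNU Octave struct (the same language used for the simulations later in this article). All field names and the example entry are invented here purely for illustration.

% A hypothetical entry in a "Biology and Cancer Assumptions" database
% (field names and content are invented for illustration only).
entry.id         = "somatic_mutation_theory";
entry.statement  = "Cancer is caused by the accumulation of mutations in somatic cells.";
entry.depends_on = {"central_dogma"};                       % links to assumptions this one rests on
entry.evidence   = {"example_doi", "example_dataset_url"};  % links to supporting data and experiments
entry.status     = "widely held";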
Questioning assumptions, especially assumptions “everyone knows,” is a social and political process. In modern Big Science, certain usually foundational assumptions are closely associated with high status individuals and institutions and are routinely presented with few qualifications or even as proven fact beyond any rational questioning to the public, business leaders, and policy makers: in Scientific American articles, PBS/Nova video programs, congressional testimony, private informal discussions where decisions are often actually made, and so forth.
In modern scientific research, there is pervasive rhetoric about “questioning assumptions” and “thinking outside the box,” but this usually does not apply to the foundational assumptions mentioned above. This rhetoric usually refers to subsidiary assumptions, such as which protein to use in a biology lab experiment, and similar non-threatening technical minutiae. Indeed, many fields that have shown little or no practical results for decades, like cancer research, are periodically swept by fads and fashions in which subsidiary assumptions are replaced, modified, or added.
The discussion of mathematical approaches to curing cancer below generally assumes that modern biology has it right. A discussion of assumptions that might be in error or of unorthodox biological concepts is mostly beyond the scope of this article, although the theory that abnormal numbers of chromosomes might play a more important role in cancer than generally thought is discussed. This general acceptance of current assumptions in biology is an important qualification that readers should keep in mind. Historical experience with many “hard problems” in science and technology would, at least, suggest otherwise: that modern biology is “missing something,” as physicists like to say.
Current Mathematical Approaches
There are a number of current attempts to apply mathematics to the cure or treatment of cancer. Quite a number are attempts to use differential equations to model the growth and spread of cancers and their response to various treatments and the immune system, fairly similar to the work of Dr. Swanson and Professor Levy described in detail below.
A well known example is Dr. Kristin Swanson’s work at the University of Washington:
We specialize in the mathematical modeling of pathological biosystems – specifically, primary brain tumors known as gliomas. We are currently working on several collaborative projects utilizing both clinical and experimental imaging techniques such as MRI and PET.
Our focus is on: 1) predicting patient-specific tumor growth, 2) seeking patient-specific markers of tumor progression, and, 3) identifying predictors of response to therapy in individual patients.
In plain English, this is an attempt to predict the growth and spread of certain brain tumors using a mathematical model, usually differential equations, in order to carefully target the spreading cancer with radiation or other methods.
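To give a concrete flavor of this kind of model, below is a minimal one-dimensional proliferation-invasion sketch in GNU Octave (the language used for the author's simulations later in this article). This is an illustration of the general reaction-diffusion approach only, not Dr. Swanson's actual model, and the diffusion coefficient and proliferation rate are hypothetical values.

% Minimal 1-D reaction-diffusion sketch of tumor growth (illustrative only;
% not Dr. Swanson's model). c(x,t) is normalized tumor cell density:
%   dc/dt = D * d2c/dx2 + rho * c * (1 - c)
D   = 0.01;                      % hypothetical invasion (diffusion) coefficient, cm^2/day
rho = 0.1;                       % hypothetical proliferation rate, 1/day
dx  = 0.1;  x = 0:dx:10;         % 10 cm of tissue
dt  = 0.1;  nsteps = 600;        % 60 days (explicit scheme is stable: D*dt/dx^2 = 0.1)
c   = exp(-(x - 5).^2 / 0.01);   % small initial tumor at the center
for n = 1:nsteps
  % discrete Laplacian with zero-gradient (no-flux) boundaries
  lap = ([c(2:end), c(end)] - 2*c + [c(1), c(1:end-1)]) / dx^2;
  c   = c + dt * (D * lap + rho * c .* (1 - c));   % invasion + logistic proliferation
end
plot(x, c); xlabel("position (cm)"); ylabel("normalized tumor cell density");

Running the script shows the characteristic traveling wave of tumor cells invading outward from the initial lesion; fitting such a model to a patient's serial MRI scans is, roughly, what makes the predictions patient-specific.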
Another well known example is the work of Professor Doron Levy at the University of Maryland, College Park. From Professor Levy’s web site:
Together with Peter Lee (Hematology, Stanford University) and Peter Kim (University of Utah) we have been working on combining new experimental data and mathematical models to develop new methods for treating leukemia patients. The type of leukemia that we have extensively studied is chronic myelogenous leukemia (CML).
Our research emphasizes the role of the immune system in the progression of the disease. By now it is known that most patients have an anti-leukemia immune response. It remains a mystery as to why this immune response is incapable of providing a sufficient response to the disease.
Our main work in this field was published in the June 2008 issue of PLOS Computational Biology. In this paper we proposed vaccinating CML patients using their own blood in order to boost their anti-leukemia immune response. Using mathematical models we showed that the key issue is to time the cancer vaccine based on the dynamics of the immune response of the individual patient. A vaccination that is provided too early or too late in the process (i.e. after diagnosis and the initiation of drug-therapy) will have no noticeable effect. Our calculations suggest that such a procedure may ultimately be used to cure the disease. The work assumes that patients are treated with Gleevec (imatinib) starting from the diagnosis of the disease. A timed vaccine may allow them to stop the drug therapy.
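The flavor of this kind of analysis can be illustrated with a toy model, sketched below in GNU Octave. This is emphatically not the published model; the equations and every parameter are hypothetical stand-ins. Leukemia load L declines under drug therapy, an anti-leukemia immune population I is stimulated by leukemia antigen but downregulated at high leukemia loads, and a vaccine adds a pulse of immune cells at an adjustable time. One can then scan the vaccination time and compare outcomes.

% Toy leukemia/immune timing model (hypothetical equations and parameters;
% illustrates scanning the vaccination time, not the published model).
%   dL/dt = -e*L - k*I*L                        drug kill + immune kill
%   dI/dt = s*I*L/(1 + L + L^2/c2) - d*I        antigen-stimulated growth,
%                                               downregulated at high loads
e = 0.05;  k = 0.05;           % drug and immune kill rates (1/day)
s = 0.2;   d = 0.05;  c2 = 10; % immune stimulation, decay, downregulation scale
dt = 0.1;  t_end = 300;        % simulate 300 days of therapy
for t_vax = [20 100 250]       % candidate vaccination days
  L = 1000;  I = 0;            % initial leukemia load; no immune response yet
  for t = 0:dt:t_end
    if (abs(t - t_vax) < dt/2)
      I = I + 1;               % vaccine: a pulse of anti-leukemia immune cells
    end
    dL = -e*L - k*I*L;
    dI = s*I*L/(1 + L + L^2/c2) - d*I;
    L  = max(L + dt*dL, 0);
    I  = max(I + dt*dI, 0);
  end
  printf("vaccinate on day %3d: final leukemia load = %g\n", t_vax, L);
end

Even in this crude toy, the final leukemia burden depends strongly on when the pulse is delivered: too early and the immune cells are suppressed by the high leukemia load, too late and there is too little antigen left to sustain them, echoing the timing effect described in the quotation above.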
There are many other attempts to apply similar mathematics to the cure or treatment of various cancers including work by Larry Norton at Memorial Sloan-Kettering Cancer Center, Lisette de Pillis at Harvey Mudd College, Vito Quaranta and Alexander “Sandy” Anderson at the Vanderbilt Integrative Cancer Biology Center and the University of Dundee in Scotland, Franziska Michor at Memorial Sloan-Kettering Cancer Center, Sofia Merajver at the University of Michigan, Paul Macklin at the University of Southern California, and many others.  An Internet search for “Mathematics and Cancer” will turn up many matches of this type.
The author is not too optimistic about these approaches, although they certainly have merit. One would still prefer something that could identify and either safely destroy or somehow render harmless the actual cancer cells. Can mathematics do this, or assist in doing this?
Smart Systems of Drugs
The current prevailing theory of cancer is the oncogene or “cancer gene” theory. This is viewed as a proven fact by many molecular biologists. The current (2011) Director of the National Cancer Institute, Dr. Harold Varmus, shared the Nobel Prize in Medicine with J. Michael Bishop for early work on this theory.
Cancer is now said to be hundreds, even thousands of different diseases. While a medical doctor or pathologist may identify something as “breast cancer” or “skin cancer” or a similar general category, at a molecular and genetic level, “breast cancer” is actually many different diseases. It is thought that cancer is caused by the accumulation of many mutations of many different oncogenes and tumor suppressor genes that control complex networks of proteins that direct the growth, functioning, and differentiation of cells. In biology, differentiation refers to the process by which cells “differentiate” during growth into various specialized types of cells such as neurons in the brain, blood cells, and so forth with different specific properties and functions.
One type of breast cancer may have genes A, B, C, and D mutated while another has genes W, X, Y, and Z mutated. Not only this, but the cancers are thought to be continually mutating and evolving in the body, developing immunity to chemotherapy drugs, for example. Thus, there does not seem to be a common molecular target that an anti-cancer drug can attack in the way that penicillin or other antibiotics can kill a wide range of different bacteria. As a result, the latest favored concept in cancer research and treatment has been “personalized” targeted drugs. If one can determine that patient X has genes A, B, C, and D mutated, then in principle one can select or produce a drug specific to this particular type of cancer with A, B, C, and D mutated. This, of course, requires developing not one or a few anti-cancer drugs, but hundreds or even thousands of anti-cancer drugs, possibly further personalized for each patient. See, for example, this recent article (“Targeted drugs: the future of cancer treatment?”, The Huffington Post, June 9, 2011).
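In computational terms, selecting a personalized drug is conceptually just a lookup from a patient's mutation profile to a matching drug, as in the short GNU Octave sketch below; the hard part is that every distinct profile needs its own entry and its own drug. The gene and drug names here are invented placeholders.

% Hypothetical mutation-profile to targeted-drug lookup (gene and drug
% names are invented placeholders).
profiles = { {"A", "B", "C", "D"}, "targeted_drug_1";
             {"W", "X", "Y", "Z"}, "targeted_drug_2" };
patient  = {"W", "X", "Y", "Z"};               % this patient's mutated genes
for i = 1:rows(profiles)
  if (isempty(setxor(profiles{i,1}, patient))) % exact profile match
    printf("profile %d matches: select %s\n", i, profiles{i,2});
  end
end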
Personalized cancer treatment has suffered a recent high-profile black eye with the retraction of several papers from a research group at Duke University and a front-page article in the Friday, July 8, 2011 New York Times (“How Bright Promise in Cancer Testing Fell Apart”, by Gina Kolata, The New York Times, July 8, 2011, page A1). Cancer research has long been characterized by a series of research and treatment fads, with one wonder drug or treatment after another heavily touted for a time, followed by disappointment, and then replaced by a new wonder drug: interferon, interleukin-2, and many others in the last forty years.
Nonetheless, medical doctors and pathologists going back to Hippocrates seem to have been able to identify a single disease as cancer long before modern genetic methods. It may be that there are system-level features of cancer cells that do identify them as cancer cells. Traditional chemotherapy drugs were designed to kill dividing cells on the theory that cancer cells divide rapidly. However, healthy cells divide as well, and traditional chemotherapy has very limited benefits, if any. Only surgical removal of a tumor before it spreads (becomes metastatic in cancer jargon) appears to be able to cure cancer using the common-sense definition of “cure”. While targeting cell division largely does not work, targeting other system-level characteristics of cancer may work.
Many readers are probably familiar with the concept of nanotechnology and nanorobots, usually associated with K. Eric Drexler. One can envision tiny robots, nanorobots, that enter the blood stream, analyze each cell in turn, and selectively kill the cancer cells. The fabrication of such nanorobots is far beyond our current or near future technology. We are nowhere near implementing a computer central processing unit (CPU) or a robot at a molecular level. Even if we could, we do not know how to program a nanorobot to recognize a cancer cell and distinguish it from a normal healthy cell.
What we might be able to do, with great difficulty, is produce a small system of interacting drugs/molecules that perform some mathematical calculation in the cell and selectively kill cells identified as cancer cells or probable cancer cells while leaving the normal healthy cells alone. It is here that mathematics may be of use. To achieve success in the near future, the simpler the mathematics the better. Even engineering a single molecule such as genetically engineered insulin for diabetics is a daunting task at present. So a system of even a few molecules would be a substantial and difficult undertaking.
There are some current attempts to pursue this approach, notably the Cure Cancer Project associated with Dr. Arnold Glazier, Dr. Emil Frei, and others. The Cure Cancer Project accepts that the current main assumptions of biology and cancer research are correct. The notion is to identify an unchanging property of cancer cells that can be targeted by a system of smart drugs. In specific terms, this seems to refer to targeting certain patterns among proteins that are thought to be associated with certain general properties, proliferation and invasiveness, of malignant cancer cells. Dr. Glazier has written a book and made some presentations on his ideas.
The author’s educated guess is that an approach based on a system of drugs, perhaps along the lines of the Cure Cancer Project, is the most likely approach to produce an effective cure or treatment for most common forms of cancer in the near future.  (This discussion is for educational and informational purposes only and is not an endorsement of the Cure Cancer Project.) An important caveat is that such approaches generally assume that current biological assumptions and theories such as the somatic mutation theory of cancer, the oncogene theory (a specific instance of the somatic mutation theory) and the Central Dogma of Molecular Biology (DNA is the boss) are correct, making a cure for cancer “just engineering.” Next, this article presents a possible method to destroy cells with abnormal numbers of chromosomes, a condition known as aneuploidy which is common in many cancers.
The Selective Destruction of Cells with Abnormal Numbers of Chromosomes
One common characteristic of many cancers is an abnormal number of chromosomes, known as aneuploidy. This is often an excess number of chromosomes. A normal healthy human cell has forty-six (46) chromosomes. Cancer cells often have more than forty-six chromosomes. This was discovered long before the modern genetic era. One historical theory, now out of favor, is that the abnormal number of chromosomes causes cancer. This theory is usually credited to the German biologist Theodor Boveri. The most prominent modern advocate of the role of aneuploidy and chromosomes in cancer is the extremely controversial researcher Peter Duesberg who has published some articles on his theories in cancer research journals and a popular article in Scientific American in 2007 (“Chromosomal Chaos and Cancer”, Scientific American, May, 2007).  A number of other researchers such as Angelika Amon at MIT have been investigating the role of chromosomes and aneuploidy in cancer in recent years; references are given below.  The abnormal number of chromosomes or the other chromosomal anomalies often seen in a wide range of cancers may be a system-level characteristic of cancer that could be targeted despite the extreme variation in gene-level mutations (part-level characteristics of cancer).
Even though there are over one million research papers on cancer, it is difficult to get a clear picture of the role of aneuploidy in cancer. Most modern cancer research is conducted within the framework of the oncogene theory and an implicit assumption that the way to cure or treat cancer is to target either a protein generated by a cancer gene or the gene directly. Chromosomal anomalies, both abnormal numbers of chromosomes and the rearrangements of chromosomes that are common in many cancers, are usually discussed as an aside to the putative cancer genes: this translocation of chromosome X mutated the key cancer gene ABC, or the duplication of chromosome X resulted in two copies of the key cancer gene ABC.
It could be that killing cancer cells with the wrong number of chromosomes would have no effect on the disease. It would simply result in a cancer with the correct number of chromosomes in the surviving cancer cells. It could slow the disease if the abnormal number of chromosomes is related to the malignancy of the cancer cells. In the best case, it might cure the disease, if the abnormal number of chromosomes is either the cause of cancer or essential in some way to the malignant characteristics of the cancer cells.
It may be possible to kill cells with an abnormal number of chromosomes using a system of five molecules: a harmless precursor A, a source catalyst S, a cell killer B, a drain catalyst D, and a neutralized cell killer C that the cell can safely digest or excrete.
The source catalyst S is inactive until it bonds to a numerical or quantitative feature on the chromosomes such as the telomeres at the ends of the chromosomes or the centromeres at the center. It becomes an active catalyst S* when it bonds to the chromosomes. The activated catalyst S* then catalyzes the conversion of the harmless precursor A into the cell killer B. The activated catalyst S* has a maximum throughput. If the concentration of the precursor A is high enough in the cells, the catalyst S* will add the cell killer to the cell at a rate proportional to the number of chromosomes in the cell.
The cell killer B is relatively harmless in low concentrations. It needs to build up to a high level to kill the cell. So far, this will happen in all cells. However, if there is a drain catalyst D that bonds to a numerical feature in the cell that is the same in both normal cells and abnormal cells (cancer cells) and becomes an active drain catalyst D* that removes the cell killer B by converting it to the neutralized cell killer C, then the concentration of B can be engineered to rise to lethal levels only in cells with too many chromosomes.
A ==(S*)==> B     (harmless precursor A converted to cell killer B by the activated source catalyst S*)
B ==(D*)==> C     (cell killer B converted to neutralized form C by the activated drain catalyst D*)
This system of drugs is like a bathtub with several running faucets, one for each chromosome, and a single drain. If there are too many faucets (chromosomes), the water level (the concentration of the cell killer B) will rise and overflow the bathtub. If there is the right number of faucets (forty-six) or too few (fewer than forty-six), the drain can remove the water as fast as it is added and the water level never rises. The water level remains almost zero; the concentration of the cell killer B is far too low to harm the cell.
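A minimal simulation of this bathtub mechanism is sketched below in GNU Octave (the same language as the author's simulation script at the end of this article). All rate constants are hypothetical, chosen only to exhibit the threshold behavior: production of the cell killer B is proportional to the chromosome count, while the single drain has a fixed maximum throughput matched to forty-six chromosomes.

% Minimal sketch of the bathtub mechanism (hypothetical rate constants).
s      = 1.0;       % throughput of one activated source catalyst S* (one per chromosome)
d_max  = 46 * s;    % drain capacity, matched to the normal chromosome count
kd     = 50;        % drain rate constant below saturation
lethal = 20;        % concentration of B at which the cell dies
dt = 0.01;  t_end = 100;
for n_chrom = [45 46 47 70]     % too few, normal, and two aneuploid cells
  B = 0;                        % concentration of cell killer B
  for t = 0:dt:t_end
    production = n_chrom * s;            % one "faucet" per chromosome
    drainage   = min(kd * B, d_max);     % a single drain with fixed capacity
    B = max(B + dt * (production - drainage), 0);
  end
  if (B >= lethal)
    verdict = "cell dies";
  else
    verdict = "cell survives";
  end
  printf("%2d chromosomes: final [B] = %8.2f (%s)\n", n_chrom, B, verdict);
end

Only the cells with more than forty-six chromosomes accumulate B past the lethal threshold; cells with forty-six or fewer settle at a nearly zero concentration, just as in the bathtub analogy.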
One can kill cells with too few chromosomes (fewer than forty-six) by swapping the roles of the drain and the source. The drain catalyst bonds to the chromosomes, while the source catalyst bonds to the constant numerical feature of the cells. Thus, if there are too few chromosomes, there are not enough activated drains to remove the cell killer B produced by the source catalyst. The bathtub now has one big faucet and many small drains, one for each chromosome.
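In the sketch above, this swapped version amounts to exchanging which count sets the faucets and which sets the drain, again with hypothetical constants:

% Swapped variant: the source bonds to the constant cell feature and the
% drains bond to the chromosomes, so cells with too FEW chromosomes die.
production = 46 * s;                     % one big faucet of fixed size
drainage   = min(kd * B, n_chrom * s);   % one small drain per chromosome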
In principle, one could eliminate all cells with either too many or too few chromosomes by first treating the patient with a system of drugs that kills cells with too many chromosomes and then with a system of drugs that kills cells with too few chromosomes. Cancer cells are frequently reported to have too many chromosomes, but cells with too few are also sometimes reported.
A computational system of this type would now (2011) be easy to implement using mechanical components like the gears and springs used in traditional mechanical clocks, vacuum tubes and other traditional analog electronic components, or an integrated circuit. The problem is that, as simple as such a computational system is, it is extremely challenging to implement using our current ability to engineer proteins and molecular biological systems in the cell.
This video shows the buildup of cell killer B in a cell with seven (7) chromosomes where the normal number of chromosomes is five (5). A smaller number of chromosomes than forty-six is used for demonstration purposes. The cell membrane is represented by a sphere, which begins to distort when the cell killer concentration reaches the lethal level. The cell killer molecules are indicated by small green spheres that turn red when the lethal concentration is reached. The membrane disintegrates, killing the cell, and the cell killer disperses.
[Video: cancer_07]
This graph shows the concentration of the cell killer B as a function of time:
[Graph: cell killer B concentration vs. time]

This video shows the lack of accumulation of cell killer B in a cell with five (5) chromosomes where the normal number of chromosomes is five (5).
[Video: cancer_05]
This video shows the accumulation of cell killer B in a cell with three (3) chromosomes where the normal number of chromosomes is five (5). The sources and drains have been swapped as discussed above to kill cells with too few chromosomes.
[Video: cancer_03]
This is the Octave script that simulates the bathtub mechanism and was used to make the videos above. The script runs successfully on a PC with Windows XP (Service Pack 2) using Octave 3.2.4. GNU Octave is a free, open-source, high-level interpreted language, primarily intended for numerical computations, that is mostly compatible with MATLAB.