Blog

Science is best when verified by technology

An essay based on the article: Sarewitz, D. (2016). "Saving Science." The New Atlantis.

Illustration by Mauro Rebelo and Julia Back

During the COVID-19 pandemic, science has once again brought the solution to a health, humanitarian, social and economic crisis. Scientists distributed across universities, research institutes, startups, and companies large and small improved diagnostic methods, tested drugs, and developed, produced and tested, in record time, the vaccines that will soon bring the pandemic to an end.

With billions of doses administered, it is fair to say that society was impressed by the result of the vaccine development effort. Yet part of society legitimately wonders: why, for so many other problems, many of them older, better documented and less complex, is science unable to offer a solution?

Scientific funding models

Looking at the recent advancement of science, the COVID-19 vaccine seems more like the exception than the rule. John Horgan, in his 1996 book ‘The End of Science’, suggests that scientific activity obeys a curve of diminishing returns and discusses the idea of limits to knowledge. But the reason for so few discoveries could be a different one: the science funding model itself, which does not encourage problem solving.

The idea was first presented by Donald Stokes in his 1997 book ‘Pasteur’s Quadrant’. More than 20 years later, little or nothing has changed in the way science funding is done. We continue to follow the model proposed by Vannevar Bush in 1945 for the creation of the National Science Foundation (NSF) in the United States, which served as a template for funding agencies in practically the whole world. This methodology is so ingrained that even mounting evidence of its inadequacy, such as that reported in John Ioannidis’ 2005 article ‘Why Most Published Research Findings Are False’, has not been enough to bring about change.

The article ‘Saving Science’ by Daniel Sarewitz, published in 2016, once again makes a strong criticism of the model developed by Vannevar Bush. More than other authors, however, Sarewitz describes an alternative model of science funding that coexisted with the NSF model and that stands out not for its precedence but for its effectiveness in obtaining results and its efficiency in managing resources. We analyzed the article and highlighted lessons we can apply to solving economic and social problems through science and technological development.

To whom is science accountable?

Daniel Sarewitz’s article in The New Atlantis begins bluntly:

Science, pride of modernity, our one source of objective knowledge, is in deep trouble. Stoked by fifty years of growing public investments, scientists are more productive than ever, pouring out millions of articles in thousands of journals covering an ever-expanding array of fields and phenomena. But much of this supposed knowledge is turning out to be contestable, unreliable, unusable, or flat-out wrong. From metastatic cancer to climate change to economic growth to dietary standards, science that is supposed to yield clarity and solutions is in many instances leading instead to contradiction, controversy, and confusion.

Sarewitz locates the root of the problem in a prominent premise of the famous report ‘Science, the Endless Frontier’, published by the illustrious Vannevar Bush in 1945:

Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.

The nobility of the premise and the reputation of its author made it unquestionable. Bush was an engineer with a clear vision of the importance of science to the welfare of society and to national security. He was the first presidential science adviser, coordinated the efforts of over 6,000 scientists during World War II, and initiated the Manhattan Project, which developed and built the atomic bombs, ensuring that it received top government priority.

I, like many, was deeply impressed when I first read his article “As We May Think,” which describes the ‘Memex’ machine and the associative leaps by which we learn, ideas that influenced the creation of hypertext.

And his assumption sounded like plain common sense.

As the war drew to a close, Bush envisioned transitioning American science to a new era of peace […] Pursuing “research in the purest realms of science,” scientists would build the foundation for “new products and new processes” to deliver health, full employment, and military security to the nation.

However, in his eagerness to free science from the restrictive control of the military, Bush created a system in which scientists were accountable to no one but themselves:

Politicians delivered funding to scientists, but only scientists could evaluate the research they were doing. Outside efforts to guide the course of science would only interfere with its free and unpredictable advance.

The history of science is full of important discoveries, such as x-rays and penicillin, which reinforce this fundamental role of serendipity.

And, at first glance, the investment in science seems to have largely paid off for society:

When Bush wrote his report, nothing made by humans was orbiting the earth; software didn’t exist; smallpox still did.

More recently, we can add to this list the detection of gravitational waves, the sequencing of the human genome, and the internet.

But Sarewitz suggests another explanation, one that is less romantic but more robust, well documented and objective: the driving of scientific discovery by the technological demands of the US Department of Defense (DoD).

While Vannevar Bush shaped the NSF in the mold of free intellects exploring the unknown by pursuing their curiosity, the DoD did not fail to fund and encourage science in its own way, through what came to be known as the ‘military-industrial complex’. Its logic was that cost mattered less than the objective: ensuring that US military technology was the best in the world.

Often the DoD fostered innovation not through non-reimbursable grants, as foundations and funding agencies such as the NSF do, but as a customer, a beta customer willing to pay a great deal for prototypes and minimum viable products (MVPs) with limited functionality and low efficiency. This shielded the bold innovations the military needed from the ‘market’ rationale that would have doomed most radical and extremely expensive projects:

For example, the first digital computer—built in the mid-1940s to calculate the trajectories of artillery shells and used to design the first hydrogen bomb—cost about $500,000 (around $4.7 million today), operated billions of times more slowly than modern computers, took up the space of a small bus, and had no immediate commercial application. […] The earliest jet engines, back in the 1940s, needed to be overhauled about every hundred hours and were forty-five times less fuel-efficient than piston engines. [But] military planners knew that jet power promised combat performance greatly superior to planes powered by piston engines. For decades the Air Force and Navy funded research and development in the aircraft industry to continually drive improvement of jet engines.

For Sarewitz, Americans, though not only Americans, idolize the stereotype of the ‘head in the clouds’ Einstein-like scientist and of garage entrepreneurs like Steve Jobs or Bill Gates, but the inconvenient truth is that much of today’s technology exists because of military investment and the direction the military gave to science:

Science has been important for technological development, of course. Scientists have discovered and probed phenomena that have turned out to have enormously broad technological applications. But the miracles of modernity in the above list came not from “the free play of free intellects,” but from the leashing of scientific creativity to the technological needs of the US Department of Defense (DOD).

In Brazil, the health-industrial complex, proposed by Carlos Gadelha of Fiocruz under José Gomes Temporão at the Ministry of Health, followed a similar model. With the Product Development Partnerships (PDPs), the government financed innovation as a client and not as a development agency. A dozen biological medicines had their production cycles mastered by Brazilian startups, such as Hygea, which would never have obtained private capital for this development had they not held future purchase contracts signed by the Ministry of Health. It is regrettable that the PDPs are suspended.

The technological counter-proof

But it was not discipline, money, or military motivation that guaranteed the success of scientific investigation. It was the counter-proof provided by technological application. Sarewitz suggests that technology served as a way of measuring the progress (or efficiency, or effectiveness) of science:

Science has been such a wildly successful endeavor over the past two hundred years in large part because technology blazed a path for it to follow. Not only have new technologies created new worlds, new phenomena, and new questions for science to explore, but technological performance has provided a continuous, unambiguous demonstration of the validity of the science being done. The electronics industry and semiconductor physics progressed hand-in-hand not because scientists, working “in the manner dictated by their curiosity for exploration of the unknown,” kept lobbing new discoveries over the lab walls that then allowed transistor technology to advance.

And he goes further: without technology, there is no way to measure the advancement of science:

Technology is what links science to human experience; it is what makes science real for us. A light switch, a jet aircraft, or a measles vaccine, these are cause-and-effect machines that turn phenomena that can be described by science—the flow of electrons, the movement of air molecules, the stimulation of antibodies—into reliable outcomes: the light goes on, the jet flies, the child becomes immune. The scientific phenomena must be real or the technologies would not work.

In fact, without technology to measure the progress of science, science is left adrift, at the whim of researchers:

The professional incentives for academic scientists to assert their elite status are perverse and crazy, and promotion and tenure decisions focus above all on how many research dollars you bring in, how many articles you get published, and how often those articles are cited in other articles. […] Universities—competing desperately for top faculty, the best graduate students, and government research funds—hype for the news media the results coming out of their laboratories, encouraging a culture in which every scientist claims to be doing path-breaking work that will solve some urgent social problem. […] The scientific publishing industry does not exist to disseminate valuable information but to allow the ever-increasing number of researchers to publish more papers—now on the order of a couple million peer-reviewed articles per year—so that they can advance professionally. […] Bias is an inescapable attribute of human intellectual endeavor, and it creeps into science in many different ways, from bad statistical practices to poor experimental or model design to mere wishful thinking. If biases are random then they should more or less balance each other out through multiple studies. But as numerous close observers of the scientific literature have shown, there are powerful sources of bias that push in one direction: come up with a positive result, show something new, different, eye-catching, transformational, something that announces you as part of the elite. […] A survey of more than 1,500 scientists published by Nature in May 2016 shows that 80 percent or more believe that scientific practice is being undermined by such factors as “selective reporting” of data, publication pressure, poor statistical analysis, insufficient attention to replication, and inadequate peer review.

The consequence of this is poor quality science:

The number of retracted scientific publications rose tenfold during the first decade of this century, […] poor quality, unreliable, useless, or invalid science may in fact be the norm in some fields, and the number of scientifically suspect or worthless publications may well be counted in the hundreds of thousands annually. […] Richard Horton, editor-in-chief of The Lancet, puts it like this: “The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.” […] an economic analysis published in June 2015 estimates that $28 billion per year is wasted on biomedical research that is unreproducible. Science is not self-correcting; it’s self-destructing.

In fact, not all phenomena are amenable to scientific conclusions in the same way. Some are more context-dependent than others, and this can affect an experiment so much that it becomes impossible to run with proper controls. In these cases, it is very important to restrict the context of the phenomenon being observed or tested as much as possible, so that we can legitimately conclude something. This touches on what the physicist Alvin Weinberg, in a 1972 paper (‘Science and Trans-Science’), called trans-science:

Weinberg observed that society would increasingly be calling upon science to understand and address the complex problems of modernity—many of which, of course, could be traced back to science and technology. But he accompanied this recognition with a much deeper and more powerful insight: that such problems “hang on the answers to questions that can be asked of science and yet which cannot be answered by science.” He called research into such questions “trans-science.” If traditional sciences aim for precise and reliable knowledge about natural phenomena, trans-science pursues realities that are contingent or in flux. The objects and phenomena studied by trans-science—populations, economies, engineered systems—depend on many different things, including the particular conditions under which they are studied at a given time and place, and the choices that researchers make about how to define and study them. This means that the objects and phenomena studied by trans-science are never absolute but instead are variable, imprecise, uncertain—and thus always potentially subject to interpretation and debate. By contrast, Weinberg argues, natural sciences such as physics and chemistry study objects that can be characterized by a small number of measurable variables. […] This combination of predictable behavior and invariant fundamental attributes is what makes the physical sciences so valuable in contributing to technological advance—the electron, the photon, the chemical reaction, the crystalline structure, when confined to the controlled environment of the laboratory or the engineered design of a technology, behaves as it is supposed to behave pretty much all the time.

He fears that the predictive power that science has for some disciplines may simply not exist for others:

But many other branches of science study things that cannot be predictably characterized and that may not behave even under controlled conditions—things like a cell or a brain, or a particular site in the brain, or a tumor, or a psychological condition. Or a species of bird. Or a toxic waste dump. Or the classroom. Or “the economy.” Or the earth’s climate. Such things may differ from one day to the next, from one place or one person to another. Their behavior cannot be described and predicted by the sorts of general laws that physicists and chemists call upon, since their characteristics are not invariable but rather depend on the context in which they are studied and the way they are defined. Of course scientists work hard to come up with useful ways to characterize the things they study, like using the notion of a species to classify biologically distinct entities, or GDP to define the scale of a nation’s economy, or IQ to measure a person’s intelligence, or biodiversity to assess the health of an ecosystem, or global average atmospheric temperature to assess climate change. Or they use statistics to characterize the behavior of a heterogeneous class of things, for example the rate of accidents of drivers of a certain age, or the incidence of a certain kind of cancer in people with a certain occupation, or the likelihood of a certain type of tumor to metastasize in a mouse or a person. But these ways of naming and describing objects and phenomena always come with a cost—the cost of being at best only an approximation of the complex reality. Thus scientists can breed a strain of mouse that tends to display loss of cognitive function with aging, and the similarities between different mice of that strain may approximate the kind of homogeneity possessed by the objects studied by physics and chemistry. This makes the mouse a useful subject for research. But we must bear the cost of that usefulness: the connection between the phenomena studied in that mouse strain and the more complex phenomena of human diseases, such as Alzheimer’s, is tenuous—or even, as Susan Fitzpatrick worries, nonexistent.

Weinberg’s solution did not seem viable even to him: that scientists would exercise a selfless honesty, recognizing the limits of their research and of their conclusions.

To ensure that science does not become completely infected with bias and personal opinion, Weinberg recognized that it would be essential for scientists to “establish what the limits of scientific fact really are, where science ends and trans-science begins.” But doing so would require “the kind of selfless honesty which a scientist or engineer with a position or status to maintain finds hard to exercise.” Moreover, this is “not at all easy since experts will often disagree as to the extent and reliability of their expertise.”

That is why technology needs to regain the role of validating science, so that we do not have to rely on the selfless honesty of scientists.

[If you funded] scientists and left them alone to do their work, [you’d] end up with a lot of useless knowledge and a lot of unsolved problems.

The current dominant paradigm will continue to crumble under the weight of its own contradictions, but it will also continue to hog most of the resources and insist on its elevated social and political status.

In the absence of a technological application that can select for useful truths that work in the real world of light switches, vaccines, and aircraft, there is often no “right” way to discriminate among or organize the mass of truths scientists create.

“Have no constituency in the research community, have it only in the end-user community.” If your constituency is society, not scientists, then the choice of what data and knowledge you need has to be informed by the real-world context of the problem to be solved.

The innovation-industrial complex

My reading of the article is that science needs to be more entrepreneurial and to operate through mechanisms closer to those of entrepreneurship: fall in love with the problem, not with its solution or with your field of action:

In the future, the most valuable science institutions will be closely linked to the people and places whose urgent problems need to be solved; they will cultivate strong lines of accountability to those for whom solutions are important; they will encourage scientists to care about the problems more than the production of knowledge. They will link research agendas to the quest for improved solutions—often technological ones—rather than understanding for their own sake. The science they produce will be of higher quality, because it will have to be.

It is necessary to create new incentives, an innovation-industrial complex in which the parties are encouraged to solve society’s problems and are accountable for the resources invested in solving them, not to themselves, but at least to each other. Brazil has everything it needs to launch, or take off with, this new model. We have incentive laws and a strong industry that provides resources for research and development. With the end of the freeze on FNDCT resources, we should no longer have funding problems for research and development. We have a solid base of scientists and good research infrastructure, albeit with many procurement and supply-management issues. Perhaps our biggest problem is mistrust, but we learned from Bitcoin that, with well-distributed incentives, we can transact without having to trust each other.