Science as a Way of Knowing: Critical Thinking about the Environment

What Science Is—and What It Isn’t

Thinking about the environment is as old as our first human ancestors. Before humans developed the technology to deal with their environment, their very survival depended on knowledge of it. The environment also plays a crucial role in the development of each of us; normal human development does not occur in the absence of environmental stimuli.

 

However, thinking scientifically about the environment is only as old as science itself. Science had its roots in the ancient civilizations of Babylonia and Egypt, where observations of the environment were carried out primarily for practical reasons, such as planting crops, or for religious reasons, such as using the positions of the planets and stars to predict human events. Ancient precursors of science differed from modern science in that they did not distinguish between science and technology, nor between science and religion.

 

These distinctions first appeared in classical Greek science. Because of their general interest in ideas, the Greeks developed a more theoretical approach to science, in which knowledge for its own sake became the primary goal. At the same time, their philosophical approach began to move science away from religion and toward philosophy. Modern science is usually considered to have begun toward the end of the 16th and the beginning of the 17th centuries with the development of the scientific method by Gilbert (magnets), Galileo (physics of motion), and Harvey (circulation of blood). Earlier classical scientists had asked “Why?” in the sense of “For what purpose?” But these three made important discoveries by asking “How?” in the sense of “How does it work?” Galileo also pioneered in the use of numerical observations and mathematical models. The scientific method, which quickly proved very successful in advancing knowledge, was first described explicitly by Francis Bacon in 1620. Although not a practicing scientist himself, Bacon recognized the importance of the scientific method, and his writings did much to promote scientific research.

 

Our cultural heritage, therefore, gives us two ways of thinking about the environment: the kind of thinking we do in everyday life and the kind of thinking scientists try to do. There are crucial differences between these two ways of thinking, and ignoring these differences can lead to invalid conclusions and serious errors in making critical decisions about the environment. We can look at the world from many points of view, including religious, aesthetic, and moral. They are not science, however, because they are based ultimately on faith, beliefs, and cultural and personal choices, and are not open to disproof in the scientific sense. The distinction between a scientific statement and a nonscientific statement is not a value judgment—there is no implication that science is the only “good” kind of knowledge. The distinction is simply a philosophical one about kinds of knowledge and logic. Each way of viewing the world gives us a different way of perceiving and of making sense of our world, and each is valuable to us.

 

Science as a Way of Knowing

Science is a process, a way of knowing. It results in conclusions, generalizations, and sometimes scientific theories and even scientific laws. Science begins with questions arising from curiosity about the natural world, such as: How many birds nest at Mono Lake? What species of algae live in the lake? Under what conditions do they live? Modern science does not deal with things that cannot be tested by observation, such as the ultimate purpose of life or the existence of a supernatural being. Science also does not deal with questions that involve values, such as standards of beauty or issues of good and evil—for example, whether the scenery at Mono Lake is beautiful. On the other hand, the statement that “more than 50% of the people who visit Mono Lake find the scenery beautiful” is a hypothesis that can be tested by public-opinion surveys and can be treated as a scientific statement if the surveys confirm it.

 

Science is a process of discovery—a continuing process whose essence is change in ideas. The fact that scientific ideas change is frustrating. Why can’t scientists agree on what is the best diet for people? Why is a chemical considered dangerous in the environment for a while and then determined not to be? Why do scientists in one decade consider forest fires undesirable disturbances and in a later decade decide forest fires are natural and in fact important? Are we causing global warming or not? Can’t scientists just find out the truth, give us the final word on all these questions once and for all, and agree on it?

 

The answer is no—because science is a continuing adventure during which scientists make better and better approximations of how the world works. Sometimes changes in ideas are small, and the major context remains the same. Sometimes a science undergoes a fundamental revolution in ideas.

 

Science makes certain assumptions about the natural world: that events in the natural world follow patterns that can be understood through careful observation and scientific analysis, which we will describe later; and that these basic patterns and the rules that describe them are the same throughout the universe.

 

Observations, the basis of science, may be made through any of the five senses or by instruments that measure beyond what we can sense. Inferences are generalizations that arise from a set of observations. When everyone or almost everyone agrees with what is observed about a particular thing, the inference is often called a fact.

We might observe that a substance is a white, crystalline material with a sweet taste. We might infer from these observations alone that the substance is sugar. Before this inference can be accepted as fact, however, it must be subjected to further tests. Confusing observations with inferences and accepting untested inferences as facts are kinds of sloppy thinking described as “thinking makes it so.”

 

When scientists wish to test an inference, they convert it into a hypothesis, which is a statement that can be disproved. The hypothesis continues to be accepted until it is disproved.

 

For example, a scientist is trying to understand how a plant’s growth will change with the amount of light it receives. She proposes a hypothesis that a plant can use only so much light and no more—it can be “saturated” by an abundance of light. She measures the rate of photosynthesis at a variety of light intensities. The rate of photosynthesis is called the dependent variable because it is affected by, and in this sense depends on, the amount of light, which is called the independent variable. The independent variable is also sometimes called a manipulated variable because it is deliberately changed, or manipulated, by the scientist. The dependent variable is then referred to as a responding variable—one that responds to changes in the manipulated variable. The measured values of these variables are referred to as data (singular: datum). They may be numerical, quantitative data, or nonnumerical, qualitative data. In our example, qualitative data would be the species of the plant; quantitative data would be the plant’s mass in grams or its diameter in centimeters. The result of the scientist’s observations confirms the hypothesis: The rate of photosynthesis increases to a certain level and does not go higher at higher light intensities.
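The light-saturation experiment can be sketched in code. The curve below is an invented illustrative model (a simple saturating function), not the scientist’s actual data; the point is only that the responding variable levels off as the manipulated variable keeps increasing.

```python
# Hypothetical sketch of the light-saturation hypothesis.
# The saturating curve and all numbers are invented for illustration.

def photosynthesis_rate(light, max_rate=10.0, half_saturation=200.0):
    """Rate rises with light but approaches max_rate asymptotically."""
    return max_rate * light / (half_saturation + light)

# Manipulate the independent variable and record the responding variable.
light_levels = [0, 100, 200, 400, 800, 1600, 3200]   # arbitrary light units
rates = [photosynthesis_rate(light) for light in light_levels]

for light, rate in zip(light_levels, rates):
    print(f"light {light:>4} -> rate {rate:.2f}")
```

Running this shows the rate climbing quickly at first and then flattening out below the saturation level, which is the pattern the hypothesis predicts.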

 

Controlling Variables

In testing a hypothesis, a scientist tries to keep all relevant variables constant except for the independent and dependent variables. This practice is known as controlling variables. In a controlled experiment, the experiment is compared to a standard, or control—an exact duplicate of the experiment except for the one variable being tested (the independent variable). Any difference in outcome (dependent variable) between the experiment and the control can be attributed to the effect of the independent variable.
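A minimal sketch of a controlled experiment, with invented numbers: two groups of plants identical except for one manipulated variable (here, a hypothetical fertilizer treatment), so any difference in mean outcome can be attributed to that variable.

```python
import random

random.seed(42)

# Hypothetical controlled experiment; growth figures are simulated.
def grow(base_cm, boost_cm, n):
    # Each plant's growth = shared baseline + natural variability
    # (+ boost only if the plant is in the treatment group).
    return [base_cm + boost_cm + random.gauss(0, 0.5) for _ in range(n)]

control = grow(base_cm=10.0, boost_cm=0.0, n=20)    # no fertilizer
treatment = grow(base_cm=10.0, boost_cm=2.0, n=20)  # fertilizer added

def mean(values):
    return sum(values) / len(values)

effect = mean(treatment) - mean(control)
print(f"estimated effect of fertilizer: {effect:.2f} cm")
```

Because the groups differ only in the independent variable, the difference in means estimates its effect; the natural variability keeps the estimate from matching the true value exactly.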

 

An important aspect of science, but one frequently overlooked in descriptions of the scientific method, is the need to define or describe variables in exact terms that all scientists can understand. The least ambiguous way to define or describe a variable is in terms of what one would have to do to duplicate the measurement of that variable. Such definitions are called operational definitions. Before carrying out an experiment, both the independent and dependent variables must be defined operationally. Operational definitions allow other scientists to repeat experiments exactly and to check on the results reported.

 

Science is based on inductive reasoning, also called induction: It begins with specific observations and then extends to generalizations, which may be disproved by testing them. If such a test cannot be devised, then we cannot treat the generalization as a scientific statement. Although new evidence can disprove existing scientific theories, science can never provide absolute proof of the truth of its theories.

 

The Nature of Scientific Proof

One source of serious misunderstanding about science is the use of the word proof, which most students encounter in mathematics, particularly in geometry. Proof in mathematics and logic involves reasoning from initial definitions and assumptions. If a conclusion follows logically from these assumptions, or premises, we say it is proven. This process is known as deductive reasoning.

 

An example of deductive reasoning is the following syllogism, or series of logically connected statements:

Premise: A straight line is the shortest distance between two points.

Premise: The line from A to B is the shortest distance between points A and B.

Conclusion: Therefore, the line from A to B is a straight line.

 

Note that the conclusion in this syllogism follows directly from the premises.

Deductive proof does not require that the premises be true, only that the reasoning be foolproof. Statements that are logically valid but untrue can result from false premises:

Premise: Humans are the only toolmaking organisms.

Premise: The woodpecker finch uses tools.

Conclusion: Therefore, the woodpecker finch is a human being.

 

In this case, the concluding statement must be true if both of the preceding statements are true. However, we know that the conclusion is not only false but ridiculous. If the second statement is true (which it is), then the first cannot be true.

 

The rules of deductive reasoning govern only the process of moving from premises to conclusion. Science, in contrast, requires not only logical reasoning but also correct premises. Returning to the example of the woodpecker finch, to be scientific the three statements should be expressed conditionally (that is, with reservation):

 

If humans are the only toolmaking organisms and the woodpecker finch is a toolmaker,

Then the woodpecker finch is a human being.

When we formulate generalizations based on a number of observations, we are engaging in inductive reasoning. To illustrate: One of the birds that feed at Mono Lake is the eared grebe. The “ears” are a fan of golden feathers that occur behind the eyes of males during the breeding season. Let us define birds with these golden feather fans as eared grebes. If we always observe that the breeding male grebes have this feather fan, we may make the inductive statement “All male eared grebes have golden feathers during the breeding season.”

 

What we really mean is “All of the male eared grebes we have seen in the breeding season have golden feathers.” We never know when our very next observation will turn up a bird that is like a male eared grebe in all ways except that it lacks these feathers in the breeding season. This is not impossible; it could occur somewhere due to a mutation.

 

Proof in inductive reasoning is therefore very different from proof in deductive reasoning. When we say something is proven in induction, what we really mean is that it has a very high degree of probability. Probability is a way of expressing our certainty (or uncertainty)—our estimation of how good our observations are, how confident we are of our predictions.
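One way to see how probability expresses inductive certainty, using invented numbers rather than anything from actual grebe surveys, is to ask how easily an exception could hide from us:

```python
# Illustrative only: if some true fraction p of breeding males lacked the
# golden feathers, the chance that n observed birds ALL have them is (1-p)**n.
def chance_of_no_exceptions(p, n):
    return (1.0 - p) ** n

# Even after 300 all-golden observations, a 1-in-100 exception rate would
# escape detection roughly 5% of the time.
n = 300
p = 0.01
print(f"P(no exceptions in {n} birds | p = {p}) = "
      f"{chance_of_no_exceptions(p, n):.3f}")
```

This is why an inductive generalization carries a probability rather than a proof: more observations shrink the room for a hidden exception but never eliminate it.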

 

Theory in Science and Language

A common misunderstanding about science arises from confusion between the use of the word theory in science and its use in everyday language. A scientific theory is a grand scheme that relates and explains many observations and is supported by a great deal of evidence. In contrast, in everyday usage a theory can be a guess, a hypothesis, a prediction, a notion, a belief. We often hear the phrase “It’s just a theory.” That may make sense in everyday conversation but not in the language of science. In fact, theories have tremendous prestige and are considered the greatest achievements of science.

 

One of the most important misunderstandings about the scientific method pertains to the relationship between research and theory. Theory is usually presented as growing out of research, but in fact theories also guide research. When a scientist makes observations, he or she does so in the context of existing theories. At times, discrepancies between observations and accepted theories become so great that a scientific revolution occurs: The old theories are discarded and are replaced with new or significantly revised theories.

 

Knowledge in an area of science grows as more hypotheses are supported. Ideally, scientific hypotheses are continually tested and evaluated by other scientists, and this provides science with a built-in self-correcting feedback system. This is an important, fundamental feature of the scientific method. If you are told that scientists have reached a consensus about something, you want to check carefully to see if this feedback process has been used correctly and is still possible. If not, what began as science can be converted to ideology—a way that certain individuals, groups, or cultures may think despite evidence to the contrary.

 

Models and Theory

Scientists use accumulated knowledge to develop explanations that are consistent with currently accepted hypotheses. Sometimes an explanation is presented as a model. A model is “a deliberately simplified construct of nature.”

 

It may be a physical working model, a pictorial model, a set of mathematical equations, or a computer simulation. For example, the U.S. Army Corps of Engineers has a physical model of San Francisco Bay. Open to the public to view, it is a miniature in a large aquarium, with the topography of the bay reproduced to scale and with water flowing into it in accordance with tidal patterns. Elsewhere, the Army Corps develops mathematical equations and computer simulations, which are models that attempt to explain some aspects of such water flow.

 

As new knowledge accumulates, models may no longer be consistent with observations and may have to be revised or replaced, with the goal of finding models more consistent with nature. Computer simulation of the atmosphere has become important in scientific analysis of the possibility of global warming. Computer simulation is becoming important for biological systems as well, such as simulations of forest growth.

 

Some Alternatives to Direct Experimentation

Environmental scientists have tried to answer difficult questions using several approaches, including historical records and observations of modern catastrophes and disturbances.

 

Historical Evidence

Ecologists have made use of both human and ecological historical records. A classic example is a study of the history of fire in the Boundary Waters Canoe Area (BWCA) of Minnesota, 1 million acres of boreal forests, streams, and lakes well known for recreational canoeing.

 

Murray (“Bud”) Heinselman had lived near the BWCA for much of his life and was instrumental in having it declared a wilderness area. A forest ecological scientist, Heinselman set out to determine the past patterns of fires in this wilderness. Those patterns are important in maintaining the wilderness. If the wilderness has been characterized by fires of a specific frequency, then one can argue that this frequency is necessary to maintain the area in its most “natural” state.

 

Heinselman used three kinds of historical data: written records, tree-ring records, and buried records (fossil and prefossil organic deposits). Trees of the boreal forests, like most conifers and angiosperms (flowering plants), produce annual growth rings. If a fire burns through the bark of a tree, it leaves a scar, just as a serious burn leaves a scar on human skin. The tree grows over the scar, depositing a new growth ring for each year.

 

By examining cross sections of trees, it is possible to determine the date of each fire and the number of years between fires. From written and tree-ring records, Heinselman found that the frequency of fires had varied over time but that since the 17th century the BWCA forests had burned, on average, once per century. Furthermore, buried charcoal dated using carbon-14 revealed that fires could be traced back more than 30,000 years.
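The fire-interval calculation can be sketched with invented scar dates (these years are made up for illustration; they are not Heinselman’s data): the mean fire return interval is just the average gap between successive scars.

```python
# Hypothetical fire-scar years read from one tree's cross section.
def mean_fire_interval(scar_years):
    years = sorted(scar_years)
    gaps = [b - a for a, b in zip(years, years[1:])]
    return sum(gaps) / len(gaps)

scars = [1610, 1715, 1801, 1910]  # invented scar dates
print(f"mean interval: {mean_fire_interval(scars):.0f} years")
```

Averaged over many trees and many centuries, intervals like these are what supported the conclusion that the BWCA burned roughly once per century.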

 

The three kinds of historical records provided important evidence about fire in the history of the BWCA. At the time Heinselman did his study, the standard hypothesis was that fires were bad for forests and should be suppressed. The historical evidence provided a disproof of this hypothesis. It showed that fires were a natural and an integral part of the forest and that the forest had persisted with fire for a very long time. Thus, the use of historical information meets the primary requirement of the scientific method—the ability to disprove a statement. Historical evidence is a major source of data that can be used to test scientific hypotheses in ecology.

 

Modern Catastrophes and Disturbances as Experiments

Sometimes a large-scale catastrophe provides a kind of modern ecological experiment. The volcanic eruption of Mount St. Helens in 1980 supplied such an experiment, destroying vegetation and wildlife over a wide area. The recovery of plants, animals, and ecosystems following this explosion gave scientists insights into the dynamics of ecological systems and provided some surprises. The main surprise was how quickly vegetation recovered and wildlife returned to parts of the mountain. In other ways, the recovery followed expected patterns in ecological succession.

 

It is important to point out that the greater the quantity and the better the quality of ecological data prior to such a catastrophe, the more we can learn from the response of ecological systems to the event. This calls for careful monitoring of the environment.

 

Uncertainty in Science

In science, when we have a fairly high degree of confidence in our conclusions, we often forget to state the degree of certainty or uncertainty. Instead of saying, “There is a 99.9% probability that . . . ,” we say, “It has been proved that . . .” Unfortunately, many people interpret this as a deductive statement, meaning the conclusion is absolutely true, which has led to much misunderstanding about science. Although science begins with observations and therefore inductive reasoning, deductive reasoning is useful in helping scientists analyze whether conclusions based on inductions are logically valid. Scientific reasoning combines induction and deduction— different but complementary ways of thinking.

 

Leaps of Imagination and Other Nontraditional Aspects of the Scientific Method

What we have described so far is the classic scientific method. Scientific advances, however, often happen somewhat differently. They begin with instances of insight—leaps of imagination that are then subjected to the stepwise inductive process. And some scientists have made major advances by being in the right place at the right time, noticing interesting oddities, and knowing how to put these clues together. For example, penicillin was discovered “by accident” in 1928 when Sir Alexander Fleming was studying the pus-producing bacterium Staphylococcus aureus. When a culture of these bacteria was accidentally contaminated by the green fungus Penicillium notatum, Fleming noticed that the bacteria did not grow in areas of the culture where the fungus grew. He isolated the mold, grew it in a fluid medium, and found that it produced a substance that killed many of the bacteria that caused diseases. Eventually this discovery led other scientists to develop an injectable agent to treat diseases. Penicillium notatum is a common mold found on stale bread. No doubt many others had seen it, perhaps even noticing that other strange growths on bread did not overlap with Penicillium notatum. But it took Fleming’s knowledge and observational ability for this piece of “luck” to occur.

 

Measurements and Uncertainty

A Word about Numbers in Science

We communicate scientific information in several ways. The written word is used for conveying synthesis, analysis, and conclusions. When we add numbers to our analysis, we obtain another dimension of understanding that goes beyond qualitative understanding and synthesis of a problem. Using numbers and statistical analysis allows us to visualize relationships in graphs and make predictions. It also allows us to analyze the strength of a relationship and in some cases discover a new relationship.

 

People in general put more faith in the accuracy of measurements than do scientists. Scientists realize that all measurements are only approximations, limited by the accuracy of the instruments used and the people who use them. Measurement uncertainties are inevitable; they can be reduced but never completely eliminated. For this reason, a measurement is meaningless unless it is accompanied by an estimate of its uncertainty.

 

Consider the loss of the Challenger space shuttle in 1986, the first major space shuttle accident, which appeared to be the result of the failure of rubber O-rings that were supposed to hold sections of rockets together.

Imagine a simplified scenario in which an engineer is given a rubber O-ring used to seal fuel gases in a space shuttle. The engineer is asked to determine the flexibility of the O-rings under different temperature conditions to help answer two questions: At what temperature do the O-rings become brittle and subject to failure? And at what temperature(s) is it unsafe to launch the shuttle? After doing some tests, the engineer says that the rubber becomes brittle at –1°C (30°F). So, can you assume it is safe to launch the shuttle at 0°C (32°F)?

 

At this point, you do not have enough information to answer the question. You assume that the temperature data may have some degree of uncertainty, but you have no idea how great a degree. Is the uncertainty ±5°C, ±2°C, or ±0.5°C? To make a reasonably safe and economically sound decision about whether to launch the shuttle, you must know the amount of uncertainty of the measurement.
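The launch decision can be sketched in code. The brittleness temperature comes from the text’s simplified scenario; the worst-case decision rule below is our own illustration of why the uncertainty figure matters.

```python
# Illustrative decision rule: treat the measurement's upper uncertainty
# bound as the worst case before declaring a launch temperature safe.
def safe_to_launch(launch_temp_c, brittle_temp_c, uncertainty_c):
    # Worst case: the rubber may already be brittle at
    # brittle_temp_c + uncertainty_c.
    worst_case_brittle = brittle_temp_c + uncertainty_c
    return launch_temp_c > worst_case_brittle

brittle = -1.0   # measured brittleness temperature, deg C
launch = 0.0     # forecast launch temperature, deg C

for u in (0.5, 2.0, 5.0):
    verdict = "safe" if safe_to_launch(launch, brittle, u) else "unsafe"
    print(f"uncertainty +/-{u} deg C -> {verdict}")
```

The same measurement of –1°C supports launching at 0°C if the uncertainty is ±0.5°C but not if it is ±2°C or ±5°C, which is exactly why a measurement without its uncertainty is meaningless.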

 

Dealing with Uncertainties

There are two sources of uncertainty. One is the real variability of nature. The other is the fact that every measurement has some error. Measurement uncertainties and other errors that occur in experiments are called experimental errors. Errors that occur consistently, such as those resulting from incorrectly calibrated instruments, are systematic errors.

 

Scientists traditionally include a discussion of experimental errors when they report results. Error analysis often leads to greater understanding and sometimes even to important discoveries. For example, scientists discovered the eighth planet in our solar system, Neptune, when they investigated apparent inconsistencies— observed “errors”—in the orbit of the seventh planet, Uranus.

 

Accuracy and Precision

A friend inherited some land on an island off the coast of Maine. However, the historical records were unclear about the land’s boundaries, and to sell any portion of the land, he first had to determine where his neighbor’s land ended and his began. There were differences of opinion about this. In fact, some people said one boundary went right through the house, which would have caused a lot of problems! Clearly what was needed was a good map that everybody could agree on, so our friend hired a surveyor to determine exactly where the boundaries were.

 

The original surveyor’s notes from the early 19th century had vague guidelines, such as “beginning at the mouth of Marsh brook on the Eastern side of the bars at a stake and stones. . . thence running South twenty six rods to a stake & stones. . . .” Over time, of course, the shore, the brook, its mouth, and the stones had moved and the stakes had disappeared. The surveyor was clear about the total distance (a rod, by the way, is an old English measure equal to 16.5 feet or 5.03 meters), but “South” wasn’t very specific. So where and in exactly which direction was the true boundary? (This surveyor’s method was common in early-19th-century New England.

 

One New Hampshire survey during that time began with “Where you and I were standing yesterday . . . .” Another began, “Starting at the hole in the ice [on the pond] . . .”)

 

The 21st-century surveyor who was asked to find the real boundary used the most modern equipment—laser and microwave surveying transits, GPS devices—so he knew, to within millimeters, where the line he measured went.

 

He could re-measure his line and come within millimeters of his previous location. But because the original starting point couldn’t be determined within many meters, the surveyor didn’t know where the true boundary line went; it was just somewhere within 10 meters or so of the line he had surveyed. So the end result was that even after this careful, modern, high-technology survey, nobody really knew where the original boundary lines went. Scientists would say that the modern surveyor’s work was precise but not accurate.

 

Accuracy refers to what we know; precision to how well we measure. With such things as this land survey, this is an important difference.

Accuracy also has another, slightly different scientific meaning. In some cases, certain measurements have been made very carefully by many people over a long period, and accepted values have been determined. In that kind of situation, accuracy means the extent to which a measurement agrees with the accepted value. But as before, precision retains its original meaning, the degree of exactness with which a quantity is measured. In the case of the land in Maine, we can say that the new measurement had no accuracy in regard to the previous (“accepted”) value.
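The surveyor’s situation, precise but not accurate, can be simulated with invented numbers: repeated measurements cluster tightly (high precision) around a value that is systematically offset from the true boundary (low accuracy).

```python
import random

random.seed(7)

# Illustrative simulation: the "true" boundary sits at 0 m, but the starting
# point carries an unknown systematic offset. All figures are invented.
TRUE_BOUNDARY = 0.0
SYSTEMATIC_OFFSET = 8.0   # unknown error in the starting point, meters

measurements = [
    TRUE_BOUNDARY + SYSTEMATIC_OFFSET + random.gauss(0, 0.001)
    for _ in range(10)    # millimeter-level repeatability per measurement
]

spread = max(measurements) - min(measurements)                 # precision
bias = sum(measurements) / len(measurements) - TRUE_BOUNDARY   # accuracy
print(f"spread: {spread * 1000:.1f} mm, offset from true line: {bias:.1f} m")
```

The spread of a few millimeters reflects the instrument’s precision; the 8-meter offset is the kind of error no amount of careful re-measuring can reveal, because every measurement shares it.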

 

Although a scientist should make measurements as precisely as possible, this friend’s experience with surveying his land shows us that it is equally important not to report measurements with more precision than they warrant. Doing so conveys a misleading sense of both precision and accuracy.

 

Misunderstandings about Science and Society

Science and Decision Making

Like the scientific method, the process of making decisions is sometimes presented as a series of steps:

  1. Formulate a clear statement of the issue to be decided.
  2. Gather the scientific information related to the issue.
  3. List all alternative courses of action.
  4. Predict the positive and negative consequences of each course of action and the probability that each consequence will occur.
  5. Weigh the alternatives and choose the best solution.

Such a procedure is a good guide to rational decision making, but it assumes a simplicity not often found in real-world issues. It is difficult to anticipate all the potential consequences of a course of action, and unintended consequences are at the root of many environmental problems.

 

Often the scientific information is incomplete and even controversial. For example, the insecticide DDT causes eggshells of birds that feed on insects to be so thin that unhatched birds die. When DDT first came into use, this consequence was not predicted. Only when populations of species such as the brown pelican became seriously endangered did people become aware of it.

 

In the face of incomplete information, scientific controversies, conflicting interests, and emotionalism, how can we make sound environmental decisions? We need to begin with the scientific evidence from all relevant sources and with estimates of the uncertainties in each. Avoiding emotionalism and resisting slogans and propaganda are essential to developing sound approaches to environmental issues. Ultimately, however, environmental decisions are policy decisions negotiated through the political process. Policymakers are rarely professional scientists; generally, they are political leaders and ordinary citizens. Therefore, the scientific education of those in government and business, as well as of all citizens, is crucial.

 

Science and Technology

Science is often confused with technology. As noted earlier, science is a search for understanding of the natural world, whereas technology is the application of scientific knowledge in an attempt to benefit people. Science often leads to technological developments, just as new technologies lead to scientific discoveries. The telescope began as a technological device, an aid to sailors, but when Galileo used it to study the heavens, it became a source of new scientific knowledge. That knowledge stimulated the technology of telescope-making, leading to the production of better telescopes, which in turn led to further advances in the science of astronomy.

 

Science is limited by the technology available. Before the invention of the electron microscope, scientists were limited to magnifications of 1,000 times and to studying objects about the size of one-tenth of a micrometer. (A micrometer is 1/1,000,000 of a meter, or 1/1,000 of a millimeter.) The electron microscope enabled scientists to view objects far smaller by magnifying more than 100,000 times. The electron microscope, a basis for new science, was also the product of science. Without prior scientific knowledge about electron beams and how to focus them, the electron microscope could not have been developed.

 

Most of us do not come into direct contact with science in our daily lives; instead, we come into contact with the products of science—technological devices such as computers, iPods, and microwave ovens. Thus, people tend to confuse the products of science with science itself. As you study science, it will help if you keep in mind the distinction between science and technology.

 

Science and Objectivity

One myth about science is the myth of objectivity, or value free science—the notion that scientists are capable of complete objectivity independent of their personal values and the culture in which they live, and that science deals only with objective facts. Objectivity is certainly a goal of scientists, but it is unrealistic to think they can be totally free of influence by their social environments and personal values. It would be more realistic to admit that scientists do have biases and to try to identify these biases rather than deny or ignore them. In some ways, this situation is similar to that of measurement error: It is inescapable, and we can best deal with it by recognizing it and estimating its effects.

 

To find examples of how personal and social values affect science, we have only to look at recent controversies about environmental issues, such as whether or not to adopt more stringent automobile emission standards. Genetic engineering, nuclear power, global warming, and the preservation of threatened or endangered species involve conflicts among science, technology, and society. When we function as scientists in society, we want to explain the results of science objectively. As citizens who are not scientists, we want scientists to always be objective and tell us the truth about their scientific research.

 

That science is not entirely value-free should not be taken to mean that fuzzy thinking is acceptable in science. It is still important to think critically and logically about science and related social issues. Without the high standards of evidence held up as the norm for science, we run the risk of accepting unfounded ideas about the world. When we confuse what we would like to believe with what we have the evidence to believe, we have a weak basis for making critical environmental decisions that could have far-reaching and serious consequences.

 

The great successes of science, especially as the foundation for so many things that benefit us in modern technological societies—from cell phones to CAT scans to space exploration—give science and scientists a societal authority that makes it all the more difficult to know when a scientist might be exceeding the bounds of his or her scientific knowledge. It may be helpful to realize that scientists play three roles in our society: first, as researchers simply explaining the results of their work; second, as almost priestlike authorities who often seem to speak in tongues the rest of us can’t understand; and third, as what we could call expert witnesses. In this third role, they discuss broad areas of research that they are familiar with and that are within their field of study, but on which they may not have done research themselves. Like an expert testifying in court, they are basically saying to us, “Although I haven’t done this particular research myself, my experience and knowledge suggest to me that . . .”

 

The roles of researcher and expert witness are legitimate as long as it is clear to everybody which role a scientist is playing. Whether you want a scientist to be your authority on everything, within science and outside of it, is a personal value choice. In the modern world, there is another problem concerning the role of scientists and science in our society. Science has been so potent that it has become fundamental to political policies. As a result, science can become politicized: Rather than beginning with objective inquiry, people begin with a belief about something and pick and choose only the scientific evidence that supports that belief. This can even be carried a step further, where research is funded only if it fits within a particular political or ethical point of view.

 

Scientists themselves, even acting as best they can as scientists, can be caught up in one way of thinking when the evidence points to another. These scientists are said to be working under a certain paradigm, a particular theoretical framework. Sometimes their science undergoes a paradigm shift: New scientific information reveals a great departure from previous ways of thinking and from previous scientific theories, and it is difficult, after working within one way of thinking, to recognize that some or all of their fundamentals must change. Paradigm shifts happen over and over again in science and lead to exciting and often life-changing results for us. The discovery and understanding of electricity are examples, as is the development of quantum mechanics in physics in the early decades of the 20th century.

 

We can never completely escape biases, intentional and unintentional, in fundamental science, its interpretation, and its application to practical problems, but understanding the nature of the problems that can arise can help us limit this misuse of science. The situation is complicated by legitimate scientific uncertainties and differences in scientific theories. It is hard for us, as citizens, to know when scientists are having a legitimate debate about findings and theories, and when they are disagreeing over personal beliefs and convictions that are outside of science. Because environmental sciences touch our lives in so many ways, because they affect things that are involved with choices and values, and because these sciences deal with phenomena of great complexity, the need to understand where science can go astray is especially important.

 

Science, Pseudoscience, and Frontier Science

Some ideas presented as scientific are in fact not scientific, because they are inherently untestable, lack empirical support, or are based on faulty reasoning or poor scientific methodology, as illustrated by the case of the mysterious crop circles. Such ideas are referred to as pseudoscientific (the prefix pseudo- means false).

 

Environmental Questions and the Scientific Method

Environmental sciences deal with especially complex systems and are a relatively new set of sciences. Therefore, the process of scientific study has not always neatly followed the formal scientific method discussed earlier in this chapter. Often, observations have not been used to develop formal hypotheses. Controlled laboratory experiments have been the exception rather than the rule. Much environmental research has been limited to field observations of processes and events that are difficult to subject to controlled experiments.

 

Environmental research presents several obstacles to following the classic scientific method. The long time frame of many ecological processes relative to human lifetimes, professional lifetimes, and lengths of research grants poses problems for establishing statements that can in practice be subject to disproof. What do we do if a theoretical disproof through direct observation would take a century or more? Other obstacles include difficulties in setting up adequate experimental controls for field studies, in developing laboratory experiments of sufficient complexity, and in developing theory and models for complex systems. Throughout this text, we present differences between the “standard” scientific method and the actual approach that has been used in environmental sciences.
