
Earth's Rotation: How Does Our Earth Rotate?

Many ancient peoples explained the change from day into night and the changing positions of the stars by saying that the sky moved around Earth. From ancient times through the Middle Ages, people believed that Earth stood motionless. It is now known that Earth is not still. The apparent daily movement of the stars in the heavens is due to the rotation of the planet about its axis. Earth makes a single complete rotation in 23 hours, 56 minutes, 4.09 seconds.


The French physicist Jean-Bernard-Léon Foucault provided one of the best proofs of the planet's rotation. He first performed his pendulum experiment in 1851. Foucault suspended a steel wire from the dome of the Pantheon (a public building in Paris). He then attached a heavy weight to the end of the wire. A pin was fastened on the end of the weight. The pinpoint rested on a circular layer of sand on the floor.

Foucault set the pendulum swinging. The pendulum moved to and fro, in the same plane. The pin at the end of the weight began to trace lines in the sand. As the pendulum continued to swing, the lines followed different directions.

There could be only one explanation. The pendulum itself did not change direction; it was the layer of sand beneath it that was turning. The sand rested on the floor of the Pantheon, and the Pantheon itself rested on Earth. Foucault concluded that Earth itself must be rotating.
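The rate at which the pendulum's swing plane appears to turn depends on latitude: it completes a full circle in one sidereal day at the poles and does not turn at all at the equator. A short sketch of the arithmetic (the latitude figure for Paris is a standard reference value used here for illustration):

```python
import math

SIDEREAL_DAY_H = 23.934  # hours for one full rotation of Earth

def precession_period_hours(latitude_deg):
    """Time for a Foucault pendulum's swing plane to complete one
    full circle: one sidereal day divided by sin(latitude)."""
    return SIDEREAL_DAY_H / math.sin(math.radians(latitude_deg))

# The Pantheon in Paris lies at about 48.85 degrees north latitude.
print(round(precession_period_hours(48.85), 1))  # roughly 31.8 hours
print(round(precession_period_hours(90.0), 1))   # 23.9 hours at a pole
```

At Paris's latitude the plane of swing drifts about 11 degrees per hour, which is why Foucault's pin traced visibly different lines in the sand within minutes.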

Foucault's device is known as the Foucault pendulum. One of these pendulums has been erected in the General Assembly Building of the United Nations in New York City. Another interesting effect due to the rotation of Earth can be produced with a camera.

Focus a camera on the North Star and leave the shutter open for several hours. The photographed stars will not appear as points; they will appear as curved lines. This is because Earth, on which the camera rests, has been rotating on its axis.

The alternation of day and night is also due to the rotation of Earth about its axis. As the planet turns, a given place on its surface is in sunlight or in darkness, depending on whether that part of the surface faces toward or away from the Sun. In addition, Earth's rotation affects air currents: they turn toward the right in the Northern Hemisphere and toward the left in the Southern Hemisphere. This phenomenon is called the Coriolis effect.

Age of Earth: How Old Is Our Beautiful Earth?

Only recently has science provided a relatively exact method for estimating Earth's age. As late as the 1700s, Earth was thought to be between 6,000 and 75,000 years old. But the young science of geology began to unravel the meaning of fossils in the early 1800s. Experts eventually accepted that Earth had a very long history. The discovery of radioactivity in the late 1800s provided the best means for determining Earth's age.


The technique, called radiometric dating, is based on the radioactivity of certain minerals. In these minerals, one or more chemical elements decay radioactively: their atoms give off very small particles and other radiation. Through this process the radioactive elements gradually change into other elements. A given element may also exist in different forms, or isotopes, which have different atomic weights. Some of these isotopes may be radioactive and undergo this change.

Each radioactive-decay process occurs at a fixed rate, regardless of external circumstances. The rate is described by the isotope's half-life: the time it takes for half of the atoms in a sample to decay. As the element decays, its quantity in the rock or mineral becomes smaller, while the amount of the element it is changing into becomes greater. Scientists can therefore calculate the age of the rock or mineral. They need only the half-lives of the elements and the measured proportions of those elements and of their end products.
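The calculation described here can be sketched in a few lines. Given a half-life and the measured numbers of daughter atoms and remaining parent atoms, the age follows from the decay law (the isotope and the atom counts below are illustrative, not measurements from any real rock, and branching decay paths are ignored for simplicity):

```python
import math

def radiometric_age(half_life_years, parent_atoms, daughter_atoms):
    """Age of a sample from the decay law N = N0 * (1/2)**(t / half_life).
    Simplifying assumption: every daughter atom came from decay of the parent."""
    n0 = parent_atoms + daughter_atoms  # original amount of the parent element
    return half_life_years * math.log2(n0 / parent_atoms)

# Potassium-40 has a half-life of about 1.25 billion years.
# If a mineral holds one daughter atom for every parent atom remaining,
# exactly one half-life has elapsed:
age = radiometric_age(1.25e9, parent_atoms=1000, daughter_atoms=1000)
print(round(age / 1e9, 2), "billion years")  # 1.25 billion years
```

With seven daughter atoms for every remaining parent atom, three half-lives (3.75 billion years) have elapsed, which is how ages approaching that of Earth itself are obtained.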

Several radioactive isotopes are commonly used to date ancient objects. These include carbon-14 (the carbon isotope with atomic weight 14), potassium-40, rubidium-87 (which decays to strontium-87), and uranium-235 and -238. The quantities of isotopes in a sample are measured by radiation detectors and other methods.

The age of a rock may be based on one or more isotopes. These methods are not perfect, however. There may be an uncertainty of as much as several hundred million years, and ages measured with different isotopes may disagree.

Nevertheless, the radioactive-decay method is the most exact dating method known to science. It has been used to determine the age of Earth. The oldest known rock was formed nearly 4.2 billion years ago. Planet Earth itself is thought to be about 4.6 billion years old.

Three Parts of Our Earth Explained in a Simple Way

Earth is made up of air, water, and solid ground. Scientists call these areas the atmosphere, hydrosphere, and lithosphere. The scientific terms are derived from Greek roots: atmos means vapor; hydro, water; and lithos, stone.

 

The Atmosphere


Earth is surrounded by layers of gases. These are collectively called air, or the atmosphere. The air surrounding Earth is composed of gases that are essential to animal and plant life. Earth's air is about 78 percent nitrogen, 21 percent oxygen, and 1 percent other gases. Dust is also present. The atmosphere is subdivided into five layers. They are, from lowest to highest, the troposphere, stratosphere, mesosphere, thermosphere, and exosphere.

The atmosphere also includes water vapor that is essential to life. It dissipates carbon dioxide and other waste products of life and of human industries. The atmosphere transmits the radiation from the Sun (solar radiation) that is essential to photosynthesis. At the same time, it shields Earth from lethal ultraviolet radiation. It protects the planet from powerful cosmic rays and from the meteors that constantly shower down on the planet from space.

Furthermore, the atmosphere acts as a blanket. It moderates the planet's climate and maintains a generally higher temperature on the planet than would occur without it.

The Hydrosphere


Earth appears to be the only planet that contains large amounts of liquid water. In fact, pictures from space show Earth as a blue planet. About three-fourths of the surface is covered by the oceans. These bodies of water, together with large inland lakes, constitute the hydrosphere. The hydrosphere contributes great amounts of water vapor to the air and plays a large part in the atmospheric changes known as weather.

Of Earth's surface water, almost 96 percent by weight is made up of hydrogen and oxygen. Sodium, chlorine, and many other elements are also found in oceanic waters. Highly sensitive instruments would probably reveal traces of many more elements.

The plants and animals found in the sea are an immensely valuable resource. They provide food, fertilizers, and industrial materials. The ocean is also a vast storehouse of minerals. These include common salt (sodium chloride), magnesium, manganese, gold, iron, copper, uranium, and silver. Salt and magnesium are obtained from the sea in quantity. Others will no doubt be made available as more-efficient methods for extracting them are developed.

The Lithosphere


The solid part of Earth is its lithosphere. It is made up of three types of rock (igneous, sedimentary, and metamorphic) together with soil, which consists of rock debris combined with organic materials. Igneous rock is derived from the molten, rock-producing matter called magma.

Sedimentary rock consists principally of rock fragments. The fragments have accumulated through untold millennia and have been pressed together. Igneous rock or sedimentary rock is altered through changes in temperature and pressure and other forces within Earth. The process gives rise to metamorphic rock.

Geologists study the rocks and minerals of Earth's surface. Geophysicists use the methods of physics to study Earth's interior. Geophysicists do not know for certain what is found at the center of the planet. However, they have gathered information about the core from various indirect sources such as seismic waves. Seismic waves are shock waves that radiate through Earth when its layers shift, as during earthquakes.

Two kinds of seismic waves travel through the interior: primary, or P, waves, which travel very quickly through any material; and secondary, or S, waves, which move about half as fast as P waves and only travel through solids. The slowest-moving seismic waves are the long, or L, waves, which ripple just beneath Earth's surface. By studying the movements of seismic waves, scientists have developed a fairly clear picture of Earth's interior.
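The difference in speed between P and S waves is what lets seismologists locate an earthquake: the farther away it is, the longer the gap between the two arrivals at a recording station. A rough sketch, using illustrative average crustal speeds of about 6 km/s for P waves and 3.5 km/s for S waves (real velocities vary with depth and rock type):

```python
def distance_from_sp_lag(lag_seconds, vp=6.0, vs=3.5):
    """Estimate distance (km) to an earthquake from the delay between
    P-wave and S-wave arrivals at a single station.
    lag = d/vs - d/vp  =>  d = lag * vp * vs / (vp - vs)"""
    return lag_seconds * vp * vs / (vp - vs)

# A 30-second gap between the P and S arrivals puts the quake
# roughly 250 km from the station:
print(round(distance_from_sp_lag(30.0)))  # 252
```

Combining such distance estimates from three or more stations pins down the epicenter, and it is from millions of such travel-time measurements that the picture of Earth's interior has been built up.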

Beneath the crust, Earth has a solid mantle about 1,740 miles (2,800 kilometers) thick. The crust itself is comparatively thin, extending only 25 miles (40 kilometers) or so below the surface. Beneath the mantle is the core, some 4,400 miles (7,100 kilometers) in diameter, made up of a solid inner core and a molten outer core.

Gravity and Magnetism on Earth

Sir Isaac Newton formulated his famous law of gravitation in the 1600s. It states that every particle in the universe attracts every other particle, with a force that varies directly as the product of their masses and inversely as the square of the distance between them. This is the statement of universal gravitation.


Gravity and magnetism are still mysterious forces in many respects. Yet scientists have gathered considerable information about them. The term "gravity" (more accurately, "terrestrial gravity") is applied to the gravitational force exerted by Earth.

Gravity is the force that pulls all materials toward the center of Earth. This force becomes weaker as one moves away from the center. When people weigh themselves, they are really measuring the force of gravity.

A person weighing 143 pounds (65 kilograms) is being pulled toward the center of Earth with a force of 143 pounds. Weight decreases farther from the center. So the same person will weigh slightly less on the top of a mountain than in a deep valley.
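The inverse-square law makes this effect easy to estimate. Using Earth's mean radius of about 6,371 km, a hypothetical climb to a 5-km summit changes the 143-pound figure only in the last fraction of a pound:

```python
def weight_at_altitude(sea_level_weight, altitude_km, earth_radius_km=6371.0):
    """Weight falls off as the inverse square of distance from Earth's center.
    Simplified sketch: treats Earth as a uniform sphere."""
    r0 = earth_radius_km
    r = earth_radius_km + altitude_km
    return sea_level_weight * (r0 / r) ** 2

# A 143-pound person on a hypothetical 5-km-high summit:
w = weight_at_altitude(143.0, 5.0)
print(round(w, 2))  # 142.78
```

The loss is only about a quarter of a pound, which is why the difference between a mountaintop and a valley goes unnoticed without sensitive instruments.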

In 1600 the English physician Sir William Gilbert suggested that Earth behaves like a huge magnet, with north and south poles. This was confirmed by the German mathematician Carl Friedrich Gauss, who in the early 1800s demonstrated that the planet generates its own main magnetic field internally; the minor fluctuations that do occur are due to solar activity. The needle of a compass aligns itself with the lines of force that run from one magnetic pole to the other. The magnetic poles do not correspond exactly to the geographic poles.

How Are Earth's Mass and Density Determined?

In 1735 the mathematician Pierre Bouguer was on an expedition in South America. He measured the extent to which a plumb line was deflected by the gravitational pull of a mountain, the peak called Chimborazo in Ecuador. Once the deflection was measured, and since he could estimate the mass of the mountain, he was able to estimate the mass of Earth.


The mass of an object represents the concentration of matter in it. Mass is a constant value, as opposed to weight. Weight is actually a measure of gravity, and that changes from place to place.

Various methods have been used to determine Earth's mass. Today a sensitive instrument called a torsion balance is generally used for this calculation: the attraction of a large ball of known mass to a small ball is compared with the attraction of Earth to the same small ball. According to a recent estimate, the mass of Earth is 6.59 × 10²¹ tons (5.98 × 10²¹ metric tons).
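Once the torsion-balance experiment has yielded the gravitational constant G, Earth's mass follows directly from Newton's law applied to an object at the surface. A sketch of the arithmetic, using standard textbook values rather than the figures from any particular experiment:

```python
G = 6.674e-11  # gravitational constant (N*m^2/kg^2), from torsion-balance work
g = 9.81       # gravitational acceleration at Earth's surface (m/s^2)
R = 6.371e6    # Earth's mean radius (m)

# For an object of mass m at the surface:
#   m * g = G * M * m / R**2   =>   M = g * R**2 / G
M = g * R**2 / G
print(f"{M:.2e} kg")  # 5.97e+24 kg
```

The result, about 5.97 × 10²⁴ kilograms, matches the figure of roughly 5.98 × 10²¹ metric tons quoted above.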

To determine Earth's density, its mass in grams is divided by its volume in cubic centimeters. The mass of Earth is 5.98 × 10²⁷ grams, and its volume is 1.083 × 10²⁷ cubic centimeters. When 5.98 × 10²⁷ is divided by 1.083 × 10²⁷, the result is approximately 5.5 grams per cubic centimeter. This is the figure for the density of Earth. Said another way, if the planet's air, water, and rock were all mixed together, the mixture would weigh about 5.5 times as much as the same volume of water. Earth is therefore the densest of all the planets.
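The division can be written out in a couple of lines, using Earth's mass of 5.98 × 10²⁷ grams and volume of 1.083 × 10²⁷ cubic centimeters:

```python
mass_grams = 5.98e27   # Earth's mass, in grams
volume_cm3 = 1.083e27  # Earth's volume, in cubic centimeters

# Density is mass divided by volume; the huge powers of ten cancel.
density = mass_grams / volume_cm3
print(round(density, 1), "g/cm^3")  # 5.5 g/cm^3
```

Since water has a density of 1 gram per cubic centimeter, the result of about 5.5 is also the planet's density relative to water.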

The Shape of Earth, the Home Planet

Earth: The Home Planet


For centuries, astronomers and geologists proposed theories to explain how Earth originated. Scientists are now fairly certain about Earth's origins, having reviewed information returned by both piloted and robotic visits to other worlds of the solar system in recent decades. Earth and its planetary neighbors formed from a spinning disk of debris left over from the birth of the Sun.

Three planets in the solar system are smaller than planet Earth. Four are considerably larger. In terms of mass, therefore, Earth is not an outstanding member of the vast solar family. But it is the home planet and the site of all life. Earth is the vantage point from which the universe is viewed.

The Shape of Earth


For centuries, Earth was thought to be flat. But early scientists used their keen observations to correct this mistake about Earth's shape. For example, they noticed that distant ships seemed to drop below the horizon. Travelers to faraway lands saw that the position of stars changed relative to the horizon. And early astronomers watched the movement of a lunar eclipse. As a lunar eclipse advanced, Earth cast a curved shadow on the Moon. This meant that Earth was a sphere.


Geodesy is the study and measurement of Earth's shape. Modern geodetic scientists have the advantage of photographs. Photographs of Earth have been taken from rockets, satellites, and other spacecraft far above the surface. The photos reveal that the planet is nearly round.

But Earth is not a perfect sphere. Instead, it is slightly flattened at the poles. This shape is probably caused by the planet's rotation, which deforms the somewhat plastic Earth into a form that balances the forces of rotation and gravity. Earth's diameter is 7,900 miles (12,700 kilometers) from pole to pole and 7,920 miles (12,750 kilometers) across the equator. Recent measurements also indicate that Earth is slightly more flattened at the South Pole than at the North Pole, which makes it very slightly pear-shaped.

Commercial Uses of Cryptology

Outside of national security, the most important use of cryptology today is in electronic commerce (e-commerce). New cryptosystems, called public-key ciphers, permit the secure transmission of credit-card numbers from a buyer to a seller without the two parties having to exchange secret keys in advance.
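The idea can be illustrated with a textbook-scale version of RSA, one well-known public-key cipher. The primes below are far too small for real security; they only show how a buyer can encrypt with a published key while only the seller's private key can decrypt:

```python
# Toy RSA with classic textbook primes (never use numbers this small in practice).
p, q = 61, 53
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, chosen coprime to phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (Python 3.8+)

message = 65                      # e.g. one digit pair of a credit-card number
ciphertext = pow(message, e, n)   # buyer encrypts with the seller's public key (n, e)
recovered = pow(ciphertext, d, n) # seller decrypts with the private key d
print(ciphertext, recovered)      # 2790 65
```

The public key (n, e) can be published openly; recovering d requires factoring n, which is infeasible for the enormous primes used in practice.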


In addition, the growth of computer and satellite communications, which has escalated the potential for hacking and interception, has made it ever more necessary for businesses to protect their data. Because new cryptosystems are cheap and easy to use, multinational corporations are increasingly encrypting their satellite messages, e-mail, and data banks.

An automated teller machine (ATM) encrypts the account numbers and requests that it transmits to the bank's mainframe, which similarly encrypts the balances and withdrawal authorizations that it sends back to the ATM. Subscription television jumbles its signals so only those who have paid for a decoder can receive clear pictures.

Perhaps out of fear of negative publicity if exposed, companies have rarely intercepted a competitor's messages. Encrypted bank messages are not known to have been solved, probably because there are easier ways to get the money. At least one government has read at least one oil company's encrypted messages, however, and used the information in bargaining over an exploration contract.

Cryptology in Modern Diplomatic and Military Signals Intelligence

The most important single encrypted message that was intercepted and solved was the Zimmermann telegram, named for the German foreign minister who sent it. During World War I Arthur Zimmermann, fearing American belligerency because German U-boats (submarines) had been ordered to sink U.S. ships, cabled Mexico with a proposition. If Mexico would wage war upon the United States, Mexico would get back its "lost territory" of Texas, New Mexico, and Arizona. On Jan. 17, 1917, Great Britain intercepted and, several days later, solved the coded German message. Britain passed it on to Pres. Woodrow Wilson, who made it public. The nation's outrage at Germany's presumptuousness in promising to give away a part of America crystallized in a declaration of war a month later, helping to defeat Germany and turning the United States into a major power.


Also of great significance during World War I was the Russian failure to distribute "cryptosystems" to some army units in 1914. The messages that the Germans intercepted were thus uncoded. The Germans used them to encircle and destroy a Russian army at the Battle of Tannenberg (August 25–30). Although the Russians later enciphered their messages, the Germans easily solved them, greatly contributing to Germany's defeat of czarist Russia. Thus German signals intelligence helped deliver Russia to the Bolsheviks.

During World War II massive cryptanalysis of German military and naval cryptograms helped the Allies win that conflict. Solutions of German naval messages that were enciphered using the cipher machine Enigma revealed orders to and reports from U-boats at sea. This intelligence enabled convoys traveling from America to Britain to detour around the "wolf packs" (groups of submarines that made coordinated attacks on shipping). Later it permitted Allied warships to pinpoint the submarines in the vastness of the ocean and sink them.

In the land campaigns in North Africa and Europe, the breaking of German Air Force and Army messages enciphered in Enigma, among other systems, prepared the Allies to win many battles. In Normandy on Aug. 6, 1944, for example, a decoded message revealed that the Germans would soon launch a heavy attack from around Mortain with fighter plane protection. Owing in part to this warning, the U.S. 30th Infantry Division repelled the onslaught.

Likewise, American "codebreakers" significantly shortened World War II in the Pacific Ocean. The reading of coded Japanese naval messages told the Americans as much about some Japanese plans as the captains of Japanese naval vessels knew and enabled them to ambush the Japanese fleet at Midway Island in the central Pacific on June 4, 1942. The battle there turned the tide of the war in the Pacific. Other solutions facilitated American submarines' sinking of Japanese cargo ships, bringing the island empire to the verge of collapse.

While the solution of the chief Japanese diplomatic cipher machine, called PURPLE by the Americans, did not avert the surprise at Pearl Harbor (no messages even hinting at the attack ever went to the diplomats), it contributed enormously to wartime intelligence through the reading of reports sent home by Japanese diplomats in Berlin. These revealed Adolf Hitler's thoughts and plans. The Soviet Union, too, broke this machine. Its solutions disclosed time and again that Japan was not planning to attack the USSR from the rear, thus easing Joseph Stalin's decision making.

The value of cryptanalysis during World War II was summarized in 1944 by U.S. Army chief of staff Gen. George C. Marshall: "The conduct of General Eisenhower's campaign [in Europe] and of all operations in the Pacific are closely related in conception and timing to the information we secretly obtain through these intercepted codes. They contribute greatly to the victory."

During the Korean and Vietnamese conflicts, U.S. signals intelligence was responsible for reducing American casualties. And at least once during Operation "Desert Storm" in 1991, intercepted Iraqi communications were sent to U.S. tanks in time to warn of Iraqi artillery that was about to target them, enabling them not only to take evasive action but also to counterattack. More generally, signals intelligence played a considerable part in the victory of the United States and its allies.

Communications intelligence also helps in peacetime. During the naval disarmament conference held in Washington in 1921 and 1922, American solution of Japanese diplomatic cablegrams helped U.S. diplomats to compel Japan to accept the equivalent of a battleship and a half less than it wanted. During the tense American-Japanese negotiations in the spring of 1995 over automobile imports, American interception of telephone conversations between the executives of Toyota and Nissan and Japan's trade minister told the United States trade representative and his staff how far the Japanese could be pressed and helped bring about an accord. By warning of possible hostile actions, cryptanalysts continue to give policymakers time to plan actions and thus figure importantly in stabilizing the international system.

The Island Rule: How Insular Populations become Dwarfs and Giants

In the 1726 satire Gulliver's Travels, readers encounter both the island of Lilliput, populated by six-inch- (15-cm-) tall natives, and the island of Brobdingnag, ruled by 60-foot (18-meter) giants. While the scale of Gulliver's adventures is more fantasy than fact, the author Jonathan Swift came very close to depicting the modern biologist's "island rule," which contends that in adapting to island environments plants and animals tend to develop into new dwarf or giant subspecies.


First proposed by the researcher J. Bristol Foster in 1964, the island rule holds that as mammals adapt to insular ecosystems, they dramatically increase or decrease in size relative to their closest mainland cousins. Based on Foster's survey of 116 island-dwelling mammals, a simple trend emerged: larger mammals grew smaller on islands, while smaller mammals grew larger, as if both types of species were progressing toward some ideal statistical mean. This trend became known as the "island rule."

In 1967 the island rule graduated from a rudimentary observation to a legitimate field of study, thanks to the publication of The Theory of Island Biogeography by Robert H. MacArthur and Edward O. Wilson. This book set forth the basic principles of the newly defined subfield of island biogeography, which deals with the ecological development of living creatures within insular environments. Among the principal preoccupations of island biogeography are the twin phenomena of "insular dwarfism" and "insular gigantism," the technical terms for the respective decrease or increase in size of island-dwelling species as compared with their closest mainland relatives.

Put simply, island biogeography is directly concerned with turning the hypothetical island rule into a full-fledged science. In the 40 years since the publication of Foster's original two-page survey paper, myriad exceptions to, elaborations upon, and contradictions to the original island rule have come to light, replacing the basic concept of progressing toward the mean with a complex array of observations and guidelines that have yet to coalesce into a coherent whole. The process of evolution is never easy, not even on a tropical island.

The most glaring problems facing biogeographers are oppositional examples of insular dwarfism and gigantism, wherein one island offshoot of a mainland species grows large, and a laterally related cousin on another island grows small. These cases indicate that a species may not be necessarily obligated toward gigantism or dwarfism and that the attributes unique to each island environment can tip the balance either way.

Giant versions of a given species have advantages against predators. They can sustain themselves by preying on large animals and can give birth to more numerous groups of young, but they must pay for these advantages by consuming more food, which makes them more vulnerable to the effects of drought and famine. In contrast, dwarf species consume less food and can camouflage themselves or take refuge in a wider variety of terrains, but they give up the predatory and reproductive advantages of larger size. While the island rule is generally accurate in predicting which way many species will likely adapt to island environments, there is no hard-and-fast guarantee.

As the science of biogeography endures these growing pains, it has been presented with its most tantalizing research subject yet--Homo floresiensis, a cousin species to H. sapiens that may offer the first concrete example of insular dwarfism in protohumans. Discovered in late 2003, these so-called hobbits stood a mere 3 feet (0.9 meters) tall, had brains one-third the size of those of H. sapiens, and lived as recently as 18,000 years ago on the Indonesian island of Flores--meaning that they may have shared the environment with full-sized humans. Presenting such a complex and contradictory evolutionary puzzle, H. floresiensis may forever change the way biogeography looks at human beings, and vice versa.

Note on the 1918 Influenza Pandemic

The 1918 influenza pandemic began as World War I was winding down to its bloody conclusion. The pandemic led to death and suffering on a scale not seen since the Black Death, the infectious scourge of the Middle Ages. For an understanding of how overwhelming the death toll was, one need only consider that an estimated 15 million people died during the four-year period of World War I, while in less than a year almost 30 million succumbed to influenza. Contributing to the high mortality rate was the fact that the influenza outbreak was truly pandemic; some researchers estimate that 20% of the world's population was infected.


Where did it all begin? Current research traces the likely origin of the outbreak to Haskell County, Kans., near Camp Funston (later Fort Riley). A review of news sources from January and February 1918 reveals reports of an unusually virulent flu in the county at this time. It is generally believed that the outbreak jumped from the local population to the troops mobilizing at Camp Funston. These soldiers were in turn sent to Europe through the French port of Brest, which became the locus of the outbreak in Europe. Spreading rapidly through the European population, the disease then moved on to Asia and the rest of the world. Spain was particularly hard hit, as is evidenced by the many news reports that referred to the disease as the "Spanish flu."

A second outbreak of the disease occurred in the latter part of 1918, when American troops returned home through the port of Boston on their way to Fort Devens. At the height of this outbreak, a doctor based at Fort Devens, corresponding with a colleague, reported that he was recording 100 deaths per day. He noted that the soldiers would "rapidly develop the most vicious type of pneumonia that has ever been seen" and die shortly thereafter. In fact it was the pneumonia that was the actual cause of death. At that time little was known about the infectious agents that cause disease; therefore, doctors had almost no tools to use in fighting the pneumonia. Remedies such as the administration of quinine and hot mineral baths had little effect. In affected areas worldwide, business activity was severely restricted and social gatherings were brought to a halt. People avoided interpersonal contact and began to wear gauze masks.

In an effort to explain why the 1918 flu was so devastating, researchers have conducted tests on tissue samples preserved from victims of the pandemic. The particular virus responsible for the 1918 disease is thought to be of a type originating in birds, an avian flu similar in some ways to the avian flu strains discussed in the news in recent years. Researchers theorize that the virus probably jumped from birds to swine or horses and then passed to humans. Tissue samples reveal that the 1918 influenza contained surface proteins uncommon to human flu strains, both then and now. Because these proteins were foreign to the human immune system, almost none of the human population of 1918 would have had any immune system resistance to the disease, thus accounting for its rapid and universal spread.

Although we now have more tools to treat influenza and even some antiviral medications to treat the cause, we are still at risk. Recent experience with avian flu and severe acute respiratory syndrome (SARS) suggests that humankind is still vulnerable to opportunistic viruses. Indeed, with international travel more rapid and more common than it was at the beginning of the 20th century, we may, in fact, be at greater risk now than we were in 1918.

Sinking Cities: The Hidden Dangers of Modern Flood Control

Before Hurricane Katrina struck the U.S. Gulf Coast on Aug. 29, 2005, few Americans had ever heard the terms subsidence or sedimentation. Today, subsidence and sedimentation bear directly on the lives of thousands of refugees displaced by Katrina and have forced a radical reconsideration of the nation's flood-control policies and technologies.


The biblical verse Matthew 7:26 admonishes the foolish man who builds his house upon the sand, or, in the case of New Orleans, upon the sand, silt, and soft clay. The "Big Easy" sits atop a geologic foundation of soft earth, and the weight of the city literally causes it to sink about 3 feet (0.9 meters) every century. This sinking feeling is known in geological circles as subsidence, and perhaps nowhere in the United States is subsidence a more precarious issue than in the Mississippi River delta.

Before the early 20th century, delta subsidence was counterbalanced--albeit destructively--by seasonal flooding of the Mississippi River, which deposited roughly equal layers of the aforementioned sand, silt, and clay atop the delta region, rebuilding the land even as it continued to slowly compact. This flood-powered rebuilding process is known as sedimentation.

The counterbalanced effects of subsidence and sedimentation were only barely understood in 1889, when the federal government first began to finance the construction of levees to control flooding along major rivers in the United States. That levee system would prove grossly inadequate when the Great Mississippi Flood of 1927 left some 700,000 Americans homeless throughout the Mississippi River valley.

The 1927 floods led to the 1928 Flood Control Act and related congressional measures that saw the federal government's role expand from merely underwriting levees to overseeing the design, construction, and maintenance of almost the entire national flood-control infrastructure. Unfortunately the vast and complicated levee systems built under these federal initiatives paid almost no heed to the balance of subsidence and sedimentation, especially in the Mississippi delta. The delta levees effectively cut off many "protected" lands from the supply of flood sediments that would have rebuilt the subsiding earth beneath them.

In simplest terms, protecting certain areas behind flood-proof levees actually made them more vulnerable to flooding. Denied the benefits of sedimentation, these lands dropped farther and farther below the flood plain, making them more dependent on protective levees. An "arms race" had thus begun between the U.S. Army Corps of Engineers, which endeavored to build ever-stronger levees, and the inexorable processes of nature. This precise circumstance was writ large in the city of New Orleans, most of which sat several feet below sea level on a narrow strip of land between the Mississippi River and Lake Pontchartrain.

Surrounded by high water on nearly every side, all that stood between the "Crescent City" and certain destruction were decades-old levees and pumping systems, none of which were designed to withstand a direct hit by anything stronger than a Category 3 hurricane. When Katrina made landfall just east of New Orleans as a Category 4 storm, the results were devastating. The levees protecting New Orleans were breached, allowing the water to finally reclaim the city that for decades it had been denied.

Flood defenses put in place after the epic 1927 flood made New Orleans safer in the short term but placed the city in grave danger over the long term. In the wake of America's worst natural disaster of the young 21st century, the entire Gulf Coast, the scientific community, and the federal government must learn another hard lesson about flood control--one that enables them to build a city and levee system that work with Mother Nature, rather than against her.

Seeing Red: How Primates Evolved beyond Color Blindness

Human beings with red-green color blindness may represent as little as 2% of the world's population, but they see the world much as the majority of mammals on the planet do. Only certain species of primates--humans among them--are genetically endowed with trichromatic color vision, which allows them not only to perceive color but also to distinguish between red and green. The scarcity of trichromatic color vision among mammals, and the evolutionary impetus behind its development, have interested scientists for years.


All mammals capable of perceiving color, with the exception of certain primates, have dichromatic color vision. This means that their retinas contain only two types of cone receptors instead of the three varieties of cones that enable trichromatic vision. The various types of cones in the eye are tuned to react to different wavelengths of visible light, producing nerve signals that the brain interprets as color. Dichromatic vision employs groups of "short" and "long" cones, which decipher the color of light based on where it falls between the high and low wavelength extremes. Trichromatic vision adds the benefit of "middle" cones, which allow the perception of color to become more finely tuned, particularly in the ability to distinguish between red and green.
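The difference between the two-cone and three-cone coding schemes can be illustrated numerically. The short sketch below is a toy model, not physiological data--the Gaussian sensitivity curves, peak wavelengths, and curve widths are all illustrative assumptions--but it shows the general idea: each cone type responds most strongly near its peak wavelength, and adding a "middle" cone gives the brain an extra signal that separates greenish light from reddish light more cleanly.

```python
import math

# Toy model: each cone type is a Gaussian sensitivity curve over wavelength.
# Peak wavelengths and the 40 nm width are illustrative assumptions.
def cone_response(wavelength_nm, peak_nm, width_nm=40.0):
    """Relative response of one cone type to monochromatic light."""
    return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width_nm ** 2))

def encode(wavelength_nm, peaks):
    """The 'color signal': one response value per cone type."""
    return [cone_response(wavelength_nm, p) for p in peaks]

def separation(w1, w2, peaks):
    """How different the signals for two wavelengths are (Euclidean distance)."""
    return math.dist(encode(w1, peaks), encode(w2, peaks))

DICHROMAT = [440, 560]          # "short" + "long" cones only
TRICHROMAT = [440, 530, 560]    # adds a "middle" cone

green, red = 530, 600           # approximate greenish vs. reddish light (nm)

# The middle cone responds very differently to the two wavelengths, so the
# three-value code separates green from red more than the two-value code does.
print("dichromatic separation: ", round(separation(green, red, DICHROMAT), 3))
print("trichromatic separation:", round(separation(green, red, TRICHROMAT), 3))
```

Because the trichromatic code simply appends one more coordinate, its separation between any two wavelengths can only match or exceed the dichromatic one; the gain is largest exactly where the middle cone's response changes fastest, in the green-to-red band.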

The vast majority of evolutionary anthropologists agree that primates evolved trichromatic color vision as a means of locating edible plants. The need to identify food species against a backdrop of dense foliage meant that color, as well as shape, became a crucial marker. The ability to locate food in this way provided the competitive advantage necessary to encourage the gradual development of trichromatic color vision. Scientists continue to debate, however, whether trichromatic color vision evolved specifically to allow primates to select the most nutritious leaves, many of which have red hues instead of green, or whether it developed to enable them to select the ripest and most nutritious fruits available, which also tend to have a red hue.

Further complicating the "leaves-versus-fruits" debate is the inconsistent distribution of the full-color vision trait across primate species. All but one species of trichromatic mammal are so-called Old World (Eastern Hemisphere) primates (the exception being the New World howler monkey). The distribution of color vision in New World (Western Hemisphere) primates, however, is sex-specific, with only some females possessing trichromatic vision. In these species, the trichromatic trait requires two X chromosomes, which occur only in females. The competitive advantage of the color vision trait should have been comparable in both New and Old World environments, yet color vision did not develop simultaneously in both locations. While it is likely that contemporary primates use trichromatic vision to locate red, immature, protein-rich leaves, this may not always have been the case, and the possibility that different factors influenced the development of trichromatic color vision on separate continents has clouded the evolutionary debate.

Regardless of whether the identification of fruits or leaves compelled certain primate species to evolve trichromatic vision, there is mounting evidence that the development of extraordinary color perception came at the cost of a diminished sense of smell in those same species. Vision operates accurately at a much greater distance than smell does, particularly when it comes to detecting sexual maturity in potential mates. The rise of color vision likely provided better "long-range" sexual cues than pheromone detection could, which is why humans and our close primate cousins often have a less acute olfactory sense than other primates do. Seeing red, it seems, is not without a sensory cost. Thus, while humans and other primates enjoy the advantages of full color vision, the reasons this trait developed have yet to be definitively determined.

Ring of Fire! Facts You Don't Know

Plotted on a world map, the locations of active, land-based volcanoes reveal that most volcanoes occur on the edges of continents or along island chains. In fact, many of the world's volcanoes lie along the circumference of the Pacific Ocean, forming the so-called Ring of Fire.


This phenomenon can be explained using the theory of plate tectonics. Nearly 200 million years ago, the gigantic landmass known as Pangaea cracked apart at the lithosphere, the earth's rigid outer layer; the lithosphere split into several tectonic plates, which began moving apart. According to plate tectonic theory, the plates drift slowly apart, and after 200 million years or so they begin to move back together again, rather like pieces in a geological game of bumper cars. The great majority of the world's earthquakes and active volcanoes occur near the boundaries of these shifting plates.

When two plates collide, one slides beneath the other. The rock that is consequently pushed down toward the earth’s center melts; melted rock rises as magma, forming volcanoes near the collision point. The long arc of the Ring of Fire is the result of the huge Pacific plate sliding under several continental plates. Volcanoes also result from seafloor spreading; where crustal plates are separating, magma pushes up between the plates, creating mid-ocean ridges such as the Mid-Atlantic Ridge.

Around the world a number of regions have been identified as hot spots, areas in the middle of plates where magma rises and breaks through the earth's crust. Such hot spots include Reunion Island and the island of Hawaii, where periodic eruptions are anticipated. Other areas, where volcanic activity may not occur for several human life spans, can be deceptive: the intervals between eruptions, long from a human perspective, are brief in geological terms.

Examples of the latter include Mount Saint Helens in the United States, which erupted explosively in 1980 and covered the ground within a 10-mile radius with up to 10 inches of fine ash and pumice, and areas up to 300 miles away with as much as 1/2 inch of ash. Ash also reached an altitude of 80,000 feet in less than 15 minutes. Much of the eastern portion of Washington state was covered with a 6-inch layer of fine ash. Most residents had never experienced a volcanic eruption before and drove their vehicles through it--to their sorrow, as volcanic ash can be extremely destructive to engines and other machinery.

Reborn from the Ashes: The Promise of Smoke-Induced Seed Germination

An old truism teaches us that "where there's smoke, there's fire," but recent findings by Australian botanists are helping us understand why, for certain plants, where there is smoke there is also rebirth. Scientists have known for years that certain species of plants can be coaxed into accelerating their reproductive cycle by exposing their seeds to certain types of smoke--an adaptation that these plants evolved as a means of surviving wildfires--but the discovery of a chemical called butenolide may be the missing link in exploiting this behavior without ever invoking combustion.


Smoke-induced germination is a well-established practice in commercial horticulture. Exposing the seeds of many plant varieties to controlled doses of smoke--either by directly immersing them in smoke or by submerging them in smoke-infused water--can induce group germination, meaning that an entire crop or nursery batch will grow and flower in relative unison. Plants native to Mediterranean-type climatic regions, such as the southern California chaparral and large swaths of the Australian bush, most commonly respond to smoke-induced germination. These areas are known for their extended low-rainfall seasons that create dry, fire-prone underbrush, forcing the native flora to evolve fire-adapted reproductive countermeasures.

In some respects wildfire can be viewed as an opportunity for these plants to exploit catastrophe. The competitive environment is effectively "reset" by wildfire: competing vegetation is wiped away, and access to growing areas, groundwater, and open sunlight is suddenly up for grabs. Plants primed to germinate after a fire has died out to a smoking smolder are often best suited to thrive in this reset post-wildfire landscape. Yet, for all the promise of this trait, smoke-induced germination also holds the potential for dire environmental consequences.

One of the primary chemical components of the smoke produced by burning wood is nitrogen dioxide, and initial insights suggested that nitrogen dioxide is one of the principal chemical triggers for smoke-induced seed germination. This discovery worried many environmentalists, however, because nitrogen dioxide is also a common component of smog. If smog forces plant seeds to germinate too soon, when an area has not been cleared by fire, the seeds are effectively wasted on an overly competitive growing environment. Could urban air pollution be overstimulating, and thereby disrupting, the natural reproductive cycles of exposed plants? Scientists have a compelling incentive to answer this question: if they can isolate and identify every component in wildfire smoke that induces seed germination, they could use that knowledge to counteract the effects of smog.

After 11 years of research, scientists at the University of Western Australia have presented convincing evidence that they have found the primary trigger for many species of smoke-sensitive plants: butenolide, a heterocyclic compound produced by burning plants and wood. What makes this discovery so intriguing is that butenolide exposure induces germination even in plants that are not native to Mediterranean climates, including lettuce, parsley, and the popular herbal remedy Echinacea. This suggests that butenolide-based fertilizers might be used successfully on a staggeringly wide array of commercial plants--even those not commonly threatened by wildfires.

This discovery, in turn, could also help control the effects of smog on nitrogen dioxide-sensitive plants. With judicious use of butenolide fertilizer in smog-prone areas, botanists could theoretically encourage mass seed germination during key time periods, compensating for reproductive disruption caused by smog. Thanks to savvy Australian biochemistry, agriculture may soon reap the benefits of wildfire-adapted evolution without ever needing to set even a single plant aflame.