The study of cells--cell biology--began in 1660, when English physicist Robert Hooke melted strands of spun glass to create lenses that he focused on bee stingers, fish scales, fly legs, feathers, and any type of insect he could hold still. When he looked at cork, which is the bark from a type of oak tree, it appeared to be divided into little boxes, which were remnants of cells that were once alive. Hooke called these units “cells” because they looked like the cubicles (cellae) where monks studied and prayed. Although Hooke did not realize the significance of his observation, he was the first person to see the outlines of cells. …
Despite the accumulation of microscopists’ drawings of cells made during the seventeenth and eighteenth centuries, the cell theory--the idea that the cell is the fundamental unit of all life--did not emerge until the nineteenth century.
Historians attribute the delay to poor technology--for example, crude microscopes and a lack of procedures to preserve and study living cells without damaging them. Neither the evidence itself nor early interpretations of it suggested that all organisms were composed of cells. Hooke had not observed actual cells but rather what they had left behind: the cell walls. Leeuwenhoek made important observations, but he did not methodically describe or categorize the structures that cells had in common. …
In the nineteenth century, more powerful microscopes, with better magnification and illumination, revealed details of life at the subcellular level. In the early 1830s, Scottish surgeon Robert Brown noted a roughly circular structure in cells from orchid plants. Finding the structure in every orchid cell, he then identified it in all cells from a variety of other organisms. He named it the “nucleus,” a term that has remained in use. Brown did not grasp the full importance of the structure he had discovered, but today we know that the nucleus houses the DNA of complex cells. …
Many cell biologists extended Schleiden and Schwann’s observations and ideas. German physiologist Rudolph Virchow added the important corollary in 1855 that all cells come from preexisting cells, contradicting the still-popular idea that life can arise from the nonliving or from nothingness. Virchow’s statement also challenged the popular concept that cells develop on their own from the inside out, the nucleus forming a cell body around itself, and then the cell body growing a cell membrane. Virchow’s observation set the stage for descriptions of cell division in the 1870s and 1880s. Virchow was ahead of his time because he hypothesized that abnormal cells cause diseases that affect the whole body.
Many think that the reason so many animals live with others of their species is that social creatures are higher up the evolutionary scale and so are better adapted and leave more offspring than do animals that live solitary lives.
However, in each and every species, generation after generation, relatively social and relatively solitary types compete unconsciously with one another in ways that determine who leaves more offspring on average. In some species, the more social individuals have won out, but in a large majority, it is the solitary types that have consistently left more surviving descendants on average.
Social groups also offer opportunities for reproductive interference. Breeding males that live in close association with more attractive rivals may lose their mates to these individuals. In addition, sociality has two other potential disadvantages. The first is heightened competition for food, which occurs in animals as different as colonial fieldfares (a kind of songbird) and groups of lions, whose females are often pushed from their food by hungry males. The second is increased vulnerability to parasites and disease, which plague social species of all sorts. While it is true that some social animals have evolved special responses designed to combat parasites and disease, those responses can only reduce, but cannot totally eliminate, the damage caused by those threats, and the responses may even carry their own costs. Thus, honeybees warm their hives in response to an infestation by a fungal pathogen, which apparently helps kill the heat-sensitive fungus, but at the price of time and energy expended by the heat-producing workers.
The most widespread fitness benefit for social animals, however, probably is improved protection against predators. Many studies have shown that animals in groups gain by reducing the individual risk of being captured, or by spotting danger sooner, or by attacking their enemies in groups. Males in nesting colonies of bluegill sunfish cooperate in driving egg-eating bullhead catfish away from their nests at the bottom of a freshwater lake. While bluegills have adopted social behavior to avoid predation, closely related species have evolved other means of protecting themselves while nesting alone. Thus, the solitary pumpkinseed sunfish, a member of the same genus as the bluegill, has powerful biting jaws and so can repel egg-eating enemies on its own, whereas bluegills have small, delicate mouths good only for inhaling small, soft-bodied insect larvae.
Pumpkinseed sunfish are in no way inferior to or less well adapted than bluegills because they are solitary; they simply gain less through social living, which makes solitary nesting the adaptive tactic for them.
Comets are among the most interesting and unpredictable bodies in the solar system. They are made of frozen gases (water vapor, ammonia, methane, carbon dioxide, and carbon monoxide) that hold together small pieces of rocky and metallic materials. Many comets travel in very elongated orbits that carry them far beyond Pluto. These long-period comets take hundreds of thousands of years to complete a single orbit around the Sun. However, a few short-period comets (those having an orbital period of less than 200 years), such as Halley’s Comet, make regular encounters with the inner solar system.
The observation that the tail of a comet points away from the Sun in a slightly curved manner led early astronomers to propose that the Sun has a repulsive force that pushes the particles of the coma away, thereby forming the tail. Today, two solar forces are known to contribute to this formation. One, radiation pressure, pushes dust particles away from the coma. The second, known as solar wind, is responsible for moving the ionized gases, particularly carbon monoxide. Sometimes a single tail composed of both dust and ionized gases is produced, but often two tails—one of dust, the other, a blue streak of ionized gases—are observed.
Comets apparently originate in two regions of the outer solar system. Most short-period comets are thought to orbit beyond Neptune in a region called the Kuiper belt, in honor of the astronomer Gerard Kuiper. During the past decade over a hundred of these icy bodies have been discovered. Most Kuiper belt comets move in nearly circular orbits that lie roughly in the same plane as the planets. A chance collision between two comets, or the gravitational influence of one of the Jovian planets—Jupiter, Saturn, Uranus, and Neptune—may occasionally alter the orbit of a comet in these regions enough to send it to the inner solar system and into our view.
The most famous short-period comet is Halley’s Comet, named after English astronomer Edmond Halley. Its orbital period averages 76 years, and every one of its 30 appearances since 240 B.C. has been recorded by Chinese astronomers.
When seen in 1910, Halley’s Comet had developed a tail nearly 1.6 million kilometers (1 million miles) long and was visible during daylight hours. Its most recent approach occurred in 1986.
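The figures above are internally consistent, as a quick arithmetic sketch shows (an illustration only; it treats the 76-year figure as exact, though the passage gives it as an average, and it accounts for the calendar having no year zero):

```python
# Consistency check for the passage's figures: 30 appearances between
# 240 B.C. and the A.D. 1986 approach, with a ~76-year average period.
avg_period = 76                      # average orbital period, in years
span = 1986 + 240 - 1                # 240 B.C. to A.D. 1986; no year zero
intervals = span / avg_period        # ~29.3 orbits completed in that span
appearances = round(intervals) + 1   # fencepost: count both endpoint returns
print(appearances)                   # -> 30, matching the recorded count
```

Counting both endpoint returns (the fencepost adjustment) is what turns roughly 29 completed orbits into 30 recorded appearances.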
propose = offer the theory
In the northern American colonies, especially New England, tight-knit farming families, organized in communities of several thousand people, dotted the landscape by the mid-eighteenth century. New Englanders staked their future on a mixed economy. They cleared forests for timber used in barrels, ships, houses, and barns. They plumbed the offshore waters for fish to feed local populations. And they cultivated and grazed as much of the thin-soiled, rocky hills and bottomlands as they could recover from the forest.
In the North, the broad ownership of land distinguished farming society from every other agricultural region of the Western world. Although differences in circumstances and ability led gradually toward greater social stratification, in most communities, the truly rich and terribly poor were few and the gap between them small compared with European society. Most men other than indentured servants (servants contracted to work for a specific number of years) lived to purchase or inherit a farm of at least 50 acres. With their family’s labor, they earned a decent existence and provided a small inheritance for each of their children. Settlers valued land highly, for owning land ordinarily guaranteed both economic independence and political rights.
The decreasing fertility of the soil compounded the problem of dwindling farm size in New England. When land had been plentiful, farmers planted crops in the same field for three years and then let it lie fallow (unplanted) in pasture seven years or more until it regained its fertility. But on the smaller farms of the eighteenth century, farmers had reduced fallow time to only a year or two. Such intense use of the soil reduced crop yields, forcing farmers to plow marginal land or shift to livestock production.
Wherever they took up farming, northern cultivators engaged in agricultural work routines that were far less intense than in the South. The growing season was much shorter, and the cultivation of cereal crops required incessant labor only during spring planting and autumn harvesting. This less burdensome work rhythm allowed many northern cultivators to fill out their calendars with intermittent work as clockmakers, shoemakers, carpenters, and weavers.
In discussing the growth of cities in the United States in the nineteenth century, one cannot really use the term “urban planning,” as it suggests modern concerns for spatial and service organization which, in most instances, did not exist before the planning revolution called the City Beautiful Movement that began in the 1890s. While there certainly were urban areas that were “planned” in the comprehensive contemporary sense of the word before that date, most notably Washington, D.C., these were the exception. Most “planning” in the nineteenth century was limited to areas much smaller than a city and was closely associated with developers trying to make a profit from a piece of land. Even when these small-scale plans were well designed, the developers made only those improvements that were absolutely necessary to attract the wealthy segment of the market. Indeed, it was the absence of true urban planning that allowed other factors to play such an important role in shaping the nineteenth-century American city.
Demographic patterns also affected urbanization in two ways: first, urban populations grew steadily throughout the century due to migration from rural areas, principally by those seeking factory work, and immigration from abroad.
Therefore cities expanded as new housing had to be provided. Secondly, at the same time that new residents were surging into cities, many urbanites, particularly those of the middle classes, began to leave. While a preference for rural living explained part of this exodus, it was also due to the perception that various urban problems were becoming worse.
Problems of fire and poor sanitation were inextricably linked with the last major urban problem of the nineteenth century—lack of coordination in the physical expansion of cities and their infrastructure systems (systems for providing services such as water, gas, electricity, and sewage). Typically, development was both unplanned and unrestricted, with landowners making all choices of lot size, services, and street arrangement based only on their individual needs in the marketplace. Distortions of streets and abrupt changes in the distance of houses from the street in urban areas, which so clearly delineate where one development ended and another began, were just the most obvious problems that this lack of coordination created.
plague = cause trouble for
Industrial output increased smartly across nearly all of Europe between 1450 and 1575. Although trade with the Americas had something to do with this, the main determinants of this industrial advance lay within Europe itself.
Population grew from 61 million in 1500 to 78 million a century later, and the proportion of Europeans living in cities of 10,000 or more—and thus dependent on the market for what they consumed—expanded from less than 6 percent to nearly 8 percent during the same period. More important than sheer numbers, many Europeans’ incomes rose. This was especially true among more fully employed urban groups, farmers who benefited from higher prices and the intensifying commercialization and specialization in agriculture (which also led them to shed much non-agricultural production in favor of purchased goods), and landlords and other property owners who collected mounting rents. Government activities to build and strengthen the state were a stimulus to numerous industries, notably shipbuilding, textiles, and metallurgy. To cite just one example, France hastened to develop its own iron industry when the Hapsburgs—the family that governed much of Europe, and whom France fought repeatedly in the sixteenth century—came to dominate the manufacture of weapons in Germany and the cities of Liege and Milan, which boasted Europe’s most advanced technology.
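The claim that the growing urban share mattered more than sheer numbers can be made concrete with quick arithmetic (a sketch using the passage's own figures; the 6 and 8 percent shares are treated here as point values, though the text gives them only as approximate bounds):

```python
# Market-dependent (urban) population implied by the passage's figures.
total_1500, total_1600 = 61_000_000, 78_000_000
share_1500, share_1600 = 0.06, 0.08           # "less than 6%" and "nearly 8%"
urban_1500 = share_1500 * total_1500          # ~3.7 million
urban_1600 = share_1600 * total_1600          # ~6.2 million
total_growth = total_1600 / total_1500 - 1    # ~28% over the century
urban_growth = urban_1600 / urban_1500 - 1    # ~70%: far faster than overall
print(round(total_growth, 2), round(urban_growth, 2))
```

On these figures the market-dependent population grew roughly two and a half times as fast as the population overall, which is why the rising urban share was a stronger stimulus to industry than total population growth alone.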
In metals and mining, technical improvements were available that saved substantially on raw materials and fuel, causing prices to drop. The construction of ever-larger furnaces capable of higher temperatures culminated in the blast furnace, which used cheaper ores and economized on scarce and expensive wood, cutting costs per ton by 20 percent while boosting output substantially. A new technique for separating silver from copper allowed formerly worthless ores to be exploited. Better drainage channels, pumps, and other devices made it possible to tunnel more deeply into the earth as surface deposits began to be exhausted. In most established industries, however, technological change played little role; as in the past, new customers were sought by developing novel products based on existing technologies, such as a new type of woolen cloth with the texture of silk. …
diffusion = dispersal
perfected = completed
A major question in the archaeology of the southwestern region of the United States is why so many impressive settlements, and even entire regions, were abandoned in prehistoric times. Archaeologist Tim Kohler has suggested that the nature of human-environmental interaction was an important reason in the case of the Anasazi people. The actual case study that Kohler relies on is from the Dolores River basin of southwest Colorado, where the Anasazi seem to have moved in about A.D. 600. Over the following couple of centuries, the population increased, and they aggregated (or gathered) into villages, but by about A.D. 900 the area began to be abandoned. Other archaeologists have identified the immediate cause of this abandonment to be a series of short growing seasons that would have put pressure on corn production at that high an altitude. Kohler, however, asserts that a growing population led to human-environmental interactions that caused people to live in villages, intensify agrarian food production, deforest the region, deplete the local soils, and ultimately abandon the area.
This evidence has convinced Kohler of the importance of human impact in degrading the local environment. His interpretation of the situation is that by about A.D. 840, people had aggregated into villages in favorable settings because of their competitive organizational advantages over smaller units in the face of growing population and depletion of local wild resources. Hence, the very nature of the initial slash-and-burn agriculture encouraged a further dependence on agriculture and the aggregation of people into denser settlements. However, there are costs to aggregation, such as the increasing distance to usable fields, the heavier pressure on local soils, and the accompanying increase in agricultural risk. The Anasazi responded to this by further intensification, such as water-control mechanisms, to feed the increasing population. Such a trajectory is fraught with risks, but it is also pushed forward by advantages it bestows on its participants who organize and cooperate. …
As the first cities formed in Mesopotamia in the Middle East, probably around 3000 B.C., it became necessary to provide food for larger populations, and thus to find ways of increasing agricultural production. This, in turn, led to the problem of obtaining sufficient water.
Salinization is caused by an accumulation of salt in the soil near its surface. This salt is carried by river water from the sedimentary rocks in the mountains and deposited on the Mesopotamian fields during natural flooding or purposeful irrigation. Evaporation of water sitting on the surface in hot climates is rapid, concentrating the salts in the remaining water that then descends through the soil to the underlying water table. In southern Mesopotamia, for example, the natural water table comes to within roughly six feet of the surface. Conditions of excessive irrigation bring the water table to within eighteen inches of the surface, and water can rise further to the root zone, where the high concentration of salts would kill most plants.
Growing agrarian societies often tried to meet their food-producing needs by farming less-desirable hill slopes surrounding the favored low-lying valley bottoms. Since bringing irrigation water to a hill slope is usually impractical, the key is effective utilization of rainfall. Rainfall either soaks into the soil or runs off of it due to gravity. A soil that is deep, well-structured, and covered by protective vegetation and mulch will normally absorb almost all of the rain that falls on it, provided that the slope is not too steep. However, soils that have lost their vegetative cover and surface mulch will absorb much less, with almost half the water being carried away by runoff in more extreme conditions. This runoff carries with it topsoil particles, nutrients, and humus (decayed vegetable matter) that are concentrated in the topsoil. The loss of this material reduces the thickness of the rooting zone and its capacity to absorb moisture for crop needs. …
There are both great similarities and considerable diversity in the ecosystems that evolved on the islands of Oceania in and around the Pacific Ocean. The islands, such as New Zealand, that were originally parts of continents still carry some small plant and animal remnants of their earlier biota (animal and plant life), and they also have been extensively modified by evolution, adaptation, and the arrival of new species. By contrast, the other islands, which emerged via geological processes such as volcanism, possessed no terrestrial life, but over long periods, winds, ocean currents, and the feet, feathers, and digestive tracts of birds brought the seeds of plants and a few species of animals. Only those species with ways of spreading to these islands were able to undertake the long journeys, and the various factors at play resulted in diverse combinations of new colonists on the islands. One estimate is that the distribution of plants was 75 percent by birds, 23 percent by floating, and 2 percent by wind.
Finally, a fourth major factor in species distribution, and indeed in the shaping of Pacific ecosystems, was wind. It takes little experience on Pacific islands to be aware that there are prevailing winds. To the north of the equator these are called north-easterlies, while to the south they are called south-easterlies. Further south, from about 30° south, the winds are generally from the west. As a result, on nearly every island of significant size there is an ecological difference between its windward and leeward (away from the wind) sides. Apart from the wind action itself on plants and soils, wind has a major effect on rain distribution. The Big Island of Hawaii offers a prime example; one can leave Kona on the leeward side in brilliant sunshine and drive across to the windward side where the city of Hilo is blanketed in mist and rain. …
Cosmologists attempt to understand the origin and structure of the universe as a whole. They begin their search with an assumption about the nature of the universe—namely, that in looking out from our vantage point in the cosmos, we see essentially the same kind of universe that an observer stationed in any other part of it, no matter how remote, would see. As far as our telescopes can reach, we see galaxies and clusters of galaxies distributed in more or less the same way in every direction. This assumption that the universe is uniform on a large scale is called “the cosmological principle.”
The essential idea of evolutionary cosmology is that there was a beginning—a moment of creation at which the universe came into existence in a hot, violent explosion—the Big Bang. In the beginning, the universe was very hot, very dense, and very tiny. As the explosion evolved, the temperature dropped, the distribution of matter and energy thinned, and the universe expanded. From the current observed rate of expansion, we conclude that the creation event occurred between ten and twenty billion years ago.
In an expanding universe, the galaxies move away from each other, spreading matter more thinly over space. On the other hand, the perfect cosmological principle requires that the density of matter in the universe remain constant over time. To make the steady-state theory compatible with the expanding universe, its proponents introduced the notion of continuous creation. As the universe expands and the galaxies move farther apart, new matter—in the form of hydrogen—is introduced into the universe. The rate at which the hypothesized new matter is created is far too small for this creation to be detected with available instruments, but continuous creation provides just enough matter to form new stars and galaxies that fill in the space left by the old ones. Thus, in the steady-state universe there is evolution of stars and galaxies, but the general character and the overall density of the universe remains unchanged over time. In this special sense, the steady-state universe itself does not evolve.
Quasars are such distant objects that the light now reaching us from quasars left them billions of years ago. This means that when we observe quasars today we are seeing the state of the universe billions of years ago. Thus, the fact that almost all quasars are very far away implies that earlier in the history of the universe quasars were developing more frequently than they are now. This evolution is consistent with the Big Bang theory. But it violates the perfect cosmological principle, and so it is inconsistent with the steady-state view.
Seaweeds are multicellular algae that inhabit the oceans. Despite their evolutionary distance from each other, seaweeds—such as brown algae, red algae, and green algae—have in common many aspects of their biology and contributions to the ecology of the seas.
The environmental factors that are most influential in governing the distribution of seaweeds are light and temperature. Some other abiotic (nonliving) factors critical in governing the distribution of seaweeds are duration of tidal exposure and desiccation (drying out), wave action and surge, salinity, and availability of mineral nutrients. The areas of the world most favorable to seaweed diversity include both sides of the North Pacific Ocean, Australia, southwestern Africa, and the Mediterranean Sea.
The concept of chromatic adaptation was proposed in 1883, and the hypothesis was accepted for about 100 years, until it was realized that such zonation did not necessarily occur and that the distribution of seaweeds depended more on herbivory (the consumption of plant material), competition, varying concentration of the specialized pigments, and the ability of seaweeds to alter their forms of growth. …
duration = length of time
About 13 percent of bird species, including most seabirds, nest in colonies.
Colonial nesting evolves in response to a combination of two environmental conditions: (1) a shortage of nesting sites that are safe from predators and (2) abundant or unpredictable food that is distant from safe nest sites. First and foremost, individual birds are safer in colonies that are inaccessible to predators, as on small rocky islands. In addition, colonial birds detect predators more quickly than do small groups or pairs and can drive the predators from the vicinity of the nesting area. Because nests at the edges of breeding colonies are more vulnerable to predators than those in the centers, the preference for advantageous central sites promotes dense centralized packing of nests.
Coordinated social interactions tend to be weak when a colony is first forming, but true colonies provide extra benefits. Synchronized nesting, for example, produces a sudden abundance of eggs and chicks that exceeds the daily needs of local predators. Additionally, colonial neighbors can improve their foraging by watching others. This behavior is especially valuable when the off-site food supplies are restricted or variable in location, as are swarms of aerial insects harvested by swallows. The colonies of American cliff swallows, for example, serve as information centers from which unsuccessful individual birds follow successful neighbors to good feeding sites. Cliff swallows that are unable to find food return to their colony, locate a neighbor that has been successful, and then follow that neighbor to its food source. All birds in the colony are equally likely to follow or to be followed and thus contribute to the sharing of information that helps to ensure their reproductive success.
Among the costs, colonial nesting leads to increased competition for nest sites and mates, the stealing of nest materials, and increased physical interference, among other effects. In spite of food abundance, large colonies sometimes exhaust their local food supplies and abandon their nests. Large groups also attract predators, especially raptors, and facilitate the spread of parasites and diseases. The globular mud nests in large colonies of the American cliff swallow, for example, are more likely to be infested by fleas or other bloodsucking parasites than are nests in small colonies. Experiments in which some burrows were fumigated to kill the parasites showed that these parasites lowered survivorship by as much as 50 percent in large colonies but not significantly in small ones. The swallows inspect nests and select parasite-free ones; in large colonies, they tend to build new nests rather than use old, infested ones. …
contribute to = add to