Band 5.5: 《IELTS READING 雅思阅读》
Band 6.0: 《Collins Reading for IELTS》
From 《IELTS READING 雅思阅读》 p34
Albatrosses are the largest seabirds in existence, with wingspans extending to over three metres. They represent a small subset of the larger group known as tube-nosed petrels, which have strong, sharp, curved beaks which they use for catching fish and squid on the surface of the ocean. While there is some debate about the exact taxonomy of the species, it is agreed that there are somewhere between 21 and 24 species of albatrosses.
Of these species, approximately half breed in New Zealand and about 80 per cent breed or fish within New Zealand’s territorial waters. Six species breed only in New Zealand or on its offshore islands. One of only two mainland nesting sites in the world for these birds, that of the northern royal albatross, is on the Otago Peninsula in the South Island of New Zealand; it is a popular tourist destination. Visitors can view the albatross colony from a special building which has been established beside the nesting ground and, while the site is closed during breeding season, at other times it is often possible to see parents and their chicks living and feeding only metres away from human observers.
Albatrosses spend most of their lives at sea, coming to land only to mate and raise their chicks. Male and female birds cooperate in raising their offspring. At the Taiaroa nesting site in New Zealand, eggs are laid in October or November each year. Incubation takes about 11 weeks, and during this time both parents take turns to sit on the eggs for periods of up to three weeks, while the other bird goes off to sea to eat. It takes the chicks up to five or six days to hatch from their tough shell. Once they are hatched, the parents take turns in looking after them for about five or six weeks. After this time, they are left alone except for regular feeding until they get their feathers and are ready to fly, at about eight months of age.
Once the young birds are ready to fly, they are off to sea. Albatrosses spend about 80 per cent of their lives at sea, soaring over the waves and feeding off surface fish and squid. Some albatrosses travel long distances over the pelagic, or deep, ocean, while others find food closer to land over areas of continental shelf. They can fly at great speed, in bursts of up to 140 km/hour, and they can cover huge distances in one day, even as much as 1800 km.
The royal albatrosses at Taiaroa Head stay at sea for the first three years of their lives, after which they return to the colony once a year for several years before finding a mate and beginning to breed at around the age of eight. Albatrosses are faithful birds; they mate for life and raise one chick every two years on average. They are also long-lived, and birds have been recorded still laying eggs into their 50s and even 60s. However, their relatively low reproductive rate is one of the factors which make them vulnerable to the threat of extinction.
There are also risks to albatross chicks on land. Natural predators such as seagulls can eat eggs and young birds, and in mainland areas there are also threats from dogs, cats and other land animals. On some offshore islands, sea lions have been observed raiding nests for eggs. It is thought that this is a new behaviour.
The main threats to the adult albatross occur at sea, and most of these are man-made. Albatrosses like to travel close to fishing boats, to eat the leftover scraps of fish that are dropped over the side of the boat. Sometimes, however, they also eat the bait and accidentally ingest fish hooks, or get dragged along on fishing lines and drown. The number of albatrosses that any one boat catches is small, but because there are so many fishing boats, this may have a long-term impact on population numbers. It is estimated that at least 100,000 albatrosses die in this way each year. As for all sea bird species, there are other threats, such as drift nets, oil spills and rubbish such as plastic in the ocean. While there are international agreements and fishing conventions to try to protect sea birds, albatrosses are among the million or so sea birds that get caught in drift nets and die each year.
The albatross is a magnificent, beautiful and awe-inspiring creature. We need to work together to protect this bird and others from threats posed by human activity.
From 《Collins Reading for IELTS》 p49/p56
Light is important to organisms for two different reasons. Firstly it is used as a cue for the timing of daily and seasonal rhythms in both plants and animals, and secondly it is used to assist growth in plants.
Breeding in most organisms occurs during a part of the year only, and so a reliable cue is needed to trigger breeding behaviour. Day length is an excellent cue, because it provides a perfectly predictable pattern of change within the year. In the temperate zone in spring, temperatures fluctuate greatly from day to day, but day length increases steadily by a predictable amount. The seasonal impact of day length on physiological responses is called photoperiodism, and the amount of experimental evidence for this phenomenon is considerable. For example, the breeding of some species of birds can be induced even in midwinter simply by increasing day length artificially (Wolfson 1964). Other examples of photoperiodism occur in plants. A short-day plant flowers when the day is less than a certain critical length. A long-day plant flowers after a certain critical day length is exceeded. In both cases the critical day length differs from species to species. Plants which flower after a period of vegetative growth, regardless of photoperiod, are known as day-neutral plants.
Breeding seasons in animals such as birds have evolved to occupy the part of the year in which offspring have the greatest chances of survival. Before the breeding season begins, food reserves must be built up to support the energy cost of reproduction, and to provide for young birds both when they are in the nest and after fledging. Thus many temperate-zone birds use the increasing day lengths in spring as a cue to begin the nesting cycle, because this is a point when adequate food resources will be available. The adaptive significance of photoperiodism in plants is also clear. Short-day plants that flower in spring in the temperate zone are adapted to seedling growth during the growing season. Long-day plants are adapted for situations that require fertilization by insects, or a long period of seed ripening. Short-day plants that flower in the autumn in the temperate zone are able to build up food reserves over the growing season and overwinter as seeds. Day-neutral plants have an evolutionary advantage when the connection between the favourable period for reproduction and day length is much less certain. For example, desert annuals germinate, flower and seed whenever suitable rainfall occurs, regardless of the day length.
The breeding season of some plants can be delayed to extraordinary lengths. Bamboos are perennial grasses that remain in a vegetative state for many years and then suddenly flower, fruit and die (Evans 1976). Every bamboo of the species Chusquea abietifolia on the island of Jamaica flowered, set seed and died during 1884. The next generation of bamboo flowered and died between 1916 and 1918, which suggests a vegetative cycle of about 31 years. The climatic trigger for this flowering cycle is not yet known, but the adaptive significance is clear. The simultaneous production of masses of bamboo seeds (in some cases lying 12 to 15 centimetres deep on the ground) is more than all the seed-eating animals can cope with at the time, so that some seeds escape being eaten and grow up to form the next generation (Evans 1976).
The second reason light is important to organisms is that it is essential for photosynthesis. This is the process by which plants use energy from the sun to convert carbon from soil or water into organic material for growth. The rate of photosynthesis in a plant can be measured by calculating the rate of its uptake of carbon. There is a wide range of photosynthetic responses of plants to variations in light intensity. Some plants reach maximal photosynthesis at one-quarter full sunlight, and others, like sugarcane, never reach a maximum, but continue to increase photosynthesis rate as light intensity rises.
Plants in general can be divided into two groups: shade-tolerant species and shade-intolerant species. This classification is commonly used in forestry and horticulture. Shade-tolerant plants have lower photosynthetic rates and hence have lower growth rates than those of shade-intolerant species. Plant species become adapted to living in a certain kind of habitat, and in the process evolve a series of characteristics that prevent them from occupying other habitats. Grime (1966) suggests that light may be one of the major components directing these adaptations. For example, eastern hemlock seedlings are shade-tolerant. They can survive in the forest understorey under very low light levels because they have a low photosynthetic rate.
Bats have a problem: how to find their way around in the dark. They hunt at night, and cannot use light to help them find prey and avoid obstacles. You might say that this is a problem of their own making, one that they could avoid simply by changing their habits and hunting by day. But the daytime economy is already heavily exploited by other creatures such as birds. Given that there is a living to be made at night, and given that alternative daytime trades are thoroughly occupied, natural selection has favoured bats that make a go of the night-hunting trade. It is probable that the nocturnal trades go way back in the ancestry of all mammals. In the time when the dinosaurs dominated the daytime economy, our mammalian ancestors probably only managed to survive at all because they found ways of scraping a living at night. Only after the mysterious mass extinction of the dinosaurs about 65 million years ago were our ancestors able to emerge into the daylight in any substantial numbers.
Bats have an engineering problem: how to find their way and find their prey in the absence of light. Bats are not the only creatures to face this difficulty today. Obviously the night-flying insects that they prey on must find their way about somehow. Deep-sea fish and whales have little or no light by day or by night. Fish and dolphins that live in extremely muddy water cannot see because, although there is light, it is scattered by the dirt in the water. Plenty of other modern animals make their living in conditions where seeing is difficult or impossible.
Given the question of how to manoeuvre in the dark, what solutions might an engineer consider? The first one that might occur to him is to manufacture light, to use a lantern or a searchlight. Fireflies and some fish (usually with the help of bacteria) have the power to manufacture their own light, but the process seems to consume a large amount of energy.
Fireflies use their light for attracting mates. This doesn’t require a prohibitive amount of energy: a male’s tiny pinprick of light can be seen by a female from some distance on a dark night, since her eyes are exposed directly to the light source itself. However, using light to find one’s own way around requires vastly more energy, since the eyes have to detect the tiny fraction of the light that bounces off each part of the scene. The light source must therefore be immensely brighter if it is to be used as a headlight to illuminate the path than if it is to be used as a signal to others. In any event, whether or not the reason is the energy expense, it seems to be the case that, with the possible exception of some weird deep-sea fish, no animal apart from man uses manufactured light to find its way about.
What else might the engineer think of? Well, blind humans sometimes seem to have an uncanny sense of obstacles in their path. It has been given the name ‘facial vision’, because blind people have reported that it feels a bit like the sense of touch, on the face. One report tells of a totally blind boy who could ride his tricycle at good speed round the block near his home, using facial vision. Experiments showed that, in fact, facial vision is nothing to do with touch or the front of the face, although the sensation may be referred to the front of the face, like the referred pain in a phantom limb. The sensation of facial vision, it turns out, really goes in through the ears. Blind people, without even being aware of the fact, are actually using echoes of their own footsteps and of other sounds, to sense the obstacles. Before this was discovered, engineers had already built instruments to exploit the principle, for example to measure the depth of the sea under a ship. After this technique had been invented, it was only a matter of time before weapons designers adapted it for the detection of submarines. Both sides in the Second World War relied heavily on these devices, under such code names as Asdic (British) and Sonar (American), as well as Radar (American) or RDF (British), which uses radio echoes rather than sound echoes.
The Sonar and Radar pioneers didn’t know it then, but all the world now knows that bats, or rather natural selection working on bats, had perfected the system tens of millions of years earlier, and their ‘radar’ achieves feats of detection and navigation that would strike an engineer dumb with admiration. It is technically incorrect to talk about bat ‘radar’, since they do not use radio waves. It is sonar. But the underlying mathematical theories of radar and sonar are very similar, and much of our scientific understanding of the details of what bats are doing has come from applying radar theory to them. The American zoologist Donald Griffin, who was largely responsible for the discovery of sonar in bats, coined the term ‘echolocation’ to cover both sonar and radar, whether used by animals or by human instruments.
From 《IELTS READING 雅思阅读》 p111
From 《Collins Reading for IELTS》 p22
The joke comes over the headphones: ‘Which side of a dog has the most hair? The left’. No, not funny. Try again. ‘Which side of a dog has the most hair? The outside’. Hah! The punchline is silly yet fitting, tempting a smile, even a laugh. Laughter has always struck people as deeply mysterious, perhaps pointless. The writer Arthur Koestler dubbed it the luxury reflex: ‘unique in that it serves no apparent biological purpose’.
Theories about humour have an ancient pedigree. Plato expressed the idea that humour is simply a delighted feeling of superiority over others. Kant and Freud felt that joke-telling relies on building up a psychic tension which is safely punctured by the ludicrousness of the punchline. But most modern humour theorists have settled on some version of Aristotle’s belief that jokes are based on a reaction to or resolution of incongruity, when the punchline is either a nonsense or, though appearing silly, has a clever second meaning.
Graeme Ritchie, a computational linguist in Edinburgh, studies the linguistic structure of jokes in order to understand not only humour but language understanding and reasoning in machines. He says that while there is no single format for jokes, many revolve around a sudden and surprising conceptual shift. A comedian will present a situation followed by an unexpected interpretation that is also apt. So even if a punchline sounds silly, the listener can see there is a clever semantic fit, and that sudden mental ‘Aha!’ is the buzz that makes us laugh. Viewed from this angle, humour is just a form of creative insight, a sudden leap to a new perspective.
However, there is another type of laughter, the laughter of social appeasement, and it is important to understand this too. Play is a crucial part of development in most young mammals. Rats produce ultrasonic squeaks to prevent their play fights turning nasty. Chimpanzees have a ‘play-face’, a gaping expression accompanied by a panting ‘ah, ah’ noise. In humans, these signals have mutated into smiles and laughs. Researchers believe social situations, rather than cognitive events such as jokes, trigger these instinctual markers of play or appeasement. People laugh on fairground rides or when tickled to flag a play situation, whether they feel amused or not.
Both social and cognitive types of laughter tap into the same expressive machinery in our brains, the emotion and motor circuits that produce smiles and excited vocalisations. However, if cognitive laughter is the product of more general thought processes, it should result from more expansive brain activity.
Psychologist Vinod Goel investigated humour using the new technique of ‘single event’ functional magnetic resonance imaging (fMRI). An MRI scanner uses magnetic fields and radio waves to track the changes in oxygenated blood that accompany mental activity. Until recently, MRI scanners needed several minutes of activity and so could not be used to track rapid thought processes such as comprehending a joke. New developments now allow half-second ‘snapshots’ of all sorts of reasoning and problem-solving activities.
Although Goel felt being inside a brain scanner was hardly the ideal place for appreciating a joke, he found evidence that understanding a joke involves a widespread mental shift. His scans showed that at the beginning of a joke the listener’s prefrontal cortex lit up, particularly the right prefrontal cortex, believed to be critical for problem solving. But there was also activity in the temporal lobes at the side of the head (consistent with attempts to rouse stored knowledge) and in many other brain areas. Then when the punchline arrived, a new area sprang to life: the orbital prefrontal cortex. This patch of brain tucked behind the orbits of the eyes is associated with evaluating information.
Making a rapid emotional assessment of the events of the moment is an extremely demanding job for the brain, animal or human. Energy and arousal levels may need to be retuned in the blink of an eye. These abrupt changes will produce either positive or negative feelings. The orbital cortex, the region that becomes active in Goel’s experiment, seems the best candidate for the site that feeds such feelings into higher-level thought processes, with its close connections to the brain’s sub-cortical arousal apparatus and centres of metabolic control.
All warm-blooded animals make constant tiny adjustments in arousal in response to external events, but humans, who have developed a much more complicated internal life as a result of language, respond emotionally not only to their surroundings, but to their own thoughts. Whenever a sought-for answer snaps into place, there is a shudder of pleased recognition. Creative discovery being pleasurable, humans have learned to find ways of milking this natural response. The fact that jokes tap into our general evaluative machinery explains why the line between funny and disgusting, or funny and frightening, can be so fine. Whether a joke gives pleasure or pain depends on a person’s outlook.
Humour may be a luxury, but the mechanism behind it is no evolutionary accident. As Peter Derks, a psychologist at William and Mary College in Virginia, says: ‘I like to think of humour as the distorted mirror of the mind. It’s creative, lingual. If we can figure out how the mind processes humour, then we’ll have a pretty good handle on how it works in general.’
The sense of smell, or olfaction, is powerful. Odours affect us on a physical, psychological and social level. For the most part, however, we breathe in the aromas which surround us without being consciously aware of their importance to us. It is only when the faculty of smell is impaired for some reason that we begin to realise the essential role the sense of smell plays in our sense of well-being.
A survey conducted by Anthony Synott at Montreal’s Concordia University asked participants to comment on how important smell was to them in their lives. It became apparent that smell can evoke strong emotional responses. A scent associated with a good experience can bring a rush of joy, while a foul odour or one associated with a bad memory may make us grimace with disgust. Respondents to the survey noted that many of their olfactory likes and dislikes were based on emotional associations. Such associations can be powerful enough so that odours that we would generally label unpleasant become agreeable, and those that we would generally consider fragrant become disagreeable for particular individuals. The perception of smell, therefore, consists not only of the sensation of the odours themselves, but of the experiences and emotions associated with them.
Odours are also essential cues in social bonding. One respondent to the survey believed that there is no true emotional bonding without touching and smelling a loved one. In fact, infants recognise the odours of their mothers soon after birth and adults can often identify their children or spouses by scent. In one well-known test, women and men were able to distinguish by smell alone clothing worn by their marriage partners from similar clothing worn by other people. Most of the subjects would probably never have given much thought to odour as a cue for identifying family members before being involved in the test, but as the experiment revealed, even when not consciously considered, smells register.
In spite of its importance to our emotional and sensory lives, smell is probably the most undervalued sense in many cultures. The reason often given for the low regard in which smell is held is that, in comparison with its importance among animals, the human sense of smell is feeble and undeveloped. While it is true that the olfactory powers of humans are nothing like as fine as those possessed by certain animals, they are still acute. Our noses are able to recognise thousands of smells, and to perceive odours which are present only in extremely small quantities.
Smell, however, is a highly elusive phenomenon. Odours, unlike colours, for instance, cannot be named in many languages because the specific vocabulary simply doesn’t exist. 'It smells like … ', we have to say when describing an odour, struggling to express our olfactory experience. Nor can odours be recorded: there is no effective way to either capture or store them over time. In the realm of olfaction, we must make do with descriptions and recollections. This has implications for olfactory research.
Most of the research on smell undertaken to date has been of a physical scientific nature. Significant advances have been made in the understanding of the biological and chemical nature of olfaction, but many fundamental questions have yet to be answered. Researchers have still to decide whether smell is one sense or two: one responding to odours proper and the other registering odourless chemicals in the air. Other unanswered questions are whether the nose is the only part of the body affected by odours, and how smells can be measured objectively given the non-physical components. Questions like these mean that interest in the psychology of smell is inevitably set to play an increasingly important role for researchers.
However, smell is not simply a biological and psychological phenomenon. Smell is cultural, hence it is a social and historical phenomenon. Odours are invested with cultural values: smells that are considered to be offensive in some cultures may be perfectly acceptable in others. Therefore, our sense of smell is a means of, and model for, interacting with the world. Different smells can provide us with intimate and emotionally charged experiences, and the value that we attach to these experiences is interiorised by the members of society in a deeply personal way. Importantly, our commonly held feelings about smells can help distinguish us from other cultures. The study of the cultural history of smell is, therefore, in a very real sense, an investigation into the essence of human culture.
From 《IELTS READING 雅思阅读》 p3
The education of our young people is one of the most important aspects of any community, and ideas about what and how to teach reflect the accepted attitudes and unspoken beliefs of society. These ideas change as local customs and attitudes change, and these changes are reflected in the curriculum, teaching and assessment methods, and the expectations of how both students and teachers should behave.
Teaching in the late 1800s and early 1900s was very different from today. Rules for teachers at the time in the USA covered both the teacher’s duties and their conduct out of class as well. Teachers at that time were expected to set a good example to their pupils and to behave in a very virtuous and proper manner. Women teachers should not marry, nor should they ‘keep company with men’. They had to wear long dresses and no bright colours and they were not permitted to dye their hair. They were not allowed to loiter downtown in an ice cream store, and women were not allowed to go out in the evenings unless to a school function, although men were allowed one evening a week to take their girlfriends out if they went to church regularly. No teachers were allowed to drink alcohol. They were allowed to read only good books such as the Bible, and they were given a pay increase of 25c a week after five years of work for the local school.
As well as this long list of ‘dos’ and ‘don’ts’, teachers had certain duties to perform each day. In country schools, teachers were required to keep the coal bucket full for the classroom fire, and to bring a bucket of water each day for the children to drink. They had to make the pens for their students to write with and to sweep the floor and keep the classroom tidy. However, despite this list of duties, little was stipulated about the content of the teaching, nor about assessment methods.
Teachers would have been expected to teach the three ‘r’s (reading, writing and arithmetic), and to teach the children about Christianity and read from the Bible every day. Education in those days was much simpler than it is today and covered basic literacy skills and religious education. Teachers would almost certainly have used corporal punishment such as a stick or the strap on naughty or unruly children, and the children would have sat together in pairs in long rows in the classroom. They would have been expected to sit quietly and to do their work, copying long rows of letters or doing basic maths sums. Farming children in country areas would have had only a few years of schooling and would probably have left school at 12 or 14 years of age to join their parents in farm work.
Compare this with a country school in the USA today! If you visited today, you would see the children sitting in groups round large tables, or even on the floor. They would be working together on a range of different activities, and there would almost certainly be one or more computers in the classroom. Children nowadays are allowed and even expected to talk quietly to each other while they work, and they are also expected to ask their teachers questions and to actively engage in finding out information for themselves, instead of just listening to the teacher.
There are no rules of conduct for teachers out of the classroom, and they are not expected to perform caretaking duties such as cleaning the classrooms or making pens, but nevertheless their jobs are much harder than they were in the 1900s. Teachers today are expected to work hard on planning their lessons, to teach creatively and to stimulate children’s minds, and there are strict protocols about assessment across the whole of the USA. Corporal punishment is illegal, and any teacher who hit a child would be dismissed instantly. Another big difference is that most state schools in western countries are secular, so religious teaching is not part of the curriculum.
These changes in educational methods and ideas reflect changes in our society in general. Children in western countries nowadays come from all parts of the globe and they bring different cultures, religions and beliefs to the classroom. It is no longer considered acceptable or appropriate for state schools to teach about religious beliefs. Ideas about the value and purpose of education have also changed and, with the increasing sophistication of workplaces and the life skills needed for a successful career, the curriculum has also expanded to try to prepare children for the challenges of a diverse working community. It will be interesting to see how these changes continue into the future as our society and culture grows and develops.
From 《Collins Reading for IELTS》 p67
Adults and children are frequently confronted with statements about the alarming rate of loss of tropical rainforests. For example, one graphic illustration to which children might readily relate is the estimate that rainforests are being destroyed at a rate equivalent to one thousand football fields every forty minutes – about the duration of a normal classroom period. In the face of the frequent and often vivid media coverage, it is likely that children will have formed ideas about rainforests – what and where they are, why they are important, what endangers them – independent of any formal tuition. It is also possible that some of these ideas will be mistaken.
Many studies have shown that children harbour misconceptions about ‘pure’, curriculum science. These misconceptions do not remain isolated but become incorporated into a multifaceted, but organised, conceptual framework, making it and the component ideas, some of which are erroneous, more robust but also accessible to modification. These ideas may be developed by children absorbing ideas through the popular media. Sometimes this information may be erroneous. It seems schools may not be providing an opportunity for children to re-express their ideas and so have them tested and refined by teachers and their peers.
Despite the extensive coverage in the popular media of the destruction of rainforests, little formal information is available about children’s ideas in this area. The aim of the present study is to start to provide such information, to help teachers design their educational strategies to build upon correct ideas and to displace misconceptions, and to plan programs in environmental studies in their schools.
The study surveys children’s scientific knowledge and attitudes to rainforests. Secondary school children were asked to complete a questionnaire containing five open-form questions. The most frequent responses to the first question were descriptions which are self-evident from the term ‘rainforest’. Some children described them as damp, wet or hot. The second question concerned the geographical location of rainforests. The commonest responses were continents or countries: Africa (given by 43% of children), South America (30%), Brazil (25%). Some children also gave more general locations, such as being near the Equator.
Responses to question three concerned the importance of rainforests. The dominant idea, raised by 64% of the pupils, was that rainforests provide animals with habitats. Fewer students responded that rainforests provide plant habitats, and even fewer mentioned the indigenous populations of rainforests. More girls (70%) than boys (60%) raised the idea of rainforest as animal habitats. Similarly, but at a lower level, more girls (13%) than boys (5%) said that rainforests provided human habitats. These observations are generally consistent with our previous studies of pupils’ views about the use and conservation of rainforests, in which girls were shown to be more sympathetic to animals and expressed views which seem to place an intrinsic value on non-human animal life.
The fourth question concerned the causes of the destruction of rainforests. Perhaps encouragingly, more than half of the pupils (59%) identified that it is human activities which are destroying rainforests, some personalizing the responsibility by the use of terms such as ‘we are’. About 18% of the pupils referred specifically to logging activity.
One misconception, expressed by some 10% of the pupils, was that
acid rain is responsible for rainforest destruction; a similar proportion said that pollution is destroying rainforests. Here, children are confusing rainforest destruction with damage to the forests of Western Europe by these factors. While two fifths of the students provided the information that the rainforests provide oxygen, in some cases this response also
embraced the misconception that rainforest destruction would reduce
atmospheric oxygen, making the
atmosphere incompatible with human life on Earth.
In answer to the final question about the importance of rainforest conservation, the majority of children simply said that we need rainforests to survive. Only a few of the pupils (6%) mentioned that rainforest destruction may contribute to global warming. This is surprising considering the high level of media coverage on this issue. Some children expressed the idea that the conservation of rainforests is not important.
The results of this study suggest that certain ideas
predominate in the thinking of children about rainforests. Pupils’ responses indicate some misconceptions in basic scientific knowledge of rainforests’ ecosystems such as their ideas about rainforests as habitats for animals, plants and humans and the relationship between climatic change and destruction of rainforests.
Pupils did not volunteer ideas that suggested that they
appreciated the complexity of causes of rainforest destruction. In other words, they gave no indication of an appreciation of either the
range of ways in which rainforests are important or the complex social, economic and political factors which drive the activities which are destroying the rainforests. One encouragement is that the results of similar studies about other environmental issues suggest that older children seem to acquire the ability to appreciate, value and
evaluate conflicting views. Environmental education offers an
arena in which these
skills can be developed, which is essential for these children as future decision-makers.
Japan has a significantly better record in terms of average mathematical
attainment than England and Wales. Large sample international comparisons of pupils’ attainments since the 1960s have established that not only did Japanese pupils at age 13 have better scores of average attainment, but there was also a larger proportion of ‘low’ attainers in England, where, incidentally, the variation in attainment scores was much greater. The percentage of Gross National Product spent on education is reasonably similar in the two countries, so how is this higher and more consistent attainment in maths achieved?
Lower secondary schools in Japan cover three school years, from the seventh grade (age 13) to the ninth grade (age 15). Virtually all pupils at this stage attend state schools: only 3 per cent are in the private sector. Schools are usually modern in design, set well back from the road and
spacious inside. Classrooms are large and pupils sit at single desks in rows. Lessons last for a standardised 50 minutes and are always followed by a 10-minute break, which gives the pupils a chance to let off steam. Teachers begin with a formal address and
mutual bowing, and then concentrate on whole-class teaching.
Classes are large - usually about 40 - and are
unstreamed. Pupils stay in the same class for all lessons throughout the school and develop considerable class identity and loyalty. Pupils attend the school in their own neighbourhood, which in theory removes ranking by school. In practice in Tokyo, because of the relative concentration of schools, there is some competition to get into the ‘better’ school in a particular area.
Traditional ways of teaching form the basis of the lesson and the remarkably quiet classes take their own notes of the points made and the examples demonstrated. Everyone has their own copy of the textbook supplied by the central education authority, Monbusho, as part of the concept of free compulsory education up to the age of 15. These textbooks are, on the whole, small, presumably inexpensive to produce, but well set out and logically developed. (One teacher was particularly keen to introduce colour and pictures into maths textbooks: he felt this would make them more accessible to pupils brought up in a cartoon culture.) Besides approving textbooks, Monbusho also decides the highly
centralised national curriculum and how it is to be delivered.
Lessons all follow the same pattern. At the beginning, the pupils put solutions to the homework on the board, then the teachers comment, correct or elaborate as necessary. Pupils mark their own homework: this is an important principle in Japanese schooling as it enables pupils to see where and why they made a mistake, so that these can be avoided in future. No one minds mistakes or ignorance as long as you are prepared to learn from them. After the homework has been discussed, the teacher explains the topic of the lesson, slowly and with a lot of
repetition and elaboration. Examples are demonstrated on the board; questions from the textbook are worked through first with the class, and then the class is set questions from the textbook to do individually. Only rarely are worksheets distributed in a maths class. The impression is that the logical nature of the textbooks and their comprehensive coverage of different types of examples, combined with the relative
homogeneity of the class,
renders work sheets unnecessary. At this point, the teacher would
circulate and make sure that all the pupils were coping well.
It is remarkable that large, mixed-ability classes could be kept together for maths throughout all their compulsory schooling from 6 to 15. Teachers say that they give individual help at the end of a lesson or after school, setting extra work if necessary. In observed lessons, any strugglers would be assisted by the teacher or quietly seek help from their neighbour. Carefully
fostered class identity makes pupils keen to help each other - anyway, it is in their interests since the class progresses together.
This scarcely seems adequate help to enable slow learners to keep up. However, the Japanese attitude towards education runs along the lines of ‘if you work hard enough, you can do almost anything’. Parents are kept closely informed of their children’s progress and will play a part in helping their children to keep up with class, sending them to ‘Juku’ (private evening tuition) if extra help is needed and encouraging them to work harder. It seems to work, at least for 95 per cent of the school population.
So what are the major contributing factors in the success of maths teaching? Clearly, attitudes are important. Education is valued greatly in Japanese culture; maths is recognised as an important compulsory subject throughout schooling; and the
emphasis is on hard work coupled with a focus on accuracy.
Other relevant points relate to the supportive attitude of a class towards slower pupils, the lack of competition within a class, and the positive emphasis on learning for oneself and improving one’s own standard. And the view of
repetitively boring lessons and learning the facts by heart, which is sometimes quoted in relation to Japanese classes, may be unfair and unjustified. No poor maths lessons were observed. They were mainly good and one or two were inspirational.
From 《IELTS READING 雅思阅读》 p93
Techno-wizardry sounds like something for the future, but actually homes with advanced technological ability are already in existence. If you want a home that is not only convenient but far safer than a conventional one, then a techno-savvy home is for you. A techno-savvy house is basically a network of appliances, light switches and various assorted items which inter-communicate, so that the whole house operates a lot more efficiently and smoothly.
Cutting edge technology is being integrated into homes everywhere. In simple terms, a techno-savvy house has a ‘brain’. Techno-savvy systems rely on a control panel, switches or a touch screen to access the desired function. The connections are made using cabling within walls, ceilings and under floors of the house, or an internal wireless system or a combination of both of these.
In order for the system to meet the needs of the home’s occupants, it should not be too complex; it must be both convenient and time saving. This means the architect, developer and home owner have to co-plan very carefully in order to achieve a truly integrated, easy-to-use system. An integrated house system operates and manages all the electrical equipment in a home to increase comfort, flexibility, communication, safety and security, and also to reduce energy consumption.
A techno-savvy home can have a tremendous impact on the occupants’ lives. Many chores or jobs can be done more simply, as it allows all sorts of electronic gadgets and appliances to perform a variety of tasks. For example, an alarm clock can be programmed to send a message to the coffee maker to begin brewing the morning coffee. In another example, the refrigerator can suggest what could be eaten as a snack based on what it has inside. It then communicates with the microwave or oven to suggest a cooking time. It seems hard to believe that these types of refrigerators already exist. They can talk to the Internet and download recipes; they can even order new groceries as required, because they are able to scan and log bar codes of food items taken from inside.
Although there are many smart appliances available on the market and many more becoming available, probably one of the first aspects that is fully automated in a home is the entertainment system. While it is not necessarily making the lives of the occupants easier or making them any safer, it is fun being able to change channels by speaking to the TV, and to use the Internet in conjunction with the television.
A techno-savvy house can save energy by lowering the temperature setting and switching off appliances and lights that are not required. It can also manage heaters, the air conditioning and fans in such a way as to save energy. For example, if the outside temperature is only slightly more than the setting on the thermostat then a smart home will use fans instead of the air conditioner, which uses a lot more energy. Also, if the television is not in use, then it will completely turn off the energy outlet, which also saves a small amount of energy. Over an extended period of time these actions can mean a considerable saving.
Being able to monitor security from a central system makes the home a safe haven for all occupants. With a single push of a button an alarm system puts the entire home into security mode. All the windows and doors close and lock, and the security systems are activated. Absent owners can check their security system via the Internet, due to hidden surveillance cameras around the house which send information. A further useful feature is that lights can be programmed to go on and off at random times when nobody is at home to make it look like somebody is there. This feature acts as a major deterrent to criminals.
In an emergency, people can panic and not react in the best possible manner. However, a techno-savvy house can help here. For example at the time of a fire, the fire alarm would activate and the techno-savvy house’s ‘brain’ immediately calls the fire brigade. It would also turn on the lights that lead to an exit and unlock all the windows and doors to make the escape route easier.
However, any techno-savvy home has a major vulnerability: it relies on a power supply. If this were to be interrupted, chaos would prevail. Being connected to a battery system is essential, so there is a back-up energy supply should there be a power cut. It is essential that safe entry and exit points to the home are always available. Provided the system is safe, it will save power and increase security and pleasure for house occupants of the future.
From 《Collins Reading for IELTS》 p41
After years in the wilderness, the term 'artificial intelligence’ (AI) seems
poised to make a
comeback. AI was big in the 1980s but vanished in the 1990s. It re-entered public consciousness with the release of AI, a movie about a robot boy. This has
ignited public debate about AI, but the term is also being used once more within the computer industry. Researchers,
executives and marketing people are now using the expression without
inverted commas. And it is not always
hype. The term is being applied, with some
justification, to products that depend on technology that was originally developed by AI researchers. Admittedly, the
rehabilitation of the term has a long way to go, and some firms still prefer to avoid using it. But the fact that others are starting to use it again suggests that AI has moved on from being seen as an over-ambitious and under-achieving field of research.
The field was launched, and the term ‘artificial intelligence’
coined, at a
conference in 1956 by a group of researchers that included Marvin Minsky, John McCarthy, Herbert Simon and Alan Newell, all of whom went on to become leading figures in the field. The expression provided an attractive but
informative name for a research programme that
encompassed such previously disparate fields as operations research,
cybernetics, logic and computer science. The goal they shared was an attempt to capture or
mimic human abilities using machines. That said, different groups of researchers attacked different problems, from speech recognition to chess playing, in different ways; AI
unified the field in name only. But it was a term that captured the public imagination.
Most researchers agree that AI peaked around 1985. A public
reared on science-fiction movies and excited by the growing power of computers had high expectations. For years, AI researchers had
implied that a breakthrough was just around the corner. Marvin Minsky said in 1967 that within a generation the problem of creating ‘artificial intelligence’ would be substantially solved. Prototypes of
medical-diagnosis programs and speech recognition software appeared to be making progress. It proved to be a false dawn. Thinking computers and household robots failed to
materialise, and a backlash ensued. ‘There was undue optimism in the early 1980s’, says David Leake, a researcher at Indiana University. ‘Then when people realised these were hard problems, there was retrenchment. By the late 1980s, the term AI was being avoided by many researchers, who
opted instead to
align themselves with specific
sub-disciplines such as
neural networks, agent technology, case-based reasoning, and so on’.
Ironically, in some ways AI was a victim of its own success. Whenever an apparently
mundane problem was solved, such as building a system that could land an aircraft
unattended, the problem was
deemed not to have been AI in the first place. ‘If it works, it can’t be AI’, as Dr Leake characterises it. The effect of repeatedly moving the goal-posts in this way was that AI came to refer to ‘blue-sky’ research that was still years away from commercialisation. Researchers joked that AI stood for ‘almost implemented’. Meanwhile, the technologies that made it onto the market, such as speech recognition, language translation and decision-support software, were no longer regarded as AI. Yet all three once fell well within the umbrella of AI research.
The tide may now be turning, according to Dr Leake. HNC Software of San Diego, backed by a government agency,
reckon that their new approach to artificial intelligence is the most powerful and promising approach ever discovered. HNC claim that their system, based on a
cluster of 30 processors, could be used to spot camouflaged vehicles on a battlefield or
extract a voice signal from a noisy background - tasks humans can do well, but computers cannot. ‘Whether or not their technology lives up to the claims made for it, the fact that HNC are
emphasising the use of AI is itself an interesting development’, says Dr Leake.
Another factor that may boost the
prospects for AI in the near future is that investors are now looking for firms using clever technology, rather than just a clever business model, to differentiate themselves. In particular, the problem of information overload,
exacerbated by the growth of e-mail and the explosion in the number of web pages, means there are plenty of opportunities for new technologies to help filter and categorise information - classic AI problems. That may mean that more artificial intelligence companies will start to emerge to meet this challenge.
The 1968 film, 2001: A Space Odyssey, featured an intelligent computer called HAL 9000. As well as understanding and speaking English, HAL could play chess and even learned to lip-read. HAL thus
encapsulated the optimism of the 1960s that intelligent computers would be widespread by 2001. But 2001 has been and gone, and there is still no sign of a HAL-like computer. Individual systems can play chess or
transcribe speech, but a general theory of machine intelligence still remains
elusive. It may be, however, that the comparison with HAL no longer seems quite so important, and AI can now be judged by what it can do, rather than by how well it matches up to a 30-year-old science-fiction film. ‘People are beginning to realise that there are impressive things that these systems can do’, says Dr Leake hopefully.
Seldom is the weather more dramatic than when thunderstorms strike. Their electrical fury
inflicts death or serious injury on around 500 people each year in the United States alone. As the clouds roll in, a leisurely round of golf can become a terrifying
dice with death - out in the open, a lone golfer may be a lightning bolt’s most inviting target. And there is damage to property too. Lightning damage costs American power companies more than $100 million a year.
But researchers in the United States and Japan are planning to hit back. Already in laboratory trials they have tested strategies for
neutralising the power of thunderstorms, and this winter they will brave real storms, equipped with an armoury of lasers that they will be pointing towards the heavens to discharge thunderclouds before lightning can strike.
The idea of forcing storm clouds to discharge their lightning on command is not new. In the early 1960s, researchers tried firing rockets trailing wires into thunderclouds to set up an easy discharge path for the huge electric charges that these clouds generate. The technique survives to this day at a test site in Florida run by the University of Florida, with support from the Electrical Power Research Institute (EPRI), based in California. EPRI, which is funded by power companies, is looking at ways to protect the United States’ power grid from lightning strikes. ‘We can cause the lightning to strike where we want it to using rockets’, says Ralph Bernstein, manager of lightning projects at EPRI. The rocket site is providing precise measurements of lightning
voltages and allowing engineers to check how electrical equipment bears up.
But while rockets are fine for research, they cannot provide the protection from lightning strikes that everyone is looking for. The rockets cost around $1,200 each, can only be fired at a limited frequency and their failure rate is about 40 per cent. And even when they do trigger lightning, things still do not always go according to plan. ‘Lightning is not perfectly well behaved’, says Bernstein. ‘
Occasionally, it will take a branch and go someplace it wasn’t supposed to go’.
And anyway, who would want to fire streams of rockets in a populated area? ‘What goes up must come down’, points out Jean-Claude Diels of the University of New Mexico. Diels is leading a project, which is backed by EPRI, to try to use lasers to discharge lightning safely - and safety is a basic requirement since no one wants to put themselves or their expensive equipment at risk. With around $500,000 invested so far, a promising system is just emerging from the laboratory.
The idea began some 20 years ago, when high-powered lasers were revealing their ability to extract electrons out of atoms and create
ions. If a laser could generate a line of
ionisation in the air all the way up to a storm cloud, this conducting path could be used to guide lightning to Earth, before the electric field becomes strong enough to break down the air in an uncontrollable surge. To stop the laser itself being struck, it would not be pointed straight at the clouds. Instead it would be directed at a mirror, and from there into the sky. The mirror would be protected by placing lightning
conductors close by. Ideally, the cloud-zapper (gun) would be cheap enough to be installed around all key power installations, and
portable enough to be taken to international sporting events to
beam up at
brewing storm clouds.
However, there is still a big stumbling block. The laser is no
nifty portable: it’s a monster that takes up a whole room. Diels is trying to cut down the size and says that a laser around the size of a small table is in the
offing. He plans to test this more
manageable system on live thunderclouds next summer.
Bernstein says that Diels’s system is attracting lots of interest from the power companies. But they have not yet come up with the $5 million that EPRI says will be needed to develop a commercial system, by making the lasers yet smaller and cheaper. ‘I cannot say I have money yet, but I’m working on it’, says Bernstein. He reckons that the
forthcoming field tests will be the turning point - and he’s hoping for good news. Bernstein predicts ‘an avalanche of interest and support’ if all goes well. He expects to see cloud-zappers eventually costing $50,000 to $100,000 each.
Other scientists could also benefit. With a lightning ‘switch’ at their fingertips, materials scientists could find out what happens when mighty currents meet matter. Diels also hopes to see the birth of 'interactive
meteorology’ - not just forecasting the weather but controlling it. ‘If we could discharge clouds, we might affect the weather,’ he says.
And perhaps, says Diels, we’ll be able to confront some other
meteorological menaces. ‘We think we could prevent
hail by inducing lightning’, he says. Thunder, the
shock wave that comes from a lightning flash, is thought to be the trigger for the
torrential rain that is
typical of storms. A laser thunder factory could shake the
moisture out of clouds, perhaps preventing the
formation of the giant hailstones that threaten crops. With luck, as the storm clouds gather this winter, laser-toting researchers could, for the first time, strike back.
From 《IELTS READING 雅思阅读》 p70
What is incredibly beautiful, yet absolutely terrifying and deadly at the same time? For anyone above the snowline in the mountains, there is little doubt about the answer. Avalanche—the word strikes fear into the heart of any avid skier or climber. For those unfortunate enough to be caught up in one, there is virtually no warning or time to get out of danger and even less chance of being found. The ‘destroyer’ of the mountains, avalanches can uproot trees, crush whole buildings and bury people metres deep under solidified snow. Around the world, as more and more people head to the mountains in winter, there are hundreds of avalanche fatalities every year.
A snow avalanche is a sudden and extremely fast-moving ‘river’ of snow which races down a mountainside (there can also be avalanches of rocks, boulders, mud or sand). There are four main kinds. Loose snow avalanches, or sluffs, form on very steep slopes. These usually have a ‘teardrop’ shape, starting from a point and widening as they collect more snow on the way down. Slab avalanches, which are responsible for about 90% of avalanche-related deaths, occur when a stiff layer of snow fractures or breaks off and slides downhill at incredible speed. This layer may be hundreds of metres wide and several metres thick. As it tends to compact and set like concrete once it stops, it is extremely dangerous for anyone buried in the flow. The third type is an isothermal avalanche, which results from heavy rain leading to the snowpack becoming saturated with water. In the fourth type, air mixes in with loose snow as the avalanche slides, creating a powder cloud. These powder snow avalanches can be the largest of all, moving at over 300 km/h, with 10,000,000 or more tonnes of snow. They can flow along a valley floor and even a short distance uphill on the other side.
Three factors are necessary for an avalanche to form. The first relates to the condition of the snowpack. Temperature, humidity and sudden changes in weather conditions all affect the shape and condition of snow crystals in the snowpack which, in turn, influences the stability of the snowpack. In some cases, weather causes an improvement in avalanche conditions. For example, low temperature variation in the snowpack and consistent below-freezing temperatures enable the crystals to compress tightly. On the other hand, if the snow surface melts and refreezes, this can create an icy or unstable layer.
The second vital factor is the degree of slope of the mountain. If this is below 25 degrees, there is little danger of an avalanche. Slopes that are steeper than 60 degrees are also unlikely to set off a major avalanche as they ‘sluff’ the snow constantly, in a cascade of loose powdery snow which causes minimal danger or damage. This means that slabs of ice or weaknesses in the snowpack have little chance to develop. Thus the danger zone covers the 25 to 60 degree range of slopes, with most avalanches being slab avalanches that begin on slopes of 35 to 45 degrees.
Finally, there is the movement or event that triggers the avalanche. In the case of slab avalanches, this can be a natural trigger, such as a sudden weather change, a falling tree or a collapsing ice or snow overhang. However, in most fatal avalanches, it is people who create the trigger by moving through an avalanche-prone area. Snowmobiles are especially dangerous. On the other hand, contrary to common belief, shouting is not a big enough vibration to set off an avalanche.
Anyone moving through snow in the mountains should understand the danger signals and follow some basic rules. Taking an approved avalanche safety course is an essential first step. Skiers and climbers should be up-to-date with local warning systems and check any avalanche forecast hotline or website. They should also be aware of their surroundings, avoid areas that have signs of previous avalanche activity and monitor the weather conditions carefully. Basic equipment should include a rescue beacon with fresh batteries, an inexpensive inclinometer to measure the angle of slopes and an avalanche probe.
Beautiful but deadly, avalanches kill increasing numbers of winter sports enthusiasts every year as more and more people enjoy the mountains in winter. As it is easier to avoid an avalanche than to survive one, it is vital for snow enthusiasts to recognise the three basic factors which contribute to avalanches. An awareness of the condition of the snowpack, the angle of the slope and the ways in which an avalanche may be triggered can be the difference between life and death in the mountains.
From 《Collins Reading for IELTS》 p58
The fertile land of the Nile delta is being
eroded along Egypt’s
Mediterranean coast at an
astounding rate, in some parts estimated at 100 metres per year. In the past, land
scoured away from the coastline by the currents of the Mediterranean Sea used to be replaced by
sediment brought down to the delta by the River Nile, but this is no longer happening.
Up to now, people have blamed this loss of delta land on the two large
dams at Aswan in the south of Egypt, which hold back virtually all of the sediment that used to flow down the river. Before the dams were built, the Nile flowed freely carrying huge quantities of sediment north from Africa’s interior to be
deposited on the Nile delta. This continued for 7,000 years, eventually covering a region of over 22,000 square kilometres with layers of fertile silt. Annual flooding brought in new, nutrient-rich soil to the delta region, replacing what had been washed away by the sea, and
dispensing with the need for
fertilizers in Egypt’s richest food-growing area. But when the Aswan dams were constructed in the 20th century to provide electricity and
irrigation, and to protect the huge population centre of Cairo and its surrounding areas from annual flooding and drought, most of the sediment with its natural fertilizer
accumulated up above the dam in the southern, upstream half of Lake Nasser, instead of passing down to the delta.
Now, however, there turns out to be more to the story. It appears that the sediment-free water
emerging from the Aswan dams picks up
silt and sand as it erodes the river bed and banks on the 800-kilometre trip to Cairo. Daniel Jean Stanley of the Smithsonian Institute noticed that water samples taken in Cairo, just before the river enters the delta, indicated that the river sometimes carries more than 850 grams of sediment per cubic metre of water — almost half of what it carried before the dams were built. ‘I’m
ashamed to say that the significance of this didn’t strike me until after I had read 50 or 60 studies’, says Stanley in Marine Geology. ‘There is still a lot of sediment coming into the delta, but virtually no sediment comes out into the Mediterranean to
replenish the coastline. So this sediment must be trapped on the delta itself’.
Once north of Cairo, most of the Nile water is
diverted into more than 10,000 kilometres of
irrigation canals and only a small proportion reaches the sea directly through the rivers in the delta. The water in the irrigation canals is still or very slow-moving and thus cannot carry sediment, Stanley explains. The sediment sinks to the bottom of the canals and then is added to fields by farmers or pumped with the water into the four large freshwater
lagoons that are located near the outer edges of the delta. So very little of it actually reaches the coastline to replace what is being washed away by the Mediterranean currents.
The farms on the delta
plains and fishing and
aquaculture in the lagoons account for much of Egypt’s food supply. But by the time the sediment has come to rest in the fields and lagoons it is laden with
municipal, industrial and agricultural waste from the Cairo region, which is home to more than 40 million people. ‘Pollutants are building up faster and faster’, says Stanley.
Based on his investigations of sediment from the delta lagoons, Frederic Siegel of George Washington University concurs. ‘In Manzalah Lagoon, for example, the increase in
mercury, lead, copper and
zinc coincided with the building of the High Dam at Aswan, the availability of cheap electricity, and the development of major power-based industries’, he says. Since that time the concentration of mercury has increased significantly. Lead from engines that use leaded fuels and from other industrial sources has also increased dramatically. These poisons can easily enter the food chain, affecting the productivity of fishing and farming. Another problem is that agricultural wastes include fertilizers which stimulate increases in plant growth in the lagoons and upset the ecology of the area, with serious effects on the fishing industry.
According to Siegel, international environmental organisations are beginning to pay closer attention to the region, partly because of the problems of erosion and pollution of the Nile delta, but principally because they fear the impact this situation could have on the whole Mediterranean coastal ecosystem. But there are no easy solutions. In the immediate future, Stanley believes that one solution would be to make artificial floods to flush out the delta waterways, in the same way that natural floods did before the construction of the dams. He says, however, that in the long term an alternative process such as desalination may have to be used to increase the amount of water available. 'In my view, Egypt must devise a way to have more water running through the river and the delta', says Stanley. Easier said than done in a desert region with a rapidly growing population.
This book will provide a detailed examination of the Little Ice Age and other climatic shifts, but, before I embark on that, let me provide a historical context. We tend to think of climate - as opposed to weather - as something unchanging, yet humanity has been at the mercy of climate change for its entire existence, with at least eight glacial episodes in the past 730,000 years. Our ancestors adapted to the universal but irregular global warming since the end of the last great Ice Age, around 10,000 years ago, with dazzling opportunism. They developed strategies for surviving harsh drought cycles, decades of heavy rainfall or unaccustomed cold; adopted agriculture and stock-raising, which revolutionized human life; and founded the world's first pre-industrial civilizations in Egypt, Mesopotamia and the Americas. But the price of sudden climate change, in famine, disease and suffering, was often high.
The Little Ice Age lasted from roughly 1300 until the middle of the nineteenth century. Only two centuries ago, Europe experienced a cycle of bitterly cold winters; mountain glaciers in the Swiss Alps were the lowest in recorded memory, and pack ice surrounded Iceland for much of the year. The climatic events of the Little Ice Age did more than help shape the modern world. They are the deeply important context for the current unprecedented global warming. The Little Ice Age was far from a deep freeze, however; rather an irregular seesaw of rapid climatic shifts, few lasting more than a quarter-century, driven by complex and still little understood interactions between the atmosphere and the ocean. The seesaw brought cycles of intensely cold winters and easterly winds, then switched abruptly to years of heavy spring and early summer rains, mild winters, and frequent Atlantic storms, or to periods of droughts, light northeasterly winds, and summer heat waves.
Reconstructing the climate changes of the past is extremely difficult, because systematic weather observations began only a few centuries ago, in Europe and North America. Records from India and tropical Africa are even more recent. For the time before records began, we have only 'proxy records' reconstructed largely from tree rings and ice cores, supplemented by a few incomplete written accounts. We now have hundreds of tree-ring records from throughout the northern hemisphere, and many from south of the equator, too, amplified with a growing body of temperature data from ice cores drilled in Antarctica, Greenland, the Peruvian Andes, and other locations. We are close to knowledge of annual summer and winter temperature variations over much of the northern hemisphere going back 600 years.
This book is a narrative history of climatic shifts during the past ten centuries, and some of the ways in which people in Europe adapted to them. Part One describes the Medieval Warm Period, roughly 900 to 1200. During these three centuries, Norse voyagers from Northern Europe explored northern seas, settled Greenland, and visited North America. It was not a time of uniform warmth, for then, as always since the Great Ice Age, there were constant shifts in rainfall and temperature. Mean European temperatures were about the same as today, perhaps slightly cooler.
It is known that the Little Ice Age cooling began in Greenland and the Arctic in about 1200. As the Arctic ice pack spread southward, Norse voyages to the west were rerouted into the open Atlantic, then ended altogether. Storminess increased in the North Atlantic and North Sea. Colder, much wetter weather descended on Europe between 1315 and 1319, when thousands perished in a continent-wide famine. By 1400, the weather had become decidedly more unpredictable and stormier, with sudden shifts and lower temperatures that culminated in the cold decades of the late sixteenth century.

Fish were a vital commodity in growing towns and cities, where food supplies were a constant concern. Dried herring were already the staples of the European fish trade, but changes in water temperatures forced fishing fleets to work further offshore. The Basques, Dutch, and English developed the first offshore fishing boats adapted to a colder and stormier Atlantic. A gradual agricultural revolution in northern Europe stemmed from concerns over food supplies at a time of rising populations. The revolution involved intensive commercial farming and the growing of animal fodder on land not previously used for crops. The increased productivity from farmland made some countries self-sufficient in grain and livestock and offered effective protection against famine.
Global temperatures began to rise slowly after 1850, with the beginning of the Modern Warm Period. There was a vast migration from Europe by land-hungry farmers and others, to which the famine caused by the Irish potato blight contributed, to North America, Australia, New Zealand, and southern Africa. Millions of hectares of forest and woodland fell before the newcomers' axes between 1850 and 1890, as intensive European farming methods expanded across the world. The unprecedented land clearance released vast quantities of carbon dioxide into the atmosphere, triggering for the first time humanly caused global warming. Temperatures climbed more rapidly in the twentieth century as the use of fossil fuels proliferated and greenhouse gas levels continued to soar. The rise has been even steeper since the early 1980s. The Little Ice Age has given way to a new climatic regime, marked by prolonged and steady warming. At the same time, extreme weather events like Category 5 hurricanes are becoming more frequent.